What Happens to the S&P 500 if the Magnificent Seven Fail to Deliver on AI?

The Mag 7 now hold up almost 35% of the entire S&P 500's value

The S&P 500 has never been so dependent on so few companies. The Magnificent Seven — Microsoft, Apple, Nvidia, Alphabet, Amazon, Meta and Tesla — now account for roughly one‑third of the entire index’s value.

Their dominance is not simply a reflection of current earnings power; it is a collective bet on an AI‑centred future that investors assume will transform productivity, reshape industries and justify valuations that stretch far beyond historical norms.

If one, several, or all of these companies fail to deliver the AI revolution that markets have priced in, the consequences for the S&P 500 would be immediate, structural and potentially severe.

Mild

The mildest scenario is a stumble by one or two members. If Apple’s device strategy falters or Tesla’s autonomy narrative weakens further, for instance, the index absorbs the shock.

A 3–5% pullback is plausible, driven by mechanical index weighting rather than systemic fear. Investors already expect uneven performance within the group, and the remaining leaders could offset the disappointment.
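That mechanical weighting effect is easy to sketch. The weights below are illustrative round numbers, not actual index weights: in a cap‑weighted index, the index‑level move is just the sum of each constituent’s weight times its own move.

```python
# Sketch: how a fall in heavily weighted constituents moves a cap-weighted index.
# Weights are illustrative round numbers, not real S&P 500 weights.

def index_impact(weights_and_moves):
    """Index-level move = sum of (constituent weight x constituent move)."""
    return sum(w * m for w, m in weights_and_moves)

# Suppose two Mag 7 members, with ~7% and ~3% index weights, each drop 30%
# while the rest of the index (90% of the weight) is flat:
shock = [(0.07, -0.30), (0.03, -0.30), (0.90, 0.0)]
print(f"Index move: {index_impact(shock) * 100:.1f}%")  # Index move: -3.0%
```

No panic selling is needed to produce that pullback; the arithmetic of the weights alone does it.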

Major

The more destabilising scenario is a collective slowdown among the AI infrastructure leaders – Microsoft, Nvidia and Alphabet. These firms sit at the centre of the global capex cycle.

If cloud AI demand proves slower, less profitable or more niche than expected, the market would be forced to reassess the entire economic promise of generative AI.

In this case, the S&P 500 could see a 10–15% correction as valuations compress, volatility spikes and passive flows unwind years of momentum.

Dramatic

The most dramatic outcome is a broad failure of the AI ‘sector’ itself. If the promised productivity gains do not materialise, if enterprise adoption stalls, or if regulatory and cost pressures erode margins, the S&P 500 would face a structural reset.

With a third of the index priced for exponential growth, a collective disappointment could trigger a decline of 20% or more.

This would not resemble a cyclical recession; it would be a leadership collapse similar to the dot‑com unwind, but with far greater concentration and far more passive capital tied to the winners.

The uncomfortable truth is that the S&P 500’s trajectory is now inseparable from the Magnificent Seven. If they deliver, the index continues to defy gravity. If they falter, the market must rebuild a new narrative — and a new set of leaders — from the ground up.

If the Magnificent Seven Lose Their Grip, Who Rises Next?

For years, the S&P 500 has been defined by the gravitational pull of the Magnificent Seven. Their dominance has shaped index performance, investor psychology and the entire narrative arc of global markets.

If these companies lose momentum — whether through slower AI adoption, regulatory pressure, margin compression or simple over‑expectation — leadership will not disappear.

It will rotate. And the beneficiaries are already hiding in plain sight.

Alternative investments to AI

The first and most obvious winners would be Energy and Utilities. As AI enthusiasm cools, investors tend to rediscover the appeal of tangible cash flow. Energy companies, with their dividends and pricing power, become natural refuges.

Utilities, often dismissed as dull, regain relevance as defensive anchors in a more volatile market. If AI‑driven data‑centre demand slows, the sector’s cost pressures ease, improving margins.

Next in line are Industrials and Infrastructure. A retreat from speculative tech would likely redirect capital towards physical productivity — logistics, construction, defence, electrification and manufacturing modernisation.

These sectors have been quietly compounding earnings while Silicon Valley has monopolised attention. If the market shifts from promise to proof, industrials become the new growth story.

Healthcare and Pharmaceuticals would also rise. Their earnings cycles are largely independent of AI hype, driven instead by demographics, innovation and regulatory frameworks. When tech stumbles, healthcare’s stability becomes a premium rather than an afterthought.

Biotech, in particular, benefits from capital rotation when investors seek uncorrelated growth.

Financials stand to gain as well. A correction in mega‑cap tech would rebalance passive flows, giving banks and insurers a larger share of index‑tracking capital. Higher rates and wider spreads already support the sector; a shift away from tech simply amplifies the effect.

Finally, Consumer Staples would reassert themselves. In a market recalibrating after an AI disappointment, investors gravitate towards predictable earnings. Food, beverages and household goods regain their defensive premium as volatility rises.

The broader truth is simple: if the Magnificent Seven falter, the S&P 500 does not collapse — it redistributes. Leadership moves from code to concrete, from speculative multiples to operational reality. The market has always found new champions. It will again.

OpenAI Misses Targets and Creates a Mini AI Shockwave. Will It Become a Tsunami?

OpenAI wobble?

OpenAI’s reported failure to meet internal revenue and user‑growth targets has sent a sharp tremor through global tech markets, exposing just how dependent the wider AI sector has become on a single company’s momentum.

The Wall Street Journal report — which OpenAI has reportedly dismissed as “ridiculous” — suggested the firm is expanding more slowly than its own projections, raising questions about whether its vast compute‑spend commitments can be sustained. That alone was enough to trigger a sell‑off.

Slide

The steepest declines were concentrated among companies most financially tethered to OpenAI’s infrastructure demands. Oracle, which has a colossal $300 billion, five‑year cloud capacity agreement with the firm, fell more than 4%.

After the story broke, chipmakers exposed to OpenAI followed: Broadcom dropped over 4%, AMD slid more than 3%, Nvidia dipped around 1.5%, and CoreWeave — the highly leveraged neocloud provider — sank nearly 6%.

Even Qualcomm, which had recently enjoyed a lift from reports of collaboration with OpenAI on smartphone chips, slipped before recovering.

This is the first moment in the current AI cycle where a wobble at OpenAI has produced a synchronised pullback across the entire supply chain.

Investors are now confronting a question they have largely ignored: what if the sector’s flagship growth curve is not perfectly exponential? My guess is that, like most events at the moment, the market will shrug it off.

Fragile

The reaction also exposes the fragility of AI‑linked valuations. Markets have priced the boom as if demand is both infinite and linear.

Any hint of deceleration — even one disputed by the company — forces a reassessment of the capital intensity underpinning the industry.

With Anthropic and Google’s Gemini gaining enterprise traction, OpenAI’s dominance is no longer assumed.

Still, several fund managers argue the broader AI investment cycle remains intact. The sell‑off looks less like a turning point and more like a reminder: when one company becomes the gravitational centre of an entire narrative, even a rumour can bend the orbit.

Big Tech’s Talent Exodus Fuels a New Wave of AI Startups

Big Tech AI Exodus

A quiet but decisive shift is under way in the global AI race: some of the most accomplished researchers at Meta, Google, OpenAI and other frontier labs are walking out of the biggest companies in the sector to build their own.

Trend

The trend has accelerated sharply over the past year, with new ventures raising extraordinary sums within months of being founded, as investors bet that smaller teams can move faster than the giants they left behind.

The motivations are remarkably consistent. Researchers say that the commercial pressure inside the largest AI labs has narrowed the scope of what they are allowed to explore.

Rush

With Big Tech locked into a high‑stakes contest to release ever‑larger models on tight schedules, entire areas of research — from new architectures to interpretability and agentic systems — are being deprioritised.

That creates an opening for smaller firms that can pursue ideas too experimental or too slow‑burn for corporate roadmaps.

Investors

Investors have responded with enthusiasm. Former Google DeepMind scientist David Silver secured a record $1.1 billion seed round for his new company, Ineffable Intelligence, while other ex‑DeepMind and ex‑Meta researchers are raising similar sums for ventures focused on reinforcement learning, continuous‑learning systems and autonomous labs.

In total, AI startups founded since early 2025 have already attracted nearly $19 billion in funding this year, putting them on track to surpass last year’s total.

Independence

Founders argue that independence gives them both speed and neutrality. Chip‑design startup Ricursive Intelligence, for example, says customers are more willing to trust a standalone company than a Big Tech competitor with its own hardware ambitions.

Many of these startups are also rebuilding their old teams, hiring colleagues from the very companies they left.

The result is a new competitive dynamic: Big Tech still dominates the AI landscape, but the frontier of innovation is increasingly being pushed by smaller, highly focused labs that believe they can outpace the giants – and with lower investment too.

DeepSeek releases preview of Open Source V4 AI Model

DeepSeek V4 AI

DeepSeek’s newly released V4 model marks a significant step forward in open‑source AI, combining long‑context capability with major architectural upgrades.

DeepSeek V4 arrives as a preview release, offering two variants — V4‑Pro and V4‑Flash — both designed to push the boundaries of efficiency and reasoning performance.

The headline feature is the one‑million‑token context window, enabling the model to process and retain far larger bodies of information than previous generations.

Positioning

This positions V4 as a strong contender in tasks requiring extended reasoning, research support, and complex agentic workflows.

The V4 series introduces a refined Hybrid Attention Architecture, combining compressed sparse and heavily compressed attention mechanisms to dramatically reduce computational overhead.

DeepSeek claims this approach cuts inference FLOPs and KV‑cache requirements to a fraction of those seen in earlier models, making long‑context operation more practical and cost‑effective.
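To see why the KV‑cache is the binding constraint at a one‑million‑token context, here is a back‑of‑envelope sketch. The layer count, head count, head dimension and the 4x compression factor are illustrative assumptions for the arithmetic, not DeepSeek’s published V4 architecture:

```python
# Back-of-envelope KV-cache maths for long-context inference.
# All hyperparameters below are illustrative assumptions, not V4's real config.

def kv_cache_bytes(seq_len, layers, kv_heads, head_dim, bytes_per_value=2):
    # 2x for keys and values; fp16 storage = 2 bytes per value
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_value

full = kv_cache_bytes(seq_len=1_000_000, layers=60, kv_heads=8, head_dim=128)
compressed = full // 4  # a hypothetical 4x cache compression
print(f"{full / 1e9:.1f} GB uncompressed vs {compressed / 1e9:.1f} GB compressed")
```

Even with these modest assumptions, an uncompressed cache runs to hundreds of gigabytes per sequence, which is why aggressive cache compression is what makes million‑token operation economically practical at all.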

V4‑Pro, the flagship model, includes a maximum reasoning‑effort mode, which the company says significantly advances open‑source reasoning performance and narrows the gap with leading closed‑source systems.

Meanwhile, V4‑Flash offers a more economical, faster alternative while retaining strong capability across everyday tasks.

Accelerating AI ambition

The release underscores China’s accelerating AI ambitions. DeepSeek’s earlier R1 model shook global markets with its low‑cost, high‑performance profile, and V4 continues that trajectory — now optimised for domestic chips and supported by growing local hardware ecosystems.

With open‑source availability and aggressive efficiency gains, DeepSeek V4 strengthens the company’s position as one of the most closely watched challengers in the global AI race.

And it’s far cheaper than its peers, and not so power‑hungry either.

TSMC first-quarter profit rises 58%, beats estimates as AI demand holds steady

TSMC Profit Increase

TSMC’s 58% surge in first‑quarter profit is the clearest sign yet that the AI boom is no longer a cyclical uplift but a structural shift reshaping the entire semiconductor industry.

The Taiwanese chipmaker delivered record earnings, comfortably beating analyst expectations, as demand for advanced processors continued to outstrip supply.

Net income reportedly reached NT$572.48 billion, marking a fourth consecutive quarter of record profits, while revenue climbed to NT$1.134 trillion, driven overwhelmingly by high‑performance computing and AI‑related orders.

What stands out is the composition of that growth. Roughly three‑quarters of TSMC’s wafer revenue reportedly came from advanced nodes, with 3‑nanometre chips alone accounting for a quarter of shipments.

Nvidia

Nvidia has now overtaken Apple as TSMC’s largest customer, underscoring how AI accelerators have become the industry’s most valuable real estate.

TSMC’s executives described AI demand as “extremely robust”, with customers signalling multi‑year commitments rather than the usual stop‑start ordering cycle.

The company also moved to reassure investors over supply‑chain risks linked to the Middle East conflict, saying it has diversified sources for critical gases such as helium and hydrogen.

With capacity running hot and capital spending set to hit the top end of guidance, TSMC is positioning itself as the indispensable chipmaker in the AI era.

ASML raises 2026 guidance as AI chips demand remains strong

ASML guidance for 2026 raised

ASML’s decision to raise its 2026 guidance underlines a simple reality: demand for advanced AI chips is not easing, and the world’s most important semiconductor equipment maker remains at the centre of that surge.

The company signalled stronger-than-expected orders for its extreme ultraviolet (EUV) and next‑generation high‑NA systems, driven by chipmakers racing to expand capacity for AI accelerators, data‑centre processors and cutting‑edge logic nodes.

Bottleneck

The upgrade matters because ASML sits at the bottleneck of global chip production. Only a handful of firms can even buy its most advanced machines, and those firms – chiefly TSMC, Intel and Samsung – are all scaling up AI‑focused manufacturing.

Their capital expenditure plans have held firm despite broader economic uncertainty, suggesting that AI infrastructure is becoming a non‑discretionary investment rather than a cyclical one.

Two forces are driving the momentum. First, hyperscalers continue to pour billions into AI clusters, creating sustained demand for the most advanced lithography tools.

Long-term lock in

Second, geopolitical pressure to secure domestic chip capacity is pushing governments and manufacturers to lock in long‑term equipment orders.

ASML’s raised outlook reinforces the sense that the semiconductor cycle is diverging: consumer electronics remain patchy, but AI‑related manufacturing is entering a multi‑year expansion.

The key question now is whether supply can keep pace with the ambition of its customers.

TSMC’s 35% Revenue Surge Signals the New Centre of Gravity in Global Tech

TSMC revenue surges

Taiwan Semiconductor Manufacturing Company (TSMC) has delivered a striking 35% year‑on‑year jump in first‑quarter revenue, reaching a record NT$1.13 trillion.

The result underscores just how dramatically the centre of gravity in global technology has shifted towards advanced semiconductor manufacturing, with artificial intelligence now the defining force behind industry growth.

Relentless AI demand

TSMC’s performance is being powered by relentless demand for cutting‑edge chips from major clients such as Apple and Nvidia.

As AI infrastructure spending accelerates worldwide, the company has become one of the few manufacturers capable of producing the most sophisticated processors required for training and running large‑scale models.

March alone saw revenue climb more than 45%, highlighting the strength and urgency of this demand.

Ambition

Analysts suggest TSMC is on track to exceed its already ambitious 30% annual growth target, helped not only by volume but also by reported price increases for its most advanced nodes.

Even as smartphone and PC markets remain uneven, AI‑related orders are more than compensating.

With more companies—from hyperscalers to AI start‑ups—designing their own chips, TSMC’s strategic position looks increasingly unassailable.

Upcoming earnings and ASML’s results next week will offer further clues about the momentum behind the semiconductor sector’s AI‑driven boom.

Meta unveils new AI model as it plays AI catch-up

Meta's Muse Spark Agentic AI

Meta has unveiled Muse Spark, its first major artificial intelligence model since the company overhauled its AI strategy in response to the underwhelming reception of its previous Llama 4 models.

Developed by the newly formed Meta Superintelligence Labs under the leadership of Alexandr Wang, Muse Spark represents a deliberate shift towards smaller, faster, and more capable systems designed to compete directly with Google, OpenAI, and Anthropic.

Foundation

Muse Spark is positioned as the foundation of a new family of models internally known as Avocado. Meta reportedly describes it as “small and fast by design”, yet able to reason through complex questions in science, maths, and health — a notable claim given the company’s recent struggles to keep pace with rivals.

Early evaluations suggest the model performs competitively in language and visual understanding, though it still trails in coding and abstract reasoning.

Crucially, Muse Spark is deeply integrated into Meta’s ecosystem. It already powers the Meta AI app and website and will soon replace Llama across WhatsApp, Instagram, Facebook, Messenger, and Meta’s smart glasses.

Integrated

This rollout signals Meta’s intention to embed AI more tightly into everyday user interactions, from search and recommendations to multimodal tasks such as analysing photos or comparing products.

The company is also experimenting with new revenue streams by offering a private API preview to select partners — a departure from its previous open‑source approach.

Whether this shift will alienate developers who embraced the openness of Llama remains to be seen.

Meta frames Muse Spark as an early step toward “personal superintelligence”, an assistant that can understand the world alongside the user rather than waiting for typed instructions.

It’s an ambitious vision — and one that will be tested as the model expands globally and faces scrutiny over privacy, safety, and real‑world performance.

Oracle Cuts Deep as AI Pivot Forces a Reckoning

Oracle's AI Axe

Oracle is swinging hard at its own workforce as the company races to reposition itself as an AI‑infrastructure contender.

Thousands of roles are being eliminated, a drastic move that reflects the sheer financial pressure of trying to keep up with hyperscale rivals in the most capital‑intensive tech shift in decades.

The company’s share price has slumped 25% this year, with investors increasingly uneasy about soaring data‑centre spending and the heavy debt required to fund it.

Oracle has already raised $50 billion to bankroll new GPU‑ready facilities, but unlike Amazon or Microsoft, it lacks the cushion of vast cloud scale.

The result: a balance sheet under strain and a leadership team forced into tough decisions.

Future

Oracle’s remaining performance obligations have ballooned to more than half a trillion dollars, fuelled by major AI partnerships including a huge deal with OpenAI.

But those future revenues don’t solve today’s cash‑flow squeeze. Analysts estimate that cutting 20,000 to 30,000 jobs could free up as much as $10 billion — enough to keep the AI build‑out moving without further rattling the markets.

Oracle is betting that a leaner organisation now will buy it the runway to compete later. The question is whether the cuts arrive in time to match the speed of the AI race.

The stock rose on the news.

Arm’s Bold Pivot: The AGI CPU Signals a New Era for British Chipmaking

ARM Agentic AI CPU

ARM has triggered one of the most dramatic shifts in its 35‑year history with the launch of its first in‑house data‑centre processor, the AGI CPU — a move that sent its shares surging 16% and reshaped expectations for the company’s future.

Long known for licensing energy‑efficient chip designs to the world’s biggest tech firms, ARM is now stepping directly into the silicon market, competing with the very customers that built its empire.

Major Tech Firms Using Arm Designs (AI & Mobile)

| Company | Primary Use Case | Arm-Based Technology |
| --- | --- | --- |
| Apple | Mobile & on‑device AI | A‑series (iPhone/iPad) and M‑series (Mac) chips |
| Samsung | Mobile, AI, IoT | Exynos processors |
| Qualcomm | Mobile & automotive AI | Snapdragon SoCs |
| Google | Android ecosystem & edge AI | Pixel phones (Arm cores inside Tensor chips) |
| Amazon (AWS) | Cloud compute & AI inference | Graviton & Trainium/Inferentia (Arm Neoverse) |
| Meta | AI infrastructure | Deploying Arm-based AGI CPU |
| OpenAI | AI inference & orchestration | Early adopter of Arm AGI CPU |
| Nvidia | AI data‑centre CPUs | Grace CPU (Arm architecture) |
| OPPO | Mobile AI | Arm-based SoCs in Find series |
| vivo | Mobile AI | Arm-based SoCs in X‑series |

Strong demand

The new AGI CPU is engineered for the rapidly expanding world of AI inference and agentic AI — workloads that demand vast CPU coordination rather than pure GPU horsepower.

Early demand appears strong. Meta has signed on as the first major customer, with OpenAI, Cloudflare and SAP also adopting the chip as they race to expand their AI infrastructure.

The financial implications are striking. ARM expects the AGI CPU alone to generate $15 billion in annual revenue by 2031, a figure that dwarfs the company’s 2025 revenue of $4 billion.
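As a rough sanity check on that projection (my own arithmetic, not ARM guidance), the implied growth rate can be worked out directly: even if the $15 billion were compared against total 2025 revenue of $4 billion, the implied compound annual growth rate over six years is striking.

```python
# Implied growth check: what annual rate takes $4bn (2025) to $15bn (2031)?
# Figures are the article's reported numbers; the comparison is illustrative.

def cagr(start, end, years):
    """Compound annual growth rate between two values over a number of years."""
    return (end / start) ** (1 / years) - 1

print(f"Implied growth: {cagr(4, 15, 6) * 100:.0f}% a year")  # roughly 25% a year
```

Sustained ~25% annual growth for six years would be exceptional for any semiconductor business, which is why analysts call the projection more optimistic than market estimates.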

Significant shift

Analysts have described the announcement as the most significant strategic shift ARM has ever undertaken, noting that the revenue projections exceed even the most optimistic market estimates.

By moving into full chip production, ARM is broadening its market to include companies that previously had no interest in its traditional IP‑licensing model.

Executives say the chip will be competitively priced, offering an alternative for firms unable to build their own custom silicon.

For the UK, the launch marks a rare moment of industrial ambition in a sector dominated by American and Asian giants.

If ARM’s forecasts hold, the AGI CPU could become one of the most commercially successful chips ever produced by a British company — and a defining pillar of the AI age.

See more here about the new ARM AGI CPU

The Future of Agentic AI – Tools for Automation

Agentic AI

Agentic AI is rapidly shifting from a speculative idea to a practical force reshaping how work gets done.

Unlike traditional AI systems, which wait passively for instructions, agentic AI can plan, act, and adapt within defined boundaries.

It is not simply a smarter chatbot; it is a system capable of taking initiative, coordinating tasks, and pursuing goals on behalf of its user.

This evolution marks a profound turning point in how we think about automation, creativity, and human–machine collaboration.

Agentic AI colleagues

The first major change is the move from reaction to autonomy. Today’s AI assistants excel at answering questions or generating content, but they still rely on constant prompting.

Agentic AI, by contrast, can break down a complex objective into smaller steps, choose the best tools for each stage, and execute them with minimal oversight. This transforms AI from a passive helper into an active collaborator.

For individuals and small teams, it promises a level of operational leverage previously reserved for large organisations with dedicated staff.
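That plan‑and‑execute loop can be sketched in a few lines. The planner and tools below are hard‑coded stand‑ins for what a real agent would delegate to a model; the point is the shape of the loop, not the implementation:

```python
# Minimal sketch of a plan -> act -> adapt agent loop.
# No real LLM calls: planner and tools are illustrative stand-ins.

def plan(objective):
    """A real agent would ask a model to decompose the objective into steps."""
    return [("research", objective), ("draft", objective), ("review", objective)]

TOOLS = {
    "research": lambda goal: f"notes on {goal}",
    "draft":    lambda goal: f"draft about {goal}",
    "review":   lambda goal: f"reviewed output for {goal}",
}

def run_agent(objective):
    results = []
    for tool_name, goal in plan(objective):
        # Choose the tool for this stage and execute it with minimal oversight
        results.append(TOOLS[tool_name](goal))
    return results

print(run_agent("quarterly market report")[-1])
```

In a production system, the plan would be generated and revised by the model itself, each tool would be a real capability (search, file access, an API), and guardrails would gate which tools the agent may invoke.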

A second shift lies in the emergence of multi‑modal competence. Agentic systems will not be confined to text. They will navigate interfaces, analyse documents, draft communications, and even orchestrate workflows across multiple platforms.

In effect, they will behave more like digital colleagues—capable of understanding context, maintaining continuity, and adapting to changing priorities. The result is a new category of labour: cognitive automation that complements rather than replaces human judgement.

However, the rise of agentic AI also raises important questions. Autonomy introduces risk. If an AI can take action, it must do so safely, transparently, and within clear constraints.

On guard

Guardrails will be essential—not only technical safeguards, but also cultural norms around delegation, accountability, and trust. The future will require a balance between empowering AI to act and ensuring humans remain firmly in control of outcomes.

Another challenge is the shifting nature of expertise. As agentic AI handles more administrative and procedural work, human value will increasingly lie in strategic thinking, creativity, and ethical decision‑making.

This is not a loss but a rebalancing. Freed from routine tasks, people can focus on higher‑order work that genuinely benefits from human insight.

The organisations that thrive will be those that treat AI not as a shortcut, but as a catalyst for deeper, more meaningful contribution.

Future use of agents

Looking ahead, the most exciting aspect of agentic AI is its potential to democratise capability. A single individual could run a publication, a business, or a research project with the operational efficiency of a small team.

Barriers to entry will fall. Innovation will accelerate. And the line between “solo creator” and “organisation” will blur.

Agentic AI is not the end of human agency; it is an extension of it. The future belongs to those who learn to work with these systems—setting direction, providing judgement, and letting AI handle some of the heavy lifting.

Far from replacing us, agentic AI may finally give us the space to think, create, and lead with clarity.

OpenClaw: The Fastest‑Growing AI Agent Is Reshaping Tech, Security, and Global Adoption

OpenClaw AI agents

OpenClaw has rapidly become one of the most influential developments in artificial intelligence, evolving from a small open‑source experiment into a global phenomenon reshaping how people interact with computers.

Launched in January 2026, the platform allows users to run autonomous AI agents locally on their own machines, giving them the power to organise files, write code, browse the web, and automate everyday digital tasks without relying on cloud services.

This local‑first design has been central to its explosive growth — and to the concerns now emerging around it.

One of the most striking cultural shifts has taken place in China, where OpenClaw has become a mainstream sensation.

AI Lobsters

Users refer to their agents as “AI lobsters,” a playful nod to the platform’s crustacean branding. Retirees, students, and professionals alike have begun “raising” these lobsters to help manage knowledge, streamline work, and perform practical tasks that traditional chatbots struggle with.

The trend has grown so quickly that crowds have gathered outside major tech offices in Beijing to install the software together, turning OpenClaw into a genuine grassroots movement.

This surge in popularity has also caught the attention of global markets. Chinese AI‑related stocks have risen sharply following comments from Nvidia CEO Jensen Huang, who described OpenClaw as “the next ChatGPT,” signalling its potential to redefine the agentic AI landscape.

Security

Companies building self‑evolving agents and cloud infrastructure around OpenClaw have seen double‑digit gains as investors position themselves for what appears to be the next major AI wave.

Yet OpenClaw’s power has also raised red flags. Because the agent runs locally and can control a user’s computer, enterprise IT teams have struggled to manage the security implications.

The platform’s ability to act autonomously — reading files, sending messages, and interacting with applications — has created a need for stronger guardrails, especially in corporate environments.

Nvidia’s NemoClaw

Nvidia has stepped in with NemoClaw, a new enterprise‑grade stack that adds privacy controls, security infrastructure, and vetted local models to OpenClaw through a single‑command installation.

The goal is to make autonomous agents more trustworthy and scalable without undermining the open‑source ethos that made OpenClaw successful.

OpenClaw’s own development continues at pace. The latest stable release, v2026.3.13, includes fixes for session handling, improved browser‑control mechanisms, and a shift away from legacy Chrome extensions towards direct attachment to existing browser sessions — a move designed to make agent operations safer and more reliable.

The future

In just a few months, OpenClaw has transformed from a niche project into a global force, driving cultural trends, market movements, and enterprise innovation.

Its trajectory suggests that autonomous, locally run agents may soon become a standard part of everyday computing — and the race to shape that future has only just begun.

Pentagon CTO warns Claude could ‘pollute’ defence supply chain

Anthropic and the U.S. military

The Pentagon’s Chief Technology Officer, Emil Michael, has apparently ignited a fresh debate over the role of commercial artificial intelligence in national security, arguing that Anthropic’s Claude models could “pollute” the U.S. defence supply chain.

His comments, made in an interview with CNBC, offer the clearest rationale yet for the Department of Defense’s decision to designate Anthropic as a supply chain risk — an extraordinary step previously reserved for foreign adversaries.

His view, it seems, is that Claude’s “policy preferences”, embedded through Anthropic’s constitutional training approach, create an unacceptable misalignment with the Pentagon’s operational needs.

Risk

It was reported that any AI system whose underlying values diverge from defence priorities risks producing ineffective outputs, whether in decision‑support tools, equipment design, or battlefield logistics.

“We can’t have a company that has a different policy preference baked into the model… pollute the supply chain so our warfighters are getting ineffective weapons [and] ineffective protection,” he was reported to have said.

Anthropic has responded forcefully, suing the Trump administration and calling the designation “unprecedented and unlawful”.

The company argues that the move jeopardises hundreds of millions of dollars in contracts and mischaracterises the nature of its technology.

Claude in the ecosystem?

It also notes that Claude continues to be used within parts of the U.S. military ecosystem, including by major defence contractors such as Palantir, underscoring the practical difficulty of an immediate transition away from its models.

Michael insists the decision is not punitive and emphasises that only a small fraction of Anthropic’s business comes from government work.

Nonetheless, the designation forces contractors to certify they are not using Claude in Pentagon‑related projects, setting up a potentially lengthy and politically charged dispute over how value‑aligned AI must be before it is allowed anywhere near defence infrastructure.

The episode highlights a broader tension: as AI systems become more opinionated by design, governments are increasingly asking whether “alignment” is a technical question — or a geopolitical one.

Anthropic reportedly chats to the Pentagon again

AI and defence use

Anthropic’s decision to reopen negotiations with the Pentagon marks a striking reversal after a very public rupture, and it underscores how central advanced AI has become to U.S. defence strategy.

The talks reportedly collapsed amid a dispute over how Claude, Anthropic’s flagship model, could be used inside military systems.

Reports indicate that the Pentagon had pushed for broad permissions, including deployment in surveillance environments and potentially autonomous weapons systems.

Safety resistance

Anthropic resisted on safety grounds. The company had sought explicit guarantees that its models would not be used for mass surveillance or lethal decision‑making, a red line that triggered the breakdown in relations.

The fallout was immediate. The Pentagon signalled it would drop Anthropic from existing programmes, despite the company’s role in a major defence contract that had already placed Claude inside classified networks.

That escalation raised the prospect of a formal blacklist, a move that would have reverberated across the wider U.S. technology sector.

For Anthropic, the stakes were equally high: losing access to government work would not only cut off a significant customer but also risk isolating the company at a moment when rivals such as OpenAI and Google are deepening their defence ties.

Compromise?

Yet both sides appear to recognise the cost of a prolonged standoff. According to multiple reports, CEO Dario Amodei has returned to the table in an effort to craft a compromise that preserves Anthropic’s safety commitments while allowing the Pentagon to continue using its technology.

Boundaries

Discussions are now likely focused on defining acceptable boundaries for military use — a task made more urgent by the accelerating integration of AI into intelligence analysis, battlefield logistics and autonomous systems.

This renewed dialogue is more than a corporate dispute: it is a test case for how democratic governments and frontier AI labs negotiate power, ethics and national security.

The outcome will shape not only Anthropic’s future but also the norms governing military AI in the years ahead.

OpenAI Moves Swiftly to Fill Federal AI Vacuum

Anthropic and OpenAI AI systems

Following the abrupt federal ban on Anthropic’s Claude models, OpenAI has moved quickly to position itself as the primary replacement across U.S. government departments.

With Claude now designated a supply‑chain risk, agencies are likely scrambling to reconfigure AI workflows — and OpenAI’s systems appear to be emerging as the default alternative.

Integration

The company’s flagship GPT‑4.5 and its agentic development tools have reportedly already been integrated into several defence and civilian systems.

OpenAI’s reported longstanding compatibility with government‑approved platforms, including Azure and OpenRouter, has smoothed the transition. Unlike Anthropic, OpenAI has historically offered more flexible deployment options.

Industry analysts note that OpenAI’s recent hires — including agentic systems pioneer Peter Steinberger (OpenClaw) — signal a deeper push into autonomous task execution, a capability highly prized by defence and intelligence agencies.

The company’s agent frameworks are being trialled for logistics, simulation, and multilingual analysis, with early results described as “mission‑ready.”

Friction

However, the shift is not without friction. It has been reported that some federal teams have built Claude‑specific workflows, particularly in legal, policy, and ethics‑driven domains where Anthropic’s safety constraints were seen as a feature, not a limitation.

Replacing those systems with GPT‑based models requires careful recalibration to avoid unintended consequences.

OpenAI’s rise also raises broader questions about vendor concentration. With Anthropic sidelined and Google’s Gemini models still undergoing federal evaluation, OpenAI now dominates the landscape — a position that may invite scrutiny from oversight bodies concerned about resilience and competition.

Still, for now, OpenAI appears to be the primary beneficiary of the Claude ban, moving quickly to fill the vacuum Anthropic has left behind.

OpenAI vs Anthropic: Safety vs Autonomy in Federal AI

OpenAI’s agentic tools are likely filling the vacuum left by Anthropic’s ban, offering flexible deployment and autonomous task execution prized by defence and intelligence agencies.

While Claude prioritised safety constraints and ethical guardrails, OpenAI’s GPT‑based systems are seen as offering broader operational freedom.

This shift reflects a deeper philosophical divide: Anthropic’s models were designed to resist misuse, while OpenAI’s are engineered for adaptability and control.

As federal agencies recalibrate, the tension between safety‑first design and unrestricted autonomy is becoming the defining fault line in U.S. government AI strategy.

How long will it be before Anthropic is invited back to the table?

IBM Shares Slide as AI Threatens Its Legacy Stronghold

AI and IBM

When artificial intelligence first ignited investor enthusiasm, it lifted almost every major technology stock.

The narrative was simple: AI would transform industries, boost productivity and unlock vast new revenue streams.

Yet as the cycle matures, markets are becoming more selective. In recent weeks, shares of IBM have drifted lower, illustrating how the ‘AI effect’ can cut both ways.

At first glance, IBM should be a prime beneficiary. The company has spent years repositioning itself around hybrid cloud infrastructure, data analytics and enterprise AI solutions.

Its Watson platform has been refreshed with generative AI tools designed to automate customer service, streamline software development and enhance business decision-making. Management has repeatedly emphasised AI as a core growth engine.

Market Expectations

However, the market’s expectations have shifted. Investors are increasingly rewarding companies that sit at the very heart of AI infrastructure — those supplying advanced semiconductors, high-performance computing capacity and hyperscale cloud services.

These businesses are reporting visible surges in AI-related demand, often accompanied by sharp revenue acceleration and expanding margins.

By contrast, IBM’s AI exposure is embedded within broader consulting and software operations, making its growth trajectory appear steadier rather than explosive.

This distinction matters in a momentum-driven environment. When earnings updates fail to deliver dramatic upside surprises, shares can quickly lose favour.

Less AI Effect

IBM’s results have shown progress in software and recurring revenue, but they have not reflected the kind of dramatic AI-driven uplift seen elsewhere in the sector. For some investors, that raises questions about competitive positioning and pricing power.

There is also a perception issue. Despite its reinvention efforts, IBM still carries the legacy image of a mature technology conglomerate rather than a cutting-edge AI disruptor.

In a market captivated by bold innovation stories, narrative can influence valuation just as much as fundamentals.

If capital flows concentrate in a handful of high-growth AI names, diversified players may struggle to keep pace in share price performance.

AI Tension

Yet the sell-off may also highlight a deeper tension within the AI theme. Enterprise adoption of AI tools tends to be gradual, cautious and closely tied to measurable productivity gains.

IBM’s strategy is built around long-term integration rather than short-term hype. While that approach may lack immediate fireworks, it could prove more durable as corporate clients prioritise reliability, governance and cost control.

For now, though, the AI effect is amplifying investor discrimination. In a market eager for rapid transformation, IBM’s more measured path has translated into weaker share performance — a reminder that not all AI exposure is valued equally.

Further discussion

IBM has found itself on the wrong side of the artificial intelligence boom, with its shares tumbling more than 13% after Anthropic unveiled a new capability that directly targets one of the company’s most enduring revenue pillars: COBOL modernisation.

The sell‑off reflects a broader market anxiety that AI is beginning to erode long‑protected niches in enterprise technology, and IBM has become the latest high‑profile casualty.

For decades, IBM has been synonymous with mainframe computing and the maintenance of vast COBOL‑based systems that underpin global finance, government services, airlines, and retail transactions.

These systems are notoriously complex, expensive to update, and dependent on a shrinking pool of specialist developers.

Premium Brand

That scarcity has long worked in IBM’s favour, allowing it to charge a premium for modernisation and support.

Anthropic’s announcement threatens to upend that equation. Its Claude Code tool, the company claims, can automate the most time‑consuming and costly parts of understanding and restructuring legacy COBOL environments.

Tasks that once required teams of analysts months to complete—mapping dependencies, documenting workflows, identifying risks—can now be accelerated dramatically through AI‑driven analysis.

The implication is clear: modernising legacy systems may no longer require the same level of human expertise, nor the same level of spending.

Investors reacted swiftly. IBM’s share price fell to $223.35, extending a year‑to‑date decline of more than 24%, before recovering to $229.39.

IBM one-year chart as of 24th February 2026

The drop reflects not only concerns about lost revenue, but also the fear that IBM’s competitive moat—built on decades of institutional reliance on COBOL—may be eroding faster than expected.

The timing has amplified market jitters. Only days earlier, cybersecurity stocks were hit by another Anthropic announcement: Claude Code Security, a feature designed to scan codebases for vulnerabilities.

AI Mood Logic

The rapid expansion of AI into specialised technical domains has created a ‘sell first, ask questions later’ mood across the market, with investors increasingly wary of companies whose business models depend on labour‑intensive or legacy‑bound processes.

For IBM, the challenge now is to demonstrate that it can harness AI rather than be displaced by it.

The company has invested heavily in its own AI initiatives, but the latest market reaction suggests investors are unconvinced that these efforts will offset the threat to its traditional strongholds.

The AI revolution is reshaping the technology landscape at speed. IBM’s sharp decline is a reminder that even the industry’s oldest giants are not insulated from disruption—and that the next wave of AI competition may hit the most established players hardest.

But remember, this is IBM we are talking about.

Explainer

What is COBOL?

COBOL is an old but remarkably durable programming language created in the late 1950s to run business, finance, and government systems, and it’s still powering much of the world’s banking and administrative infrastructure today.

It was designed to read almost like plain English, making it easier for non‑technical managers to understand, and its stability means many core systems have never been replaced.

OpenClaw Creator Peter Steinberger Joins OpenAI as Agent Race Accelerates

OpenAI and OpenClaw link up

OpenAI has made a decisive move in the fast‑evolving world of autonomous AI agents by hiring Peter Steinberger, the Austrian developer behind the viral open‑source project OpenClaw.

The announcement, made by CEO Sam Altman, signals a strategic push towards building more capable personal AI agents designed to complete more meaningful tasks for their users.

Steinberger’s creation, OpenClaw—previously known as Clawdbot and Moltbot—rose to prominence for its ability to automate real digital tasks.

Rapid Adoption

Its rapid adoption highlighted a growing appetite for AI systems that move beyond conversation and into practical execution.

Altman reportedly described Steinberger as ‘a genius with a lot of amazing ideas about the future’. He also emphasised that agentic systems will soon become central to OpenAI’s product ecosystem.

Crucially, it was reported that OpenClaw will not be absorbed into a closed platform. Instead, it will continue as an open‑source project under an independent foundation, with OpenAI providing support.

This approach preserves the community‑driven development model that helped the tool gain traction, while allowing Steinberger to focus on advancing agent capabilities within OpenAI’s broader framework.

Steinberger

In a blog post, Steinberger reportedly explained that although OpenClaw could have grown into a large standalone company, he was more motivated by the opportunity to ‘change the world’ than by building another corporate venture.

His move comes amid intensifying competition in the agent space. Major tech firms are racing to define the next generation of AI assistants capable of coordinating complex tasks across multiple platforms.

OpenAI’s decision to bring Steinberger onboard underscores the company’s belief that autonomous agents will shape the next phase of AI adoption.

With OpenClaw remaining open and Steinberger now leading internal development, the stage is set for rapid innovation in personal AI systems.

Alibaba’s Qwen 3.5 Marks a Strategic Shift Toward AI Agents

Qwen 3.5 AI agent

Alibaba has unveiled Qwen 3.5, its latest large language model series, signalling a decisive shift in China’s increasingly competitive AI landscape.

Released on the eve of the Chinese New Year, the new model arrives with both open‑weight and hosted versions, giving developers the option to run the system on their own infrastructure or through Alibaba’s cloud platform.

The company emphasises that Qwen 3.5 delivers improved performance and lower operating costs compared with earlier iterations, while introducing ‘native multimodal capabilities’ that allow it to process text, images, and video within a single system.

Ability

What sets Qwen 3.5 apart is its focus on agentic behaviour — the ability for AI systems to take actions, complete multi‑step tasks, and operate with minimal human supervision.

This trend has accelerated globally following recent releases from Anthropic and other U.S.‑based developers, prompting Chinese firms to respond rapidly.

Alibaba says Qwen 3.5 is compatible with popular open‑source agent frameworks such as OpenClaw, which has surged in adoption among developers seeking more autonomous AI tools.

Capable

The open‑weight version features 397 billion parameters, fewer than Alibaba’s previous flagship model, yet the company claims significant gains in reasoning and benchmark performance.

It also supports 201 languages and dialects — a notable expansion that reflects Alibaba’s ambition to position Qwen as a global‑ready platform rather than a purely domestic competitor.

With rivals like ByteDance and Zhipu AI launching their own upgraded models, Qwen 3.5 underscores how China’s AI race is evolving from chatbot development to full‑scale autonomous agents — a shift that could reshape software markets and business models worldwide.

China’s AI Tech Surge Puts Pressure on America’s AI Dominance

Robots line up for AI battle

For much of the modern AI era, the United States has held a clear advantage in frontier research, compute infrastructure, and commercial deployment.

Silicon Valley’s combination of elite talent, abundant capital, and world‑class semiconductor design created an environment where breakthroughs could scale at extraordinary speed.

Challenge

That dominance, however, is no longer uncontested. China’s accelerating push into advanced AI is reshaping the global technological landscape and posing the most credible challenge yet to America’s leadership.

China’s strategy is not built on a single breakthrough but on coordinated national effort. Beijing has spent years aligning universities, state‑backed funds, and private‑sector giants around a shared objective: achieving self‑sufficiency in critical technologies and becoming a global AI powerhouse.

Competitive

Companies such as Huawei, Baidu, Alibaba and Tencent are now producing increasingly competitive large models, while domestic chipmakers are narrowing the performance gap with U.S. suppliers despite export controls.

Crucially, China’s AI ecosystem benefits from scale and cost advantages that the U.S. cannot easily replicate.

Massive data availability, lower energy costs, and vertically integrated supply chains allow Chinese firms to train and deploy models at prices that appeal to developing economies.

For many countries, especially those already reliant on Chinese infrastructure, adopting a Chinese AI stack is becoming a pragmatic economic choice rather than a geopolitical statement.

Investment returns?

This shift is occurring just as U.S. tech giants embark on unprecedented spending cycles. Hyperscalers are pouring hundreds of billions of dollars into data centres, specialised chips, and model training.


While this investment underscores America’s determination to stay ahead, it also raises questions about sustainability.

Investors are increasingly asking whether such vast capital expenditure can deliver long‑term returns in a world where China is offering cheaper, rapidly improving alternatives.

The emerging reality is not one of immediate American decline but of a genuinely multipolar AI landscape. The U.S. still leads in foundational research, top‑tier talent, and cutting‑edge semiconductor design.

Yet China’s rise represents a powerful economy that has mounted a serious challenge to the technological frontier.

The global AI race is no longer defined by a single centre of gravity. Instead, two competing ecosystems — one market‑driven, one reportedly state‑directed — are shaping the future of intelligent technology.

The outcome will influence not only economic power but the digital architecture of much of the world.

Can Hyperscalers Really Justify Their Colossal AI Capex?

Hyperscalers AI investment

The world’s largest cloud providers are engaged in one of the most expensive technological races in history.

Amazon, Microsoft, Meta and Alphabet are collectively on track to spend as much as $700 billion on AI‑related capital expenditure this year — a figure that rivals the GDP of mid‑sized nations and has understandably rattled investors.

The question now dominating markets is simple: can hyperscalers justify this level of spending, and should analysts remain so bullish on their stocks?

A Binary Bet on the Future of AI

The scale of investment has shifted the AI build‑out from a strategic growth initiative to what some analysts describe as a binary corporate bet. On that reading, the leap in capex — up roughly 60% year‑on‑year — means the payoff must be both rapid and substantial.

If monetisation fails to keep pace, the consequences could be severe.

This is compounded by the fact that hyperscalers are now consuming nearly all of their operating cash flow to fund AI infrastructure, compared with a decade‑long average of around 40%. That shift alone explains the recent market jitters.

Why Analysts Remain Upbeat

Despite the turbulence, many analysts still argue the long‑term fundamentals remain intact. One reason is that hyperscalers are pre‑selling data‑centre capacity before it is even built, effectively locking in revenue ahead of deployment.

That dynamic supports the bullish view that AI demand is not only real but accelerating.

There is also a belief that as AI tools become embedded across consumer and enterprise workflows, willingness to pay will rise sharply.

If that scenario plays out, today’s eye‑watering capex could look prescient rather than reckless.

The Real Risk: Timelines

The challenge is timing. Much of the infrastructure being deployed — from chips to data‑centre hardware — has a useful life of just three to five years.

That gives hyperscalers a narrow window to recoup investment before the next upgrade cycle hits.
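The arithmetic behind that window can be sketched with a standard annuity calculation. The figures below are illustrative assumptions, not company guidance: the ~$700bn collective capex is as reported above, while the 10% cost of capital is invented for the sketch.

```python
# Illustrative only: how much incremental annual cash flow would be
# needed to recoup AI capex within the hardware's useful life?
# The $700bn figure is the reported collective capex; the 10% cost
# of capital is an assumption for this sketch.

def required_annual_return(capex: float, useful_life_years: int,
                           discount_rate: float) -> float:
    """Annuity payment that repays `capex` over `useful_life_years`
    at `discount_rate` (standard annuity formula)."""
    r, n = discount_rate, useful_life_years
    return capex * r / (1 - (1 + r) ** -n)

capex = 700e9  # reported collective spend this year
for life in (3, 5):
    annual = required_annual_return(capex, life, 0.10)
    print(f"{life}-year useful life: ~${annual / 1e9:.0f}bn of "
          f"incremental annual cash flow needed")
```

On these assumptions, shortening the useful life from five years to three raises the required annual payoff by roughly half, which is why depreciation assumptions matter so much to the bull case.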

Without clearer monetisation strategies and firmer payback timelines, investor anxiety is likely to persist.

AI capex justification?

Hyperscalers can justify their AI capex — but only if demand scales as quickly as they expect and monetisation becomes more transparent.

Analysts may be right to stay bullish, but the margin for error is shrinking. In the coming quarters, clarity will matter as much as capital.

Baidu brings OpenClaw AI to its search app, unlocking new tools for 700 million users

Baidu and OpenClaw link up

Baidu has begun integrating the fast‑rising AI agent OpenClaw directly into its flagship search app, opening the door for 700 million monthly users to access advanced task‑automation tools just ahead of China’s Lunar New Year holiday.

The move marks one of the company’s most significant consumer‑facing upgrades in years, as competition intensifies among Chinese tech giants racing to commercialise AI at scale.

Until now, OpenClaw — an Austrian‑developed, open‑source agent — was primarily accessed through chat platforms such as WhatsApp and Telegram.

Baidu rollout

Baidu’s rollout means users who opt in will be able to message the agent within the search app to handle everyday digital tasks, from scheduling and file organisation to writing code.

The company is also extending OpenClaw’s capabilities across its wider ecosystem, including e‑commerce and cloud services.

The timing is strategic. Lunar New Year is one of the most competitive periods for user acquisition in China’s internet sector, and Baidu’s rivals are also accelerating their AI deployments.

Alibaba, for example, has woven its Qwen chatbot into platforms such as Taobao and Fliggy, enabling end‑to‑end shopping journeys without leaving the app — a shift that has already generated more than 120 million consumer orders in a six‑day period this month.

Popularity surge

OpenClaw’s surge in popularity reflects a broader trend: AI agents are moving beyond conversational novelty and into practical automation, capable of navigating apps, managing email and performing multi‑step online tasks.

Yet the rapid adoption has also drawn warnings from cybersecurity firms, including CrowdStrike, about the risks of granting such agents deep access to enterprise systems.

For Baidu, the integration signals a clear intent to keep pace with global AI leaders while reinforcing its dominance in China’s search market.

For users, it marks the arrival of a more hands‑on, task‑driven AI era — one embedded directly into the tools they already rely on daily; for OpenClaw, it means instant access to millions of new users.

Alphabet’s 100‑Year Bond: Ambition, Appetite and Anxiety in the AI Debt Boom

Alphabet's 100-year Sterling Bond for pensions

Alphabet’s decision to issue a 100-year sterling bond has captured the attention of global markets, not only because of its rarity but also because of what it signals about the escalating competition in artificial intelligence.

100‑year sterling bond

A century-long bond denominated in pounds is an extraordinary financing move, particularly for a technology company.

It reflects both investor confidence in Alphabet’s long-term prospects and the scale of capital now required to compete in the AI era.

On the surface, the benefits are clear. Locking in funding for 100 years at today’s rates provides financial certainty. Alphabet can secure vast sums of capital without facing refinancing risk for generations.

In an industry defined by rapid change and enormous upfront costs — from data centres and semiconductor procurement to specialised AI chips and energy infrastructure — patient capital is invaluable.
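To make that certainty concrete, here is a minimal sketch of the coupon arithmetic. The £1 billion issue size is as reported later in this piece; the 5.75% coupon is a purely illustrative assumption, not the bond’s actual terms.

```python
# A minimal sketch of the coupon arithmetic on a 100-year bullet bond.
# The GBP 1bn issue size is as reported; the 5.75% coupon is purely an
# illustrative assumption, not the actual terms of the deal.

def annual_coupon(principal: float, rate: float) -> float:
    """Fixed coupon paid each year, locked for the life of the bond."""
    return principal * rate

def total_coupons(principal: float, rate: float, years: int) -> float:
    """Total nominal interest over the bond's life, with no refinancing."""
    return annual_coupon(principal, rate) * years

principal = 1_000_000_000  # GBP 1bn, as reported
rate = 0.0575              # assumed coupon for illustration

print(f"Fixed annual coupon: GBP {annual_coupon(principal, rate) / 1e6:.1f}m")
print(f"Nominal coupons over 100 years: "
      f"GBP {total_coupons(principal, rate, 100) / 1e9:.2f}bn")
```

The point of the sketch: whatever interest rates do over the next century, the issuer’s coupon is locked in, which is precisely the refinancing risk being removed.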

Sterling

The sterling denomination also diversifies Alphabet’s funding base beyond U.S. dollar markets, potentially appealing to European institutional investors seeking stable, long-duration assets.

The bond may also be interpreted as a strategic signal. By committing to long-term financing, Alphabet demonstrates confidence in its ability to generate cash flows well into the next century.

It reinforces the company’s image as a durable, infrastructure-like enterprise rather than a volatile technology stock.

For investors such as pension funds and insurers, a 100-year instrument from a highly rated issuer can offer predictable returns in a world where long-term yield is scarce.

Cyclical

However, the move is not without shortcomings. Committing to fixed debt obligations over such an extended horizon reduces flexibility. While Alphabet currently enjoys strong balance sheet metrics, the technology sector is notoriously cyclical.

A century is an eternity in innovation terms. Business models, regulatory frameworks and geopolitical dynamics may shift dramatically.

Future generations of management will inherit the obligation, regardless of whether today’s AI investments deliver the expected returns.

More broadly, the bond feeds concern about a debt-fuelled AI arms race. As technology giants pour tens of billions into AI research, chip design and cloud infrastructure, borrowing is becoming an increasingly prominent tool.

If rivals respond with similar long‑dated issuance, the sector’s leverage could rise meaningfully. In a downturn, or if AI monetisation disappoints, heavy debt burdens could amplify financial strain.

Ultimately, Alphabet’s 100-year sterling bond embodies both ambition and risk. It underlines the immense capital demands of the AI revolution while raising questions about whether today’s competitive fervour is encouraging companies to stretch their balance sheets too far in pursuit of technological dominance.

Systemic anxiety

The deeper anxiety is systemic. With Oracle, Amazon, Microsoft and others also scaling up borrowing, total tech‑sector issuance is projected to hit $3 trillion over five years.

Some analysts warn this resembles a late‑cycle credit boom, where investors chase thematic excitement rather than sober fundamentals.

Alphabet’s century bond may be a masterstroke of timing — or a marker of excess.

Either way, it crystallises the tension at the heart of the AI revolution: extraordinary promise, financed by extraordinary debt.

Why a Sterling Bond?

Alphabet issued its 100‑year sterling bond to tap deep UK demand for ultra‑long‑dated assets, especially from pension funds seeking to match long‑term liabilities.

The sterling market offered strong appetite, with orders reportedly reaching nearly ten times the £1 billion on offer.

It also formed part of Alphabet’s broader multi‑currency fundraising drive to finance massive AI‑related capital spending, including data‑centre expansion.

Issuing in sterling diversified its investor base, reduced reliance on U.S. dollar markets, and signalled confidence in its long‑term stability as a quasi‑infrastructure‑scale business.

It’s all debt, however you look at it!

Alibaba Steps Into ‘Physical AI’ With New Robotics Model

AI robotics model

China’s Alibaba has taken a decisive step into the fast‑emerging field of ‘physical AI’ with the launch of a new foundation model designed specifically to power real‑world robots.

The model, known as RynnBrain*, marks one of the company’s most ambitious moves since restructuring its cloud and research divisions, and signals China’s intention to compete directly with the United States in embodied artificial intelligence.

Unlike traditional large language models, which operate entirely in digital environments, RynnBrain is built to interpret and act within the physical world.

It combines vision, language and spatial reasoning, enabling robots to recognise objects, understand their surroundings and plan multi‑step actions.

DAMO Academy

In demonstrations released by Alibaba’s DAMO Academy, the model guided a robot through tasks such as identifying fruit and sorting it into containers — a deceptively simple exercise that requires sophisticated perception and motor control.

The company describes RynnBrain as a ‘general‑purpose embodied intelligence model’, capable of supporting a wide range of robotic applications, from warehouse automation to domestic assistance.

Crucially, Alibaba has opted to open‑source the model, a strategic decision that invites global developers to build on its capabilities and accelerates the creation of a broader ecosystem around Chinese robotics research.

Physical AI

The timing is significant. Over the past year, major technology firms including Google, Nvidia and OpenAI have begun to emphasise physical AI as the next frontier of artificial intelligence.

The shift reflects a growing belief that the most transformative applications of AI will not be confined to screens, but will instead involve machines that can navigate, manipulate and collaborate within human environments.

Alibaba’s entry adds competitive pressure to a field already heating up. While U.S. companies currently dominate embodied AI research, China has made robotics a national priority, viewing it as a strategic industry with implications for manufacturing, logistics and economic resilience.

RynnBrain

By releasing RynnBrain openly, Alibaba positions itself as both a contributor to global research and a catalyst for domestic innovation.

The launch also highlights a broader trend: the convergence of AI models with physical systems. As robots become more capable and more affordable, the line between software intelligence and mechanical action is beginning to blur.

RynnBrain is an early example of this shift — a model designed not just to understand language or images, but to translate that understanding into purposeful action.

Whether Alibaba’s approach will reshape the global robotics landscape remains to be seen, but the message is clear: the race to build the brains of future machines is accelerating, and China intends to be at the forefront.

Other Major Players in Physical AI

Physical AI — AI that can perceive, reason and act in the real world — has become the next strategic battleground for global tech giants. Alibaba is far from alone.

Several companies are racing to build the ‘general‑purpose robot brain’.

Below are the most significant players.

1. Google DeepMind

Focus: Embodied AI, robotics‑ready multimodal models

Key systems:

  • RT‑2 (Robotic Transformer)
  • Gemini‑based robotics extensions

Google has been working on robotics for over a decade. RT‑2 was one of the first models to show that a language model could directly control a robot arm, interpret objects, and perform multi‑step tasks.

DeepMind is now integrating robotics capabilities into the Gemini family.

2. OpenAI

Focus: General‑purpose embodied intelligence

Key systems:

  • OpenAI Robotics (revived internally)
  • Vision‑language‑action research

OpenAI paused robotics in 2020 but has quietly restarted the programme. Their models are being trained to understand video, track objects and perform physical tasks. They are also working with hardware partners to test embodied versions of their models.

3. Nvidia

Focus: The infrastructure layer for physical AI

Key systems:

  • Nvidia Isaac (robotics platform)
  • Cosmos models
  • Omniverse simulation

Nvidia is not building consumer robots; it is building the entire ecosystem for everyone else. Its simulation tools, training environments and robotics‑ready AI models are becoming the backbone of the industry.

4. Tesla

Focus: Humanoid robotics

Key system:

  • Optimus (Tesla Bot)

Tesla is training its robot using the same AI stack as its autonomous driving system. The company claims Optimus will eventually perform factory and household tasks.

It is one of the most visible attempts to build a general‑purpose humanoid robot.

5. Amazon

Focus: Warehouse automation and domestic robotics

Key systems:

  • Proteus (autonomous warehouse robot)
  • Astro (home robot)

Amazon is integrating multimodal AI into its logistics robots and experimenting with home assistants that can navigate physical spaces.

6. Figure AI

Focus: General‑purpose humanoid robots

Key system:

  • Figure 01

Backed by OpenAI, Microsoft and Nvidia, Figure is developing a humanoid robot designed to perform everyday tasks.

Their recent demos show robots manipulating objects and responding to natural language instructions.

7. Boston Dynamics

In partnership with Google DeepMind, Boston Dynamics is also building a ‘foundation model intelligence’ robot brain.

The Big Picture

Alibaba is entering a field dominated by U.S. companies, but the global race is wide open. Physical AI is becoming the next strategic platform — the equivalent of smartphones in the 2000s or cloud computing in the 2010s.

RynnBrain explained

RynnBrain is Alibaba’s open‑source ‘physical AI’ framework designed to give robots far more capable real‑world intelligence, enabling them to plan, navigate, and manipulate objects across dynamic environments such as factories and homes.

Developed by the company’s DAMO Academy, it competes directly with Google’s Gemini Robotics and Nvidia’s Cosmos‑Reason models, with Alibaba claiming stronger benchmark performance.

The system is released openly on platforms like GitHub and Hugging Face, offered in configurations from lightweight 2‑billion‑parameter models to advanced mixture‑of‑experts variants, and includes specialised versions—Plan, Nav, and CoP—targeting manipulation, navigation, and spatial reasoning respectively.

Its launch signals China’s ambition to lead global robotics and embodied AI development.

The Rise of OpenClaw and the New Era of AI Agents

Agent AI

A new generation of artificial intelligence is taking shape, and at its centre sits OpenClaw — a fast‑evolving framework that embodies the shift from monolithic AI models to agile, task‑driven agents.

While large language models once dominated the conversation, the momentum has clearly moved toward systems that can reason, plan, and act with far greater autonomy. OpenClaw is emerging as one of the most intriguing examples of this transition.

Appeal

OpenClaw’s appeal lies in its modular design. Instead of relying on a single, all‑purpose model, it orchestrates multiple specialised components that collaborate to complete complex workflows.

This mirrors how real teams operate: one agent may handle research, another may draft content, and a third may evaluate quality or flag risks. The result is a system that behaves less like a tool and more like a coordinated digital workforce.
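The division of labour described above can be sketched as a simple pipeline. The sketch below is illustrative only: the agent names, stub skills, and `Pipeline` class are assumptions for demonstration, not OpenClaw's actual API.

```python
# Minimal sketch of the multi-agent pattern described above:
# specialised agents collaborate in sequence, each handling one stage.
# Names and structure are illustrative, not OpenClaw's actual API.

from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Agent:
    """A single specialised agent: a name plus a text-transforming skill."""
    name: str
    skill: Callable[[str], str]


@dataclass
class Pipeline:
    """Runs agents in order, passing each one's output to the next."""
    agents: List[Agent] = field(default_factory=list)

    def run(self, task: str) -> str:
        result = task
        for agent in self.agents:
            result = agent.skill(result)
        return result


# Stub skills standing in for model-backed agents.
research = Agent("researcher", lambda t: f"[notes on: {t}]")
draft = Agent("writer", lambda t: f"[draft based on {t}]")
review = Agent("reviewer", lambda t: f"[approved: {t}]")

workflow = Pipeline([research, draft, review])
print(workflow.run("AI agents in logistics"))
```

In a real agent stack each `skill` would be a call to a model or tool rather than a string stub, but the orchestration shape, one specialist per stage handing off to the next, is the same.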

Defining trend

This shift is not happening in isolation. Across the industry, AI agents are becoming the defining trend. Companies are racing to build systems that can manage inboxes, run businesses, write and deploy code, or even negotiate with other agents.

The ambition is no longer to create a chatbot that answers questions, but an autonomous entity capable of executing multi‑step tasks with minimal human intervention.

OpenClaw stands out because it embraces openness and experimentation. Developers can plug in their own models, customise behaviours, and build agent ‘stacks’ tailored to specific industries.

Adoption

Early adopters in media, finance, and logistics are already exploring how these agents can streamline research, automate reporting, or coordinate supply‑chain decisions.

The promise is efficiency, but also creativity: agents that can generate ideas, test them, and refine them without constant supervision.

Of course, the rise of agentic AI brings challenges. Questions around safety, reliability, and accountability are becoming more urgent. An agent that can act independently must also be constrained responsibly.

Challenge

The industry is now grappling with how to balance autonomy with oversight, ensuring that these systems remain aligned with human goals and values.

Even with these concerns, the trajectory is unmistakable. OpenClaw and its peers represent a decisive step toward AI that is not merely reactive but proactive — capable of taking initiative, managing complexity, and collaborating with humans in more meaningful ways.

As these systems mature, they are likely to reshape not just how we work, but how we think about intelligence itself.


Is This a Make‑or‑Break Year for OpenAI?

Where is OpenAI’s profit?

OpenAI enters 2026 in a paradoxical position: simultaneously one of the fastest‑growing technology companies in history and one of the most financially strained.

With annualised revenue now exceeding $20 billion, the company has clearly proven global demand for generative AI. Yet the central question remains unresolved: where is the profit, and is this the year OpenAI must prove its business model is sustainable?

The company’s revenue trajectory has been extraordinary. Annual recurring revenue rose from $2 billion in 2023 to $6 billion in 2024, before leaping past $20 billion in 2025.

This growth reflects the rapid embedding of ChatGPT into enterprise workflows and the expansion of compute capacity, which has roughly tripled each year. But the same infrastructure powering this boom is also the source of OpenAI’s financial dilemma.
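A quick back-of-envelope check shows what these figures imply: revenue roughly tripled each year, in line with the tripling of compute capacity mentioned above. The snippet below only restates the article's own numbers; it introduces no new data.

```python
# Growth multiples implied by the revenue figures quoted above
# ($2bn in 2023 -> $6bn in 2024 -> $20bn in 2025).

revenue = {2023: 2.0, 2024: 6.0, 2025: 20.0}  # annualised revenue, $bn

years = sorted(revenue)
for prev, curr in zip(years, years[1:]):
    multiple = revenue[curr] / revenue[prev]
    print(f"{prev} -> {curr}: {multiple:.1f}x")

# Compound annual growth over the two-year span: tenfold in two years
cagr = (revenue[2025] / revenue[2023]) ** 0.5 - 1
print(f"Implied CAGR 2023-2025: {cagr:.0%}")
```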

Costs

Compute costs have ballooned at a rate that rivals — and in some projections exceeds — revenue growth. Analysts estimate cumulative losses could reach $143 billion by 2029 if current spending patterns continue.

The company’s burn rate, driven by massive GPU procurement and long‑term energy commitments, has been described by Benzinga as ‘immense’ even by industry standards.

OpenAI’s long‑term infrastructure deals, totalling more than 26 gigawatts of future compute capacity, underline the scale of its ambition — and its financial exposure.

To counterbalance these costs, OpenAI is experimenting with new revenue streams, including the introduction of advertising within ChatGPT for U.S. users.

This marks a strategic shift from pure subscription and enterprise licensing toward a more diversified, consumer‑scale monetisation model.

Make or break?

So is 2026 a make‑or‑break year? In many ways, yes. OpenAI has proven demand, scale, and cultural impact. What it has not yet proven is that generative AI can be profitable at planetary scale.

This year will test whether the company can convert extraordinary growth into a sustainable business — or whether its costs will continue to outpace even its most impressive revenue milestones.

When Markets Lean Too Heavily on High Flyers

The AI trade

The recent rebound in technology shares, led by a surge of optimism about Google’s artificial intelligence prospects, offered a welcome lift to investors weary of recent market sluggishness.

Yet beneath the headlines lies a more troubling dynamic: the increasing reliance on a handful of mega‑capitalisation firms to sustain broader equity gains.

Breadth

Markets thrive on breadth. A healthy rally is one in which gains are distributed across sectors, signalling confidence in the wider economy. When only one or two companies shoulder the weight of investor sentiment, the picture becomes distorted.

Google’s AI announcements may well justify enthusiasm, but the fact that its performance alone can swing indices highlights a fragility in the current market structure.
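The mechanics behind that fragility are simple: a cap-weighted index return is the weight-weighted sum of member returns, so a large weight amplifies one stock's move. The 7% weight and the price moves below are illustrative assumptions, not real market data.

```python
# Why one mega-cap can swing a cap-weighted index: the index return
# is the sum of each member's return scaled by its index weight.
# The 7% weight and the moves below are illustrative, not real data.

def index_move(weights_and_returns):
    """Cap-weighted index return given (weight, return) pairs."""
    return sum(w * r for w, r in weights_and_returns)

# One stock at a 7% weight rallies 10%; everything else is flat.
members = [(0.07, 0.10), (0.93, 0.00)]
print(f"Index move: {index_move(members):+.2%}")
```

With everything else flat, the single stock alone lifts the index by 0.70% in this toy example; the same arithmetic works in reverse on the way down, which is the concentration risk the article describes.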

This concentration risk is not new. In recent years, the so‑called ‘Magnificent Seven’ technology giants have dominated returns, masking weakness in smaller firms and traditional industries.

While investors cheer the headline numbers, the underlying reality is that many sectors remain subdued. Manufacturing, retail, and even parts of the financial industry are not sharing equally in the rally.

Over‑dependence

Over‑dependence on high flyers creates two problems. First, it exposes markets to sudden shocks: if sentiment turns against one of these giants, indices can tumble disproportionately.

Second, it discourages capital from flowing into diverse opportunities, stifling innovation outside the tech elite.

For long‑term stability, investors and policymakers alike should be wary of celebrating narrow gains. A resilient market requires participation from a broad base of companies, not just the fortunes of a few.

Google’s success in AI is impressive, but true economic strength will only be evident when growth spreads beyond the marquee names.

Until then, the market remains vulnerable, propped up by giants whose shoulders, however broad, cannot carry the entire economy indefinitely.