TSMC’s 58% surge in first‑quarter profit is the clearest sign yet that the AI boom is no longer a cyclical uplift but a structural shift reshaping the entire semiconductor industry.
The Taiwanese chipmaker delivered record earnings, comfortably beating analyst expectations, as demand for advanced processors continued to outstrip supply.
Net income reportedly reached NT$572.48 billion, marking a fourth consecutive quarter of record profits, while revenue climbed to NT$1.134 trillion, driven overwhelmingly by high‑performance computing and AI‑related orders.
What stands out is the composition of that growth. Roughly three‑quarters of TSMC’s wafer revenue reportedly came from advanced nodes, with 3‑nanometre chips alone accounting for a quarter of shipments.
Nvidia
Nvidia has now overtaken Apple as TSMC’s largest customer, underscoring how AI accelerators have become the industry’s most valuable real estate.
TSMC’s executives described AI demand as “extremely robust”, with customers signalling multi‑year commitments rather than the usual stop‑start ordering cycle.
The company also moved to reassure investors over supply‑chain risks linked to the Middle East conflict, saying it has diversified sources for critical gases such as helium and hydrogen.
With capacity running hot and capital spending set to hit the top end of guidance, TSMC is positioning itself as the indispensable chipmaker in the AI era.
ASML’s decision to raise its 2026 guidance underlines a simple reality: demand for advanced AI chips is not easing, and the world’s most important semiconductor equipment maker remains at the centre of that surge.
The company signalled stronger-than-expected orders for its extreme ultraviolet (EUV) and next‑generation high‑NA systems, driven by chipmakers racing to expand capacity for AI accelerators, data‑centre processors and cutting‑edge logic nodes.
Bottleneck
The upgrade matters because ASML sits at the bottleneck of global chip production. Only a handful of firms can even buy its most advanced machines, and those firms – chiefly TSMC, Intel and Samsung – are all scaling up AI‑focused manufacturing.
Their capital expenditure plans have held firm despite broader economic uncertainty, suggesting that AI infrastructure is becoming a non‑discretionary investment rather than a cyclical one.
Two forces are driving the momentum. First, hyperscalers continue to pour billions into AI clusters, creating sustained demand for the most advanced lithography tools.
Long-term lock-in
Second, geopolitical pressure to secure domestic chip capacity is pushing governments and manufacturers to lock in long‑term equipment orders.
ASML’s raised outlook reinforces the sense that the semiconductor cycle is diverging: consumer electronics remain patchy, but AI‑related manufacturing is entering a multi‑year expansion.
The key question now is whether supply can keep pace with the ambition of its customers.
Taiwan Semiconductor Manufacturing Company (TSMC) has delivered a striking 35% year‑on‑year jump in first‑quarter revenue, reaching a record NT$1.13 trillion.
The result underscores just how dramatically the centre of gravity in global technology has shifted towards advanced semiconductor manufacturing, with artificial intelligence now the defining force behind industry growth.
Relentless AI demand
TSMC’s performance is being powered by relentless demand for cutting‑edge chips from major clients such as Apple and Nvidia.
As AI infrastructure spending accelerates worldwide, the company has become one of the few manufacturers capable of producing the most sophisticated processors required for training and running large‑scale models.
March alone saw revenue climb more than 45%, highlighting the strength and urgency of this demand.
Ambition
Analysts suggest TSMC is on track to exceed its already ambitious 30% annual growth target, helped not only by volume but also by reported price increases for its most advanced nodes.
Even as smartphone and PC markets remain uneven, AI‑related orders are more than compensating.
With more companies—from hyperscalers to AI start‑ups—designing their own chips, TSMC’s strategic position looks increasingly unassailable.
Upcoming earnings and ASML’s results next week will offer further clues about the momentum behind the semiconductor sector’s AI‑driven boom.
Meta has unveiled Muse Spark, its first major artificial intelligence model since the company overhauled its AI strategy in response to the underwhelming reception of its previous Llama 4 models.
Developed by the newly formed Meta Superintelligence Labs under the leadership of Alexandr Wang, Muse Spark represents a deliberate shift towards smaller, faster, and more capable systems designed to compete directly with Google, OpenAI, and Anthropic.
Foundation
Muse Spark is positioned as the foundation of a new family of models internally known as Avocado. Meta reportedly describes it as “small and fast by design”, yet able to reason through complex questions in science, maths, and health — a notable claim given the company’s recent struggles to keep pace with rivals.
Early evaluations suggest the model performs competitively in language and visual understanding, though it still trails in coding and abstract reasoning.
Crucially, Muse Spark is deeply integrated into Meta’s ecosystem. It already powers the Meta AI app and website and will soon replace Llama across WhatsApp, Instagram, Facebook, Messenger, and Meta’s smart glasses.
Integrated
This rollout signals Meta’s intention to embed AI more tightly into everyday user interactions, from search and recommendations to multimodal tasks such as analysing photos or comparing products.
The company is also experimenting with new revenue streams by offering a private API preview to select partners — a departure from its previous open‑source approach.
Whether this shift will alienate developers who embraced the openness of Llama remains to be seen.
Meta frames Muse Spark as an early step toward “personal superintelligence”, an assistant that can understand the world alongside the user rather than waiting for typed instructions.
It’s an ambitious vision — and one that will be tested as the model expands globally and faces scrutiny over privacy, safety, and real‑world performance.
SpaceX is edging towards what could become the most significant stock market debut in modern history, with expectations that its initial public offering may surpass a valuation of $1 trillion.
A confidential filing with U.S. regulators marks a pivotal moment for the company, signalling its readiness to transition from a privately held aerospace leader to one of the world’s most valuable publicly traded firms.
Record-breaking valuation
The anticipated valuation reflects SpaceX’s dominance in commercial spaceflight, satellite deployment and global broadband through its rapidly expanding Starlink network.
Its reusable rocket technology has already reshaped launch economics, and the company’s growing influence across defence, communications and space infrastructure has strengthened investor confidence.
Analysts suggest the timing of the IPO is driven by the escalating cost of SpaceX’s long‑term ambitions, including deep‑space exploration and large‑scale satellite expansion.
Company integration
The recent integration of Elon Musk’s AI venture, xAI, into SpaceX has further broadened the company’s technological footprint, reinforcing expectations that substantial new capital will be required to sustain its momentum.
If market appetite matches current projections, SpaceX’s listing could set a new benchmark for tech‑driven valuations — and potentially position Musk as the first individual to see their net worth approach the trillion‑dollar threshold.
Oracle is swinging hard at its own workforce as the company races to reposition itself as an AI‑infrastructure contender.
Thousands of roles are being eliminated, a drastic move that reflects the sheer financial pressure of trying to keep up with hyperscale rivals in the most capital‑intensive tech shift in decades.
The company’s share price has slumped 25% this year, with investors increasingly uneasy about soaring data‑centre spending and the heavy debt required to fund it.
Oracle has already raised $50 billion to bankroll new GPU‑ready facilities, but unlike Amazon or Microsoft, it lacks the cushion of vast cloud scale.
The result: a balance sheet under strain and a leadership team forced into tough decisions.
Future
Oracle’s remaining performance obligations have ballooned to more than half a trillion dollars, fuelled by major AI partnerships including a huge deal with OpenAI.
But those future revenues don’t solve today’s cash‑flow squeeze. Analysts estimate that cutting 20,000 to 30,000 jobs could free up as much as $10 billion — enough to keep the AI build‑out moving without further rattling the markets.
Oracle is betting that a leaner organisation now will buy it the runway to compete later. The question is whether the cuts arrive in time to match the speed of the AI race.
For years, Chinese AI founders comforted themselves with a simple fiction: that geography could outrun politics.
Move the holding company to Singapore, hire a few local staff, raise money from Silicon Valley, and the gravitational pull of Beijing’s regulatory state would somehow weaken. Manus was the poster child of that belief — until it wasn’t.
Meta’s $2 billion acquisition was supposed to be the triumphant proof that “Singapore washing” worked. Instead, Beijing’s sudden intervention has exposed it as a mirage.
Review
The Chinese government’s review of the deal — and the exit bans placed on Manus’ co‑founders — is more than a bureaucratic hurdle.
It is a declaration that the origin of a technology matters more than the passport of the company that later owns it.
The symbolism is striking. Manus built its early code in China, then attempted to transplant its identity offshore. But Beijing is now signalling that code, data and talent are not so easily detached from their birthplace.
The message to founders is blunt: you cannot simply shed China like an old skin.
Timing
For Meta, the timing is awkward. More than 100 Manus employees have already been folded into its Singapore office, and the company insists the deal complies with the law.
Yet the spectre of an unwinding hangs over the transaction — a reminder that even the world’s largest tech firms are not insulated from geopolitical weather.
The deeper story, though, is about the shrinking space for neutrality. The U.S.–China tech rivalry has moved beyond chips and compute into the realm of corporate identity itself.
Where a company is born, where its engineers sit, where its early investors come from — all now carry political charge.
Manus is not just a case study. It is a warning flare. In an era where innovation crosses borders but regulation does not, the idea of a clean escape route is fading fast.
ARM has triggered one of the most dramatic shifts in its 35‑year history with the launch of its first in‑house data‑centre processor, the AGI CPU — a move that sent its shares surging 16% and reshaped expectations for the company’s future.
Long known for licensing energy‑efficient chip designs to the world’s biggest tech firms, ARM is now stepping directly into the silicon market, competing with the very customers that built its empire.
Major Tech Firms Using Arm Designs (AI & Mobile)

| Company | Primary Use Case | Arm-Based Technology |
|---|---|---|
| Apple | Mobile & on‑device AI | A‑series (iPhone/iPad) and M‑series (Mac) chips |
| Samsung | Mobile, AI, IoT | Exynos processors |
| Qualcomm | Mobile & automotive AI | Snapdragon SoCs |
| Google | Android ecosystem & edge AI | Pixel phones (Arm cores inside Tensor chips) |
| Amazon (AWS) | Cloud compute & AI inference | Graviton & Trainium/Inferentia (Arm Neoverse) |
| Meta | AI infrastructure | Deploying Arm-based AGI CPU |
| OpenAI | AI inference & orchestration | Early adopter of Arm AGI CPU |
| Nvidia | AI data‑centre CPUs | Grace CPU (Arm architecture) |
| OPPO | Mobile AI | Arm-based SoCs in Find series |
| vivo | Mobile AI | Arm-based SoCs in X‑series |
Strong demand
The new AGI CPU is engineered for the rapidly expanding world of AI inference and agentic AI — workloads that demand vast CPU coordination rather than pure GPU horsepower.
Early demand appears strong. Meta has signed on as the first major customer, with OpenAI, Cloudflare and SAP also adopting the chip as they race to expand their AI infrastructure.
The financial implications are striking. ARM expects the AGI CPU alone to generate $15 billion in annual revenue by 2031, a figure that dwarfs the company’s 2025 revenue of $4 billion.
Significant shift
Analysts have described the announcement as the most significant strategic shift ARM has ever undertaken, noting that the revenue projections exceed even the most optimistic market estimates.
By moving into full chip production, ARM is broadening its market to include companies that previously had no interest in its traditional IP‑licensing model.
Executives say the chip will be competitively priced, offering an alternative for firms unable to build their own custom silicon.
For the UK, the launch marks a rare moment of industrial ambition in a sector dominated by American and Asian giants.
If ARM’s forecasts hold, the AGI CPU could become one of the most commercially successful chips ever produced by a British company — and a defining pillar of the AI age.
Agentic AI is rapidly shifting from a speculative idea to a practical force reshaping how work gets done.
Unlike traditional AI systems, which wait passively for instructions, agentic AI can plan, act, and adapt within defined boundaries.
It is not simply a smarter chatbot; it is a system capable of taking initiative, coordinating tasks, and pursuing goals on behalf of its user.
This evolution marks a profound turning point in how we think about automation, creativity, and human–machine collaboration.
Agentic AI colleagues
The first major change is the move from reaction to autonomy. Today’s AI assistants excel at answering questions or generating content, but they still rely on constant prompting.
Agentic AI, by contrast, can break down a complex objective into smaller steps, choose the best tools for each stage, and execute them with minimal oversight. This transforms AI from a passive helper into an active collaborator.
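The plan‑act loop described above can be sketched in a few lines. This is an illustrative toy under stated assumptions, not any vendor's implementation: the planner, tool names, and dispatch logic are all invented for the example (a real agent would delegate planning to a language model).

```python
# Toy sketch of an agentic loop: decompose a goal into steps,
# pick a tool for each step, and execute with minimal oversight.
# All tool names and the fixed plan below are hypothetical.

def plan(goal):
    # A real agent would ask a language model to decompose the goal;
    # here we return a fixed three-step plan for demonstration.
    return [("search", goal), ("summarise", goal), ("draft_email", goal)]

TOOLS = {
    "search": lambda task: f"results for '{task}'",
    "summarise": lambda task: f"summary of '{task}'",
    "draft_email": lambda task: f"draft email about '{task}'",
}

def run_agent(goal, max_steps=10):
    history = []
    for tool_name, task in plan(goal)[:max_steps]:
        tool = TOOLS.get(tool_name)
        if tool is None:      # guardrail: unknown tools are skipped
            continue
        history.append((tool_name, tool(task)))
    return history

if __name__ == "__main__":
    for step, output in run_agent("competitor pricing review"):
        print(f"{step}: {output}")
```

The point of the sketch is the shape of the loop, not the tools: the agent, rather than the user, decides which step comes next and when the goal is done.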
For individuals and small teams, it promises a level of operational leverage previously reserved for large organisations with dedicated staff.
A second shift lies in the emergence of multi‑modal competence. Agentic systems will not be confined to text. They will navigate interfaces, analyse documents, draft communications, and even orchestrate workflows across multiple platforms.
In effect, they will behave more like digital colleagues—capable of understanding context, maintaining continuity, and adapting to changing priorities. The result is a new category of labour: cognitive automation that complements rather than replaces human judgement.
However, the rise of agentic AI also raises important questions. Autonomy introduces risk. If an AI can take action, it must do so safely, transparently, and within clear constraints.
On guard
Guardrails will be essential—not only technical safeguards, but also cultural norms around delegation, accountability, and trust. The future will require a balance between empowering AI to act and ensuring humans remain firmly in control of outcomes.
Another challenge is the shifting nature of expertise. As agentic AI handles more administrative and procedural work, human value will increasingly lie in strategic thinking, creativity, and ethical decision‑making.
This is not a loss but a rebalancing. Freed from routine tasks, people can focus on higher‑order work that genuinely benefits from human insight.
The organisations that thrive will be those that treat AI not as a shortcut, but as a catalyst for deeper, more meaningful contribution.
Future use of agents
Looking ahead, the most exciting aspect of agentic AI is its potential to democratise capability. A single individual could run a publication, a business, or a research project with the operational efficiency of a small team.
Barriers to entry will fall. Innovation will accelerate. And the line between “solo creator” and “organisation” will blur.
Agentic AI is not the end of human agency; it is an extension of it. The future belongs to those who learn to work with these systems—setting direction, providing judgement, and letting AI handle some of the heavy lifting.
Far from replacing us, agentic AI may finally give us the space to think, create, and lead with clarity.
OpenClaw has rapidly become one of the most influential developments in artificial intelligence, evolving from a small open‑source experiment into a global phenomenon reshaping how people interact with computers.
Launched in January 2026, the platform allows users to run autonomous AI agents locally on their own machines, giving them the power to organise files, write code, browse the web, and automate everyday digital tasks without relying on cloud services.
This local‑first design has been central to its explosive growth — and to the concerns now emerging around it.
One of the most striking cultural shifts has taken place in China, where OpenClaw has become a mainstream sensation.
AI Lobsters
Users refer to their agents as “AI lobsters,” a playful nod to the platform’s crustacean branding. Retirees, students, and professionals alike have begun “raising” these lobsters to help manage knowledge, streamline work, and perform practical tasks that traditional chatbots struggle with.
The trend has grown so quickly that crowds have gathered outside major tech offices in Beijing to install the software together, turning OpenClaw into a genuine grassroots movement.
This surge in popularity has also caught the attention of global markets. Chinese AI‑related stocks have risen sharply following comments from Nvidia CEO Jensen Huang, who described OpenClaw as “the next ChatGPT,” signalling its potential to redefine the agentic AI landscape.
Security
Companies building self‑evolving agents and cloud infrastructure around OpenClaw have seen double‑digit gains as investors position themselves for what appears to be the next major AI wave.
Yet OpenClaw’s power has also raised red flags. Because the agent runs locally and can control a user’s computer, enterprise IT teams have struggled to manage the security implications.
The platform’s ability to act autonomously — reading files, sending messages, and interacting with applications — has created a need for stronger guardrails, especially in corporate environments.
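The kind of guardrail enterprise teams are asking for can be sketched as a deny‑by‑default permission check, where every action a local agent attempts is tested against an explicit allowlist before it touches the machine. This is a minimal illustration only; the policy structure and action names are invented here and do not reflect OpenClaw's or NemoClaw's actual interfaces.

```python
# Hypothetical guardrail for a locally running agent: actions are
# denied unless explicitly allowed, and sensitive actions always
# require a human sign-off. Policy contents are invented examples.

ALLOWED_ACTIONS = {
    "read_file": {"~/Documents", "~/Projects"},  # permitted directory roots
    "send_message": set(),                       # always needs human approval
}

def is_permitted(action, target, approved_by_human=False):
    if action not in ALLOWED_ACTIONS:
        return False                 # unknown actions denied by default
    roots = ALLOWED_ACTIONS[action]
    if any(target.startswith(root) for root in roots):
        return True
    return approved_by_human         # everything else escalates to a human
```

Deny‑by‑default matters here: an agent that can read files and send messages should fail closed when it encounters an action the policy never anticipated.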
Nvidia’s NemoClaw
Nvidia has stepped in with NemoClaw, a new enterprise‑grade stack that adds privacy controls, security infrastructure, and vetted local models to OpenClaw through a single‑command installation.
The goal is to make autonomous agents more trustworthy and scalable without undermining the open‑source ethos that made OpenClaw successful.
OpenClaw’s own development continues at pace. The latest stable release, v2026.3.13, includes fixes for session handling, improved browser‑control mechanisms, and a shift away from legacy Chrome extensions towards direct attachment to existing browser sessions — a move designed to make agent operations safer and more reliable.
The future
In just a few months, OpenClaw has transformed from a niche project into a global force, driving cultural trends, market movements, and enterprise innovation.
Its trajectory suggests that autonomous, locally run agents may soon become a standard part of everyday computing — and the race to shape that future has only just begun.
The Pentagon’s Chief Technology Officer, Emil Michael, has apparently ignited a fresh debate over the role of commercial artificial intelligence in national security, arguing that Anthropic’s Claude models could “pollute” the U.S. defence supply chain.
His comments, made in an interview with CNBC, offer the clearest rationale yet for the Department of Defense’s decision to designate Anthropic as a supply chain risk — an extraordinary step previously reserved for foreign adversaries.
Michael’s view appears to be that Claude’s “policy preferences”, embedded through Anthropic’s constitutional training approach, create an unacceptable misalignment with the Pentagon’s operational needs.
Risk
He reportedly argued that any AI system whose underlying values diverge from defence priorities risks producing ineffective outputs, whether in decision‑support tools, equipment design, or battlefield logistics.
“We can’t have a company that has a different policy preference baked into the model… pollute the supply chain so our warfighters are getting ineffective weapons [and] ineffective protection,” he was reported to have said.
Anthropic has responded forcefully, suing the Trump administration and calling the designation “unprecedented and unlawful”.
The company argues that the move jeopardises hundreds of millions of dollars in contracts and mischaracterises the nature of its technology.
Claude in the ecosystem?
It also notes that Claude continues to be used within parts of the U.S. military ecosystem, including by major defence contractors such as Palantir, underscoring the practical difficulty of an immediate transition away from its models.
Michael insists the decision is not punitive and emphasises that only a small fraction of Anthropic’s business comes from government work.
Nonetheless, the designation forces contractors to certify they are not using Claude in Pentagon‑related projects, setting up a potentially lengthy and politically charged dispute over how value‑aligned AI must be before it is allowed anywhere near defence infrastructure.
The episode highlights a broader tension: as AI systems become more opinionated by design, governments are increasingly asking whether “alignment” is a technical question — or a geopolitical one.
Anthropic’s decision to reopen negotiations with the Pentagon marks a striking reversal after a very public rupture, and it underscores how central advanced AI has become to U.S. defence strategy.
The talks reportedly collapsed amid a dispute over how Claude, Anthropic’s flagship model, could be used inside military systems.
Reports indicate that the Pentagon had pushed for broad permissions, including deployment in surveillance environments and potentially autonomous weapons systems.
Safety resistance
Anthropic resisted on safety grounds. The company had sought explicit guarantees that its models would not be used for mass surveillance or lethal decision‑making, a red line that triggered the breakdown in relations.
The fallout was immediate. The Pentagon signalled it would drop Anthropic from existing programmes, despite the company’s role in a major defence contract that had already placed Claude inside classified networks.
That escalation raised the prospect of a formal blacklist, a move that would have reverberated across the wider U.S. technology sector.
For Anthropic, the stakes were equally high: losing access to government work would not only cut off a significant customer but also risk isolating the company at a moment when rivals such as OpenAI and Google are deepening their defence ties.
Compromise?
Yet both sides appear to recognise the cost of a prolonged standoff. According to multiple reports, CEO Dario Amodei has returned to the table in an effort to craft a compromise deal that preserves Anthropic’s safety commitments while allowing the Pentagon to continue using its technology.
Boundaries
Discussions are now likely focused on defining acceptable boundaries for military use — a task made more urgent by the accelerating integration of AI into intelligence analysis, battlefield logistics and autonomous systems.
This renewed dialogue is more than a corporate dispute: it is a test case for how democratic governments and frontier AI labs negotiate power, ethics and national security.
The outcome will shape not only Anthropic’s future but also the norms governing military AI in the years ahead.
Qualcomm is accelerating its push into artificial intelligence and robotics, signalling a strategic shift that could redefine the company’s future beyond smartphones.
Executives now describe robotics as a core growth pillar, with chief executive Cristiano Amon reportedly forecasting that intelligent machines will become a “larger opportunity” for the business within the next two years.
Expanding from Mobile Chips to Physical AI
For decades, Qualcomm’s dominance has rested on its mobile processors, which power much of the global smartphone market.
The company is now repurposing that expertise for what it calls physical AI—robots capable of perceiving, reasoning, and acting autonomously in real‑world environments.
This transition reflects a broader industry trend: as generative AI matures, attention is shifting from digital assistants to embodied systems that can perform physical tasks.
Qualcomm’s new robotics architecture, unveiled recently, is designed as a full‑stack platform. It combines high‑efficiency system‑on‑chips, safety‑certified compute modules, and advanced on‑device AI models.
The aim is to give robot manufacturers a scalable foundation, whether they are building compact consumer devices or full‑size humanoids for industrial use.
Dragonwing Becomes the Flagship
At the centre of this strategy is the Dragonwing line of processors. The latest model, the Dragonwing IQ10, targets industrial automation and advanced humanoid robots.
It has reportedly been engineered to run complex AI models locally, reducing reliance on cloud connectivity and improving safety, responsiveness, and energy efficiency.
Qualcomm showcased these capabilities at recent industry events, where robots powered by Dragonwing chips demonstrated dexterity, mobility, and real‑time decision‑making.
The company’s ambition places it in direct competition with Nvidia, which currently dominates AI compute for robotics, and with a growing cohort of start‑ups building specialised hardware for autonomous machines.
Why Robotics Matters Now
Three factors underpin Qualcomm’s renewed focus:
Diversifying revenue as smartphone markets plateau and competition intensifies.
Leveraging its edge‑AI strengths, particularly in low‑power, high‑performance chips suited to mobile robots.
Rising industrial demand, with logistics, retail, and manufacturing sectors adopting automation at scale.
The robotics push also complements Qualcomm’s automotive and PC AI strategies, creating a broader ecosystem of connected, intelligent devices.
A Critical Two Years Ahead
Qualcomm’s challenge now is to convert impressive demonstrations into commercial deployments.
If successful, the company could become a foundational supplier for the emerging era of physical AI—an era in which robots move from novelty to necessity.
Following the abrupt federal ban on Anthropic’s Claude models, OpenAI has moved quickly to position itself as the primary replacement across U.S. government departments.
With Claude now designated a supply‑chain risk, agencies are likely scrambling to reconfigure AI workflows — and OpenAI’s systems appear to be emerging as the default alternative.
Integration
The company’s flagship GPT‑4.5 and its agentic development tools have reportedly already been integrated into several defence and civilian systems.
OpenAI’s reported longstanding compatibility with government‑approved platforms, including Azure and OpenRouter, has smoothed the transition. Unlike Anthropic, OpenAI has historically offered more flexible deployment options.
Industry analysts note that OpenAI’s recent hires — including agentic systems pioneer Peter Steinberger (OpenClaw) — signal a deeper push into autonomous task execution, a capability highly prized by defence and intelligence agencies.
The company’s agent frameworks are being trialled for logistics, simulation, and multilingual analysis, with early results described as “mission‑ready.”
Friction
However, the shift is not without friction. It has been reported that some federal teams have built Claude‑specific workflows, particularly in legal, policy, and ethics‑driven domains where Anthropic’s safety constraints were seen as a feature, not a limitation.
Replacing those systems with GPT‑based models requires careful recalibration to avoid unintended consequences.
OpenAI’s rise also raises broader questions about vendor concentration. With Anthropic sidelined and Google’s Gemini models still undergoing federal evaluation, OpenAI now dominates the landscape — a position that may invite scrutiny from oversight bodies concerned about resilience and competition.
Still, for now, OpenAI appears to be the primary beneficiary of the Claude ban, moving quickly to fill the vacuum left by Anthropic.
OpenAI vs Anthropic: Safety vs Autonomy in Federal AI
OpenAI’s agentic tools are likely filling the vacuum left by Anthropic’s ban, offering flexible deployment and autonomous task execution prized by defence and intelligence agencies.
While Claude prioritised safety constraints and ethical guardrails, OpenAI’s GPT‑based systems should offer broader operational freedom.
This shift reflects a deeper philosophical divide: Anthropic’s models were designed to resist misuse, while OpenAI’s are engineered for adaptability and control.
As federal agencies recalibrate, the tension between safety‑first design and unrestricted autonomy is becoming the defining fault line in U.S. government AI strategy.
How long will it be before Anthropic is invited back to the table?
A sweeping federal ban on Anthropic’s technology has rapidly become one of the most consequential developments in U.S. government technology policy, following President Donald Trump’s order that all federal agencies — including the Pentagon — must immediately cease using the company’s AI systems.
The directive, issued on 27th February 2026, came just ahead of a Pentagon deadline demanding that Anthropic lift safety restrictions on its Claude models to allow unrestricted military use.
The confrontation with the Pentagon
The dispute escalated after Anthropic reportedly refused Defence Department demands to remove guardrails that limit how its AI can be used.
It was reported that CEO Dario Amodei stated the company “cannot in good conscience accede” to requirements that would weaken its safety policies, prompting a public standoff.
President Trump reportedly responded by ordering every federal agency to “immediately cease” using Anthropic’s technology, declaring that the government “will not do business with them again.”
Agencies heavily reliant on the company’s tools, including the Department of Defense, have been granted six months to phase out their use.
Defence Secretary Pete Hegseth reportedly went further, designating Anthropic a national‑security “supply‑chain risk”.
This action could prevent military contractors from working with the company and marks the first time such a label has been applied to a major U.S. AI firm.
Impact across government and industry
The ban affects every federal department, from defence and intelligence to civilian agencies.
Contractors supplying AI‑enabled systems must now ensure their tools do not rely on Anthropic’s models, forcing rapid audits and potential redesigns.
Rival AI providers have already begun positioning themselves to fill the gap, with some announcing new Pentagon partnerships within hours of the ban.
The designation as a supply‑chain risk also carries legal and commercial consequences. Anthropic has argued the move is “legally unsound,” but the ruling stands, effectively placing the company on a federal blacklist.
Political debate
The decision has triggered intense debate across the technology sector. Supporters argue that the government must retain full authority over military AI applications.
Critics warn that forcing companies to abandon safety constraints could set a dangerous precedent.
The ban highlights a deepening fault line in U.S. AI governance: the struggle to balance national‑security imperatives with the ethical frameworks developed by leading AI firms.
As agencies begin disentangling themselves from Anthropic’s systems, the long‑term implications for federal procurement, AI safety norms, and the future of military‑AI collaboration remain unresolved.
MiniMax’s M2.5 model has emerged as the unexpected frontrunner in China’s latest wave of artificial intelligence releases, earning a clear endorsement from analysts.
While much of the recent global conversation has fixated on DeepSeek’s rapid evolution, China has quietly produced five new frontier‑level models in recent weeks.
Widening choice
Among them—Alibaba’s Qwen 3.5, ByteDance’s Seedance 2.0, Zhipu’s latest offerings, DeepSeek’s V3.2, and MiniMax’s M2.5—it is MiniMax that has reportedly captured institutional attention.
Some analysts reportedly cite its performance, pricing, and commercial readiness as the reasons it stands apart.
MiniMax, which listed publicly in Hong Kong in January, released M2.5 in mid‑February 2026. The model rivals Anthropic’s Claude Opus 4.6 in capability while costing a fraction of the price—an advantage that has driven a surge of developer adoption.
Data from OpenRouter reportedly shows developers increasingly choosing M2.5 over DeepSeek’s V3.2 and even several U.S.-based models.
Analysts argue that this combination of competitive performance and aggressive pricing positions MiniMax as the Chinese model with the strongest global commercial potential.
Productive and less expensive
The model’s technical profile reinforces that view. M2.5 is designed for real‑world productivity, with strengths in coding, agentic tool use, search, and office workflows.
It reportedly scores around 80.2% on SWE‑Bench Verified and outperforms leading Western models—including Claude Opus 4.6, GPT‑5.2, and Gemini 3 Pro—on tasks involving web search and office automation, all while operating at ten to twenty times lower cost.
MiniMax describes the model as delivering “intelligence too cheap to meter,” a claim supported by its lightweight Lightning variant, which generates 100 tokens per second and can run continuously for an hour at roughly one dollar.
This shift signals a broader trend: China’s AI race is no longer defined by a single breakout model. Instead, a competitive ecosystem is emerging, with MiniMax demonstrating that cost‑efficient frontier performance can reshape developer behaviour and enterprise planning.
For global markets, UBS’s preference suggests that investors are beginning to look beyond headline‑grabbing releases and toward models with sustainable commercial trajectories.
Comparison of China’s Five New AI Models
Model | Developer | Key Strengths | Performance Notes | Pricing Position
MiniMax M2.5 | MiniMax | Coding, agentic tasks, office automation | Rivals Claude Opus 4.6; 80.2% SWE‑Bench Verified; outperforms GPT‑5.2 and Gemini 3 Pro on search/office tasks | Extremely low cost; “too cheap to meter”
DeepSeek V3.2 | DeepSeek | Reasoning, general chat | Strong but losing developer share to M2.5 | Low‑cost but not as aggressive as MiniMax
Alibaba Qwen 3.5 | Alibaba | Enterprise integration, multilingual capability | Part of Alibaba’s expanding Qwen family | Competitive mid‑range
ByteDance Seedance 2.0 | ByteDance | Video generation | Focused on multimodal creativity | Premium creative‑tool pricing
Zhipu (latest models) | Zhipu AI | Knowledge tasks, enterprise AI | Continues Zhipu’s push into LLM infrastructure | Mid‑range enterprise
MiniMax M2.5 leads China’s AI surge with performance rivalling Claude Opus 4.6 and Gemini 3 Pro, yet at a fraction of the cost.
It excels in coding, search, and office automation, scoring 80.2% on SWE‑Bench Verified. DeepSeek V3.2 offers strong reasoning but lags in developer adoption.
Compared with Western flagships such as GPT‑5.2, Claude Opus 4.6, and Gemini 3 Pro, China’s models are closing the gap in capability, with MiniMax M2.5 now reportedly outperforming Western leaders on several benchmarks—especially in speed and cost efficiency.
Comparison of leading Chinese and Western AI models
(SWE‑Bench Verified — latest public leaderboard, early 2026)
Model | Developer | Primary Strengths | SWE‑Bench Verified | Notes
Claude 4.6 Opus | Anthropic | High‑end reasoning, long‑context reliability | 76–77% | Current top performer on independent coding benchmarks.
Nvidia’s earnings didn’t disappoint on the numbers — they were spectacular — but Wall Street was disappointed by the guidance, the pricing signals, and the shift in the AI‑chip cycle, which is why the stock fell despite a blowout quarter.
Nvidia’s latest quarterly results were, on the surface, extraordinary. Revenue surged, margins remained enviably high and demand for its AI chips continued to reshape the global technology landscape.
Yet the company’s shares fell sharply, dragging broader markets with them. The reaction reflects a deeper unease on Wall Street: not about what Nvidia has achieved, but about what comes next.
The company delivered a blowout quarter, but investors were looking for something even more explosive.
Cooling expectations after a year of euphoria
Nvidia has become the defining stock of the AI boom, and with that status comes a valuation that assumes relentless acceleration.
This quarter’s guidance, while strong, suggested growth is beginning to normalise. Investors who had priced in another step-change in demand instead saw signs of a company settling into a more sustainable—though still impressive—trajectory.
In a market conditioned to expect perpetual hyper‑growth, “very strong” can feel like a disappointment.
Fears of peak pricing power
A second concern is whether Nvidia’s extraordinary pricing power is nearing its peak. The company’s flagship AI chips have commanded eye‑watering prices, but cloud providers and enterprise customers are now signalling resistance.
Competitors are improving, and hyperscalers are accelerating development of their own silicon.
Some analysts are asking whether the industry has already seen the high‑water mark for Nvidia’s margins, a question that goes straight to the heart of the stock’s valuation.
China remains a structural drag
Regulatory constraints continue to weigh on Nvidia’s China business. The company has not yet been able to meaningfully sell its U.S.-approved AI chips into the Chinese market, and executives have warned that local rivals could fill the gap.
China was once a major contributor to Nvidia’s data‑centre revenue; now it is a source of uncertainty. Investors are increasingly factoring in the possibility that this revenue may not return in its previous form.
A crowded trade unwinds
Finally, Nvidia’s sell‑off reflects positioning as much as fundamentals. The stock has been one of the most crowded trades in global markets.
When expectations are stretched, even exceptional results can trigger profit‑taking. The pullback spilled into broader indices, with Asia‑Pacific markets trading mixed as investors digested the slump.
Nvidia remains the central force in the AI hardware boom, but Wall Street is beginning to ask harder questions about sustainability, competition and the next phase of growth.
The question of whether China can overtake the United States in artificial intelligence has shifted from speculative debate to a central geopolitical storyline.
What once looked like a distant rivalry is now a tightly contested race, shaped by compute constraints, divergent industrial strategies, and the growing importance of AI deployment rather than pure research supremacy.
Chinese Technology
China’s progress over the past few years has been impossible to ignore. A wave of domestic model developers has emerged, producing systems that—while not yet at the absolute frontier—are increasingly competitive.
Their rapid ascent has unsettled assumptions about a permanent American lead. Analysts now argue that a significant share of the world’s population could be running on a Chinese technology stack within a decade, particularly across regions where cost, accessibility, and political alignment matter more than brand prestige or cutting‑edge performance.
Yet China’s momentum is not without friction. The country’s biggest structural challenge remains compute.
Export controls have sharply limited access to the most advanced GPUs, creating a ceiling on how far and how fast Chinese labs can scale their largest models.
Even leading Chinese developers openly acknowledge that they operate with fewer resources than their American counterparts.
AI Investment Research
This gap matters: frontier AI research is still heavily dependent on vast compute budgets, and the United States retains a decisive advantage in both semiconductor technology and hyperscale infrastructure.
But China has turned constraint into strategy. Rather than chasing brute‑force scale, its labs have doubled down on efficiency—pioneering quantisation techniques, optimised inference pipelines, and compute‑lean architectures that deliver strong performance at lower cost.
In a world where enterprises increasingly care about value rather than theoretical peak capability, this approach is resonating.
Open‑weight Chinese models, in particular, are eroding the commercial moat of closed‑source American systems by offering capable alternatives that organisations can run cheaply on their own hardware.
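The efficiency techniques mentioned above are easier to picture with a concrete sketch. Below is a minimal, illustrative Python example — not any particular lab’s actual pipeline — of symmetric int8 weight quantisation, the kind of compression that lets capable models run cheaply on modest hardware by storing each weight in one byte instead of four:

```python
import numpy as np

def quantize_int8(w):
    # Symmetric linear quantisation: a single scale factor maps
    # float weights onto the int8 range [-127, 127].
    scale = float(np.max(np.abs(w))) / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights from the int8 codes.
    return q.astype(np.float32) * scale

w = np.array([0.12, -0.5, 0.33, 0.9], dtype=np.float32)
q, s = quantize_int8(w)
w_approx = dequantize(q, s)
# int8 storage is 4x smaller than float32, and the reconstruction
# error stays below half a quantisation step (about 0.0035 here).
```

Real inference stacks layer many refinements on top of this idea (per-channel scales, 4-bit formats, quantisation-aware training), but the core trade — a little precision for a large cut in memory and bandwidth — is the one driving the cost advantage described here.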
Power Hungry
Energy is another under‑appreciated factor. China’s massive expansion of power generation—adding more capacity in four years than the entire U.S. grid—gives it a long‑term advantage in scaling data‑centre infrastructure.
AI is an energy‑hungry technology, and the ability to deploy at national scale may prove as important as breakthroughs in model design.
Still, the United States retains formidable strengths. It leads in advanced chips, frontier‑model research, and global cloud platforms.
American firms continue to attract enormous investment and maintain deep relationships with governments and enterprises worldwide. These advantages are not easily replicated.
The most realistic outcome is not a single winner but a divided AI landscape. China will dominate in some regions and layers of the stack; the U.S. will lead in others.
Translation of AI Power
The race is no longer about who builds the ‘best’ model, but who can translate artificial intelligence into economic and strategic power at scale.
China may not ‘win’ outright—but it no longer needs to. It only needs to be close enough to reshape the global balance of technological influence.
And on that front, the race is already far tighter than many expected.
When artificial intelligence first ignited investor enthusiasm, it lifted almost every major technology stock.
The narrative was simple: AI would transform industries, boost productivity and unlock vast new revenue streams.
Yet as the cycle matures, markets are becoming more selective. In recent weeks, shares of IBM have drifted lower, illustrating how the ‘AI effect’ can cut both ways.
At first glance, IBM should be a prime beneficiary. The company has spent years repositioning itself around hybrid cloud infrastructure, data analytics and enterprise AI solutions.
Its Watson platform has been refreshed with generative AI tools designed to automate customer service, streamline software development and enhance business decision-making. Management has repeatedly emphasised AI as a core growth engine.
Market Expectations
However, the market’s expectations have shifted. Investors are increasingly rewarding companies that sit at the very heart of AI infrastructure — those supplying advanced semiconductors, high-performance computing capacity and hyperscale cloud services.
These businesses are reporting visible surges in AI-related demand, often accompanied by sharp revenue acceleration and expanding margins.
By contrast, IBM’s AI exposure is embedded within broader consulting and software operations, making its growth trajectory appear steadier rather than explosive.
This distinction matters in a momentum-driven environment. When earnings updates fail to deliver dramatic upside surprises, shares can quickly lose favour.
Less AI Effect
IBM’s results have shown progress in software and recurring revenue, but they have not reflected the kind of dramatic AI-driven uplift seen elsewhere in the sector. For some investors, that raises questions about competitive positioning and pricing power.
There is also a perception issue. Despite its reinvention efforts, IBM still carries the legacy image of a mature technology conglomerate rather than a cutting-edge AI disruptor.
In a market captivated by bold innovation stories, narrative can influence valuation just as much as fundamentals.
If capital flows concentrate in a handful of high-growth AI names, diversified players may struggle to keep pace in share price performance.
AI Tension
Yet the sell-off may also highlight a deeper tension within the AI theme. Enterprise adoption of AI tools tends to be gradual, cautious and closely tied to measurable productivity gains.
IBM’s strategy is built around long-term integration rather than short-term hype. While that approach may lack immediate fireworks, it could prove more durable as corporate clients prioritise reliability, governance and cost control.
For now, though, the AI effect is amplifying investor discrimination. In a market eager for rapid transformation, IBM’s more measured path has translated into weaker share performance — a reminder that not all AI exposure is valued equally.
Further discussion
IBM has found itself on the wrong side of the artificial intelligence boom, with its shares tumbling more than 13% after Anthropic unveiled a new capability that directly targets one of the company’s most enduring revenue pillars: COBOL modernisation.
The sell‑off reflects a broader market anxiety that AI is beginning to erode long‑protected niches in enterprise technology, and IBM has become the latest high‑profile casualty.
For decades, IBM has been synonymous with mainframe computing and the maintenance of vast COBOL‑based systems that underpin global finance, government services, airlines, and retail transactions.
These systems are notoriously complex, expensive to update, and dependent on a shrinking pool of specialist developers.
Premium Brand
That scarcity has long worked in IBM’s favour, allowing it to charge a premium for modernisation and support.
Anthropic’s announcement threatens to upend that equation. Its Claude Code tool, the company claims, can automate the most time‑consuming and costly parts of understanding and restructuring legacy COBOL environments.
Tasks that once required teams of analysts months to complete—mapping dependencies, documenting workflows, identifying risks—can now be accelerated dramatically through AI‑driven analysis.
The implication is clear: modernising legacy systems may no longer require the same level of human expertise, nor the same level of spending.
Investors reacted swiftly. IBM’s share price fell to $223.35, extending a year‑to‑date decline of more than 24%, before recovering to $229.39.
IBM one-year chart as of 24th February 2026
The drop reflects not only concerns about lost revenue, but also the fear that IBM’s competitive moat—built on decades of institutional reliance on COBOL—may be eroding faster than expected.
The timing has amplified market jitters. Only days earlier, cybersecurity stocks were hit by another Anthropic announcement: Claude Code Security, a feature designed to scan codebases for vulnerabilities.
AI Mood Logic
The rapid expansion of AI into specialised technical domains has created a ‘sell first, ask questions later’ mood across the market, with investors increasingly wary of companies whose business models depend on labour‑intensive or legacy‑bound processes.
For IBM, the challenge now is to demonstrate that it can harness AI rather than be displaced by it.
The company has invested heavily in its own AI initiatives, but the latest market reaction suggests investors are unconvinced that these efforts will offset the threat to its traditional strongholds.
The AI revolution is reshaping the technology landscape at speed. IBM’s sharp decline is a reminder that even the industry’s oldest giants are not insulated from disruption—and that the next wave of AI competition may hit the most established players hardest.
But remember, this is IBM we are talking about.
Explainer
What is COBOL?
COBOL is an old but remarkably durable programming language created in the late 1950s to run business, finance, and government systems, and it’s still powering much of the world’s banking and administrative infrastructure today.
It was designed to read almost like plain English, making it easier for non‑technical managers to understand, and its stability means many core systems have never been replaced.
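That English-like style is easiest to see in a short sample. The snippet below is a hypothetical, minimal payroll calculation of the kind these legacy systems still run — illustrative only, not taken from any real codebase:

```cobol
       IDENTIFICATION DIVISION.
       PROGRAM-ID. PAYROLL.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01 HOURS-WORKED   PIC 9(3)    VALUE 40.
       01 HOURLY-RATE    PIC 9(3)V99 VALUE 25.50.
       01 GROSS-PAY      PIC 9(5)V99.
       01 GROSS-PAY-OUT  PIC ZZZZ9.99.
       PROCEDURE DIVISION.
      *> Arithmetic reads almost like an English sentence:
           MULTIPLY HOURS-WORKED BY HOURLY-RATE GIVING GROSS-PAY.
           MOVE GROSS-PAY TO GROSS-PAY-OUT.
           DISPLAY "GROSS PAY: " GROSS-PAY-OUT.
           STOP RUN.
```

Verbose by modern standards, but unambiguous enough that a non-programmer can follow the business logic — which is exactly why so much of it has survived, and why mapping the dependencies across millions of such lines is the labour-intensive work AI tools now target.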
For much of the past three years, the so‑called Magnificent Seven – Apple, Microsoft, Alphabet, Amazon, Meta, Tesla and Nvidia – have powered US equities to repeated record highs.
Their sheer scale, earnings strength and centrality to the AI boom turned them into a market narrative as much as an investment theme.
But as 2026 unfolds, the question is no longer whether they can keep leading the market higher, but whether the idea of treating them as a single trade still makes sense.
The short answer is closer to: the trade isn’t dead, but the era of effortless, broad‑based mega‑cap dominance is fading.
Mag 7 fatigue
The first sign of fatigue is the breakdown in cohesion. Last year, only a minority of the seven outperformed the wider S&P 500, a sharp contrast to the near‑uniform surges of 2023 and early 2024.
Nvidia and Alphabet continue to benefit from the structural demand for AI infrastructure and cloud‑driven productivity gains. Others, however, appear to be wrestling with slower growth, regulatory pressure or strategic resets.
Apple faces a maturing hardware cycle, Tesla is contending with intensifying global competition, and Meta’s spending plans continue to divide investors.
Mag 7 trade – which company is missing?
Divergence
This divergence matters. For years, investors could simply buy the group and let the rising tide of AI enthusiasm and index concentration do the work.
That simplicity has evaporated. Stock‑picking is back, and the market is finally distinguishing between companies with accelerating earnings power and those relying on past momentum.
At the same time, market breadth is improving. Capital is rotating into industrials and defensive sectors as investors seek exposure to areas that have lagged the mega‑cap rally, even as AI disruption weighs on software, legal and financial‑services names.
Healthy future
This broadening is healthy: it reduces concentration risk and signals that the U.S. economy is no longer dependent on a handful of tech giants to sustain equity performance.
Yet it would be premature to declare the Magnificent Seven irrelevant. Their combined earnings growth is still expected to outpace the rest of the index, and their role in AI, cloud computing and digital infrastructure remains foundational.
Change
What has changed is the nature of the trade. These are no longer seven interchangeable vehicles for tech exposure; they are seven distinct stories with diverging trajectories.
The Magnificent Seven haven’t left the stage. They have likely stopped performing in unison – and for investors, that marks the beginning of a more nuanced, more selective chapter.
China’s humanoid robotics sector has undergone a startling transformation over the past year, shifting from online punchline to global headline.
At the 2026 Spring Festival Gala — the world’s most‑watched television broadcast — a troupe of Chinese-built humanoids delivered a polished sequence of kung fu routines, synchronised dance moves and acrobatic flips, a performance that contrasted sharply with their awkward public outings just twelve months earlier.
From failure to back flips – in one year
In early 2025, China’s humanoids were better known for wobbling through folk dances and collapsing mid‑marathon.
Clips of stumbles and system failures circulated widely, fuelling scepticism about whether the country’s robotics ambitions were more hype than substance.
Yet the past year has seen a rapid tightening of engineering, manufacturing and AI integration — and the results are now impossible to ignore.
Analysts note that China’s advantage is structural as much as technical. The country controls a nearly vertically integrated robotics supply chain, from rare earths and high‑performance magnets to batteries and actuators.
Unitree scales up
This ecosystem has enabled companies such as Unitree to scale production at a pace Western rivals struggle to match, while keeping prices dramatically lower.
Unitree’s G1 humanoid, for example, carries a base price of around $13,500, far below the expected near‑term pricing of Tesla’s Optimus platform.
The Gala performance reportedly showcased more than choreography. The robots demonstrated improved dexterity, balance and tool‑handling — capabilities that hint at real industrial potential.
Analysts argue that flips and weapon routines are impressive, but the true economic value lies in tasks requiring fine motor control, endurance and the ability to chain multiple actions together.
These are the areas where humanoids could eventually reshape logistics, manufacturing and even frontline service roles.
Hurdles remain
Still, significant hurdles remain. Reliability in messy, human‑centred environments is far from solved, and the underlying AI models — the systems that allow robots to reason, adapt and plan — remain the decisive battleground.
As one analyst reportedly put it, the robot ‘will only be as useful as its model’, a reminder that physical prowess alone won’t deliver the productivity revolution China hopes for.
Even so, the past year marks a turning point. What was once a source of online mockery has become a showcase of national ambition.
If China maintains its current momentum, the global robotics race may be entering a new, more competitive phase — and this time, the world is paying attention.
Top Chinese Humanoid Robots and What They Do
China’s humanoid robotics industry has exploded in scale and ambition, with hundreds of domestic models now in development or deployment — many designed for real-world tasks, research and emerging commercial use.
1. Unitree G1
The Unitree G1 is built for agility and athletic performance and was featured in high-profile public displays.
Its advanced motors, balance systems and AI control allow dynamic motion — from kung fu to flips — making it a popular research and entertainment platform.
• Use: demonstrations, research, potential service and logistics applications
• Production goals: Unitree aims to ship up to 20,000 robots in 2026, a dramatic increase from 5,500 in 2025.
2. AgiBot Series
AgiBot has several humanoid designs oriented toward industrial and laboratory tasks, such as vehicle inspections or precision work, using RGB-D cameras and lidar sensors.
• RAISE A1 — tall, capable of 7 km/h walking and heavy lifting
• Yuanzheng A2 — bipedal, sensor-driven for fine manipulation
• Lingxi X1 — open-source design to support wider development
3. Diverse 2026 Models Across Industries
China’s ecosystem now includes many specialised humanoids, each targeting different sectors:
• Dr02 (DEEP Robotics) – industrial-grade, all-weather use
• L7 (Robot Era) – versatile and modular for logistics/research
• Walker S2 (UBTECH) – continuous operation on factory floors
• Forerunner K2 (Kepler Robotics) – precision tasks with advanced sensors
• XMAN-R1 (Keenon Robotics) – service automation and collaborative work
• Stardust Smart S1 (Astribot) – agile and adaptable for commercial interaction
Each of these models shows how far Chinese makers have moved past basic balance and walking, toward real manipulation and decision-making.
Capabilities: From Tools to Interaction
Modern Chinese humanoids are increasingly about practical capability, not just spectacle:
Tool handling: Research and industrial models are designed to grip, carry and operate tools, approaching tasks like part assembly or quality checks in controlled environments.
Sensor integration: Latest designs combine lidar, cameras, IMUs and advanced control software — giving robots robust perception for navigation and object manipulation.
AI and language interaction: Efforts are underway to combine large language models with robot control systems — enabling natural language instructions and more flexible task execution.
Who’s Using Them?
While many humanoids remain in research or industrial contexts today, interest is rising rapidly:
✔️ Research and development labs
✔️ Corporate facilities (testing automation)
✔️ Robotics education and exhibitions
✔️ Early service roles in retail and hospitality
Consumer demand in China has surged since high-visibility events like the Spring Festival Gala, and delivery dates for popular models are being pushed out due to pre-orders.
China’s humanoid robot landscape in 2026 spans high-performance showpieces, industrial task specialists and service-ready platforms.
With thousands of units shipped and ambitious production plans underway, the country is rapidly evolving from prototype demonstrations to tangible real-world deployment.
A quiet but consequential shift is taking place across the global technology landscape: quantum computing is no longer a distant scientific ambition but an emerging commercial reality.
A new wave of breakthroughs is accelerating timelines, and data‑centre operators — already strained by the explosive growth of AI workloads — are being forced to rethink their infrastructure from the ground up.
The latest reporting highlights how this ‘quantum moment’ is reshaping priorities across the sector.
Advancements in Quantum computing
For years, quantum computing has been framed as a long‑term bet, with practical applications perpetually a decade away. That narrative is now being challenged.
Advances in qubit stability, error‑correction techniques and photonic architectures are pushing the field closer to machines capable of solving commercially meaningful problems.
Industry leaders increasingly argue that hybrid quantum–classical systems will begin appearing inside data centres before the end of the decade, creating a new class of high‑value workloads.
This shift is happening at a time when data centres are already under unprecedented strain. The rapid adoption of generative AI has driven demand for power, cooling and specialised silicon to levels few operators anticipated.
Layered complexity
Quantum computing adds a new layer of complexity: these machines require ultra‑stable environments, extreme cooling and highly specialised networking.
As a result, data‑centre design is entering a new phase, with operators exploring everything from cryogenic‑ready layouts to quantum‑secure communication links.
The strategic implications are significant. Hyperscalers are positioning themselves early, investing in quantum‑safe encryption, photonic interconnects and experimental quantum modules that can be slotted into existing facilities.
Objective
The goal is to ensure that when quantum hardware becomes commercially viable, the supporting infrastructure is already in place.
This mirrors the early days of cloud computing, when capacity was built ahead of demand — a gamble that ultimately paid off.
Yet uncertainty remains. Some analysts caution that full‑scale commercialisation could still be decades away, pointing to slow revenue growth and persistent engineering challenges.
Even so, the direction of travel is clear: quantum computing is moving out of the lab and into the strategic planning of the world’s largest data‑centre operators.
If AI defined the last wave of infrastructure investment, quantum may define the next. And for an industry already racing to keep up, the clock has started ticking.
Explainer
What are Photonic Architectures?
Photonic architectures in quantum computing refer to systems that use light particles (photons) as the fundamental units of quantum information — instead of electrons or superconducting circuits.
These architectures are gaining traction because photons offer several unique advantages:
Key Features of Photonic Quantum Architectures
Feature | Description
Qubits via photons | Quantum bits are encoded in properties of light, such as polarisation or phase.
Room-temperature operation | Unlike superconducting systems, photonic setups often don’t require cryogenic cooling.
Low noise and decoherence | Photons are less prone to environmental interference, improving stability.
Modularity and scalability | Photonic systems can be built using modular optical components, ideal for scaling.
OpenAI has made a decisive move in the fast‑evolving world of autonomous AI agents by hiring Peter Steinberger, the Austrian developer behind the viral open‑source project OpenClaw.
The announcement, made by CEO Sam Altman, signals a strategic push towards building more capable personal AI agents designed to complete meaningful tasks for their users.
Steinberger’s creation, OpenClaw—previously known as Clawdbot and Moltbot—rose to prominence for its ability to automate real digital tasks.
Rapid Adoption
Its rapid adoption highlighted a growing appetite for AI systems that move beyond conversation and into practical execution.
Altman reportedly described Steinberger as ‘a genius with a lot of amazing ideas about the future’. He also emphasised that agentic systems will soon become central to OpenAI’s product ecosystem.
Crucially, it was reported that OpenClaw will not be absorbed into a closed platform. Instead, it will reportedly continue as an open‑source project under an independent foundation, with OpenAI providing support.
This approach preserves the community‑driven development model that helped the tool gain traction, while allowing Steinberger to focus on advancing agent capabilities within OpenAI’s broader framework.
Steinberger
In a blog post, Steinberger reportedly explained that although OpenClaw could have grown into a large standalone company, he was more motivated by the opportunity to ‘change the world’ than by building another corporate venture.
His move comes amid intensifying competition in the agent space. Major tech firms are racing to define the next generation of AI assistants capable of coordinating complex tasks across multiple platforms.
OpenAI’s decision to bring Steinberger onboard underscores the company’s belief that autonomous agents will shape the next phase of AI adoption.
With OpenClaw remaining open and Steinberger now leading internal development, the stage is set for rapid innovation in personal AI systems.
Nvidia has formally severed its financial ties with Arm Holdings, selling the final tranche of its shares and closing the book on one of the semiconductor industry’s most ambitious — and ultimately unsuccessful — takeover attempts.
Regulatory filings reportedly show the chipmaker disposed of roughly 1.1 million Arm shares during the fourth quarter, a holding valued at around $140 million based on Arm’s recent market price.
Sale of entire Arm stake
The move brings Nvidia’s ownership of the British chip‑architecture specialist to zero, marking a symbolic end to a saga that began in 2020 when Nvidia launched a bold $40 billion bid to acquire Arm.
That deal, which would have reshaped the global semiconductor landscape, collapsed under intense regulatory scrutiny and resistance from major industry players concerned about competition and neutrality.
Despite the divestment, the relationship between the two companies is far from over. Nvidia remains a major licensee of Arm’s instruction‑set technology, which underpins its current and next‑generation CPU designs.
Strategic move
Analysts note that the sale appears to be strategic housekeeping rather than a shift in technological direction, especially given Nvidia’s rapid expansion across data‑centre, AI, and edge‑computing markets.
Arm’s shares initially wobbled on news of the disposal but quickly stabilised, even edging higher as investors interpreted Nvidia’s exit as a clearing of legacy baggage rather than a signal of weakening confidence in Arm’s long‑term prospects.
The company, now majority‑owned by SoftBank, continues to push ahead with its growth strategy following its public listing.
For Nvidia, the sale represents a clean break from a failed acquisition that once promised to redefine the industry.
For Arm, it marks another step in its evolution as an independent powerhouse at the centre of global chip design. The strategic paths of the two companies, however, remain intertwined.
Alibaba has unveiled Qwen 3.5, its latest large language model series, signalling a decisive shift in China’s increasingly competitive AI landscape.
Released on the eve of the Chinese New Year, the new model arrives with both open‑weight and hosted versions, giving developers the option to run the system on their own infrastructure or through Alibaba’s cloud platform.
The company emphasises that Qwen 3.5 delivers improved performance and lower operating costs compared with earlier iterations, while introducing ‘native multimodal capabilities’ that allow it to process text, images, and video within a single system.
Ability
What sets Qwen 3.5 apart is its focus on agentic behaviour — the ability for AI systems to take actions, complete multi‑step tasks, and operate with minimal human supervision.
This trend has accelerated globally following recent releases from Anthropic and other U.S.-based developers, prompting Chinese firms to respond rapidly.
Alibaba says Qwen 3.5 is compatible with popular open‑source agent frameworks such as OpenClaw, which has surged in adoption among developers seeking more autonomous AI tools.
Capable
The open‑weight version features 397 billion parameters, fewer than Alibaba’s previous flagship model, yet the company claims significant gains in reasoning and benchmark performance.
It also supports 201 languages and dialects — a notable expansion that reflects Alibaba’s ambition to position Qwen as a global‑ready platform rather than a purely domestic competitor.
With rivals like ByteDance and Zhipu AI launching their own upgraded models, Qwen 3.5 underscores how China’s AI race is evolving from chatbot development to full‑scale autonomous agents — a shift that could reshape software markets and business models worldwide.
For much of the modern AI era, the United States has held a clear advantage in frontier research, compute infrastructure, and commercial deployment.
Silicon Valley’s combination of elite talent, abundant capital, and world‑class semiconductor design created an environment where breakthroughs could scale at extraordinary speed.
Challenge
That dominance, however, is no longer uncontested. China’s accelerating push into advanced AI is reshaping the global technological landscape and posing the most credible challenge yet to America’s leadership.
China’s strategy is not built on a single breakthrough but on coordinated national effort. Beijing has spent years aligning universities, state‑backed funds, and private‑sector giants around a shared objective: achieving self‑sufficiency in critical technologies and becoming a global AI powerhouse.
Competitive
Companies such as Huawei, Baidu, Alibaba and Tencent are now producing increasingly competitive large models, while domestic chipmakers are narrowing the performance gap with U.S. suppliers despite export controls.
Crucially, China’s AI ecosystem benefits from scale and cost advantages that the U.S. cannot easily replicate.
Massive data availability, lower energy costs, and vertically integrated supply chains allow Chinese firms to train and deploy models at prices that appeal to developing economies.
For many countries, especially those already reliant on Chinese infrastructure, adopting a Chinese AI stack is becoming a pragmatic economic choice rather than a geopolitical statement.
Investment returns?
This shift is occurring just as U.S. tech giants embark on unprecedented spending cycles. Hyperscalers are pouring hundreds of billions of dollars into data centres, specialised chips, and model training.
While this investment underscores America’s determination to stay ahead, it also raises questions about sustainability.
Investors are increasingly asking whether such vast capital expenditure can deliver long‑term returns in a world where China is offering cheaper, rapidly improving alternatives.
The emerging reality is not one of immediate American decline but of a genuinely multipolar AI landscape. The U.S. still leads in foundational research, top‑tier talent, and cutting‑edge semiconductor design.
Yet China’s rise shows that a powerful rival economy has mounted a serious challenge at the technological frontier.
The global AI race is no longer defined by a single centre of gravity. Instead, two competing ecosystems — one market‑driven, one reportedly state‑directed — are shaping the future of intelligent technology.
The outcome will influence not only economic power but the digital architecture of much of the world.