A sweeping federal ban on Anthropic’s technology has rapidly become one of the most consequential developments in U.S. government technology policy, following President Donald Trump’s order that all federal agencies — including the Pentagon — must immediately cease using the company’s AI systems.
The directive, issued on 27th February 2026, came just ahead of a Pentagon deadline demanding that Anthropic lift safety restrictions on its Claude models to allow unrestricted military use.
The confrontation with the Pentagon
The dispute escalated after Anthropic refused Defence Department demands to remove guardrails that limit how its AI can be used.
CEO Dario Amodei reportedly stated the company “cannot in good conscience accede” to requirements that would weaken its safety policies, prompting a public standoff.
President Trump reportedly responded by ordering every federal agency to “immediately cease” using Anthropic’s technology, declaring that the government “will not do business with them again.”
Agencies heavily reliant on the company’s tools, including the Department of Defense, have been granted six months to phase out their use.
Defence Secretary Pete Hegseth reportedly went further, designating Anthropic a national‑security “supply‑chain risk”.
This action could prevent military contractors from working with the company and marks the first time such a label has been applied to a major U.S. AI firm.
Impact across government and industry
The ban affects every federal department, from defence and intelligence to civilian agencies.
Contractors supplying AI‑enabled systems must now ensure their tools do not rely on Anthropic’s models, forcing rapid audits and potential redesigns.
Rival AI providers have already begun positioning themselves to fill the gap, with some announcing new Pentagon partnerships within hours of the ban.
The supply‑chain‑risk designation also carries legal and commercial consequences. Anthropic has argued the move is "legally unsound," but it stands for now, effectively placing the company on a federal blacklist.
Political debate
The decision has triggered intense debate across the technology sector. Supporters argue that the government must retain full authority over military AI applications.
Critics warn that forcing companies to abandon safety constraints could set a dangerous precedent.
The ban highlights a deepening fault line in U.S. AI governance: the struggle to balance national‑security imperatives with the ethical frameworks developed by leading AI firms.
As agencies begin disentangling themselves from Anthropic’s systems, the long‑term implications for federal procurement, AI safety norms, and the future of military‑AI collaboration remain unresolved.
When artificial intelligence first ignited investor enthusiasm, it lifted almost every major technology stock.
The narrative was simple: AI would transform industries, boost productivity and unlock vast new revenue streams.
Yet as the cycle matures, markets are becoming more selective. In recent weeks, shares of IBM have drifted lower, illustrating how the ‘AI effect’ can cut both ways.
At first glance, IBM should be a prime beneficiary. The company has spent years repositioning itself around hybrid cloud infrastructure, data analytics and enterprise AI solutions.
Its Watson platform has been refreshed with generative AI tools designed to automate customer service, streamline software development and enhance business decision-making. Management has repeatedly emphasised AI as a core growth engine.
Market Expectations
However, the market’s expectations have shifted. Investors are increasingly rewarding companies that sit at the very heart of AI infrastructure — those supplying advanced semiconductors, high-performance computing capacity and hyperscale cloud services.
These businesses are reporting visible surges in AI-related demand, often accompanied by sharp revenue acceleration and expanding margins.
By contrast, IBM’s AI exposure is embedded within broader consulting and software operations, making its growth trajectory appear steadier rather than explosive.
This distinction matters in a momentum-driven environment. When earnings updates fail to deliver dramatic upside surprises, shares can quickly lose favour.
Less AI Effect
IBM’s results have shown progress in software and recurring revenue, but they have not reflected the kind of dramatic AI-driven uplift seen elsewhere in the sector. For some investors, that raises questions about competitive positioning and pricing power.
There is also a perception issue. Despite its reinvention efforts, IBM still carries the legacy image of a mature technology conglomerate rather than a cutting-edge AI disruptor.
In a market captivated by bold innovation stories, narrative can influence valuation just as much as fundamentals.
If capital flows concentrate in a handful of high-growth AI names, diversified players may struggle to keep pace in share price performance.
AI Tension
Yet the sell-off may also highlight a deeper tension within the AI theme. Enterprise adoption of AI tools tends to be gradual, cautious and closely tied to measurable productivity gains.
IBM’s strategy is built around long-term integration rather than short-term hype. While that approach may lack immediate fireworks, it could prove more durable as corporate clients prioritise reliability, governance and cost control.
For now, though, the AI effect is amplifying investor discrimination. In a market eager for rapid transformation, IBM’s more measured path has translated into weaker share performance — a reminder that not all AI exposure is valued equally.
Further discussion
IBM has found itself on the wrong side of the artificial intelligence boom, with its shares tumbling more than 13% after Anthropic unveiled a new capability that directly targets one of the company’s most enduring revenue pillars: COBOL modernisation.
The sell‑off reflects a broader market anxiety that AI is beginning to erode long‑protected niches in enterprise technology, and IBM has become the latest high‑profile casualty.
For decades, IBM has been synonymous with mainframe computing and the maintenance of vast COBOL‑based systems that underpin global finance, government services, airlines, and retail transactions.
These systems are notoriously complex, expensive to update, and dependent on a shrinking pool of specialist developers.
Premium Brand
That scarcity has long worked in IBM’s favour, allowing it to charge a premium for modernisation and support.
Anthropic’s announcement threatens to upend that equation. Its Claude Code tool, the company claims, can automate the most time‑consuming and costly parts of understanding and restructuring legacy COBOL environments.
Tasks that once required teams of analysts months to complete—mapping dependencies, documenting workflows, identifying risks—can now be accelerated dramatically through AI‑driven analysis.
The implication is clear: modernising legacy systems may no longer require the same level of human expertise, nor the same level of spending.
Investors reacted swiftly. IBM's share price fell to $223.35, extending a year‑to‑date decline of more than 24%, before recovering to $229.39.
IBM one-year chart as of 24th February 2026
The drop reflects not only concerns about lost revenue, but also the fear that IBM’s competitive moat—built on decades of institutional reliance on COBOL—may be eroding faster than expected.
The timing has amplified market jitters. Only days earlier, cybersecurity stocks were hit by another Anthropic announcement: Claude Code Security, a feature designed to scan codebases for vulnerabilities.
AI Mood Logic
The rapid expansion of AI into specialised technical domains has created a ‘sell first, ask questions later’ mood across the market, with investors increasingly wary of companies whose business models depend on labour‑intensive or legacy‑bound processes.
For IBM, the challenge now is to demonstrate that it can harness AI rather than be displaced by it.
The company has invested heavily in its own AI initiatives, but the latest market reaction suggests investors are unconvinced that these efforts will offset the threat to its traditional strongholds.
The AI revolution is reshaping the technology landscape at speed. IBM’s sharp decline is a reminder that even the industry’s oldest giants are not insulated from disruption—and that the next wave of AI competition may hit the most established players hardest.
But remember, this is IBM we are talking about.
Explainer
What is COBOL?
COBOL is an old but remarkably durable programming language created in the late 1950s to run business, finance, and government systems, and it’s still powering much of the world’s banking and administrative infrastructure today.
It was designed to read almost like plain English, making it easier for non‑technical managers to understand, and its stability means many core systems have never been replaced.
Alibaba has unveiled Qwen 3.5, its latest large language model series, signalling a decisive shift in China’s increasingly competitive AI landscape.
Released on the eve of the Chinese New Year, the new model arrives with both open‑weight and hosted versions, giving developers the option to run the system on their own infrastructure or through Alibaba’s cloud platform.
The company emphasises that Qwen 3.5 delivers improved performance and lower operating costs compared with earlier iterations, while introducing ‘native multimodal capabilities’ that allow it to process text, images, and video within a single system.
Ability
What sets Qwen 3.5 apart is its focus on agentic behaviour — the ability for AI systems to take actions, complete multi‑step tasks, and operate with minimal human supervision.
This trend has accelerated globally following recent releases from Anthropic and other U.S. based developers, prompting Chinese firms to respond rapidly.
Alibaba says Qwen 3.5 is compatible with popular open‑source agent frameworks such as OpenClaw, which has surged in adoption among developers seeking more autonomous AI tools.
Capable
The open‑weight version features 397 billion parameters, fewer than Alibaba’s previous flagship model, yet the company claims significant gains in reasoning and benchmark performance.
It also supports 201 languages and dialects — a notable expansion that reflects Alibaba’s ambition to position Qwen as a global‑ready platform rather than a purely domestic competitor.
With rivals like ByteDance and Zhipu AI launching their own upgraded models, Qwen 3.5 underscores how China’s AI race is evolving from chatbot development to full‑scale autonomous agents — a shift that could reshape software markets and business models worldwide.
For much of the modern AI era, the United States has held a clear advantage in frontier research, compute infrastructure, and commercial deployment.
Silicon Valley’s combination of elite talent, abundant capital, and world‑class semiconductor design created an environment where breakthroughs could scale at extraordinary speed.
Challenge
That dominance, however, is no longer uncontested. China’s accelerating push into advanced AI is reshaping the global technological landscape and posing the most credible challenge yet to America’s leadership.
China’s strategy is not built on a single breakthrough but on coordinated national effort. Beijing has spent years aligning universities, state‑backed funds, and private‑sector giants around a shared objective: achieving self‑sufficiency in critical technologies and becoming a global AI powerhouse.
Competitive
Companies such as Huawei, Baidu, Alibaba and Tencent are now producing increasingly competitive large models, while domestic chipmakers are narrowing the performance gap with U.S. suppliers despite export controls.
Crucially, China’s AI ecosystem benefits from scale and cost advantages that the U.S. cannot easily replicate.
Massive data availability, lower energy costs, and vertically integrated supply chains allow Chinese firms to train and deploy models at prices that appeal to developing economies.
For many countries, especially those already reliant on Chinese infrastructure, adopting a Chinese AI stack is becoming a pragmatic economic choice rather than a geopolitical statement.
Investment returns?
This shift is occurring just as U.S. tech giants embark on unprecedented spending cycles. Hyperscalers are pouring hundreds of billions of dollars into data centres, specialised chips, and model training.
While this investment underscores America’s determination to stay ahead, it also raises questions about sustainability.
Investors are increasingly asking whether such vast capital expenditure can deliver long‑term returns in a world where China is offering cheaper, rapidly improving alternatives.
The emerging reality is not one of immediate American decline but of a genuinely multipolar AI landscape. The U.S. still leads in foundational research, top‑tier talent, and cutting‑edge semiconductor design.
Yet China’s rise represents a powerful economy that has mounted a serious challenge to the technological frontier.
The global AI race is no longer defined by a single centre of gravity. Instead, two competing ecosystems — one market‑driven, one reportedly state‑directed — are shaping the future of intelligent technology.
The outcome will influence not only economic power but the digital architecture of much of the world.
The world’s largest cloud providers are engaged in one of the most expensive technological races in history.
Amazon, Microsoft, Meta and Alphabet are collectively on track to spend as much as $700 billion on AI‑related capital expenditure this year — a figure that rivals the GDP of mid‑sized nations and has understandably rattled investors.
The question now dominating markets is simple: can hyperscalers justify this level of spending, and should analysts remain so bullish on their stocks?
A Binary Bet on the Future of AI
The scale of investment has shifted the AI build‑out from a strategic growth initiative to what some analysts describe as a binary corporate bet. With capex up roughly 60% year‑on‑year, the payoff must be both rapid and substantial.
If monetisation fails to keep pace, the consequences could be severe.
This is compounded by the fact that hyperscalers are now consuming nearly all of their operating cash flow to fund AI infrastructure, compared with a decade‑long average of around 40%. That shift alone explains the recent market jitters.
Why Analysts Remain Upbeat
Despite the turbulence, many analysts still argue the long‑term fundamentals remain intact. One reason is that hyperscalers are pre‑selling data‑centre capacity before it is even built, effectively locking in revenue ahead of deployment.
That dynamic supports the bullish view that AI demand is not only real but accelerating.
There is also a belief that as AI tools become embedded across consumer and enterprise workflows, willingness to pay will rise sharply.
If that scenario plays out, today’s eye‑watering capex could look prescient rather than reckless.
The Real Risk: Timelines
The challenge is timing. Much of the infrastructure being deployed — from chips to data‑centre hardware — has a useful life of just three to five years.
That gives hyperscalers a narrow window to recoup investment before the next upgrade cycle hits.
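The arithmetic behind that narrow window can be sketched with a standard annuity calculation. The figures below — a four‑year useful life, an 8% cost of capital, and the roughly $700 billion of sector‑wide capex cited earlier — are illustrative assumptions, not reported company numbers.

```python
# Illustrative payback sketch: the annual cash flow needed to recoup
# AI capex before the hardware reaches end of life.
# All inputs are assumptions for illustration only.

def required_annual_cash_flow(capex: float, useful_life_years: int,
                              discount_rate: float) -> float:
    """Annuity payment that repays `capex` over `useful_life_years`
    at `discount_rate` (standard annuity formula)."""
    r = discount_rate
    n = useful_life_years
    return capex * r / (1 - (1 + r) ** -n)

# Hypothetical: $700bn of sector capex, 4-year useful life, 8% cost of capital
payment = required_annual_cash_flow(700e9, 4, 0.08)
print(f"Required annual cash flow: ${payment / 1e9:.0f}bn")
```

Under those assumptions the sketch implies on the order of $211 billion a year just to return the capital, before any margin — which is why a three‑to‑five‑year hardware life leaves so little room for slow monetisation.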
Without clearer monetisation strategies and firmer payback timelines, investor anxiety is likely to persist.
AI capex justification?
Hyperscalers can justify their AI capex — but only if demand scales as quickly as they expect and monetisation becomes more transparent.
Analysts may be right to stay bullish, but the margin for error is shrinking. In the coming quarters, clarity will matter as much as capital.
Baidu has begun integrating the fast‑rising AI agent OpenClaw directly into its flagship search app, opening the door for 700 million monthly users to access advanced task‑automation tools just ahead of China’s Lunar New Year holiday.
The move marks one of the company’s most significant consumer‑facing upgrades in years, as competition intensifies among Chinese tech giants racing to commercialise AI at scale.
Until now, OpenClaw — an Austrian‑developed, open‑source agent — was primarily accessed through chat platforms such as WhatsApp and Telegram.
Baidu rollout
Baidu’s rollout means users who opt in will be able to message the agent within the search app to handle everyday digital tasks, from scheduling and file organisation to writing code.
The company is also extending OpenClaw’s capabilities across its wider ecosystem, including e‑commerce and cloud services.
The timing is strategic. Lunar New Year is one of the most competitive periods for user acquisition in China’s internet sector, and Baidu’s rivals are also accelerating their AI deployments.
Alibaba, for example, has woven its Qwen chatbot into platforms such as Taobao and Fliggy, enabling end‑to‑end shopping journeys without leaving the app — a shift that has already generated more than 120 million consumer orders in a six‑day period this month.
Popularity surge
OpenClaw’s surge in popularity reflects a broader trend: AI agents are moving beyond conversational novelty and into practical automation, capable of navigating apps, managing email and performing multi‑step online tasks.
Yet the rapid adoption has also drawn warnings from cybersecurity firms, including CrowdStrike, about the risks of granting such agents deep access to enterprise systems.
For Baidu, the integration signals a clear intent to keep pace with global AI leaders while reinforcing its dominance in China’s search market.
For users, it marks the arrival of a more hands‑on, task‑driven AI era — one embedded directly into the tools they already rely on daily, with instant access to millions of users.
Alphabet’s decision to issue a 100-year sterling bond has captured the attention of global markets, not only because of its rarity but also because of what it signals about the escalating competition in artificial intelligence.
100-year sterling bond
A century-long bond denominated in pounds is an extraordinary financing move, particularly for a technology company.
It reflects both investor confidence in Alphabet’s long-term prospects and the scale of capital now required to compete in the AI era.
On the surface, the benefits are clear. Locking in funding for 100 years at today’s rates provides financial certainty. Alphabet can secure vast sums of capital without facing refinancing risk for generations.
In an industry defined by rapid change and enormous upfront costs — from data centres and semiconductor procurement to specialised AI chips and energy infrastructure — patient capital is invaluable.
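A quick bond‑pricing sketch shows why a century of locked‑in funding is so valuable: at a 100‑year horizon, almost all of the instrument's value sits in the coupon stream. The coupon rate and face value below are assumptions for illustration, not Alphabet's actual terms.

```python
# Illustrative century-bond economics. Coupon rate, yield, and face
# value are assumptions, not the actual terms of Alphabet's issue.

def bond_price(face: float, coupon_rate: float, yield_rate: float,
               years: int) -> float:
    """Price of an annual-coupon bond: the discounted coupon stream
    plus the discounted principal repayment."""
    coupon = face * coupon_rate
    coupons_pv = sum(coupon / (1 + yield_rate) ** t
                     for t in range(1, years + 1))
    principal_pv = face / (1 + yield_rate) ** years
    return coupons_pv + principal_pv

# Hypothetical £1bn face value, 5.5% coupon, priced at a 5.5% yield
price = bond_price(1e9, 0.055, 0.055, 100)

# At 100 years, the discounted principal is a tiny slice of the price
principal_share = (1e9 / 1.055 ** 100) / price
```

With those assumed terms the discounted principal comes to well under 1% of the bond's price: in present‑value terms the issuer has effectively sold a coupon stream, and refinancing risk on the principal all but disappears for generations.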
Sterling
The sterling denomination also diversifies Alphabet’s funding base beyond U.S. dollar markets, potentially appealing to European institutional investors seeking stable, long-duration assets.
The bond may also be interpreted as a strategic signal. By committing to long-term financing, Alphabet demonstrates confidence in its ability to generate cash flows well into the next century.
It reinforces the company’s image as a durable, infrastructure-like enterprise rather than a volatile technology stock.
For investors such as pension funds and insurers, a 100-year instrument from a highly rated issuer can offer predictable returns in a world where long-term yield is scarce.
Cyclical
However, the move is not without shortcomings. Committing to fixed debt obligations over such an extended horizon reduces flexibility. While Alphabet currently enjoys strong balance sheet metrics, the technology sector is notoriously cyclical.
A century is an eternity in innovation terms. Business models, regulatory frameworks and geopolitical dynamics may shift dramatically.
Future generations of management will inherit the obligation, regardless of whether today’s AI investments deliver the expected returns.
More broadly, the bond feeds concern about a debt-fuelled AI arms race. As technology giants pour tens of billions into AI research, chip design and cloud infrastructure, borrowing is becoming an increasingly prominent tool.
If rivals respond with similar long-dated issuance, the sector's leverage could rise meaningfully. In a downturn, or if AI monetisation disappoints, heavy debt burdens could amplify financial strain.
Ultimately, Alphabet’s 100-year sterling bond embodies both ambition and risk. It underlines the immense capital demands of the AI revolution while raising questions about whether today’s competitive fervour is encouraging companies to stretch their balance sheets too far in pursuit of technological dominance.
Systemic anxiety
The deeper anxiety is systemic. With Oracle, Amazon, Microsoft and others also scaling up borrowing, total tech‑sector issuance is projected to hit $3 trillion over five years.
Some analysts warn this resembles a late‑cycle credit boom, where investors chase thematic excitement rather than sober fundamentals.
Alphabet’s century bond may be a masterstroke of timing — or a marker of excess.
Either way, it crystallises the tension at the heart of the AI revolution: extraordinary promise, financed by extraordinary debt.
Why a Sterling Bond?
Alphabet issued its 100‑year sterling bond to tap deep UK demand for ultra‑long‑dated assets, especially from pension funds seeking to match long‑term liabilities.
The sterling market offered strong appetite, with orders reportedly reaching nearly ten times the £1 billion on offer.
It also formed part of Alphabet’s broader multi‑currency fundraising drive to finance massive AI‑related capital spending, including data‑centre expansion.
Issuing in sterling diversified its investor base, reduced reliance on U.S. dollar markets, and signalled confidence in its long‑term stability as a quasi‑infrastructure‑scale business.
Anthropic has unveiled Claude Opus 4.6, its most capable AI model to date, marking a significant leap in long‑context reasoning, autonomous agent workflows, and enterprise‑grade coding performance.
The release arrives during a turbulent moment for the global software sector, with markets reacting sharply to fears that Anthropic’s accelerating capabilities could reshape entire categories of knowledge work.
At the heart of Opus 4.6 is a 1‑million‑token context window, a first for Anthropic’s Opus line and a direct response to long‑standing limitations around ‘context rot’ in extended tasks.
Benchmarks
Early benchmarks show a dramatic improvement in maintaining accuracy across vast documents and complex, multi‑step workflows.
This expanded capacity enables the model to analyse large codebases, regulatory filings, or research archives in a single pass—an ability already drawing interest from enterprise users.
Perhaps the most striking development is Anthropic’s progress in agentic systems. Claude Code and the company’s Cowork framework now support coordinated ‘agent teams’, allowing multiple Claude instances to collaborate on sophisticated engineering challenges.
In one internal experiment, a team of 16 Claude agents built a complete Rust‑based C compiler capable of compiling the Linux kernel—producing nearly 100,000 lines of code with minimal human intervention.
Agentic shift
This agentic shift is reshaping expectations around AI‑driven software development. Anthropic positions Opus 4.6 not merely as a tool but as a foundation for autonomous, multi‑agent workflows that can plan, execute, and refine complex tasks over extended periods.
The company highlights improvements in reliability, coding precision, and long‑running task stability as core differentiators.
With enterprise adoption already representing the majority of Anthropic’s business, Opus 4.6 signals a decisive step toward AI systems that operate as high‑level collaborators rather than assistants.
As markets digest the implications, one thing is clear: Anthropic is accelerating the transition from ‘AI that helps’ to AI that works alongside you—and sometimes, entirely on its own.
Legal profession
Anthropic is pushing aggressively into the legal domain, positioning Claude as a high‑precision research and drafting partner for firms handling complex regulatory workloads.
The latest models emphasise long‑context accuracy, allowing lawyers to ingest entire case bundles, contracts, or disclosure sets without losing coherence.
Anthropic has also expanded constitutional AI safeguards, aiming to reduce hallucinations in high‑stakes legal reasoning.
Early adopters report gains in due‑diligence speed, contract comparison, and regulatory interpretation, particularly in financial services and data‑protection work.
While not a substitute for legal judgement, Claude is rapidly becoming a force multiplier for teams managing heavy document‑driven tasks.
A new generation of artificial intelligence is taking shape, and at its centre sits OpenClaw — a fast‑evolving framework that embodies the shift from monolithic AI models to agile, task‑driven agents.
While large language models once dominated the conversation, the momentum has clearly moved toward systems that can reason, plan, and act with far greater autonomy. OpenClaw is emerging as one of the most intriguing examples of this transition.
Appeal
OpenClaw’s appeal lies in its modular design. Instead of relying on a single, all‑purpose model, it orchestrates multiple specialised components that collaborate to complete complex workflows.
This mirrors how real teams operate: one agent may handle research, another may draft content, and a third may evaluate quality or flag risks. The result is a system that behaves less like a tool and more like a coordinated digital workforce.
Defining trend
This shift is not happening in isolation. Across the industry, AI agents are becoming the defining trend. Companies are racing to build systems that can manage inboxes, run businesses, write and deploy code, or even negotiate with other agents.
The ambition is no longer to create a chatbot that answers questions, but an autonomous entity capable of executing multi‑step tasks with minimal human intervention.
OpenClaw stands out because it embraces openness and experimentation. Developers can plug in their own models, customise behaviours, and build agent ‘stacks’ tailored to specific industries.
Adoption
Early adopters in media, finance, and logistics are already exploring how these agents can streamline research, automate reporting, or coordinate supply‑chain decisions.
The promise is efficiency, but also creativity: agents that can generate ideas, test them, and refine them without constant supervision.
Of course, the rise of agentic AI brings challenges. Questions around safety, reliability, and accountability are becoming more urgent. An agent that can act independently must also be constrained responsibly.
Challenge
The industry is now grappling with how to balance autonomy with oversight, ensuring that these systems remain aligned with human goals and values.
Even with these concerns, the trajectory is unmistakable. OpenClaw and its peers represent a decisive step toward AI that is not merely reactive but proactive — capable of taking initiative, managing complexity, and collaborating with humans in more meaningful ways.
As these systems mature, they are likely to reshape not just how we work, but how we think about intelligence itself.
U.S. stock markets are behaving like a mood ring in a thunderstorm—volatile, reactive, and oddly sentimental.
One moment, President Trump threatens a ‘massive increase’ in tariffs on Chinese imports, and nearly $2 trillion in market value evaporates.
The next, he posts that ‘all will be fine’, and futures rebound overnight. It's not just policy; it's theatre, and Wall Street is watching every act with bated breath.
This hypersensitivity isn’t new, but it’s been amplified by the precarious state of global trade and the towering expectations placed on artificial intelligence.
Trump’s recent comments about China’s rare earth export controls triggered a sell-off that saw the Nasdaq drop 3.6% and the S&P 500 fall 2.7%—the worst single-day performance since April.
Tech stocks, especially those reliant on semiconductors and AI infrastructure, were hit hardest. Nvidia alone lost nearly 5%.
Why so fickle? Because the market’s current rally is built on a foundation of hope and hype. AI has been the engine driving valuations to record highs, with companies like OpenAI and Anthropic reaching eye-watering valuations despite uncertain profitability.
The IMF and Bank of England have both warned that we may be in stage three of a classic bubble cycle. Circular investment deals, in which AI startups use funding to buy chips from their investors, have raised eyebrows and comparisons to the dot-com era.
Yet, the bubble hasn't burst. Not yet. The ‘Buffett Indicator’ sits at a historic 220%, and the S&P 500 trades at 188% of U.S. GDP. These are not numbers grounded in sober fundamentals; they're fuelled by speculative fervour and a fear of missing out (FOMO).
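For readers unfamiliar with the metric, the Buffett Indicator is simply total stock-market capitalisation divided by GDP, expressed as a percentage. A minimal sketch, using the article's cited 220% reading and an assumed U.S. GDP of roughly $29 trillion (an illustrative figure, not from the source):

```python
# The 'Buffett Indicator': total market capitalisation as a
# percentage of GDP. The GDP figure below is an assumption
# used purely for illustration.

def buffett_indicator(market_cap: float, gdp: float) -> float:
    """Market cap / GDP, expressed as a percentage."""
    return market_cap / gdp * 100

# A 220% reading against an assumed ~$29tn GDP implies total
# market capitalisation of roughly $64tn:
implied_cap = 2.20 * 29e12
```

Readings far above 100% are conventionally taken to signal that equity values have outrun the underlying economy, which is why the 220% figure draws dot-com comparisons.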
But unlike the dot-com crash, today’s AI surge is backed by real infrastructure: data centres, chip fabrication, and enterprise adoption. Whether that’s enough to justify the valuations remains to be seen.
In the meantime, markets remain twitchy. Trump’s tariff threats are more than political posturing—they’re economic tremors that ripple through supply chains and investor sentiment.
And with AI valuations stretched to breaking point, even a modest correction could trigger a cascade.
So yes, the market is fickle. But it’s not irrational—it’s just balancing on a knife’s edge between technological optimism and geopolitical anxiety.
Influential figures and institutions are sounding the AI alarm—or at least raising eyebrows—about the frothy valuations and speculative fervour surrounding artificial intelligence.
Who’s Warning About the AI Bubble?
🏛️ Bank of England – Financial Policy Committee
View: Stark warning.
Quote: “The risk of a sharp market correction has increased.”
Why it matters: The BoE compares current AI stock valuations to the dotcom bubble, noting that the top five S&P 500 firms now command nearly 30% of market cap—the highest concentration in 50 years.
🏦 Jerome Powell – Chair, U.S. Federal Reserve
View: Cautiously sceptical.
Quote: Assets are “fairly highly valued.”
Why it matters: While not naming AI directly, Powell’s remarks echo broader concerns about tech valuations and investor exuberance.
🧮 Lisa Shalett – Chief Investment Officer, Morgan Stanley Wealth Management
View: Deeply concerned.
Quote: “This is not going to be pretty” if AI capital expenditure disappoints.
Why it matters: Shalett warns that 75% of S&P 500 returns are tied to AI hype, likening the moment to the “Cisco cliff” of the early 2000s.
🌍 Kristalina Georgieva – Managing Director, IMF
View: Watchful.
Quote: Financial conditions could “turn abruptly.”
Why it matters: Georgieva highlights the fragility of markets despite AI’s productivity promise, warning of sudden sentiment shifts.
🧨 Sam Altman – CEO, OpenAI
View: Self-aware caution.
Quote: “People will overinvest and lose money.”
Why it matters: Altman’s admission from inside the AI gold rush adds credibility to bubble concerns—even as his company fuels the hype.
📦 Jeff Bezos – Founder, Amazon
View: Bubble-aware.
Quote: Described the current environment as “kind of an industrial bubble.”
Why it matters: Bezos sees parallels with past tech manias, suggesting that infrastructure spending may be overextended.
🧠 Adam Slater – Lead Economist, Oxford Economics
View: Analytical.
Quote: “There are a few potential symptoms of a bubble.”
Why it matters: Slater points to stretched valuations and extreme optimism, noting that productivity projections vary wildly.
🏛️ Goldman Sachs – Investment Strategy Division
View: Cautiously optimistic.
Quote: “A bubble has not yet formed,” but investors should “diversify.”
Why it matters: Goldman acknowledges the risks while maintaining that fundamentals may still justify valuations—though they advise caution.
AI Bubble voices infographic October 2025
🧠 Julius Černiauskas and the Oxylabs AI/ML Advisory Board
🔍 View: The AI hype is nearing its peak—and may soon deflate.
Černiauskas warns that AI development is straining environmental resources and public trust. He’s pushing for responsible and sustainable AI practices, noting that transparency is lacking in how many models operate.
Ali Chaudhry, research fellow at UCL and founder of ResearchPal, adds that scaling laws are showing their limits. He predicts diminishing returns from simply making models bigger, and expects tightened regulations around generative AI in 2025.
Adi Andrei, co-founder of Technosophics, goes further: he believes the Gen AI bubble is on the verge of bursting, citing overinvestment and unmet expectations.
🧠 Jamie Dimon on the AI Bubble
🔥 View: Sharply concerned; reportedly more worried than most.
Quote: “I’m far more worried than others about the prospects of a downturn.”
Context: Dimon believes AI stock valuations are “stretched” and compares the current surge to the dotcom bubble of the late 1990s.
📉 Key Warnings from Dimon
“Sharp correction” risk: He sees a real danger of a sudden market pullback, especially given how AI-related stocks have surged disproportionately—like AMD jumping 24% in a single day after an OpenAI deal.
“Most people involved won’t do well”: Dimon told the BBC that while AI will ultimately pay off—like cars and TVs did—many investors will lose money along the way.
“Governments are distracted”: He criticised policymakers for focusing on crypto and ignoring real security threats, saying: “We should be stockpiling bullets, guns and bombs”.
“AI will disrupt jobs and companies”: At a trade event in Dublin, he warned that AI’s ubiquity will shake up industries and employment across the board.
And so…
The AI boom of 2025 has ignited a speculative frenzy across global markets, with tech stocks soaring and investors piling into anything labelled “AI-adjacent.”
But beneath the euphoria, a chorus of high-profile warnings is growing louder. From the Bank of England and IMF to JPMorgan’s Jamie Dimon and OpenAI’s Sam Altman, concerns are mounting that valuations are dangerously stretched, capital is overconcentrated, and the narrative is outpacing reality.
Dimon likens the moment to the dotcom bubble, while Altman admits many will “lose money” chasing the hype. Analysts point to classic bubble signals: retail mania, corporate FOMO, and earnings divorced from fundamentals.
Even as AI’s long-term utility remains promising, the short-term exuberance may be setting the stage for a sharp correction.
Whether it’s a pullback or a full-blown crash, the mood is shifting—from uncritical optimism to wary anticipation.
The question now is not whether AI will change the world, but whether markets have priced in too much, too soon.
We have been warned!
The AI bubble will pop – it’s just a matter of when and not if.
Anthropic has unveiled Claude Sonnet 4.5, its most advanced AI model to date—described by the company as ‘the best coding model in the world’.
Released in September 2025, Sonnet 4.5 marks a significant evolution in agentic capability, safety alignment, and real-world task execution.
Designed to power Claude Code and enterprise-grade AI agents, Sonnet 4.5 excels in long-context coding, autonomous software development, and complex business workflows.
Benchmarks
In benchmark trials, the model reportedly sustained 30+ hours of uninterrupted coding, outperforming its predecessor Opus 4.1 and rival systems like GPT-5 and Gemini 2.5.
Anthropic’s emphasis on safety is equally notable. Sonnet 4.5 underwent extensive alignment training to reduce sycophancy, deception, and prompt injection vulnerabilities.
It now operates under Anthropic’s AI Safety Level 3 framework, with filters guarding against misuse in sensitive domains such as chemical or biological research.
New features include ‘checkpoints’ for code rollback, file creation within chat (spreadsheets, slides, documents), and a refreshed terminal interface.
Developers can now build custom agents using the Claude Agent SDK, extending the model’s reach into autonomous task orchestration.
Anthropic’s positioning is clear: Claude Sonnet 4.5 is not merely a chatbot—it’s a colleague. With pricing held at $3 per million input tokens and $15 per million output tokens, the model is accessible yet formidable.
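At those per-token rates, request costs are easy to estimate. A minimal sketch using the published prices (the token counts in the example are illustrative, not figures from Anthropic):

```python
# Rough cost estimate at Claude Sonnet 4.5's published rates:
# $3 per million input tokens, $15 per million output tokens.
INPUT_RATE = 3.00 / 1_000_000    # dollars per input token
OUTPUT_RATE = 15.00 / 1_000_000  # dollars per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of a single request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a long-context coding task: 50k tokens in, 8k tokens out
print(f"${request_cost(50_000, 8_000):.2f}")  # → $0.27
```

Even a heavyweight coding request costs well under a dollar at these rates, which is part of why Anthropic can pitch the model as an always-available colleague rather than a metered luxury.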
Whether this heralds a renaissance or a reckoning remains to be seen—but for now, Anthropic’s latest release sets a new benchmark for intelligent autonomy.
There’s growing concern that parts of the AI boom—especially the infrastructure and monetisation frenzy—might be built on shaky foundations.
The term ‘AI house of cards’ is being used to describe deals like Oracle’s multiyear agreement with OpenAI, which has committed to buying $300 billion in computing power over five years starting in 2027.
That’s on top of OpenAI’s existing $100 billion in commitments, despite having only about $12 billion in annual recurring revenue. Analysts are questioning whether the math adds up, and whether Oracle’s backlog—up 359% year-over-year—is too dependent on a single customer.
Oracle’s stock surged 36% on the announcement, then dropped 5% on Friday as investors took profits and reassessed the risks.
Some analysts remain neutral, citing murky contract details and the possibility that OpenAI’s nonprofit status could limit its ability to absorb the $40 billion it raised earlier this year.
The broader picture? AI infrastructure spending is ballooning into the trillions, echoing the dot-com era’s early adoption frenzy. If demand doesn’t materialise fast enough, we could see a correction.
But others argue this is just the messy middle of a long-term transformation, where data centres become the new utilities.
The AI infrastructure boom—especially the Oracle–OpenAI deal—is raising eyebrows because the financial and operational foundations look more speculative than solid.
Here’s why some analysts are calling it a potential house of cards:
⚠️ 1. Mismatch Between Revenue and Commitments
OpenAI’s annual revenue is reportedly around $10–12 billion, but it’s committed to $300 billion in cloud spending with Oracle over five years.
That’s $60 billion per year, meaning OpenAI would need to grow revenue five- to six-fold just to cover its compute bill.
CEO Sam Altman projects $44 billion in losses before profitability in 2029.
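The mismatch above reduces to simple arithmetic. A quick sketch using the reported figures (these are public estimates, not audited financials):

```python
# Back-of-envelope check of the reported Oracle–OpenAI figures.
# All inputs are press-reported estimates, not confirmed financials.
annual_revenue_bn = 12.0      # OpenAI's reported annual recurring revenue, $bn
cloud_commitment_bn = 300.0   # reported five-year Oracle cloud commitment, $bn
years = 5

annual_spend_bn = cloud_commitment_bn / years           # implied yearly bill
revenue_multiple = annual_spend_bn / annual_revenue_bn  # growth needed

print(f"Implied annual cloud spend: ${annual_spend_bn:.0f}bn")
print(f"Revenue must grow ~{revenue_multiple:.0f}x just to cover compute")
```

On these numbers the implied bill is $60 billion a year against roughly $12 billion of revenue, a 5x gap before a single dollar of profit.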
🔌 2. Massive Energy Demands
The infrastructure needed to fulfil this contract requires electricity equivalent to two Hoover Dams.
That’s not just expensive—it’s logistically daunting. Data centres are planned across five U.S. states, but power sourcing and environmental impact remain unclear.
AI House of Cards Infographic
💸 3. Oracle’s Risk Exposure
Oracle’s debt-to-equity ratio is already 10x higher than Microsoft’s, and it may need to borrow more to meet OpenAI’s demands.
The deal accounts for most of Oracle’s $317 billion backlog, tying its future growth to a single customer.
🔄 4. Shifting Alliances and Uncertain Lock-In
OpenAI recently ended its exclusive cloud deal with Microsoft, freeing it to sign with Oracle—but also introducing risk if future models are restricted by AGI clauses.
Microsoft is now integrating Anthropic’s Claude into Office 365, signalling a diversification away from OpenAI.
🧮 5. Speculative Scaling Assumptions
The entire bet hinges on continued global adoption of OpenAI’s tech and exponential demand for inference at scale.
If adoption plateaus or competitors leapfrog, the infrastructure could become overbuilt—echoing the dot-com frenzy of the early 2000s.
Is this a moment for the AI frenzy to take a breather?
Anthropic, a rival to OpenAI, unveiled Claude 3.5 Sonnet on Thursday, touting it as their most advanced AI model to date.
Claude has joined the ranks of widely used chatbots such as OpenAI’s ChatGPT and Google’s Gemini. Founded by former OpenAI research leaders, Anthropic has secured backing from major tech entities like Google, Salesforce, and Amazon. Over the past year, the company has completed numerous funding rounds, reportedly amassing approximately $7.3 billion.
The announcement comes after Anthropic introduced its Claude 3 series of models in March, followed by OpenAI’s GPT-4o in May 2024. Anthropic has stated that Claude 3.5 Sonnet, the initial model from the new Claude 3.5 series, surpasses the speed of its predecessor, Claude 3 Opus.
“It shows marked improvement in grasping nuance, humour, and complex instructions, and is exceptional at writing high-quality content with a natural, relatable tone,” the company said in a blog post.
It can also write, edit and execute code in a real-time workspace that the user can interact with directly.