Markets on a Hair Trigger: Trump’s Tariff Whiplash and the AI Bubble That Won’t Pop

Markets move as Trump tweets

U.S. stock markets are behaving like a mood ring in a thunderstorm—volatile, reactive, and oddly sentimental.

One moment, President Trump threatens a ‘massive increase’ in tariffs on Chinese imports, and nearly $2 trillion in market value evaporates.

The next, he posts that ‘all will be fine’, and futures rebound overnight. It’s not just policy—it’s theatre, and Wall Street is watching every act with bated breath.

This hypersensitivity isn’t new, but it’s been amplified by the precarious state of global trade and the towering expectations placed on artificial intelligence.

Trump’s recent comments about China’s rare earth export controls triggered a sell-off that saw the Nasdaq drop 3.6% and the S&P 500 fall 2.7%—the worst single-day performance since April.

Tech stocks, especially those reliant on semiconductors and AI infrastructure, were hit hardest. Nvidia alone lost nearly 5%.

Why so fickle? Because the market’s current rally is built on a foundation of hope and hype. AI has been the engine driving valuations to record highs, with companies like OpenAI and Anthropic reaching eye-watering valuations despite uncertain profitability.

The IMF and Bank of England have both warned that we may be in stage three of a classic bubble cycle. Circular investment deals—where AI startups use funding to buy chips from their investors—have raised eyebrows and comparisons to the dot-com era.

Yet, the bubble hasn’t burst. Not yet. The ‘Buffett Indicator’ (total U.S. stock-market capitalisation as a share of GDP) sits at a historic 220%, and the S&P 500 trades at 188% of U.S. GDP. These are not numbers grounded in sober fundamentals—they’re fuelled by speculative fervour and a fear of missing out (FOMO).
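
A minimal sketch of how that ratio is computed, using illustrative placeholder figures rather than sourced data:

```python
# Back-of-envelope Buffett Indicator: total market cap divided by GDP.
# The dollar figures below are illustrative placeholders, not sourced data.
total_market_cap = 66e12   # assumed total U.S. equity market cap, in dollars
gdp = 30e12                # assumed annual U.S. GDP, in dollars

buffett_indicator = total_market_cap / gdp
print(f"Buffett Indicator: {buffett_indicator:.0%}")  # -> 220%
```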

But unlike the dot-com crash, today’s AI surge is backed by real infrastructure: data centres, chip fabrication, and enterprise adoption. Whether that’s enough to justify the valuations remains to be seen.

In the meantime, markets remain twitchy. Trump’s tariff threats are more than political posturing—they’re economic tremors that ripple through supply chains and investor sentiment.

And with AI valuations stretched to breaking point, even a modest correction could trigger a cascade.

So yes, the market is fickle. But it’s not irrational—it’s just balancing on a knife’s edge between technological optimism and geopolitical anxiety.

One tweet can tip the scales.

Fickle!

AI Crash! Correction or pullback? Something is coming…

AI Bubble concerns

Influential figures and institutions are sounding the AI alarm—or at least raising eyebrows—about the frothy valuations and speculative fervour surrounding artificial intelligence.

Who’s Warning About the AI Bubble?

🏛️ Bank of England – Financial Policy Committee

  • View: Stark warning.
  • Quote: “The risk of a sharp market correction has increased.”
  • Why it matters: The BoE compares current AI stock valuations to the dotcom bubble, noting that the top five S&P 500 firms now command nearly 30% of market cap—the highest concentration in 50 years.

🏦 Jerome Powell – Chair, U.S. Federal Reserve

  • View: Cautiously sceptical.
  • Quote: Assets are “fairly highly valued.”
  • Why it matters: While not naming AI directly, Powell’s remarks echo broader concerns about tech valuations and investor exuberance.

🧮 Lisa Shalett – Chief Investment Officer, Morgan Stanley Wealth Management

  • View: Deeply concerned.
  • Quote: “This is not going to be pretty” if AI capital expenditure disappoints.
  • Why it matters: Shalett warns that 75% of S&P 500 returns are tied to AI hype, likening the moment to the “Cisco cliff” of the early 2000s.

🌍 Kristalina Georgieva – Managing Director, IMF

  • View: Watchful.
  • Quote: Financial conditions could “turn abruptly.”
  • Why it matters: Georgieva highlights the fragility of markets despite AI’s productivity promise, warning of sudden sentiment shifts.

🧨 Sam Altman – CEO, OpenAI

  • View: Self-aware caution.
  • Quote: “People will overinvest and lose money.”
  • Why it matters: Altman’s admission from inside the AI gold rush adds credibility to bubble concerns—even as his company fuels the hype.

📦 Jeff Bezos – Founder, Amazon

  • View: Bubble-aware.
  • Quote: Described the current environment as “kind of an industrial bubble.”
  • Why it matters: Bezos sees parallels with past tech manias, suggesting that infrastructure spending may be overextended.

🧠 Adam Slater – Lead Economist, Oxford Economics

  • View: Analytical.
  • Quote: “There are a few potential symptoms of a bubble.”
  • Why it matters: Slater points to stretched valuations and extreme optimism, noting that productivity projections vary wildly.

🏛️ Goldman Sachs – Investment Strategy Division

  • View: Cautiously optimistic.
  • Quote: “A bubble has not yet formed,” but investors should “diversify.”
  • Why it matters: Goldman acknowledges the risks while maintaining that fundamentals may still justify valuations—though they advise caution.

AI Bubble voices infographic October 2025

🧠 Julius Černiauskas and the Oxylabs AI/ML Advisory Board

🔍 View: The AI hype is nearing its peak—and may soon deflate.

  • Černiauskas warns that AI development is straining environmental resources and public trust. He’s pushing for responsible and sustainable AI practices, noting that transparency is lacking in how many models operate.
  • Ali Chaudhry, research fellow at UCL and founder of ResearchPal, adds that scaling laws are showing their limits. He predicts diminishing returns from simply making models bigger, and expects tightened regulations around generative AI in 2025.
  • Adi Andrei, cofounder of Technosophics, goes further: he believes the Gen AI bubble is on the verge of bursting, citing overinvestment and unmet expectations.

🧠 Jamie Dimon on the AI Bubble

🔥 View: Sharply concerned—by his own account, more worried than most.

  • Quote: “I’m far more worried than others about the prospects of a downturn.”
  • Context: Dimon believes AI stock valuations are “stretched” and compares the current surge to the dotcom bubble of the late 1990s.

📉 Key Warnings from Dimon

  • “Sharp correction” risk: He sees a real danger of a sudden market pullback, especially given how AI-related stocks have surged disproportionately—like AMD jumping 24% in a single day after an OpenAI deal.
  • “Most people involved won’t do well”: Dimon told the BBC that while AI will ultimately pay off—like cars and TVs did—many investors will lose money along the way.
  • “Governments are distracted”: He criticised policymakers for focusing on crypto and ignoring real security threats, saying: “We should be stockpiling bullets, guns and bombs”.
  • “AI will disrupt jobs and companies”: At a trade event in Dublin, he warned that AI’s ubiquity will shake up industries and employment across the board.

And so…

The AI boom of 2025 has ignited a speculative frenzy across global markets, with tech stocks soaring and investors piling into anything labelled “AI-adjacent.”

But beneath the euphoria, a chorus of high-profile warnings is growing louder. From the Bank of England and IMF to JPMorgan’s Jamie Dimon and OpenAI’s Sam Altman, concerns are mounting that valuations are dangerously stretched, capital is overconcentrated, and the narrative is outpacing reality.

Dimon likens the moment to the dotcom bubble, while Altman admits many will “lose money” chasing the hype. Analysts point to classic bubble signals: retail mania, corporate FOMO, and valuations divorced from fundamentals.

Even as AI’s long-term utility remains promising, the short-term exuberance may be setting the stage for a sharp correction.

Whether it’s a pullback or a full-blown crash, the mood is shifting—from uncritical optimism to wary anticipation.

The question now is not whether AI will change the world, but whether markets have priced in too much, too soon.

We have been warned!

The AI bubble will pop – it’s just a matter of when, not if.

Go lock up your investments!

Claude Sonnet 4.5: Anthropic’s Leap Toward Autonomous Intelligence

Anthropic AI Claude

Anthropic has unveiled Claude Sonnet 4.5, its most advanced AI model to date—described by the company as ‘the best coding model in the world’.

Released in September 2025, Sonnet 4.5 marks a significant evolution in agentic capability, safety alignment, and real-world task execution.

Designed to power Claude Code and enterprise-grade AI agents, Sonnet 4.5 excels in long-context coding, autonomous software development, and complex business workflows.

Benchmark

In benchmark trials, the model reportedly sustained 30+ hours of uninterrupted coding, outperforming its predecessor Opus 4.1 and rival systems like GPT-5 and Gemini 2.5.

Anthropic’s emphasis on safety is equally notable. Sonnet 4.5 underwent extensive alignment training to reduce sycophancy, deception, and prompt injection vulnerabilities.

It now operates under Anthropic’s AI Safety Level 3 framework, with filters guarding against misuse in sensitive domains such as chemical or biological research.

New features include ‘checkpoints’ for code rollback, file creation within chat (spreadsheets, slides, documents), and a refreshed terminal interface.

Developers can now build custom agents using the Claude Agent SDK, extending the model’s reach into autonomous task orchestration.
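
The Agent SDK itself isn’t shown in this piece; as a rough stand-in, here is a minimal sketch that calls Sonnet 4.5 through Anthropic’s standard Python client (the anthropic package). The model identifier string is an assumption and should be checked against Anthropic’s current documentation:

```python
# Minimal sketch: one request to Claude Sonnet 4.5 via the Anthropic Python client.
# Assumes the anthropic package is installed and ANTHROPIC_API_KEY is set in the
# environment; the model id below is an assumption, not taken from this article.
import anthropic

client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-sonnet-4-5",  # assumed model identifier
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Refactor this function to remove duplication: ..."}
    ],
)

print(message.content[0].text)
```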

Anthropic’s positioning is clear: Claude Sonnet 4.5 is not merely a chatbot—it’s a colleague. With pricing held at $3 per million input tokens and $15 per million output tokens, the model is accessible yet formidable.
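
At those rates, per-task costs are easy to estimate. A quick sketch of the arithmetic, with the token counts chosen purely for illustration:

```python
# Cost estimate at Sonnet 4.5's quoted pricing: $3 per million input tokens,
# $15 per million output tokens. Token counts are illustrative assumptions.
INPUT_PRICE_PER_M = 3.00
OUTPUT_PRICE_PER_M = 15.00

input_tokens = 200_000   # e.g. a large codebase excerpt plus instructions
output_tokens = 20_000   # e.g. generated patches and explanations

cost = (input_tokens / 1e6) * INPUT_PRICE_PER_M + (output_tokens / 1e6) * OUTPUT_PRICE_PER_M
print(f"Estimated cost: ${cost:.2f}")  # -> $0.90
```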

As AI enters its ‘super cycle’, Claude Sonnet 4.5 signals a shift from conversational novelty to operational necessity.

Whether this heralds a renaissance or a reckoning remains to be seen—but for now, Anthropic’s latest release sets a new benchmark for intelligent autonomy.

Are we looking at an AI house of cards? Bubble worries emerge after Oracle blowout figures

AI Bubble?

There’s growing concern that parts of the AI boom—especially the infrastructure and monetisation frenzy—might be built on shaky foundations.

The term ‘AI house of cards’ is being used to describe deals like Oracle’s multiyear agreement with OpenAI, which has committed to buying $300 billion in computing power over five years starting in 2027.

That’s on top of OpenAI’s existing $100 billion in commitments, despite having only about $12 billion in annual recurring revenue. Analysts are questioning whether the math adds up, and whether Oracle’s backlog—up 359% year-over-year—is too dependent on a single customer.

Oracle’s stock surged 36%, then dropped 5% Friday as investors took profits and reassessed the risks.

Some analysts remain neutral, citing murky contract details and the possibility that OpenAI’s nonprofit status could limit its ability to absorb the $40 billion it raised earlier this year.

The broader picture? AI infrastructure spending is ballooning into the trillions, echoing the dot-com era’s early adoption frenzy. If demand doesn’t materialise fast enough, we could see a correction.

But others argue this is just the messy middle of a long-term transformation—where data centres become the new utilities.

The AI infrastructure boom—especially the Oracle–OpenAI deal—is raising eyebrows because the financial and operational foundations look more speculative than solid.

Here’s why some analysts are calling it a potential house of cards:

⚠️ 1. Mismatch Between Revenue and Commitments

  • OpenAI’s annual revenue is reportedly around $10–12 billion, but it’s committed to $300 billion in cloud spending with Oracle over five years.
  • That’s $60 billion per year, meaning OpenAI would need to grow revenue 5–6x just to cover its compute commitments (see the back-of-envelope sketch after this list).
  • CEO Sam Altman projects $44 billion in losses before profitability in 2029.
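
Those figures are simple enough to sanity-check; a short sketch, taking the mid-point of the reported $10–12 billion revenue range:

```python
# Rough check on the Oracle-OpenAI numbers reported above.
total_commitment = 300e9   # $300B in cloud spend with Oracle
contract_years = 5
annual_revenue = 11e9      # mid-point of the reported $10-12B annual revenue

annual_commitment = total_commitment / contract_years
revenue_multiple = annual_commitment / annual_revenue

print(f"Annual commitment: ${annual_commitment / 1e9:.0f}B")   # -> $60B
print(f"Required revenue growth: ~{revenue_multiple:.1f}x")    # -> ~5.5x
```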

🔌 2. Massive Energy Demands

  • The infrastructure needed to fulfill this contract requires electricity equivalent to two Hoover Dams.
  • That’s not just expensive—it’s logistically daunting. Data centres are planned across five U.S. states, but power sourcing and environmental impact remain unclear.

AI House of Cards Infographic

💸 3. Oracle’s Risk Exposure

  • Oracle’s debt-to-equity ratio is already 10x higher than Microsoft’s, and it may need to borrow more to meet OpenAI’s demands.
  • The deal accounts for most of Oracle’s $317 billion backlog, tying its future growth to a single customer.

🔄 4. Shifting Alliances and Uncertain Lock-In

  • OpenAI recently ended its exclusive cloud deal with Microsoft, freeing it to sign with Oracle—but also introducing risk if future models are restricted by AGI clauses.
  • Microsoft is now integrating Anthropic’s Claude into Office 365, signalling a diversification away from OpenAI.

🧮 5. Speculative Scaling Assumptions

  • The entire bet hinges on continued global adoption of OpenAI’s tech and exponential demand for inference at scale.
  • If adoption plateaus or competitors leapfrog, the infrastructure could become overbuilt—echoing the dot-com frenzy of the early 2000s.

Is this a moment for the AI frenzy to take a breather?

Anthropic releases its most powerful AI Chatbot

Chatbot

Anthropic, a rival to OpenAI, unveiled Claude 3.5 Sonnet on Thursday, touting it as its most advanced AI model to date.

Claude has joined the ranks of widely used chatbots such as OpenAI’s ChatGPT and Google’s Gemini. Founded by former OpenAI research leaders, Anthropic has secured backing from major tech entities like Google, Salesforce, and Amazon. Over the past year, the company has completed numerous funding rounds, reportedly amassing approximately $7.3 billion.

The announcement comes after Anthropic introduced its Claude 3 series of models in March, followed by OpenAI’s GPT-4o in May 2024. Anthropic has stated that Claude 3.5 Sonnet, the initial model from the new Claude 3.5 series, surpasses the speed of its predecessor, Claude 3 Opus.

“It shows marked improvement in grasping nuance, humour, and complex instructions, and is exceptional at writing high-quality content with a natural, relatable tone,” the company said in a blog post.

It can also write, edit, and execute code in a real-time workspace that the user can interact with directly.