The bubble that thinks: Sam Altman’s AI paradox

AI Bubble?

Sam Altman, CEO of OpenAI, has never been shy about bold predictions. But his latest remarks strike a curious chord, with him reportedly saying: ‘Yes, we’re in an AI bubble’.

‘And yes, AI is the most important thing to happen in a very long time’. It’s a paradox that feels almost ‘Altmanesque’—equal parts caution and conviction, like a person warning of a storm while building a lighthouse.

Altman’s reported bubble talk isn’t just market-speak. It’s a philosophical hedge against the frothy exuberance that’s gripped Silicon Valley and Wall Street alike.

With AI valuations soaring past dot-com levels, and retail investors piling into AI-branded crypto tokens and meme stocks, the signs of speculative mania are hard to ignore.

Even ChatGPT, OpenAI’s flagship product, boasts 1.5 billion monthly users—but fewer than 1% pay for it. That’s not a business model—it’s a popularity contest.

Yet Altman isn’t calling for a crash. He’s calling for clarity. His point is that bubbles form around kernels of truth—and AI’s kernel is enormous.

From autonomous agents to enterprise integration in law, medicine, and finance, the technology is reshaping workflows faster than regulators can blink.

Microsoft and Nvidia are pouring billions into infrastructure, not because they’re chasing hype, but because they see utility. Real utility.

Still, Altman’s warning is timely. The AI gold rush has spawned a legion of startups with dazzling demos and dismal revenue. This is likely a dotcom-esque reality: many will fail.

Many are burning cash at unsustainable rates, betting on future breakthroughs that may never materialise. Investors, Altman suggests, need to recalibrate—not abandon ship, but stop treating every chatbot as the next Google.

What makes Altman’s stance compelling is its duality. He’s not a doomsayer, nor a blind optimist. He’s a realist who understands that transformative tech often arrives wrapped in irrational exuberance. The internet had its crash before it changed the world. AI may follow suit.

So, is this a bubble? Yes. But it’s a bubble with brains. And if Altman’s lighthouse holds, it might just guide us through the fog—not to safety, but to something truly revolutionary.

In the meantime, investors would do well to remember that hype inflates, but only utility sustains.

And Altman, ever the ‘paradoxical prophet’, seems to be betting on both.

AMD unveils Instinct MI400: is it time for AMD to challenge Nvidia’s dominance?

AMD and Nvidia chips go head-to-head

AMD has officially lifted the curtain on its next-generation AI chip, the Instinct MI400, marking a significant escalation in the battle for data centre dominance.

Set to launch in 2026, the MI400 is designed to power hyperscale AI workloads with unprecedented efficiency and performance.

Sam Altman and OpenAI have played a surprisingly hands-on role in AMD’s development of the Instinct MI400 series.

Altman appeared on stage with AMD CEO Lisa Su at the company’s ‘Advancing AI’ event, where he revealed that OpenAI had provided direct feedback during the chip’s design process.

Altman described his initial reaction to the MI400 specs as ‘totally crazy’ but expressed excitement at how close AMD has come to delivering on its ambitious goals.

He praised the MI400’s architecture – particularly its memory design – as being well-suited for both inference and training tasks.

OpenAI has already been using AMD’s MI300X chips for some workloads and is expected to adopt the MI400 series when it launches in 2026.

This collaboration is part of a broader trend: OpenAI, traditionally reliant on Nvidia GPUs via Microsoft Azure, is now diversifying its compute stack.

AMD’s open standards and cost-effective performance are clearly appealing, especially as OpenAI also explores its own chip development efforts with Broadcom.

AMD’s one-year chart snapshot


So, while OpenAI isn’t ditching Nvidia entirely, its involvement with AMD signals a strategic shift—and a vote of confidence in AMD’s growing role in the AI hardware ecosystem.

At the heart of AMD’s strategy is the Helios rack-scale system, a unified architecture that allows thousands of MI400 chips to function as a single, massive compute engine.

This approach is tailored for the growing demands of large language models and generative AI, where inference speed and energy efficiency are paramount.

AMD technical power

The MI400 boasts a staggering 432GB of next-generation HBM4 memory and a bandwidth of 19.6TB/sec—more than double that of its predecessor.

With up to four Accelerated Compute Dies (XCDs) and enhanced interconnects, the chip delivers 40 PFLOPs of FP4 performance, positioning it as a formidable rival to Nvidia’s Rubin R100 GPU.

The open UALink interconnect standard, which AMD backs, offers an alternative to Nvidia’s proprietary NVLink, reinforcing the company’s commitment to open standards. This, combined with aggressive pricing and lower power consumption, gives AMD a compelling value proposition.

The company claims its chips can deliver 40% more AI tokens per dollar than Nvidia’s offerings.
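To make that claim concrete, here is a minimal sketch of the arithmetic behind a ‘tokens per dollar’ comparison. The throughput and price figures are entirely hypothetical, chosen only to show how a 40% advantage could arise; they are not AMD’s or Nvidia’s actual numbers.

```python
# Illustrative only: hypothetical throughput and price figures to show
# what a "40% more tokens per dollar" claim means in practice.

def tokens_per_dollar(tokens_per_second: float, price_usd: float) -> float:
    """Inference throughput (tokens/s) per dollar of hardware cost."""
    return tokens_per_second / price_usd

# Hypothetical baseline accelerator: 10,000 tokens/s at a $30,000 price.
baseline = tokens_per_dollar(10_000, 30_000)

# A rival claiming 40% more tokens per dollar could get there by raising
# throughput, cutting price, or a mix of both - e.g. the same throughput
# at a roughly 29% lower price:
challenger = tokens_per_dollar(10_000, 30_000 / 1.4)

print(f"baseline:   {baseline:.4f} tokens/s per $")
print(f"challenger: {challenger:.4f} tokens/s per $")
print(f"advantage:  {challenger / baseline - 1:.0%} more tokens per dollar")
```

The same ratio can be reached along many paths, which is why such headline figures depend heavily on which workload, batch size, and price point the vendor chose.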

Big tech follows AMD

OpenAI, Meta, Microsoft, and Oracle are among the major players already integrating AMD’s Instinct chips into their infrastructure. OpenAI CEO Sam Altman, speaking at the launch event, reportedly praised the MI400’s capabilities, calling it ‘an amazing thing’.

With the AI chip market projected to exceed $500 billion by 2028, AMD’s MI400 is more than just a product—it’s a statement of intent. As the race for AI supremacy intensifies, AMD is betting big on performance, openness, and affordability to carve out a larger share of the future.

It certainly looks like AMD is positioning the Instinct MI400 as a serious contender in the AI accelerator space – and Nvidia will be watching closely.

The MI400 doesn’t just aim to catch up; it’s designed to challenge Nvidia head-on with bold architectural shifts and aggressive performance-per-dollar metrics.

Nvidia has long held the upper hand with its CUDA software ecosystem and dominant market share, especially with the popularity of its H100 and the upcoming Rubin GPU. But AMD is playing the long game.

Nvidia one-year chart snapshot


By offering open standards like UALink and boasting impressive specs like 432GB of HBM4 memory and 40 PFLOPs of FP4 performance, the MI400 is pushing into territory that was once Nvidia’s alone.

Whether it truly rivals Nvidia will depend on a few key factors: industry adoption, software compatibility, real-world performance under AI workloads, and AMD’s ability to scale production and support.

But with major players like OpenAI, Microsoft, and Meta already lining up to adopt the MI400, the momentum appears to be building in AMD’s favour.

Is now a good time to invest in AMD?