Are we looking at an AI house of cards? Bubble worries emerge after Oracle blowout figures

There’s growing concern that parts of the AI boom—especially the infrastructure and monetisation frenzy—might be built on shaky foundations.

The term ‘AI house of cards’ is being used to describe deals like Oracle’s multiyear agreement with OpenAI, which has committed to buying $300 billion in computing power over five years starting in 2027.

That’s on top of OpenAI’s existing $100 billion in commitments, despite having only about $12 billion in annual recurring revenue. Analysts are questioning whether the math adds up, and whether Oracle’s backlog—up 359% year-over-year—is too dependent on a single customer.

Oracle’s stock surged 36%, then dropped 5% Friday as investors took profits and reassessed the risks.

Some analysts remain neutral, citing murky contract details and the possibility that OpenAI’s nonprofit status could limit its ability to absorb the $40 billion it raised earlier this year.

The broader picture? AI infrastructure spending is ballooning into the trillions, echoing the dot-com era’s early adoption frenzy. If demand doesn’t materialise fast enough, we could see a correction.

But others argue this is just the messy middle of a long-term transformation, where data centres become the new utilities.

The AI infrastructure boom—especially the Oracle–OpenAI deal—is raising eyebrows because the financial and operational foundations look more speculative than solid.

Here’s why some analysts are calling it a potential house of cards:

⚠️ 1. Mismatch Between Revenue and Commitments

  • OpenAI’s annual revenue is reportedly around $10–12 billion, but it’s committed to $300 billion in cloud spending with Oracle over five years.
  • That’s $60 billion per year, meaning OpenAI would need to grow revenue roughly 5–6x just to cover its compute costs (a rough calculation is sketched after this list).
  • CEO Sam Altman projects $44 billion in losses before profitability in 2029.
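
As a rough illustration of that gap, here is a minimal back-of-the-envelope sketch in Python using the approximate figures cited above. The inputs are reported estimates rather than audited financials, so treat the output as indicative only.

```python
# Back-of-the-envelope sketch of the commitment-vs-revenue gap described above.
# Inputs are the rough figures reported in the press, not confirmed financials.

total_commitment_usd = 300e9   # reported Oracle compute deal, spread over five years
contract_years = 5
annual_revenue_usd = 12e9      # OpenAI's reported annual recurring revenue (~$10-12B)

annual_commitment = total_commitment_usd / contract_years   # ~$60B per year
revenue_multiple = annual_commitment / annual_revenue_usd   # ~5x at $12B, ~6x at $10B

print(f"Annual compute commitment: ${annual_commitment / 1e9:.0f}B")
print(f"Revenue growth needed just to cover compute: {revenue_multiple:.1f}x")
```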

🔌 2. Massive Energy Demands

  • The infrastructure needed to fulfill this contract requires electricity roughly equivalent to the output of two Hoover Dams.
  • That’s not just expensive—it’s logistically daunting. Data centres are planned across five U.S. states, but power sourcing and environmental impact remain unclear.

💸 3. Oracle’s Risk Exposure

  • Oracle’s debt-to-equity ratio is already 10x higher than Microsoft’s, and it may need to borrow more to meet OpenAI’s demands.
  • The deal accounts for most of Oracle’s $317 billion backlog, tying its future growth to a single customer.

🔄 4. Shifting Alliances and Uncertain Lock-In

  • OpenAI recently ended its exclusive cloud deal with Microsoft, freeing it to sign with Oracle—but also introducing risk if future models are restricted by AGI clauses.
  • Microsoft is now integrating Anthropic’s Claude into Office 365, signalling a diversification away from OpenAI.

🧮 5. Speculative Scaling Assumptions

  • The entire bet hinges on continued global adoption of OpenAI’s tech and exponential demand for inference at scale.
  • If adoption plateaus or competitors leapfrog, the infrastructure could become overbuilt, echoing the dot-com frenzy of the late 1990s.

Is this a moment for the AI frenzy to take a breather?

China’s AI vs U.S. AI – competition heats up – and that’s good for business – isn’t it?

The escalating AI competition between the U.S. and China has taken a new turn with the emergence of DeepSeek, a Chinese AI startup that has introduced a low-cost AI model capable of rivaling the performance of OpenAI’s models.

This development has significant implications for data centres and the broader technology sector.

The rise of DeepSeek

DeepSeek’s recent breakthrough involves the development of two AI models, V3 and R1, which have been created at a fraction of the cost compared to their Western counterparts.

The total training cost for these models is estimated at around $6 million, significantly lower than the billions spent by major U.S. tech firms. This has challenged the prevailing assumption that developing large AI models requires massive financial investments and access to cutting-edge hardware.

Impact on data centres

The introduction of cost-effective AI models like those developed by DeepSeek could lead to a shift in how data centres operate.

Traditional AI models require substantial computational power and energy, leading to high operational costs for data centres. DeepSeek’s models, which are less energy-intensive, could reduce these costs and make AI technology more accessible to a wider range of businesses and organisations.

Technological advancements

DeepSeek’s success also highlights the potential for innovation in AI without relying on the most advanced hardware.

This could encourage other companies to explore alternative approaches to AI development, fostering a more diverse and competitive landscape. Additionally, the open-source nature of DeepSeek’s models promotes collaborative innovation, allowing developers worldwide to customise and improve upon these models.

Competitive dynamics

The competition between DeepSeek and OpenAI underscores the broader U.S.-China rivalry in the AI space. While DeepSeek’s models pose a limited immediate threat to well-funded U.S. AI labs, they demonstrate China’s growing capabilities in AI innovation.

This competition could drive both countries to invest more in AI research and development, leading to faster technological advancements and more robust AI applications.

Broader implications

The rise of DeepSeek and similar AI startups, in China and elsewhere, could have far-reaching implications for the global technology sector.

As AI becomes increasingly integrated into various industries, the ability to develop and deploy AI models efficiently will be crucial.

Data centres will need to adapt to these changes, potentially investing in more energy-efficient infrastructure and exploring new ways to support AI workloads.

Where from here?

DeepSeek’s emergence as a significant player in the AI race highlights the dynamic nature of technological competition between the U.S. and China.

While the immediate impact on data centres and technology may be limited, the long-term implications could be profound.

As AI continues to evolve, the ability to innovate cost-effectively and collaborate across borders will be key to driving progress and maintaining competitiveness in the global technology landscape.

China’s DeepSeek, a low-cost AI challenger, rattles U.S. tech markets

U.S. technology stocks plunged as Chinese startup DeepSeek sparked concerns over AI competitiveness and America’s lead in the sector, triggering a global sell-off.

DeepSeek launched a free, open-source large-language model in late December 2024, claiming it was developed in just two months at a cost of under $6 million.

The developments have stoked concerns about the large amounts of money big tech companies have been investing in AI models and data centres.

DeepSeek is a Chinese artificial intelligence startup that has recently gained significant attention in the AI world. Founded in 2023 by Liang Wenfeng, DeepSeek develops open-source large language models. The company is funded by High-Flyer, a hedge fund also founded by Liang.

The AI models from DeepSeek have demonstrated impressive performance, rivaling some of the best chatbots in the world at a fraction of the cost. This has caused quite a stir in the tech industry, leading to significant drops in the stock prices of major AI-related firms.

The company’s latest model, DeepSeek-V3, is known for its efficiency and high performance across various benchmarks.

DeepSeek’s emergence challenges the notion that massive capital expenditure is necessary to achieve top-tier AI performance.

The company’s success has led to a re-evaluation of the AI market and has put pressure on other tech giants to innovate and reduce costs.

Google releases the first of its Gemini 2.0 AI models

Google released the first version of its Gemini 2.0 family of artificial intelligence models in December 2024.

Gemini 2.0 Flash, as the model is named, is available in a chat version for users worldwide, while an experimental multimodal version of the model, with text-to-speech and image-generation capabilities, is available to developers.

‘If Gemini 1.0 was about organising and understanding information, Gemini 2.0 is about making it much more useful,’ Google CEO Sundar Pichai reportedly said in a statement.

Google’s latest large language model surpasses its predecessors in most user request areas, including code generation and the ability to provide factually accurate responses. However, it falls short compared with Gemini 1.5 Pro when it comes to evaluating longer contexts.

To access the chat-optimized version of the experimental Gemini 2.0 Flash, Gemini users can select it from the drop-down menu on both desktop and mobile web platforms. According to the company, it will soon be available in the Gemini mobile app.

The multimodal version of Gemini 2.0 Flash will be accessible through Google’s AI Studio and Vertex AI developer platforms.

The general availability of Gemini 2.0 Flash’s multimodal version is scheduled for January 2025, along with additional Gemini 2.0 model sizes, Google announced. The company also plans to expand Gemini 2.0 to more Google products in early 2025.

Gemini 2.0 signifies Google’s latest efforts in the increasingly competitive AI industry. Google is competing with major tech rivals such as Microsoft and Meta, as well as startups like OpenAI (the creator of ChatGPT), Perplexity, and Anthropic (which developed Claude).

In addition to the new Flash model, other research prototypes are aimed at developing more ‘agentic’ AI models and experiences. According to the company, agentic models ‘can understand more about the world around you, think multiple steps ahead, and take action on your behalf, with your supervision’.

U.S. AI Safety Institute to evaluate new AI models from OpenAI and Anthropic before their release to the general public

On Thursday 29th August 2024, the U.S. AI Safety Institute announced a testing and evaluation agreement with OpenAI and Anthropic.

This agreement reportedly grants the institute access to significant new AI models from each company before and after their public release.

Recently, several AI developers and researchers have voiced concerns regarding safety and ethics within the growing profit-driven AI industry.

Anthropic releases its most powerful AI Chatbot

Anthropic, a rival to OpenAI, unveiled Claude 3.5 Sonnet on Thursday, touting it as its most advanced AI model to date.

Claude has joined the ranks of widely used chatbots such as OpenAI’s ChatGPT and Google’s Gemini. Founded by former OpenAI research leaders, Anthropic has secured backing from major tech entities like Google, Salesforce, and Amazon. Over the past year, the company has completed numerous funding rounds, reportedly amassing approximately $7.3 billion.

The announcement comes after Anthropic introduced its Claude 3 series of models in March, followed by OpenAI’s GPT-4o in May 2024. Anthropic has stated that Claude 3.5 Sonnet, the initial model from the new Claude 3.5 series, surpasses the speed of its predecessor, Claude 3 Opus.

‘It shows marked improvement in grasping nuance, humour, and complex instructions, and is exceptional at writing high-quality content with a natural, relatable tone,’ the company said in a blog post.

It can also write, edit, and execute code in a real-time workspace that users can engage with directly.

Anthropic launches Claude, its AI chatbot, in Europe

Anthropic, the artificial intelligence (AI) startup backed by Amazon, announced on Monday 13th May 2024 that it was launching its generative AI assistant Claude in Europe on Tuesday 14th May 2024.

Claude.ai will be accessible to both individuals and businesses via the web and an iPhone app. While it is already free on both platforms in the U.K., Anthropic states that this marks the product’s inaugural launch for users in the EU and in non-EU nations such as Switzerland, Norway, and Iceland.

Anthropic is introducing a paid subscription-based version of its Claude assistant, named Claude Pro, which will provide users with access to all its models, including the highly advanced Claude 3 Opus.

In its announcement about launching Claude in European countries, Anthropic emphasized security and privacy as central aspects.

Earlier this year, the EU enacted the first significant global regulatory framework to govern AI.

Amazon to invest up to $4 billion in leading-edge AI firm Anthropic

E-commerce conglomerate Amazon announced on Monday 25th September 2023 that it will invest up to $4 billion in artificial intelligence (AI) firm Anthropic, a rival to ChatGPT developer OpenAI, and take a minority ownership position in the company.

The move further reinforces Amazon’s aggressive AI push as it aims to keep pace with rivals such as Microsoft and Alphabet’s Google.

The two firms reportedly said that they are forming a strategic collaboration to advance generative AI, with the startup selecting Amazon Web Services as its primary cloud provider.