Are we looking at an AI house of cards? Bubble worries emerge after Oracle blowout figures

AI Bubble?

There’s growing concern that parts of the AI boom—especially the infrastructure and monetisation frenzy—might be built on shaky foundations.

The term ‘AI house of cards’ is being used to describe deals like Oracle’s multiyear agreement with OpenAI, under which OpenAI has committed to buying $300 billion in computing power over five years starting in 2027.

That’s on top of OpenAI’s existing $100 billion in commitments, despite having only about $12 billion in annual recurring revenue. Analysts are questioning whether the math adds up, and whether Oracle’s backlog—up 359% year-over-year—is too dependent on a single customer.

Oracle’s stock surged 36%, then dropped 5% Friday as investors took profits and reassessed the risks.

Some analysts remain neutral, citing murky contract details and the possibility that OpenAI’s nonprofit status could limit its ability to absorb the $40 billion it raised earlier this year.

The broader picture? AI infrastructure spending is ballooning into the trillions, echoing the dot-com era’s early adoption frenzy. If demand doesn’t materialise fast enough, we could see a correction.

But others argue this is just the messy middle of a long-term transformation, where data centres become the new utilities.

The AI infrastructure boom—especially the Oracle–OpenAI deal—is raising eyebrows because the financial and operational foundations look more speculative than solid.

Here’s why some analysts are calling it a potential house of cards:

⚠️ 1. Mismatch Between Revenue and Commitments

  • OpenAI’s annual revenue is reportedly around $10–12 billion, but it’s committed to $300 billion in cloud spending with Oracle over five years.
  • That’s $60 billion per year, meaning OpenAI would need to grow revenue 5–6x just to cover its compute bill (see the quick arithmetic sketch after this list).
  • CEO Sam Altman projects $44 billion in losses before profitability in 2029.
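
As a quick sanity check on that arithmetic, here is a minimal back-of-envelope sketch in Python. All inputs are the press estimates quoted above, not audited figures.

```python
# Back-of-envelope check of the reported Oracle-OpenAI figures.
# Inputs are the estimates cited in this article, not confirmed financials.
contract_total_bn = 300    # reported five-year Oracle commitment, $bn
contract_years = 5
annual_revenue_bn = 12     # reported OpenAI annual recurring revenue, $bn (upper estimate)

annual_compute_bn = contract_total_bn / contract_years
growth_multiple = annual_compute_bn / annual_revenue_bn

print(f"Implied annual compute spend: ${annual_compute_bn:.0f}bn")
print(f"Revenue growth needed just to match it: {growth_multiple:.1f}x")
# Implied annual compute spend: $60bn
# Revenue growth needed just to match it: 5.0x (roughly 6x on a $10bn revenue estimate)
```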

🔌 2. Massive Energy Demands

  • The infrastructure needed to fulfill this contract requires electricity equivalent to the output of two Hoover Dams.
  • That’s not just expensive—it’s logistically daunting. Data centres are planned across five U.S. states, but power sourcing and environmental impact remain unclear.
AI House of Cards Infographic

💸 3. Oracle’s Risk Exposure

  • Oracle’s debt-to-equity ratio is already 10x higher than Microsoft’s, and it may need to borrow more to meet OpenAI’s demands.
  • The deal accounts for most of Oracle’s $317 billion backlog, tying its future growth to a single customer.

🔄 4. Shifting Alliances and Uncertain Lock-In

  • OpenAI recently ended its exclusive cloud deal with Microsoft, freeing it to sign with Oracle—but also introducing risk if future models are restricted by AGI clauses.
  • Microsoft is now integrating Anthropic’s Claude into Office 365, signalling a diversification away from OpenAI.

🧮 5. Speculative Scaling Assumptions

  • The entire bet hinges on continued global adoption of OpenAI’s tech and exponential demand for inference at scale.
  • If adoption plateaus or competitors leapfrog, the infrastructure could become overbuilt—echoing the dot-com frenzy of the early 2000s.

Is this a moment for the AI frenzy to take a breather?

AI creates paradigm shift in computing – programming AI is like training a person

Teaching or programming?

At London Tech Week, Nvidia CEO Jensen Huang made a striking statement: “The way you program an AI is like the way you program a person.” (Do we really program people, or do we teach them?)

This marks a fundamental shift in how we interact with artificial intelligence, moving away from traditional coding languages and towards natural human communication.

Historically, programming required specialised knowledge of languages like C++ or Python. Developers had to meticulously craft instructions for computers to follow.

Huang argues that AI has now evolved to understand and respond to human language, making programming more intuitive and accessible.

This transformation is largely driven by advancements in conversational AI models, such as ChatGPT, Gemini, and Copilot.

These systems allow users to issue commands in plain English – whether asking an AI to generate images, write a poem, or even create software code. Instead of writing complex algorithms, users can simply ask nicely, much like instructing a colleague or student.

Huang’s analogy extends beyond convenience. Just as people learn through feedback and iteration, AI models refine their responses based on user input.

If an AI-generated poem isn’t quite right, users can prompt it to improve, and it will think and adjust accordingly.

This iterative process mirrors human learning, where guidance and refinement lead to better outcomes.
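
To make the analogy concrete, here is a minimal sketch of ‘programming’ a model in plain English and then refining its output through feedback. It assumes the OpenAI Python SDK and an illustrative model name; any conversational AI service exposes an equivalent loop.

```python
# Minimal sketch: instruct a model in plain English, then refine by feedback.
# Assumes the OpenAI Python SDK (`pip install openai`), an OPENAI_API_KEY in the
# environment, and an illustrative model name; adjust to whatever you have access to.
from openai import OpenAI

client = OpenAI()
messages = [{"role": "user", "content": "Write a four-line poem about data centres."}]

first = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
poem = first.choices[0].message.content
print(poem)

# "Programming" by iteration: keep the conversation going and ask for a revision,
# much as you would guide a colleague or student.
messages += [
    {"role": "assistant", "content": poem},
    {"role": "user", "content": "Good start, but make it rhyme and mention energy use."},
]
revised = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(revised.choices[0].message.content)
```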

The implications of this shift are profound. AI is no longer just a tool for experts – it is a great equalizer, enabling anyone to harness computing power without technical expertise.

As businesses integrate AI into their workflows, employees will need to adapt, treating AI as a collaborative partner rather than a mere machine.

This evolution in AI programming is not just about efficiency; it represents a new era where technology aligns more closely with human thought and interaction.

China’s AI vs U.S. AI – competition heats up – and that’s good for business – isn’t it?

DeepSeek AI

The escalating AI competition between the U.S. and China has taken a new turn with the emergence of DeepSeek, a Chinese AI startup that has introduced a low-cost AI model capable of rivaling the performance of OpenAI’s models.

This development has significant implications for data centres and the broader technology sector.

The rise of DeepSeek

DeepSeek’s recent breakthrough involves the development of two AI models, V3 and R1, which have been created at a fraction of the cost compared to their Western counterparts.

The total training cost for these models is estimated at around $6 million, significantly lower than the billions spent by major U.S. tech firms. This has challenged the prevailing assumption that developing large AI models requires massive financial investments and access to cutting-edge hardware.

Impact on data centres

The introduction of cost-effective AI models like those developed by DeepSeek could lead to a shift in how data centres operate.

Traditional AI models require substantial computational power and energy, leading to high operational costs for data centres. DeepSeek’s models, which are less energy-intensive, could reduce these costs and make AI technology more accessible to a wider range of businesses and organizations.

Technological advancements

DeepSeek’s success also highlights the potential for innovation in AI without relying on the most advanced hardware.

This could encourage other companies to explore alternative approaches to AI development, fostering a more diverse and competitive landscape. Additionally, the open-source nature of DeepSeek’s models promotes collaborative innovation, allowing developers worldwide to customise and improve upon these models.

Competitive dynamics

The competition between DeepSeek and OpenAI underscores the broader U.S.-China rivalry in the AI space. While DeepSeek’s models pose a limited immediate threat to well-funded U.S. AI labs, they demonstrate China’s growing capabilities in AI innovation.

This competition could drive both countries to invest more in AI research and development, leading to faster technological advancements and more robust AI applications.

Broader implications

The rise of DeepSeek and similar Chinese and other AI startups could have far-reaching implications for the global technology sector.

As AI becomes increasingly integrated into various industries, the ability to develop and deploy AI models efficiently will be crucial.

Data centres will need to adapt to these changes, potentially investing in more energy-efficient infrastructure and exploring new ways to support AI workloads.

Where from here?

DeepSeek’s emergence as a significant player in the AI race highlights the dynamic nature of technological competition between the U.S. and China.

While the immediate impact on data centres and technology may be limited, the long-term implications could be profound.

As AI continues to evolve, the ability to innovate cost-effectively and collaborate across borders will be key to driving progress and maintaining competitiveness in the global technology landscape.

Google releases the first of its Gemini 2.0 AI models

Google AI

Google released the first version of its Gemini 2.0 family of artificial intelligence models in December 2024.

Gemini 2.0 Flash, as the model is named, is available in a chat version for users worldwide, while an experimental multimodal version of the model, with text-to-speech and image generation capabilities, is available to developers.

‘If Gemini 1.0 was about organising and understanding information, Gemini 2.0 is about making it much more useful,’ Google CEO Sundar Pichai reportedly said in a statement.

Google’s latest large language model surpasses its predecessors in most user request areas, including code generation and the ability to provide factually accurate responses. However, it falls short compared to Gemini 1.5 Pro when it comes to evaluating longer contexts.

To access the chat-optimized version of the experimental 2.0 Flash, Gemini users can select it from the drop-down menu on both desktop and mobile web platforms. According to the company, it will soon be available in the Gemini mobile app.

The multimodal version of Gemini 2.0 Flash will be accessible through Google’s AI Studio and Vertex AI developer platforms.
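
For developers, access through AI Studio looks roughly like the sketch below. It assumes the google-generativeai Python package and the experimental model identifier used at launch; the exact model name your account exposes may differ.

```python
# Minimal sketch of calling the experimental Gemini 2.0 Flash model via AI Studio.
# Assumes `pip install google-generativeai`, an AI Studio API key, and that the
# experimental model identifier below is still the one exposed by the API.
import google.generativeai as genai

genai.configure(api_key="YOUR_AI_STUDIO_API_KEY")

model = genai.GenerativeModel("gemini-2.0-flash-exp")  # experimental identifier at launch
response = model.generate_content("Summarise Gemini 2.0 Flash in two sentences.")
print(response.text)
```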

The general availability of Gemini 2.0 Flash’s multimodal version is scheduled for January, along with additional Gemini 2.0 model sizes, Google announced. The company also plans to expand Gemini 2.0 to more Google products in early 2025.

Gemini 2.0 signifies Google’s latest efforts in the increasingly competitive AI industry. Google is competing with major tech rivals such as Microsoft and Meta, as well as startups like OpenAI (the creator of ChatGPT), Perplexity, and Anthropic, which developed Claude.

In addition to the new Flash model, other research prototypes are aimed at developing more ‘agentic’ AI models and experiences. According to the company, agentic models ‘can understand more about the world around you, think multiple steps ahead, and take action on your behalf, with your supervision’.

Google Unveils AI Chatbot Gemini 1.5 Flash as competition from OpenAI heats up

AI Chatbot Gemini

Google is advancing the frontiers of artificial intelligence (AI) with its new release, Gemini 1.5 Flash, which is set to transform our online information interactions.

Unveiled at Google I/O 2024 on 15th May 2024, this latest model pairs sophisticated features with rapid performance and efficiency.

The unveiling comes a day after OpenAI announced its newest artificial intelligence (AI) model, GPT-4o.

Google Gemini 1.5 Flash

Gemini 1.5 Flash is engineered for exceptional speed, processing queries with reduced latency, which makes it perfectly suited for real-time applications.

Context Understanding

Similar to its forerunner, Gemini 1.5 Pro, Flash is adept at contextual understanding. It is capable of interpreting user prompts through multiple modalities such as text, images, video, and speech.
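
As an illustration of that multimodal prompting, the sketch below passes an image alongside a plain-text question. It assumes the google-generativeai Python package and the public "gemini-1.5-flash" model name; the image file path is hypothetical.

```python
# Minimal multimodal sketch: ask Gemini 1.5 Flash about a local image.
# Assumes `pip install google-generativeai pillow`, an AI Studio API key, and a
# hypothetical local file "chart.png"; the model name may change over time.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_AI_STUDIO_API_KEY")

model = genai.GenerativeModel("gemini-1.5-flash")
chart = Image.open("chart.png")  # any local image you want analysed
response = model.generate_content([chart, "What trend does this chart show?"])
print(response.text)
```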

Smaller Scaled Version

Google also introduced a scaled-down version, Gemini Nano, which runs locally on devices.

AI quick answers

A prominent feature of Gemini 1.5 Flash is the AI Overviews integration. These ‘précis’ summaries deliver rapid responses to intricate inquiries. Users are presented with a topical overview and pertinent links for additional research. The AI Overviews feature is currently being introduced to U.S. users, with worldwide availability anticipated by the end of the year.

Future of Google search

Gemini 1.5 Flash is Google’s latest endeavour to improve search experiences. Whether it’s for research, planning, or brainstorming, this AI model simplifies the process. With the advent of generative AI, Google Search is becoming increasingly potent, enabling users to effortlessly access reliable information.

Apple and Alphabet reportedly in Gemini AI talks

AI mobile phone

Apple playing AI catchup

Apple is reportedly engaged in negotiations to acquire a licence for Google’s Gemini, a generative AI platform, with the intention of integrating it into iPhones. These ongoing discussions may result in Gemini enhancing iPhone software with new features later this year.

The terms, branding, and implementation details have not been finalised. This potential partnership could significantly impact the AI capabilities of future iPhones.

Google’s woke AI needs fixing!

Chatbot learning

Google’s ‘Woke’ AI Problem needs attention

In recent days, Google’s artificial intelligence (AI) tool, Gemini, has faced intense criticism online. As the tech giant’s answer to the OpenAI/Microsoft chatbot ChatGPT, Gemini can respond to text queries and even generate images based on prompts. However, its journey has been far from smooth.

The AI answer is wrong

The issues began when Gemini’s image generator inaccurately portrayed historical figures. For instance, it depicted the U.S. Founding Fathers as including a black man, and German World War II soldiers as including both a black man and an Asian woman.

AI answer from Google’s Gemini Chatbot

Google swiftly apologized and paused the tool, acknowledging that it had “missed the mark.”

It gets worse

But the controversy didn’t end there. Gemini’s text responses veered into over-political correctness. When asked whether Elon Musk posting memes was worse than Hitler’s atrocities, it replied that there was “no right or wrong answer.” In another instance, it refused to misgender high-profile trans woman Caitlyn Jenner, even if it meant preventing nuclear apocalypse. Elon Musk himself found these responses “extremely alarming.”

Nuance

The root cause lies in the vast amounts of data AI tools are trained on. Publicly available internet data contains biases, leading to embarrassing mistakes. Google attempted to counter this by instructing Gemini not to make assumptions, but it backfired. Human history and culture are nuanced, and machines struggle to grasp these complexities.

Political bias

Google now faces the challenge of striking a balance: addressing bias without becoming absurdly politically correct. As Gemini evolves, finding this equilibrium will be crucial for its survival.

After all, it’s not just about AI, is it? It’s about navigating the delicate intersection of technology, culture, and ethics.

Definition of nuance – I asked ChatGPT for its definition…

Nuance refers to the subtle, intricate, or delicate aspects of something. It encompasses the fine distinctions, shades of meaning, and context-specific interpretations that add depth and complexity to a situation, conversation, or piece of art. In essence, nuance recognizes that not everything can be neatly categorized or expressed in black-and-white terms; rather, it acknowledges the richness and variability of human experiences and ideas. Whether in literature, politics, or everyday interactions, appreciating nuance allows us to navigate the complexities of life with greater understanding and empathy.

Google halts Gemini AI image generator after it created inaccurate historical content

Gemini chatbot illustration

Google on Thursday 22nd February 2024 said it is pausing its Gemini artificial intelligence (AI) image generation feature after acknowledging that it produced ‘inaccuracies’ in historical pictures.

Users had been reporting that the AI tool generated images of historical figures, like the U.S. Founding Fathers as people of colour, calling this inaccurate.

Google posted a statement on Thursday 22nd February 2024, saying that it will pause Gemini’s feature to generate images of people and will re-release an ‘improved’ version soon.

Is Google struggling to keep up with the AI race?

The image generator tool was launched at the start of February 2024 through Gemini, which was originally called Bard.

It is facing challenges at a time when Google is trying to catch up with the Microsoft-backed OpenAI, whose technology powers Microsoft’s Copilot.

Google releases Gemini, its latest AI venture

Chatbot

Pressure mounts on Google to demonstrate how it plans to monetize AI.

Google launched its largest and most capable (by its own admission) artificial intelligence (AI) model on Wednesday 6th December 2023, as pressure mounts on the company to answer how it’ll monetize AI.

Gemini

The large language model Gemini will include a suite of three different sizes: Gemini Ultra, its largest and most capable model; Gemini Pro, which scales across a wide range of tasks; and Gemini Nano, which it will use for specific tasks and mobile devices.

Cloud

Google is reportedly planning to license Gemini to clients through Google Cloud to use in their own applications. Developers and enterprise customers can access Gemini Pro via the Gemini API in Google AI Studio or Google Cloud Vertex AI.

Android

Android developers will also be able to build with Gemini Nano. Gemini will also be used to power Google products like its Bard Chatbot and Search Generative Experience, which tries to answer search queries with conversational-style text.

Ultra

Gemini Ultra is reportedly the first model to outperform human experts on MMLU (massive multitask language understanding), which uses a combination of 57 subjects such as math, physics, history, law, medicine and ethics for testing both world knowledge and problem-solving abilities, the company said in a blog post Wednesday 6th December 2023. 

It can supposedly understand nuance and reasoning in complex subjects.

Advanced

The company gave examples demonstrating Gemini taking a screenshot of a chart, analysing hundreds of pages of research, and then updating the chart.

Another example was analyzing a photo of a person’s math homework and identifying correct answers and pointing out incorrect ones.

The future is artificial.

Definition of the word Gemini: a constellation, an astrological sign, or ‘twins’ in Latin.