IBM Shares Slide as AI Threatens Its Legacy Stronghold

AI and IBM

When artificial intelligence first ignited investor enthusiasm, it lifted almost every major technology stock.

The narrative was simple: AI would transform industries, boost productivity and unlock vast new revenue streams.

Yet as the cycle matures, markets are becoming more selective. In recent weeks, shares of IBM have drifted lower, illustrating how the ‘AI effect’ can cut both ways.

At first glance, IBM should be a prime beneficiary. The company has spent years repositioning itself around hybrid cloud infrastructure, data analytics and enterprise AI solutions.

Its Watson platform has been refreshed with generative AI tools designed to automate customer service, streamline software development and enhance business decision-making. Management has repeatedly emphasised AI as a core growth engine.

Market Expectations

However, the market’s expectations have shifted. Investors are increasingly rewarding companies that sit at the very heart of AI infrastructure — those supplying advanced semiconductors, high-performance computing capacity and hyperscale cloud services.

These businesses are reporting visible surges in AI-related demand, often accompanied by sharp revenue acceleration and expanding margins.

By contrast, IBM’s AI exposure is embedded within broader consulting and software operations, making its growth trajectory appear steadier rather than explosive.

This distinction matters in a momentum-driven environment. When earnings updates fail to deliver dramatic upside surprises, shares can quickly lose favour.

Less AI Effect

IBM’s results have shown progress in software and recurring revenue, but they have not reflected the kind of dramatic AI-driven uplift seen elsewhere in the sector. For some investors, that raises questions about competitive positioning and pricing power.

There is also a perception issue. Despite its reinvention efforts, IBM still carries the legacy image of a mature technology conglomerate rather than a cutting-edge AI disruptor.

In a market captivated by bold innovation stories, narrative can influence valuation just as much as fundamentals.

If capital flows concentrate in a handful of high-growth AI names, diversified players may struggle to keep pace in share price performance.

AI Tension

Yet the sell-off may also highlight a deeper tension within the AI theme. Enterprise adoption of AI tools tends to be gradual, cautious and closely tied to measurable productivity gains.

IBM’s strategy is built around long-term integration rather than short-term hype. While that approach may lack immediate fireworks, it could prove more durable as corporate clients prioritise reliability, governance and cost control.

For now, though, the AI effect is amplifying investor discrimination. In a market eager for rapid transformation, IBM’s more measured path has translated into weaker share performance — a reminder that not all AI exposure is valued equally.

Further discussion

IBM has found itself on the wrong side of the artificial intelligence boom, with its shares tumbling more than 13% after Anthropic unveiled a new capability that directly targets one of the company’s most enduring revenue pillars: COBOL modernisation.

The sell‑off reflects a broader market anxiety that AI is beginning to erode long‑protected niches in enterprise technology, and IBM has become the latest high‑profile casualty.

For decades, IBM has been synonymous with mainframe computing and the maintenance of vast COBOL‑based systems that underpin global finance, government services, airlines, and retail transactions.

These systems are notoriously complex, expensive to update, and dependent on a shrinking pool of specialist developers.

Premium Brand

That scarcity has long worked in IBM’s favour, allowing it to charge a premium for modernisation and support.

Anthropic’s announcement threatens to upend that equation. Its Claude Code tool, the company claims, can automate the most time‑consuming and costly parts of understanding and restructuring legacy COBOL environments.

Tasks that once required teams of analysts months to complete—mapping dependencies, documenting workflows, identifying risks—can now be accelerated dramatically through AI‑driven analysis.

The implication is clear: modernising legacy systems may no longer require the same level of human expertise, nor the same level of spending.

Investors reacted swiftly. IBM’s share price fell to $223.35, extending a year‑to‑date decline of more than 24%, before recovering to $229.39.

IBM one-year chart as of 24th February 2026

The drop reflects not only concerns about lost revenue, but also the fear that IBM’s competitive moat—built on decades of institutional reliance on COBOL—may be eroding faster than expected.

The timing has amplified market jitters. Only days earlier, cybersecurity stocks were hit by another Anthropic announcement: Claude Code Security, a feature designed to scan codebases for vulnerabilities.

AI Mood Logic

The rapid expansion of AI into specialised technical domains has created a ‘sell first, ask questions later’ mood across the market, with investors increasingly wary of companies whose business models depend on labour‑intensive or legacy‑bound processes.

For IBM, the challenge now is to demonstrate that it can harness AI rather than be displaced by it.

The company has invested heavily in its own AI initiatives, but the latest market reaction suggests investors are unconvinced that these efforts will offset the threat to its traditional strongholds.

The AI revolution is reshaping the technology landscape at speed. IBM’s sharp decline is a reminder that even the industry’s oldest giants are not insulated from disruption—and that the next wave of AI competition may hit the most established players hardest.

But remember, this is IBM we are talking about.

Explainer

What is COBOL?

COBOL is an old but remarkably durable programming language created in the late 1950s to run business, finance, and government systems, and it’s still powering much of the world’s banking and administrative infrastructure today.

It was designed to read almost like plain English, making it easier for non‑technical managers to understand, and its stability means many core systems have never been replaced.

China’s AI Tech Surge Puts Pressure on America’s AI Dominance

Robots line up for AI battle

For much of the modern AI era, the United States has held a clear advantage in frontier research, compute infrastructure, and commercial deployment.

Silicon Valley’s combination of elite talent, abundant capital, and world‑class semiconductor design created an environment where breakthroughs could scale at extraordinary speed.

Challenge

That dominance, however, is no longer uncontested. China’s accelerating push into advanced AI is reshaping the global technological landscape and posing the most credible challenge yet to America’s leadership.

China’s strategy is not built on a single breakthrough but on coordinated national effort. Beijing has spent years aligning universities, state‑backed funds, and private‑sector giants around a shared objective: achieving self‑sufficiency in critical technologies and becoming a global AI powerhouse.

Competitive

Companies such as Huawei, Baidu, Alibaba and Tencent are now producing increasingly competitive large models, while domestic chipmakers are narrowing the performance gap with U.S. suppliers despite export controls.

Crucially, China’s AI ecosystem benefits from scale and cost advantages that the U.S. cannot easily replicate.

Massive data availability, lower energy costs, and vertically integrated supply chains allow Chinese firms to train and deploy models at prices that appeal to developing economies.

For many countries, especially those already reliant on Chinese infrastructure, adopting a Chinese AI stack is becoming a pragmatic economic choice rather than a geopolitical statement.

Investment returns?

This shift is occurring just as U.S. tech giants embark on unprecedented spending cycles. Hyperscalers are pouring hundreds of billions of dollars into data centres, specialised chips, and model training.


While this investment underscores America’s determination to stay ahead, it also raises questions about sustainability.

Investors are increasingly asking whether such vast capital expenditure can deliver long‑term returns in a world where China is offering cheaper, rapidly improving alternatives.

The emerging reality is not one of immediate American decline but of a genuinely multipolar AI landscape. The U.S. still leads in foundational research, top‑tier talent, and cutting‑edge semiconductor design.

Yet China’s rise represents a powerful economy that has mounted a serious challenge to the technological frontier.

The global AI race is no longer defined by a single centre of gravity. Instead, two competing ecosystems — one market‑driven, one reportedly state‑directed — are shaping the future of intelligent technology.

The outcome will influence not only economic power but the digital architecture of much of the world.

Alphabet’s 100‑Year Bond: Ambition, Appetite and Anxiety in the AI Debt Boom

Alphabet's 100-year Sterling Bond for pensions

Alphabet’s decision to issue a 100-year sterling bond has captured the attention of global markets, not only because of its rarity but also because of what it signals about the escalating competition in artificial intelligence.

100-year sterling bond

A century-long bond denominated in pounds is an extraordinary financing move, particularly for a technology company.

It reflects both investor confidence in Alphabet’s long-term prospects and the scale of capital now required to compete in the AI era.

On the surface, the benefits are clear. Locking in funding for 100 years at today’s rates provides financial certainty. Alphabet can secure vast sums of capital without facing refinancing risk for generations.

In an industry defined by rapid change and enormous upfront costs — from data centres and semiconductor procurement to specialised AI chips and energy infrastructure — patient capital is invaluable.

Sterling

The sterling denomination also diversifies Alphabet’s funding base beyond U.S. dollar markets, potentially appealing to European institutional investors seeking stable, long-duration assets.

The bond may also be interpreted as a strategic signal. By committing to long-term financing, Alphabet demonstrates confidence in its ability to generate cash flows well into the next century.

It reinforces the company’s image as a durable, infrastructure-like enterprise rather than a volatile technology stock.

For investors such as pension funds and insurers, a 100-year instrument from a highly rated issuer can offer predictable returns in a world where long-term yield is scarce.

Cyclical

However, the move is not without shortcomings. Committing to fixed debt obligations over such an extended horizon reduces flexibility. While Alphabet currently enjoys strong balance sheet metrics, the technology sector is notoriously cyclical.

A century is an eternity in innovation terms. Business models, regulatory frameworks and geopolitical dynamics may shift dramatically.

Future generations of management will inherit the obligation, regardless of whether today’s AI investments deliver the expected returns.

More broadly, the bond feeds concern about a debt-fuelled AI arms race. As technology giants pour tens of billions into AI research, chip design and cloud infrastructure, borrowing is becoming an increasingly prominent tool.

If rivals respond with similar long-dated issuance, the sector’s leverage could rise meaningfully. In a downturn, or if AI monetisation disappoints, heavy debt burdens could amplify financial strain.

Ultimately, Alphabet’s 100-year sterling bond embodies both ambition and risk. It underlines the immense capital demands of the AI revolution while raising questions about whether today’s competitive fervour is encouraging companies to stretch their balance sheets too far in pursuit of technological dominance.

Systemic anxiety

The deeper anxiety is systemic. With Oracle, Amazon, Microsoft and others also scaling up borrowing, total tech‑sector issuance is projected to hit $3 trillion over five years.

Some analysts warn this resembles a late‑cycle credit boom, where investors chase thematic excitement rather than sober fundamentals.

Alphabet’s century bond may be a masterstroke of timing — or a marker of excess.

Either way, it crystallises the tension at the heart of the AI revolution: extraordinary promise, financed by extraordinary debt.

Why a Sterling Bond?

Alphabet issued its 100‑year sterling bond to tap deep UK demand for ultra‑long‑dated assets, especially from pension funds seeking to match long‑term liabilities.

The sterling market offered strong appetite, with orders reportedly reaching nearly ten times the £1 billion on offer.

It also formed part of Alphabet’s broader multi‑currency fundraising drive to finance massive AI‑related capital spending, including data‑centre expansion.

Issuing in sterling diversified its investor base, reduced reliance on U.S. dollar markets, and signalled confidence in its long‑term stability as a quasi‑infrastructure‑scale business.

It’s all debt, however you look at it!

Anthropic Pushes the Frontier Again with Claude Opus 4.6

Claude Opus 4.6

Anthropic has unveiled Claude Opus 4.6, its most capable AI model to date, marking a significant leap in long‑context reasoning, autonomous agent workflows, and enterprise‑grade coding performance.

The release arrives during a turbulent moment for the global software sector, with markets reacting sharply to fears that Anthropic’s accelerating capabilities could reshape entire categories of knowledge work.

At the heart of Opus 4.6 is a 1‑million‑token context window, a first for Anthropic’s Opus line and a direct response to long‑standing limitations around ‘context rot’ in extended tasks.

Benchmarks

Early benchmarks show a dramatic improvement in maintaining accuracy across vast documents and complex, multi‑step workflows.

This expanded capacity enables the model to analyse large codebases, regulatory filings, or research archives in a single pass—an ability already drawing interest from enterprise users.
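
To put a 1‑million‑token window in perspective, a common rule of thumb is roughly four characters of English text per token, so a window of that size holds on the order of 4 MB of plain text. A minimal back‑of‑envelope sketch of that check (the chars‑per‑token ratio is a rough heuristic, not an exact tokenizer):

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the common ~4-characters-per-token
    heuristic for English text (a rule of thumb, not a real tokenizer)."""
    return max(1, len(text) // 4)

def fits_in_window(text: str, window_tokens: int = 1_000_000) -> bool:
    """Check whether a document plausibly fits in a given context window."""
    return estimate_tokens(text) <= window_tokens

# A 2 MB filing (~2 million characters) is roughly 500,000 tokens,
# comfortably inside a 1-million-token window:
filing = "x" * 2_000_000
print(estimate_tokens(filing))  # 500000
print(fits_in_window(filing))   # True
```

By this estimate, even a multi‑megabyte disclosure bundle could be submitted in a single pass, which is the capability described above.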

Perhaps the most striking development is Anthropic’s progress in agentic systems. Claude Code and the company’s Cowork framework now support coordinated ‘agent teams’, allowing multiple Claude instances to collaborate on sophisticated engineering challenges.

In one internal experiment, a team of 16 Claude agents built a complete Rust‑based C compiler capable of compiling the Linux kernel—producing nearly 100,000 lines of code with minimal human intervention.

Agentic shift

This agentic shift is reshaping expectations around AI‑driven software development. Anthropic positions Opus 4.6 not merely as a tool but as a foundation for autonomous, multi‑agent workflows that can plan, execute, and refine complex tasks over extended periods.

The company highlights improvements in reliability, coding precision, and long‑running task stability as core differentiators.

With enterprise adoption already representing the majority of Anthropic’s business, Opus 4.6 signals a decisive step toward AI systems that operate as high‑level collaborators rather than assistants.

As markets digest the implications, one thing is clear: Anthropic is accelerating the transition from ‘AI that helps’ to AI that works alongside you—and sometimes, entirely on its own.

Legal profession

Anthropic is pushing aggressively into the legal domain, positioning Claude as a high‑precision research and drafting partner for firms handling complex regulatory workloads.

The latest models emphasise long‑context accuracy, allowing lawyers to ingest entire case bundles, contracts, or disclosure sets without losing coherence.

Anthropic has also expanded constitutional AI safeguards, aiming to reduce hallucinations in high‑stakes legal reasoning.

Early adopters report gains in due‑diligence speed, contract comparison, and regulatory interpretation, particularly in financial services and data‑protection work.

While not a substitute for legal judgement, Claude is rapidly becoming a force multiplier for teams managing heavy document‑driven tasks.

The Rise of OpenClaw and the New Era of AI Agents

Agent AI

A new generation of artificial intelligence is taking shape, and at its centre sits OpenClaw — a fast‑evolving framework that embodies the shift from monolithic AI models to agile, task‑driven agents.

While large language models once dominated the conversation, the momentum has clearly moved toward systems that can reason, plan, and act with far greater autonomy. OpenClaw is emerging as one of the most intriguing examples of this transition.

Appeal

OpenClaw’s appeal lies in its modular design. Instead of relying on a single, all‑purpose model, it orchestrates multiple specialised components that collaborate to complete complex workflows.

This mirrors how real teams operate: one agent may handle research, another may draft content, and a third may evaluate quality or flag risks. The result is a system that behaves less like a tool and more like a coordinated digital workforce.

Defining trend

This shift is not happening in isolation. Across the industry, AI agents are becoming the defining trend. Companies are racing to build systems that can manage inboxes, run businesses, write and deploy code, or even negotiate with other agents.

The ambition is no longer to create a chatbot that answers questions, but an autonomous entity capable of executing multi‑step tasks with minimal human intervention.

OpenClaw stands out because it embraces openness and experimentation. Developers can plug in their own models, customise behaviours, and build agent ‘stacks’ tailored to specific industries.

Adoption

Early adopters in media, finance, and logistics are already exploring how these agents can streamline research, automate reporting, or coordinate supply‑chain decisions.

The promise is efficiency, but also creativity: agents that can generate ideas, test them, and refine them without constant supervision.

Of course, the rise of agentic AI brings challenges. Questions around safety, reliability, and accountability are becoming more urgent. An agent that can act independently must also be constrained responsibly.

Challenge

The industry is now grappling with how to balance autonomy with oversight, ensuring that these systems remain aligned with human goals and values.

Even with these concerns, the trajectory is unmistakable. OpenClaw and its peers represent a decisive step toward AI that is not merely reactive but proactive — capable of taking initiative, managing complexity, and collaborating with humans in more meaningful ways.

As these systems mature, they are likely to reshape not just how we work, but how we think about intelligence itself.


Is This a Make‑or‑Break Year for OpenAI?

Where is OpenAI's profit?

OpenAI enters 2026 in a paradoxical position: simultaneously one of the fastest‑growing technology companies in history and one of the most financially strained.

With annualised revenue now exceeding $20 billion, the company has clearly proven global demand for generative AI. Yet the central question remains unresolved: where is the profit, and is this the year OpenAI must prove its business model is sustainable?

The company’s revenue trajectory has been extraordinary. Annual recurring revenue rose from $2 billion in 2023 to $6 billion in 2024, before leaping past $20 billion in 2025.

This growth reflects the rapid embedding of ChatGPT into enterprise workflows and the expansion of compute capacity, which has roughly tripled each year. But the same infrastructure powering this boom is also the source of OpenAI’s financial dilemma.

Costs

Compute costs have ballooned at a rate that rivals — and in some projections exceeds — revenue growth. Analysts estimate cumulative losses could reach $143 billion by 2029 if current spending patterns continue.

The company’s burn rate, driven by massive GPU procurement and long‑term energy commitments, has been described as ‘immense’ even by industry standards, according to Benzinga.

OpenAI’s long‑term infrastructure deals, totalling more than 26 gigawatts of future compute capacity, underline the scale of its ambition — and its financial exposure.

To counterbalance these costs, OpenAI is experimenting with new revenue streams, including the introduction of advertising within ChatGPT for U.S. users.

This marks a strategic shift from pure subscription and enterprise licensing toward a more diversified, consumer‑scale monetisation model.

Make or break?

So is 2026 a make‑or‑break year? In many ways, yes. OpenAI has proven demand, scale, and cultural impact. What it has not yet proven is that generative AI can be profitable at planetary scale.

This year will test whether the company can convert extraordinary growth into a sustainable business — or whether its costs will continue to outpace even its most impressive revenue milestones.

Anthropic’s ‘connected’ AI deal and others too

Anthropic's AI valuation

Anthropic has reportedly struck major deals with Microsoft and Nvidia. On Tuesday 18th November 2025, Microsoft announced plans to invest up to $5 billion in the startup, while Nvidia will contribute as much as $10 billion. According to reports, this brings Anthropic’s valuation to around $350 billion. Wow!

Google has unveiled its newest AI model, Gemini 3. According to Alphabet CEO Sundar Pichai, it will deliver desired answers with less prompting.

This update comes just eight months after the launch of Gemini 2.5 and is reported to be available in the coming weeks.

Money keeps flowing

Money keeps flowing into artificial intelligence companies but out of AI stocks

In what seems like yet another case of mutual ‘back-scratching’, Microsoft and Nvidia are set to invest a combined $15 billion in Anthropic, with the OpenAI rival agreeing to purchase computing power from its two newest backers.

Lately, a large chunk of AI news feels like it boils down to: ‘Company X invests in Company Y, and Company Y turns around and buys from Company X’.

That’s not entirely correct or fair. There are plenty of advancements in the AI world that focus on actual development rather than investments. Google recently introduced the third version of Gemini, its AI model.

Anthropic’s valuation has surged to around $350 billion, propelled by a landmark $15 billion investment from Microsoft and Nvidia.

Anthropic, the AI start-up founded in 2021 by former OpenAI employees, has rapidly ascended into the ranks of the world’s most valuable companies, more than doubling its worth from $183 billion just a few months earlier.

A valuation of $350 billion for a company only 4 years old is astounding!

The deal reportedly sees Microsoft commit up to $5 billion and Nvidia up to $10 billion. Anthropic has agreed to purchase an extraordinary $30 billion in Azure compute capacity and additional infrastructure from Nvidia.

This strategic alliance is not merely financial; it signals a deliberate diversification of Microsoft’s AI ecosystem beyond its reliance on OpenAI, while Nvidia strengthens its dominance in AI hardware.

This astronomical valuation reflects the scale of its partnerships, including the $30 billion in Azure compute commitments and access to Nvidia’s cutting-edge hardware.

The valuation underscores both the intensity of the global AI race and the confidence investors place in Anthropic’s safety-conscious approach to artificial intelligence.

Yet it also raises questions about whether such astronomical figures reflect genuine long-term value, or merely the froth of an overheated market.

Hyperscalers keep pumping money into AI, but are they yet seeing returns that justify it? Probably not, though those returns may come in the future.

By then, however, it will be time to upgrade the systems as they develop, and so yet more money will be pumped in.

Even AI Firms Voice Concern Over Bubble Fears

AI bubble

For some time now, talk of an ‘AI bubble’ has largely come from investors and financial analysts. Now, strikingly, some of the loudest warnings are coming from inside the industry itself.

At the Web Summit in Lisbon, senior executives from companies such as DeepL and Picsart reportedly admitted they were uneasy about the soaring valuations attached to artificial intelligence ventures. Sam Altman of OpenAI has also sounded warnings of AI overvaluation.

DeepL’s chief executive Jarek Kutylowski reportedly described current market conditions as ‘pretty exaggerated’ and suggested that signs of a bubble are already visible.

Picsart’s Hovhannes Avoyan reportedly echoed the sentiment, criticising the way start‑ups are being valued despite having little or no revenue. He reportedly coined the phrase ‘vibe revenue’ to describe firms being backed on hype rather than substance.

These remarks highlight a paradox. On one hand, demand for AI services remains strong, with enterprises expected to increase adoption in 2026.

On the other, the financial side of the sector looks overheated. Investors such as Michael Burry have accused major cloud providers of overstating profits, while banks including Goldman Sachs and Morgan Stanley have warned of potential corrections.

The tension reflects a broader question: can the industry sustain its rapid expansion without a painful reset?

Venture capital forecasts suggest trillions will be poured into AI data centres over the next five years, yet some insiders argue that the scale of spending is unnecessary.

Even optimists concede that businesses are struggling to integrate AI effectively, meaning the promised returns may take longer to materialise.

For now, the AI sector stands at a crossroads. The technology’s transformative potential is undeniable, but the financial exuberance surrounding it may prove unsustainable.

If the warnings from within the industry are correct, the next chapter of the AI story could be less about innovation and more about value correction.

Markets on a Hair Trigger: Trump’s Tariff Whiplash and the AI Bubble That Won’t Pop

Markets move as Trump tweets

U.S. stock markets are behaving like a mood ring in a thunderstorm—volatile, reactive, and oddly sentimental.

One moment, President Trump threatens a ‘massive increase’ in tariffs on Chinese imports, and nearly $2 trillion in market value evaporates.

The next, he posts that ‘all will be fine’, and futures rebound overnight. It’s not just policy—it’s theatre, and Wall Street is watching every act with bated breath.

This hypersensitivity isn’t new, but it’s been amplified by the precarious state of global trade and the towering expectations placed on artificial intelligence.

Trump’s recent comments about China’s rare earth export controls triggered a sell-off that saw the Nasdaq drop 3.6% and the S&P 500 fall 2.7%—the worst single-day performance since April.

Tech stocks, especially those reliant on semiconductors and AI infrastructure, were hit hardest. Nvidia alone lost nearly 5%.

Why so fickle? Because the market’s current rally is built on a foundation of hope and hype. AI has been the engine driving valuations to record highs, with companies like OpenAI and Anthropic reaching eye-watering valuations despite uncertain profitability.

The IMF and Bank of England have both warned that we may be in stage three of a classic bubble cycle. Circular investment deals—where AI startups use funding to buy chips from their investors—have raised eyebrows and comparisons to the dot-com era.

Yet, the bubble hasn’t burst. Not yet. The ‘Buffett Indicator’ sits at a historic 220%, and the S&P 500 trades at 188% of U.S. GDP. These are not numbers grounded in sober fundamentals—they’re fuelled by speculative fervour and a fear of missing out (FOMO).

But unlike the dot-com crash, today’s AI surge is backed by real infrastructure: data centres, chip fabrication, and enterprise adoption. Whether that’s enough to justify the valuations remains to be seen.

In the meantime, markets remain twitchy. Trump’s tariff threats are more than political posturing—they’re economic tremors that ripple through supply chains and investor sentiment.

And with AI valuations stretched to breaking point, even a modest correction could trigger a cascade.

So yes, the market is fickle. But it’s not irrational—it’s just balancing on a knife’s edge between technological optimism and geopolitical anxiety.

One tweet can tip the scales.

Fickle!

Claude Sonnet 4.5: Anthropic’s Leap Toward Autonomous Intelligence

Anthropic AI Claude

Anthropic has unveiled Claude Sonnet 4.5, its most advanced AI model to date—described by the company as ‘the best coding model in the world’.

Released in September 2025, Sonnet 4.5 marks a significant evolution in agentic capability, safety alignment, and real-world task execution.

Designed to power Claude Code and enterprise-grade AI agents, Sonnet 4.5 excels in long-context coding, autonomous software development, and complex business workflows.

Benchmark

In benchmark trials, the model reportedly sustained 30+ hours of uninterrupted coding, outperforming its predecessor Opus 4.1 and rival systems like GPT-5 and Gemini 2.5.

Anthropic’s emphasis on safety is equally notable. Sonnet 4.5 underwent extensive alignment training to reduce sycophancy, deception, and prompt injection vulnerabilities.

It now operates under Anthropic’s AI Safety Level 3 framework, with filters guarding against misuse in sensitive domains such as chemical or biological research.

New features include ‘checkpoints’ for code rollback, file creation within chat (spreadsheets, slides, documents), and a refreshed terminal interface.

Developers can now build custom agents using the Claude Agent SDK, extending the model’s reach into autonomous task orchestration.

Anthropic’s positioning is clear: Claude Sonnet 4.5 is not merely a chatbot—it’s a colleague. With pricing held at $3 per million input tokens and $15 per million output tokens, the model is accessible yet formidable.
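
At those published rates, the unit economics are easy to sketch. A minimal cost calculator (the per‑token prices are the ones quoted above; the workload figures in the example are hypothetical):

```python
INPUT_PRICE_PER_M = 3.00    # $ per million input tokens (quoted rate)
OUTPUT_PRICE_PER_M = 15.00  # $ per million output tokens (quoted rate)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single request at the quoted Sonnet 4.5 rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Hypothetical workload: feeding in a 50,000-token codebase and getting
# a 5,000-token answer back costs $0.15 + $0.075 = $0.225.
print(round(request_cost(50_000, 5_000), 3))  # 0.225
```

Pennies per request, in other words, which is what makes the ‘accessible yet formidable’ framing plausible for enterprise workloads.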

As AI enters its ‘super cycle’, Claude Sonnet 4.5 signals a shift from conversational novelty to operational necessity.

Whether this heralds a renaissance or a reckoning remains to be seen—but for now, Anthropic’s latest release sets a new benchmark for intelligent autonomy.

The bubble that thinks: Sam Altman’s AI paradox

AI Bubble?

Sam Altman, CEO of OpenAI, has never been shy about bold predictions. But his latest remarks strike a curious chord: he reportedly said, ‘Yes, we’re in an AI bubble’, while insisting in the same breath that AI is ‘the most important thing to happen in a very long time’.

It’s a paradox that feels almost ‘Altmanesque’: equal parts caution and conviction, like a person warning of a storm while building a lighthouse.

Altman’s reported bubble talk isn’t just market-speak. It’s a philosophical hedge against the frothy exuberance that’s gripped Silicon Valley and Wall Street alike.

With AI valuations soaring past dot-com levels, and retail investors piling into AI-branded crypto tokens and meme stocks, the signs of speculative mania are hard to ignore.

Even ChatGPT, OpenAI’s flagship product, boasts 1.5 billion monthly users—but fewer than 1% pay for it. That’s not a business model—it’s a popularity contest.

Yet Altman isn’t calling for a crash. He’s calling for clarity. His point is that bubbles form around kernels of truth—and AI’s kernel is enormous.

From autonomous agents to enterprise integration in law, medicine, and finance, the technology is reshaping workflows faster than regulators can blink.

Microsoft and Nvidia are pouring billions into infrastructure, not because they’re chasing hype, but because they see utility. Real utility.

Still, Altman’s warning is timely. The AI gold rush has spawned a legion of startups with dazzling demos and dismal revenue. This is likely a dotcom-esque reality: many will fail.

Many are burning cash at unsustainable rates, betting on future breakthroughs that may never materialise. Investors, Altman suggests, need to recalibrate—not abandon ship, but stop treating every chatbot as the next Google.

What makes Altman’s stance compelling is its duality. He’s not a doomsayer, nor a blind optimist. He’s a realist who understands that transformative tech often arrives wrapped in irrational exuberance. The internet had its crash before it changed the world. AI may follow suit.

So, is this a bubble? Yes. But it’s a bubble with brains. And if Altman’s lighthouse holds, it might just guide us through the fog—not to safety, but to something truly revolutionary.

In the meantime, investors would do well to remember that hype inflates, but only utility sustains.

And Altman, ever the ‘paradoxical prophet’, seems to be betting on both.

China’s new AI model GLM-4.5 threatens DeepSeek – will it also threaten OpenAI?

China's AI

In a bold move reshaping the global AI landscape, Chinese startup Z.ai has launched GLM-4.5, an open-source model touted as cheaper, smaller, and more efficient than rivals like DeepSeek.

The announcement, made at the World Artificial Intelligence Conference in Shanghai, has sent ripples across the tech sector.

What sets GLM-4.5 apart is its lean architecture. Requiring just eight Nvidia H20 chips—custom-built to comply with U.S. export restrictions—it slashes operating costs dramatically.

By comparison, DeepSeek’s model demands nearly double the compute power, making GLM-4.5 a tantalising alternative for cost-conscious developers and enterprises.

But the savings don’t stop there. Z.ai revealed that it will charge just $0.11 per million input tokens and $0.28 per million output tokens. In contrast, DeepSeek R1 costs $0.14 for input and a hefty $2.19 for output, putting Z.ai firmly in the affordability lead.
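A quick back-of-the-envelope comparison shows how much the output-token gap dominates those quoted prices; the monthly workload of 100 million input and 20 million output tokens is an illustrative assumption.

```python
# Quoted per-token prices, in dollars per million tokens.
PRICES = {
    "GLM-4.5":     {"input": 0.11, "output": 0.28},
    "DeepSeek R1": {"input": 0.14, "output": 2.19},
}

def monthly_cost(model: str, m_in: float, m_out: float) -> float:
    """Cost in dollars for m_in million input and m_out million output tokens."""
    p = PRICES[model]
    return m_in * p["input"] + m_out * p["output"]

# Illustrative workload: 100M input tokens, 20M output tokens per month.
for model in PRICES:
    print(model, round(monthly_cost(model, 100, 20), 2))
# GLM-4.5 16.6
# DeepSeek R1 57.8
```

On that assumed mix, GLM-4.5 works out at roughly a third of the cost, almost entirely because of DeepSeek R1’s output-token rate.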

Functionally, GLM-4.5 leverages ‘agentic’ AI—meaning it can deconstruct tasks into subtasks autonomously, delivering more accurate results with minimal human intervention.

This approach marks a shift from traditional logic-based models and promises smarter integration into coding, design, and editorial workflows.

Z.ai, formerly known as Zhipu, boasts an impressive funding roster including Alibaba, Tencent, and state-backed municipal tech funds.

With IPO ambitions on the horizon, its momentum mirrors China’s broader push to dominate the next wave of AI innovation.

While the U.S. has placed Z.ai on its entity list, stifling some Western partnerships, the firm insists it has adequate computing resources to scale.

As AI becomes a battleground for technological and geopolitical influence, GLM-4.5 may prove to be a powerful competitor.

But it has some way yet to go.

Microsoft joins Nvidia in the $4 trillion Market Cap club

Microsoft and Nvidia only two companies in exclusive $4 trillion market cap club

In a landmark moment for the tech industry, Microsoft has officially joined Nvidia in the exclusive $4 trillion market capitalisation club, following a surge in its share price after stellar Q4 earnings.

This accolade, achieved on 31st July 2025, marks a dramatic shift in the hierarchy of global tech giants, with Microsoft briefly overtaking Nvidia to become the world’s most valuable company. But for how long?

The rally was fuelled by Microsoft’s aggressive investment in artificial intelligence and cloud infrastructure. Azure, its cloud platform, posted a 39% year-on-year revenue increase, surpassing $75 billion in annual sales.

The company’s Copilot AI tools, now boasting over 100 million monthly active users, have become central to its strategy, embedding generative AI across productivity software, development platforms, and enterprise services.

Microsoft’s transformation from a traditional software provider to an AI-first powerhouse has been swift and strategic. Its partnerships with OpenAI, Meta, and xAI, combined with over $100 billion in planned capital expenditure, signal a long-term commitment to shaping the future of AI utility.

While Nvidia dominates the hardware side of the AI revolution, Microsoft is staking its claim as the platform through which AI is experienced.

This milestone not only redefines Microsoft’s legacy—it redraws the map of global tech power and reach.

This has been earned over decades of business commitment.

Nvidia regains top spot by market cap

Nvidia top value company again

Nvidia has once again claimed the title of the world’s most valuable publicly traded company, overtaking Microsoft with a staggering market capitalisation of $3.76 trillion.

This milestone follows a 4% surge in Nvidia’s share price, closing at an all-time high of $154.10.

The rally was fuelled by renewed investor enthusiasm for artificial intelligence, with analysts citing a ‘Golden Wave’ of generative AI adoption driving demand for Nvidia’s high-performance chips.

The company’s meteoric rise has been underpinned by its dominance in AI hardware, particularly its GPUs, which power everything from ChatGPT to enterprise-scale AI models.

Since bottoming out in early April 2025, Nvidia’s stock has soared more than 60%, far outpacing the broader tech market.

Founded in 1993 to produce graphics chips for gaming, Nvidia has transformed into the backbone of the AI revolution. Its accelerators are now essential infrastructure for companies like Microsoft, Meta, and Google.

Nvidia share price as of 25th June 2025 – a 3 month snapshot


Despite its rapid ascent, Nvidia’s valuation remains relatively modest compared to historical norms, trading at around 30 times projected earnings.

As the AI arms race intensifies, Nvidia’s position at the summit of global markets underscores its growing power in shaping the digital future.

AMD Unveils Instinct MI400: is it time for AMD to challenge NVIDIA’s dominance?

AMD & NVIDIA chips go head-to-head

AMD has officially lifted the curtain on its next-generation AI chip, the Instinct MI400, marking a significant escalation in the battle for data centre dominance.

Set to launch in 2026, the MI400 is designed to power hyperscale AI workloads with unprecedented efficiency and performance.

Sam Altman and OpenAI have played a surprisingly hands-on role in AMD’s development of the Instinct MI400 series.

Altman appeared on stage with AMD CEO Lisa Su at the company’s ‘Advancing AI’ event, where he revealed that OpenAI had provided direct feedback during the chip’s design process.

Altman described his initial reaction to the MI400 specs as ‘totally crazy’ but expressed excitement at how close AMD has come to delivering on its ambitious goals.

He praised the MI400’s architecture – particularly its memory design – as being well-suited for both inference and training tasks.

OpenAI has already been using AMD’s MI300X chips for some workloads and is expected to adopt the MI400 series when it launches in 2026.

This collaboration is part of a broader trend: OpenAI, traditionally reliant on Nvidia GPUs via Microsoft Azure, is now diversifying its compute stack.

AMD’s open standards and cost-effective performance are clearly appealing, especially as OpenAI also explores its own chip development efforts with Broadcom.

AMD’s one-year chart snapshot


So, while OpenAI isn’t ditching Nvidia entirely, its involvement with AMD signals a strategic shift—and a vote of confidence in AMD’s growing role in the AI hardware ecosystem.

At the heart of AMD’s strategy is the Helios rack-scale system, a unified architecture that allows thousands of MI400 chips to function as a single, massive compute engine.

This approach is tailored for the growing demands of large language models and generative AI, where inference speed and energy efficiency are paramount.

AMD technical power

The MI400 boasts a staggering 432GB of next-generation HBM4 memory and a bandwidth of 19.6TB/sec—more than double that of its predecessor.

With up to four Accelerated Compute Dies (XCDs) and enhanced interconnects, the chip delivers 40 PFLOPs of FP4 performance, positioning it as a formidable rival to Nvidia’s Rubin R100 GPU.

AMD’s open-source networking technology, UALink, replaces Nvidia’s proprietary NVLink, reinforcing the company’s commitment to open standards. This, combined with aggressive pricing and lower power consumption, gives AMD a compelling value proposition.

The company claims its chips can deliver 40% more AI tokens per dollar than Nvidia’s offerings.

Big tech follows AMD

OpenAI, Meta, Microsoft, and Oracle are among the major players already integrating AMD’s Instinct chips into their infrastructure. OpenAI CEO Sam Altman, speaking at the launch event, reportedly praised the MI400’s capabilities, calling it ‘an amazing thing’.

With the AI chip market projected to exceed $500 billion by 2028, AMD’s MI400 is more than just a product—it’s a statement of intent. As the race for AI supremacy intensifies, AMD is betting big on performance, openness, and affordability to carve out a larger share of the future.

It certainly looks like AMD is positioning the Instinct MI400 as a serious contender in the AI accelerator space – and Nvidia will be watching closely.

The MI400 doesn’t just aim to catch up; it’s designed to challenge Nvidia head-on with bold architectural shifts and aggressive performance-per-dollar metrics.

Nvidia has long held the upper hand with its CUDA software ecosystem and dominant market share, especially with the popularity of its H100 and the upcoming Rubin GPU. But AMD is playing the long game.

Nvidia one-year chart snapshot


By offering open standards like UALink and boasting impressive specs like 432GB of HBM4 memory and 40 PFLOPs of FP4 performance, the MI400 is pushing into territory that was once Nvidia’s alone.

Whether it truly rivals Nvidia will depend on a few key factors: industry adoption, software compatibility, real-world performance under AI workloads, and AMD’s ability to scale production and support.

But major players like OpenAI, Microsoft, and Meta are already lining up to adopt the MI400.

Is now a good time to invest in AMD?

AI creates paradigm shift in computing – programming AI is like training a person

Teaching or programming?

At London Tech Week, Nvidia CEO Jensen Huang made a striking statement: ‘The way you program an AI is like the way you program a person.’ (But do we really program people, or do we teach them?)

This marks a fundamental shift in how we interact with artificial intelligence, moving away from traditional coding languages and towards natural human communication.

Historically, programming required specialised knowledge of languages like C++ or Python. Developers had to meticulously craft instructions for computers to follow.

Huang argues that AI has now evolved to understand and respond to human language, making programming more intuitive and accessible.

This transformation is largely driven by advancements in conversational AI models, such as ChatGPT, Gemini, and Copilot.

These systems allow users to issue commands in plain English – whether asking an AI to generate images, write a poem, or even create software code. Instead of writing complex algorithms, users can simply ask nicely, much like instructing a colleague or student.

Huang’s analogy extends beyond convenience. Just as people learn through feedback and iteration, AI models refine their responses based on user input.

If an AI-generated poem isn’t quite right, users can prompt it to improve, and it will think and adjust accordingly.

This iterative process mirrors human learning, where guidance and refinement lead to better outcomes.

The implications of this shift are profound. AI is no longer just a tool for experts – it is a great equaliser, enabling anyone to harness computing power without technical expertise.

As businesses integrate AI into their workflows, employees will need to adapt, treating AI as a collaborative partner rather than a mere machine.

This evolution in AI programming is not just about efficiency; it represents a new era where technology aligns more closely with human thought and interaction.

Baidu, once China’s generative AI leader, is battling to regain its position

A Chatbot

Chinese tech giant Baidu has released two new free-to-use artificial intelligence models as it vies to regain its leading position in the country’s fiercely competitive AI space.

The Baidu models, launched on Sunday 16th March 2025, include the company’s first reasoning-focused model and come ahead of plans to move towards an open-source strategy.

However, analysts reportedly said that while the release of the models is a positive development for Baidu, they also highlight how it is playing catch up as its Ernie bot – one of China’s earliest versions of a ChatGPT-like chatbot – struggles to gain widespread adoption. 

‘The new models make Baidu more competitive since the company has been lagging behind in a reasoning model release’, one expert is reported as saying.

A reasoning model is a large language model that breaks down tasks into smaller pieces and considers multiple approaches before generating a response. It is designed to process complex problems in a similar way to humans.
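As a highly simplified illustration of that decompose-then-solve pattern, the toy sketch below splits a request into subtasks and handles each in turn. Real reasoning models learn this behaviour internally rather than following hand-written rules; the `decompose` and `solve_subtask` helpers here are hypothetical stand-ins.

```python
# Toy sketch of the decompose-then-solve pattern described above.
# Real reasoning models learn this internally; the hand-written
# splitting rule and placeholder solver are purely illustrative.

def decompose(task: str) -> list[str]:
    # Naively split a multi-part request into subtasks on "and".
    return [part.strip() for part in task.split(" and ")]

def solve_subtask(subtask: str) -> str:
    # Placeholder 'solver' -- a real model would reason over each piece.
    return f"answer to '{subtask}'"

def reason(task: str) -> list[str]:
    # Work through every subtask before producing a combined response.
    return [solve_subtask(s) for s in decompose(task)]

for step in reason("summarise the report and list three risks"):
    print(step)
```

The point of the pattern is that each smaller piece is easier to get right than the original compound request, which is why reasoning models tend to do better on complex problems.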

Chinese startup DeepSeek upended the global AI race and transformed China’s ecosystem in January when it released its R1 reasoning model, which rivalled American competitors despite costing a fraction of the price.

Artificial intelligence capable of matching humans at any task will be available within five to ten years

AI

Artificial General Intelligence (AGI), a form of AI capable of matching or surpassing human intelligence across all tasks, is expected to emerge within the next five to ten years, according to Demis Hassabis, CEO of Google DeepMind.

Speaking recently, Hassabis highlighted the advancements in AI systems that are paving the way for AGI.

While current AI excels in specific domains, such as playing complex games like chess or Go, it still lacks the ability to generalise knowledge and adapt to real-world challenges.

But the advancements made in AI chatbots such as OpenAI’s ChatGPT and DeepSeek have showcased remarkable development, and at speed too. The application of AI to work environments, science and domestic tasks is forever expanding.

Hassabis emphasised that significant research is still required to achieve AGI. The focus lies on improving AI’s understanding of context and its ability to plan and reason in dynamic environments.

Multi-agent systems, where AI entities collaborate or compete, are seen as a promising avenue for development.

These systems aim to replicate the intricate decision-making processes humans exhibit in complex scenarios.

The implications of AGI are profound, with potential applications spanning healthcare, education, and beyond.

However, its development also raises ethical and societal questions, including concerns about control, safety, and equitable access.

While the timeline remains speculative, Hassabis’s insights underscore the accelerating pace of AI innovation, bringing humanity closer to a future where machines and humans collaborate in unprecedented ways.

Or not?

Musk’s xAI releases new Grok 3 AI

xAI Grok AI

Elon Musk’s AI company, xAI, has recently released its latest AI model, Grok 3.

This new AI model is designed to be significantly more powerful and capable than its predecessor, Grok 2.
  • Enhanced Capabilities: Grok 3 boasts 10 times more computing power than Grok 2 and has been trained on an expanded dataset, including court case filings.
  • Reasoning Models: Grok 3 includes reasoning models that can carefully analyse and fact-check information before providing responses. This helps in avoiding common pitfalls of AI models.
  • Benchmark Performance: Grok 3 has outperformed other leading AI models, including OpenAI’s GPT-4o and DeepSeek’s R1, on various benchmarks such as AIME (math questions) and GPQA (physics, biology, chemistry problems).
  • New Features: The Grok app now includes a ‘DeepSearch’ feature that scans the internet and xAI’s social network, X, to provide summarised responses to user queries.
  • Subscription Plans: xAI has introduced a new subscription plan called SuperGrok, which offers additional reasoning capabilities and unlimited image generation.

Grok 3 is being hailed as the ‘smartest AI on Earth’ by Musk, and it’s expected to have a significant impact on various industries.

Definition

Grok is a neologism (a newly coined word or expression), introduced by Robert A. Heinlein in his 1961 science fiction novel Stranger in a Strange Land. It means to understand something so deeply that you become one with it.

Grok is a term used in computer programming to mean to ‘profoundly understand something’, such as a system, a language, or an algorithm.

Less woke

The company has previously reportedly said that Grok is modelled on ‘The Hitchhiker’s Guide to the Galaxy’.

It is supposed to have ‘a bit of wit, a rebellious streak’ and it should answer the ‘spicy questions’ that other AI might dodge, according to a statement from xAI.

I wonder if it has been modelled on Elon Musk?

Chinese tech giant Baidu to release next-generation AI model soon as DeepSeek leads Chinese AI tech

AI

China’s Baidu plans to release the next generation of its artificial intelligence model in the second half of this year, according to recent reports.

The planned update comes as Chinese companies race to develop innovative AI models to compete with OpenAI and other U.S.-based companies.

Baidu was the first major Chinese tech company to roll out a ChatGPT-like chatbot called Ernie in March 2023.

However, despite initial momentum, the product has since been eclipsed by other Chinese chatbots from large tech companies such as Alibaba and ByteDance, as well as startups.

The aftermath of the arrival of DeepSeek

DeepSeek AI

Nvidia, the renowned American technology company, recently experienced the largest one-day loss of market value in U.S. stock market history. On 27th January 2025, Nvidia’s stock plummeted by 17%, resulting in a staggering market cap loss of nearly $600 billion.

This unprecedented drop was primarily triggered by the emergence of DeepSeek, a Chinese artificial intelligence startup that has been making waves in the tech industry.

DeepSeek, founded in 2023 by Liang Wenfeng, has developed open-source large language models that rival some of the best AI models in the world. The company’s latest model, DeepSeek-V3, has demonstrated impressive performance at a fraction of the cost of its competitors.

This has raised concerns among investors about the sustainability of Nvidia’s dominance in the AI chip market.

The release of DeepSeek’s latest technology has caused significant anxiety among U.S. tech giants, leading to a massive sell-off in the stock market. Companies that rely heavily on Nvidia’s GPUs, such as Dell, Oracle, and Super Micro Computer, also saw their stock prices plummet.

The ripple effect of Nvidia’s loss was felt across the tech-heavy Nasdaq, which dropped by 3.1% on the same day.

Nvidia one-month chart 27th January 2025

In response to this market upheaval, President Donald Trump commented on the situation, stating that DeepSeek’s emergence should serve as a ‘wake-up call’ for American companies.

Trump emphasised the need for U.S. industries to remain competitive and innovative in the face of rising competition from Chinese tech firms. He acknowledged the impressive advancements made by DeepSeek and suggested that American companies could benefit from adopting more cost-effective methods in their AI development.

Trump’s remarks highlight the growing concern among U.S. policymakers and industry leaders about the rapid advancements in AI technology coming from China.

The success of DeepSeek has demonstrated that significant breakthroughs can be achieved with relatively modest investments, challenging the notion that massive capital expenditure is necessary for top-tier AI performance.

As the AI race continues to heat up, it is clear that companies like Nvidia will need to adapt and innovate to maintain their competitive edge. The emergence of DeepSeek has not only disrupted the market but also underscored the importance of staying ahead in the rapidly evolving field of artificial intelligence.

For American tech giants, this wake-up call serves as a reminder that the global AI landscape is becoming increasingly competitive, and complacency is not an option.

Has the U.S. spent wisely on AI development?

The emergence of DeepSeek has indeed raised questions about the spending strategies of U.S. tech companies on AI development. DeepSeek’s ability to develop high-performing AI models at a fraction of the cost has led to concerns about the necessity of the massive capital expenditures by large U.S. tech firms.

For instance, companies like Microsoft, Alphabet, Meta, and Amazon have collectively spent over $1 trillion on AI investments in recent years. This significant spending has been driven by the belief that larger models and more computing power would lead to better performance.

However, DeepSeek’s success with more cost-effective methods challenges this notion and suggests that efficient innovation might be more important than sheer scale.

The impressive performance and lower costs of DeepSeek’s models have prompted some analysts to question whether the billions of dollars spent by U.S. tech companies on AI development are justified. This has led to a re-evaluation of AI spending strategies and a push for more efficient and innovative approaches to AI development.

In summary, while U.S. tech companies have invested heavily in AI, the success of DeepSeek highlights the potential for achieving cutting-edge results with fewer resources. This has sparked a broader discussion about the most effective and efficient ways to advance AI technology.

Asking ChatGPT – ‘Do you see Deepseek as a competitor or a threat?’

The reply:

As an AI, I don’t have personal opinions or feelings. However, I can share that DeepSeek’s rapid rise and impressive performance in the AI field are noteworthy. Their ability to develop high-performing models at a lower cost is a significant achievement and has certainly shaken up the industry. It’s a reminder of the dynamic and competitive nature of the AI landscape, where innovation can come from unexpected places.

Asking ChatGPT – ‘What do you think of Deepseek?’

The reply:

DeepSeek is a fascinating and disruptive player in the AI landscape. Founded in 2023 by Liang Wenfeng, this Chinese startup has quickly gained attention for its open-source large language models, which rival some of the best AI models in the world at a fraction of the cost. Their latest models, DeepSeek-V3 and DeepSeek-R1, have demonstrated impressive performance and efficiency, challenging the dominance of established tech giants.

DeepSeek’s emergence has caused significant market upheaval, leading to substantial drops in the stock prices of major AI-related firms, including Nvidia. The company’s success highlights the potential for innovation and cost-effectiveness in AI development, which has prompted industry leaders and policymakers to re-evaluate their strategies.

Overall, DeepSeek’s rapid rise and impact on the AI industry underscore the importance of staying competitive and innovative in this fast-evolving field. It’s a reminder that breakthroughs can come from unexpected places and that the AI landscape is becoming increasingly dynamic and competitive.

It would not be drawn on these questions, but one thing is for sure: the U.S. has spent billions developing its systems and charges a fortune for their use.

This is about to change.

And further…

The DeepSeek website reportedly crashed on 27th January 2025 due to heavy global demand. There were also reports of unrelenting cyberattacks on its systems.

Curious. From minnow to centre stage in just one day!

Google releases the first of its Gemini 2.0 AI models

Google AI

Google released the first version of its Gemini 2.0 family of artificial intelligence models in December 2024.

Gemini 2.0 Flash, as the model is named, is available in a chat version for users worldwide, while an experimental multimodal version of the model, with text-to-speech and image generation capabilities, is available to developers.

‘If Gemini 1.0 was about organising and understanding information, Gemini 2.0 is about making it much more useful,’ Google CEO Sundar Pichai reportedly said in a statement.

Google’s latest large language model surpasses its predecessors in most user request areas, including code generation and the ability to provide factually accurate responses. However, it falls short compared to Gemini 1.5 Pro when it comes to evaluating longer contexts.

To access the chat-optimised version of the experimental Flash 2.0, Gemini users can select it from the drop-down menu on both desktop and mobile web platforms. According to the company, it will soon be available on the Gemini mobile app.

The multimodal version of Gemini 2.0 Flash will be accessible through Google’s AI Studio and Vertex AI developer platforms.

The general availability of Gemini 2.0 Flash’s multimodal version is scheduled for January, along with additional Gemini 2.0 model sizes, Google announced. The company also plans to expand Gemini 2.0 to more Google products in early 2025.

Gemini 2.0 signifies Google’s latest efforts in the increasingly competitive AI industry. Google is competing with major tech rivals such as Microsoft and Meta, as well as startups like OpenAI, the creator of ChatGPT, Perplexity, and Anthropic, which developed Claude.

In addition to the new Flash model, other research prototypes are aimed at developing more ‘agentic’ AI models and experiences. According to the company, agentic models ‘can understand more about the world around you, think multiple steps ahead, and take action on your behalf, with your supervision’.

Why has Samsung fallen behind in the AI boom?

A Cartoon AI chip

Samsung’s struggle in the AI race

Samsung, previously a powerhouse in the semiconductor industry, has encountered significant hurdles in the AI competition, leading to a notable decline in market value. The company’s faltering stance can be attributed to a variety of factors, such as strategic errors, fierce competition, and swift technological progress in the AI field.

Missteps

A key factor in Samsung’s downturn in the AI sector is its insufficient investment in high-bandwidth memory (HBM) technology, which is vital for AI applications due to its ability to expedite data processing and enhance performance.

Although Samsung was once at the forefront of memory technology, it did not leverage the increasing demand for HBM, thus ceding ground to competitors such as SK Hynix. SK Hynix made significant investments in HBM and forged a robust partnership with Nvidia, an influential entity in the AI domain.

Competition

The AI sector is fiercely competitive, featuring key companies such as Nvidia, Google, and Microsoft, which are making substantial advancements in AI technology. Nvidia has notably become a frontrunner with its GPUs, crucial for AI training. Samsung’s struggle to match these developments has resulted in a decline in both market share and revenue.

Rapid technological advancements

The swift advancement of technology in the AI sector has presented challenges for Samsung. The company’s emphasis on conventional memory technology did not fully prepare it for the transition to AI-centric applications. With the rise of AI applications such as OpenAI’s ChatGPT, the need for sophisticated memory solutions surged, highlighting Samsung’s insufficient investment in High-Bandwidth Memory (HBM) as a notable shortcoming.

Financial implications

Samsung’s difficulties in the AI sector have significantly affected its finances. The company has seen a reported loss of around $122 billion in market value since July 2024, marking the most substantial drop among global chipmakers. This decline is largely due to Samsung’s challenges in adapting to the evolving AI industry and competing with its rivals.

Prospects

Despite facing challenges, Samsung is actively striving to advance in the AI domain. The company has recently introduced its next-generation Bixby AI, which utilises large language model technology, positioning it to better contend with competitors such as ChatGPT and Google Gemini.

Additionally, Samsung is cultivating its proprietary AI model, named Samsung Gauss, with the goal of augmenting device functionality and elevating the consumer experience.

Samsung’s lag in the AI sector is due to strategic errors, fierce competition, and swift technological progress. Despite considerable financial setbacks, the company is vigorously pursuing new AI initiatives and investments to recover its standing in the industry.

The path forward is fraught with challenges, yet Samsung’s commitment to innovation and adaptation could enable it to regain its status as a frontrunner in the AI domain.

Big tech companies are increasingly adopting nuclear power to meet the high energy demands of their AI data centres

Data centre powered by nuclear reactors

Why?

Elevated Energy Needs

AI systems, particularly generative AI, necessitate substantial computational power, leading to significant energy use. Conventional energy sources might not meet these growing demands.

Environmental Commitments

Numerous tech firms have pledged to lower their carbon emissions. Nuclear power, a low-emission energy source, supports these environmental commitments.

Dependability

Nuclear energy offers a consistent and uninterrupted power supply, essential for data centres that operate around the clock.

Technological Advancements

Progress in nuclear technologies, such as small modular reactors (SMRs), has enhanced the feasibility and appeal of nuclear power for extensive use.

For example, Google has entered into an agreement with Kairos Power for electricity from small modular reactors to bolster its AI operations. In a similar vein, Microsoft has collaborated with Constellation to refurbish an inactive reactor at the Three Mile Island nuclear facility.

These collaborations mark a notable transition in the energy strategies of the tech sector, as they pursue dependable, eco-friendly, and robust power solutions to support their AI initiatives.

Nvidia hits new record high with new $3.4 trillion market cap

AI chips

Nvidia’s shares have reached a record peak as the company continues to benefit from the surging demand for its AI chips.

Tech giants such as Microsoft, Meta, Google, and Amazon are acquiring Nvidia’s GPUs in large volumes to create extensive AI computing clusters.

Nvidia, with a market capitalisation of around $3.4 trillion, ranks as the second most valuable publicly traded company in the U.S., trailing behind Apple, which has a market cap of approximately $3.55 trillion.

And to think… just 6 weeks ago Nvidia hit the news with this headline: Nvidia $279 billion market cap wipeout — the biggest in U.S. history for just ONE company.

Oh, the volatility of tech stocks, don’t you just love it?

The company’s stock rose by 2.4% to close at $138.07, exceeding the previous high of $135.58 set on 18th June 2024. The shares have increased by nearly 180% this year and have experienced a more than ninefold increase since early 2023.

Regarded as the leading supplier in the AI revolution, Nvidia has gained significantly from the generative AI surge initiated by OpenAI’s ChatGPT release in November 2022. Nvidia’s GPUs are instrumental in developing and running sophisticated AI models, including those that operate ChatGPT and related platforms.

You can’t go far wrong when big players such as Microsoft, Meta, Google and Amazon are buying your stuff.

Meta reportedly to use the voice of Dame Judi Dench for its celebrity AI Chatbot

Chatbot with celebrity voices

Meta, the parent company of Instagram, has announced that the voices of Dame Judi Dench and John Cena will be available as options for its AI chatbot.

Moreover, users can access information through AI representations of celebrities such as Awkwafina, Keegan-Michael Key, and Kristen Bell. Meta is hopeful that this new endeavour with celebrity chatbots will surpass the success of its previous attempts. In September 2023, Meta introduced AI chatbots featuring the ‘personalities’ of celebrities including Kendall Jenner and Snoop Dogg, but the project was terminated within a year.

At Meta’s annual Connect conference, CEO Mark Zuckerberg announced the new celebrity chatbot project, remarking, ‘Interacting with AI through voice will be more intuitive than through text.’ The enhanced ChatGPT-style chatbot will also be capable of recognising objects in images and providing relevant details. Additionally, a novel image editing tool will allow users to alter photos by simply directing the Meta AI with their requests.

Meta has disclosed that its AI now reaches over 400 million people monthly, with 185 million engaging weekly.

Meta one-year chart as of 2nd October 2024
