TSMC’s 58% surge in first‑quarter profit is the clearest sign yet that the AI boom is no longer a cyclical uplift but a structural shift reshaping the entire semiconductor industry.
The Taiwanese chipmaker delivered record earnings, comfortably beating analyst expectations, as demand for advanced processors continued to outstrip supply.
Net income reportedly reached NT$572.48 billion, marking a fourth consecutive quarter of record profits, while revenue climbed to NT$1.134 trillion, driven overwhelmingly by high‑performance computing and AI‑related orders.
What stands out is the composition of that growth. Roughly three‑quarters of TSMC’s wafer revenue reportedly came from advanced nodes, with 3‑nanometre chips alone accounting for a quarter of shipments.
Nvidia
Nvidia has now overtaken Apple as TSMC’s largest customer, underscoring how AI accelerators have become the industry’s most valuable real estate.
TSMC’s executives described AI demand as “extremely robust”, with customers signalling multi‑year commitments rather than the usual stop‑start ordering cycle.
The company also moved to reassure investors over supply‑chain risks linked to the Middle East conflict, saying it has diversified sources for critical gases such as helium and hydrogen.
With capacity running hot and capital spending set to hit the top end of guidance, TSMC is positioning itself as the indispensable chipmaker in the AI era.
ASML’s decision to raise its 2026 guidance underlines a simple reality: demand for advanced AI chips is not easing, and the world’s most important semiconductor equipment maker remains at the centre of that surge.
The company signalled stronger-than-expected orders for its extreme ultraviolet (EUV) and next‑generation high‑NA systems, driven by chipmakers racing to expand capacity for AI accelerators, data‑centre processors and cutting‑edge logic nodes.
Bottleneck
The upgrade matters because ASML sits at the bottleneck of global chip production. Only a handful of firms can even buy its most advanced machines, and those firms – chiefly TSMC, Intel and Samsung – are all scaling up AI‑focused manufacturing.
Their capital expenditure plans have held firm despite broader economic uncertainty, suggesting that AI infrastructure is becoming a non‑discretionary investment rather than a cyclical one.
Two forces are driving the momentum. First, hyperscalers continue to pour billions into AI clusters, creating sustained demand for the most advanced lithography tools.
Long-term lock in
Second, geopolitical pressure to secure domestic chip capacity is pushing governments and manufacturers to lock in long‑term equipment orders.
ASML’s raised outlook reinforces the sense that the semiconductor cycle is diverging: consumer electronics remain patchy, but AI‑related manufacturing is entering a multi‑year expansion.
The key question now is whether supply can keep pace with the ambition of its customers.
Meta has unveiled Muse Spark, its first major artificial intelligence model since the company overhauled its AI strategy in response to the underwhelming reception of its previous Llama 4 models.
Developed by the newly formed Meta Superintelligence Labs under the leadership of Alexandr Wang, Muse Spark represents a deliberate shift towards smaller, faster, and more capable systems designed to compete directly with Google, OpenAI, and Anthropic.
Foundation
Muse Spark is positioned as the foundation of a new family of models internally known as Avocado. Meta reportedly describes it as “small and fast by design”, yet able to reason through complex questions in science, maths, and health — a notable claim given the company’s recent struggles to keep pace with rivals.
Early evaluations suggest the model performs competitively in language and visual understanding, though it still trails in coding and abstract reasoning.
Crucially, Muse Spark is deeply integrated into Meta’s ecosystem. It already powers the Meta AI app and website and will soon replace Llama across WhatsApp, Instagram, Facebook, Messenger, and Meta’s smart glasses.
Integrated
This rollout signals Meta’s intention to embed AI more tightly into everyday user interactions, from search and recommendations to multimodal tasks such as analysing photos or comparing products.
The company is also experimenting with new revenue streams by offering a private API preview to select partners — a departure from its previous open‑source approach.
Whether this shift will alienate developers who embraced the openness of Llama remains to be seen.
Meta frames Muse Spark as an early step toward “personal superintelligence”, an assistant that can understand the world alongside the user rather than waiting for typed instructions.
It’s an ambitious vision — and one that will be tested as the model expands globally and faces scrutiny over privacy, safety, and real‑world performance.
Oracle is taking the axe to its own workforce as the company races to reposition itself as an AI‑infrastructure contender.
Thousands of roles are being eliminated, a drastic move that reflects the sheer financial pressure of trying to keep up with hyperscale rivals in the most capital‑intensive tech shift in decades.
The company’s share price has slumped 25% this year, with investors increasingly uneasy about soaring data‑centre spending and the heavy debt required to fund it.
Oracle has already raised $50 billion to bankroll new GPU‑ready facilities, but unlike Amazon or Microsoft, it lacks the cushion of vast cloud scale.
The result: a balance sheet under strain and a leadership team forced into tough decisions.
Future
Oracle’s remaining performance obligations have ballooned to more than half a trillion dollars, fuelled by major AI partnerships including a huge deal with OpenAI.
But those future revenues don’t solve today’s cash‑flow squeeze. Analysts estimate that cutting 20,000 to 30,000 jobs could free up as much as $10 billion — enough to keep the AI build‑out moving without further rattling the markets.
Oracle is betting that a leaner organisation now will buy it the runway to compete later. The question is whether the cuts arrive in time to match the speed of the AI race.
When artificial intelligence first ignited investor enthusiasm, it lifted almost every major technology stock.
The narrative was simple: AI would transform industries, boost productivity and unlock vast new revenue streams.
Yet as the cycle matures, markets are becoming more selective. In recent weeks, shares of IBM have drifted lower, illustrating how the ‘AI effect’ can cut both ways.
At first glance, IBM should be a prime beneficiary. The company has spent years repositioning itself around hybrid cloud infrastructure, data analytics and enterprise AI solutions.
Its Watson platform has been refreshed with generative AI tools designed to automate customer service, streamline software development and enhance business decision-making. Management has repeatedly emphasised AI as a core growth engine.
Market Expectations
However, the market’s expectations have shifted. Investors are increasingly rewarding companies that sit at the very heart of AI infrastructure — those supplying advanced semiconductors, high-performance computing capacity and hyperscale cloud services.
These businesses are reporting visible surges in AI-related demand, often accompanied by sharp revenue acceleration and expanding margins.
By contrast, IBM’s AI exposure is embedded within broader consulting and software operations, making its growth trajectory appear steadier rather than explosive.
This distinction matters in a momentum-driven environment. When earnings updates fail to deliver dramatic upside surprises, shares can quickly lose favour.
Less AI Effect
IBM’s results have shown progress in software and recurring revenue, but they have not reflected the kind of dramatic AI-driven uplift seen elsewhere in the sector. For some investors, that raises questions about competitive positioning and pricing power.
There is also a perception issue. Despite its reinvention efforts, IBM still carries the legacy image of a mature technology conglomerate rather than a cutting-edge AI disruptor.
In a market captivated by bold innovation stories, narrative can influence valuation just as much as fundamentals.
If capital flows concentrate in a handful of high-growth AI names, diversified players may struggle to keep pace in share price performance.
AI Tension
Yet the sell-off may also highlight a deeper tension within the AI theme. Enterprise adoption of AI tools tends to be gradual, cautious and closely tied to measurable productivity gains.
IBM’s strategy is built around long-term integration rather than short-term hype. While that approach may lack immediate fireworks, it could prove more durable as corporate clients prioritise reliability, governance and cost control.
For now, though, the AI effect is amplifying investor discrimination. In a market eager for rapid transformation, IBM’s more measured path has translated into weaker share performance — a reminder that not all AI exposure is valued equally.
Further discussion
IBM has found itself on the wrong side of the artificial intelligence boom, with its shares tumbling more than 13% after Anthropic unveiled a new capability that directly targets one of the company’s most enduring revenue pillars: COBOL modernisation.
The sell‑off reflects a broader market anxiety that AI is beginning to erode long‑protected niches in enterprise technology, and IBM has become the latest high‑profile casualty.
For decades, IBM has been synonymous with mainframe computing and the maintenance of vast COBOL‑based systems that underpin global finance, government services, airlines, and retail transactions.
These systems are notoriously complex, expensive to update, and dependent on a shrinking pool of specialist developers.
Premium Brand
That scarcity has long worked in IBM’s favour, allowing it to charge a premium for modernisation and support.
Anthropic’s announcement threatens to upend that equation. Its Claude Code tool, the company claims, can automate the most time‑consuming and costly parts of understanding and restructuring legacy COBOL environments.
Tasks that once required teams of analysts months to complete—mapping dependencies, documenting workflows, identifying risks—can now be accelerated dramatically through AI‑driven analysis.
The implication is clear: modernising legacy systems may no longer require the same level of human expertise, nor the same level of spending.
Investors reacted swiftly. IBM’s share price fell to $223.35, extending a year‑to‑date decline of more than 24%, before recovering to $229.39.
IBM one-year chart as of 24th February 2026
The drop reflects not only concerns about lost revenue, but also the fear that IBM’s competitive moat—built on decades of institutional reliance on COBOL—may be eroding faster than expected.
The timing has amplified market jitters. Only days earlier, cybersecurity stocks were hit by another Anthropic announcement: Claude Code Security, a feature designed to scan codebases for vulnerabilities.
AI Mood Logic
The rapid expansion of AI into specialised technical domains has created a ‘sell first, ask questions later’ mood across the market, with investors increasingly wary of companies whose business models depend on labour‑intensive or legacy‑bound processes.
For IBM, the challenge now is to demonstrate that it can harness AI rather than be displaced by it.
The company has invested heavily in its own AI initiatives, but the latest market reaction suggests investors are unconvinced that these efforts will offset the threat to its traditional strongholds.
The AI revolution is reshaping the technology landscape at speed. IBM’s sharp decline is a reminder that even the industry’s oldest giants are not insulated from disruption—and that the next wave of AI competition may hit the most established players hardest.
But remember, this is IBM we are talking about.
Explainer
What is COBOL?
COBOL is an old but remarkably durable programming language created in the late 1950s to run business, finance, and government systems, and it’s still powering much of the world’s banking and administrative infrastructure today.
It was designed to read almost like plain English, making it easier for non‑technical managers to understand, and its stability means many core systems have never been replaced.
For much of the past three years, the so‑called Magnificent Seven – Apple, Microsoft, Alphabet, Amazon, Meta, Tesla and Nvidia – have powered US equities to repeated record highs.
Their sheer scale, earnings strength and centrality to the AI boom turned them into a market narrative as much as an investment theme.
But as 2026 unfolds, the question is no longer whether they can keep leading the market higher, but whether the idea of treating them as a single trade still makes sense.
The short answer: the trade isn’t dead, but the era of effortless, broad‑based mega‑cap dominance is fading.
Mag 7 fatigue
The first sign of fatigue is the breakdown in cohesion. Last year, only a minority of the seven outperformed the wider S&P 500, a sharp contrast to the near‑uniform surges of 2023 and early 2024.
Nvidia and Alphabet continue to benefit from the structural demand for AI infrastructure and cloud‑driven productivity gains. Others, however, appear to be wrestling with slower growth, regulatory pressure or strategic resets.
Apple faces a maturing hardware cycle, Tesla is contending with intensifying global competition, and Meta’s spending plans continue to divide investors.
Mag 7 trade – which company is missing?
Divergence
This divergence matters. For years, investors could simply buy the group and let the rising tide of AI enthusiasm and index concentration do the work.
That simplicity has evaporated. Stock‑picking is back, and the market is finally distinguishing between companies with accelerating earnings power and those relying on past momentum.
At the same time, market breadth is improving. Capital is rotating into industrials and defensive sectors as investors seek exposure to areas that have lagged the mega‑cap rally. AI‑driven disruption, however, is also weighing on software, legal and financial‑services stocks.
Healthy future
This broadening is healthy: it reduces concentration risk and signals that the U.S. economy is no longer dependent on a handful of tech giants to sustain equity performance.
Yet it would be premature to declare the Magnificent Seven irrelevant. Their combined earnings growth is still expected to outpace the rest of the index, and their role in AI, cloud computing and digital infrastructure remains foundational.
Change
What has changed is the nature of the trade. These are no longer seven interchangeable vehicles for tech exposure; they are seven distinct stories with diverging trajectories.
The Magnificent Seven haven’t left the stage. They have likely stopped performing in unison – and for investors, that marks the beginning of a more nuanced, more selective chapter.
Alibaba has unveiled Qwen 3.5, its latest large language model series, signalling a decisive shift in China’s increasingly competitive AI landscape.
Released on the eve of the Chinese New Year, the new model arrives with both open‑weight and hosted versions, giving developers the option to run the system on their own infrastructure or through Alibaba’s cloud platform.
The company emphasises that Qwen 3.5 delivers improved performance and lower operating costs compared with earlier iterations, while introducing ‘native multimodal capabilities’ that allow it to process text, images, and video within a single system.
Ability
What sets Qwen 3.5 apart is its focus on agentic behaviour — the ability for AI systems to take actions, complete multi‑step tasks, and operate with minimal human supervision.
This trend has accelerated globally following recent releases from Anthropic and other U.S.-based developers, prompting Chinese firms to respond rapidly.
Alibaba says Qwen 3.5 is compatible with popular open‑source agent frameworks such as OpenClaw, which has surged in adoption among developers seeking more autonomous AI tools.
Capable
The open‑weight version features 397 billion parameters, fewer than Alibaba’s previous flagship model, yet the company claims significant gains in reasoning and benchmark performance.
It also supports 201 languages and dialects — a notable expansion that reflects Alibaba’s ambition to position Qwen as a global‑ready platform rather than a purely domestic competitor.
With rivals like ByteDance and Zhipu AI launching their own upgraded models, Qwen 3.5 underscores how China’s AI race is evolving from chatbot development to full‑scale autonomous agents — a shift that could reshape software markets and business models worldwide.
For much of the modern AI era, the United States has held a clear advantage in frontier research, compute infrastructure, and commercial deployment.
Silicon Valley’s combination of elite talent, abundant capital, and world‑class semiconductor design created an environment where breakthroughs could scale at extraordinary speed.
Challenge
That dominance, however, is no longer uncontested. China’s accelerating push into advanced AI is reshaping the global technological landscape and posing the most credible challenge yet to America’s leadership.
China’s strategy is not built on a single breakthrough but on coordinated national effort. Beijing has spent years aligning universities, state‑backed funds, and private‑sector giants around a shared objective: achieving self‑sufficiency in critical technologies and becoming a global AI powerhouse.
Competitive
Companies such as Huawei, Baidu, Alibaba and Tencent are now producing increasingly competitive large models, while domestic chipmakers are narrowing the performance gap with U.S. suppliers despite export controls.
Crucially, China’s AI ecosystem benefits from scale and cost advantages that the U.S. cannot easily replicate.
Massive data availability, lower energy costs, and vertically integrated supply chains allow Chinese firms to train and deploy models at prices that appeal to developing economies.
For many countries, especially those already reliant on Chinese infrastructure, adopting a Chinese AI stack is becoming a pragmatic economic choice rather than a geopolitical statement.
Investment returns?
This shift is occurring just as U.S. tech giants embark on unprecedented spending cycles. Hyperscalers are pouring hundreds of billions of dollars into data centres, specialised chips, and model training.
The U.S. and its massive BIG Tech Spending Spree – Feeding the AI Habit
While this investment underscores America’s determination to stay ahead, it also raises questions about sustainability.
Investors are increasingly asking whether such vast capital expenditure can deliver long‑term returns in a world where China is offering cheaper, rapidly improving alternatives.
The emerging reality is not one of immediate American decline but of a genuinely multipolar AI landscape. The U.S. still leads in foundational research, top‑tier talent, and cutting‑edge semiconductor design.
Yet China’s rise shows that a rival economy has mounted a serious challenge at the technological frontier.
The global AI race is no longer defined by a single centre of gravity. Instead, two competing ecosystems — one market‑driven, one reportedly state‑directed — are shaping the future of intelligent technology.
The outcome will influence not only economic power but the digital architecture of much of the world.
The world’s largest cloud providers are engaged in one of the most expensive technological races in history.
Amazon, Microsoft, Meta and Alphabet are collectively on track to spend as much as $700 billion on AI‑related capital expenditure this year — a figure that rivals the GDP of mid‑sized nations and has understandably rattled investors.
The question now dominating markets is simple: can hyperscalers justify this level of spending, and should analysts remain so bullish on their stocks?
A Binary Bet on the Future of AI
The scale of investment has shifted the AI build‑out from a strategic growth initiative to what some analysts describe as a binary corporate bet: with capex up roughly 60% year‑on‑year, the payoff must be both rapid and substantial.
If monetisation fails to keep pace, the consequences could be severe.
This is compounded by the fact that hyperscalers are now consuming nearly all of their operating cash flow to fund AI infrastructure, compared with a decade‑long average of around 40%. That shift alone explains the recent market jitters.
Why Analysts Remain Upbeat
Despite the turbulence, many analysts still argue the long‑term fundamentals remain intact. One reason is that hyperscalers are pre‑selling data‑centre capacity before it is even built, effectively locking in revenue ahead of deployment.
That dynamic supports the bullish view that AI demand is not only real but accelerating.
There is also a belief that as AI tools become embedded across consumer and enterprise workflows, willingness to pay will rise sharply.
If that scenario plays out, today’s eye‑watering capex could look prescient rather than reckless.
The Real Risk: Timelines
The challenge is timing. Much of the infrastructure being deployed — from chips to data‑centre hardware — has a useful life of just three to five years.
That gives hyperscalers a narrow window to recoup investment before the next upgrade cycle hits.
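The payback arithmetic behind that narrow window can be sketched with a simple calculation. The figures below — capex, useful life, and operating margin — are illustrative assumptions, not reported numbers for any company:

```python
# Hedged sketch: how much annual AI revenue is needed to recoup capex
# before the hardware reaches the end of its useful life.
# All inputs are illustrative assumptions, not reported figures.

def required_annual_revenue(capex: float, useful_life_years: float,
                            operating_margin: float) -> float:
    """Annual revenue needed so cumulative operating profit covers
    the initial outlay within the asset's useful life."""
    annual_profit_needed = capex / useful_life_years
    return annual_profit_needed / operating_margin

# Hypothetical example: $100bn of capex, a 4-year useful life,
# and a 30% operating margin on AI services.
rev = required_annual_revenue(capex=100e9, useful_life_years=4,
                              operating_margin=0.30)
print(f"Required annual AI revenue: ${rev / 1e9:.0f}bn")
```

Under these assumed numbers, each $100bn of capex would demand tens of billions in annual AI revenue before the next upgrade cycle — which is why the three-to-five-year depreciation schedule matters so much to the bull case.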
Without clearer monetisation strategies and firmer payback timelines, investor anxiety is likely to persist.
AI capex justification?
Hyperscalers can justify their AI capex — but only if demand scales as quickly as they expect and monetisation becomes more transparent.
Analysts may be right to stay bullish, but the margin for error is shrinking. In the coming quarters, clarity will matter as much as capital.
Alphabet’s decision to issue a 100-year sterling bond has captured the attention of global markets, not only because of its rarity but also because of what it signals about the escalating competition in artificial intelligence.
100-year sterling bond
A century-long bond denominated in pounds is an extraordinary financing move, particularly for a technology company.
It reflects both investor confidence in Alphabet’s long-term prospects and the scale of capital now required to compete in the AI era.
On the surface, the benefits are clear. Locking in funding for 100 years at today’s rates provides financial certainty. Alphabet can secure vast sums of capital without facing refinancing risk for generations.
In an industry defined by rapid change and enormous upfront costs — from data centres and semiconductor procurement to specialised AI chips and energy infrastructure — patient capital is invaluable.
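The trade-off for that certainty can be illustrated with standard bond arithmetic. The coupon and yield figures below are hypothetical, not the actual terms of Alphabet’s issue:

```python
# Hedged sketch: pricing a long-dated bond and its sensitivity to rates.
# Coupon and yield values are illustrative, not Alphabet's actual terms.

def bond_price(face: float, coupon_rate: float, ytm: float, years: int) -> float:
    """Price of an annual-coupon bond as the discounted sum of cash flows."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + ytm) ** t for t in range(1, years + 1))
    pv_face = face / (1 + ytm) ** years
    return pv_coupons + pv_face

# A hypothetical 100-year bond issued at par (coupon equals yield).
base = bond_price(100, 0.055, 0.055, 100)

# If market yields rise by one percentage point, the century bond's
# price falls far more than a comparable 10-year bond's would:
long_hit = bond_price(100, 0.055, 0.065, 100) / base - 1
short_hit = (bond_price(100, 0.055, 0.065, 10)
             / bond_price(100, 0.055, 0.055, 10) - 1)
print(f"100y price change: {long_hit:.1%}; 10y price change: {short_hit:.1%}")
```

The asymmetry works both ways: the issuer locks in today’s rate for a century, while bondholders shoulder the duration risk — which is why such instruments mainly suit pension funds and insurers matching very long-dated liabilities.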
Sterling
The sterling denomination also diversifies Alphabet’s funding base beyond U.S. dollar markets, potentially appealing to European institutional investors seeking stable, long-duration assets.
The bond may also be interpreted as a strategic signal. By committing to long-term financing, Alphabet demonstrates confidence in its ability to generate cash flows well into the next century.
It reinforces the company’s image as a durable, infrastructure-like enterprise rather than a volatile technology stock.
For investors such as pension funds and insurers, a 100-year instrument from a highly rated issuer can offer predictable returns in a world where long-term yield is scarce.
Cyclical
However, the move is not without shortcomings. Committing to fixed debt obligations over such an extended horizon reduces flexibility. While Alphabet currently enjoys strong balance sheet metrics, the technology sector is notoriously cyclical.
A century is an eternity in innovation terms. Business models, regulatory frameworks and geopolitical dynamics may shift dramatically.
Future generations of management will inherit the obligation, regardless of whether today’s AI investments deliver the expected returns.
More broadly, the bond feeds concern about a debt-fuelled AI arms race. As technology giants pour tens of billions into AI research, chip design and cloud infrastructure, borrowing is becoming an increasingly prominent tool.
If rivals respond with similar long‑dated issuance, the sector’s leverage could rise meaningfully. In a downturn, or if AI monetisation disappoints, heavy debt burdens could amplify financial strain.
Ultimately, Alphabet’s 100-year sterling bond embodies both ambition and risk. It underlines the immense capital demands of the AI revolution while raising questions about whether today’s competitive fervour is encouraging companies to stretch their balance sheets too far in pursuit of technological dominance.
Systemic anxiety
The deeper anxiety is systemic. With Oracle, Amazon, Microsoft and others also scaling up borrowing, total tech‑sector issuance is projected to hit $3 trillion over five years.
Some analysts warn this resembles a late‑cycle credit boom, where investors chase thematic excitement rather than sober fundamentals.
Alphabet’s century bond may be a masterstroke of timing — or a marker of excess.
Either way, it crystallises the tension at the heart of the AI revolution: extraordinary promise, financed by extraordinary debt.
Why a Sterling Bond?
Alphabet issued its 100‑year sterling bond to tap deep UK demand for ultra‑long‑dated assets, especially from pension funds seeking to match long‑term liabilities.
The sterling market offered strong appetite, with orders reportedly reaching nearly ten times the £1 billion on offer.
It also formed part of Alphabet’s broader multi‑currency fundraising drive to finance massive AI‑related capital spending, including data‑centre expansion.
Issuing in sterling diversified its investor base, reduced reliance on U.S. dollar markets, and signalled confidence in its long‑term stability as a quasi‑infrastructure‑scale business.
Anthropic has unveiled Claude Opus 4.6, its most capable AI model to date, marking a significant leap in long‑context reasoning, autonomous agent workflows, and enterprise‑grade coding performance.
The release arrives during a turbulent moment for the global software sector, with markets reacting sharply to fears that Anthropic’s accelerating capabilities could reshape entire categories of knowledge work.
At the heart of Opus 4.6 is a 1‑million‑token context window, a first for Anthropic’s Opus line and a direct response to long‑standing limitations around ‘context rot’ in extended tasks.
Benchmarks
Early benchmarks show a dramatic improvement in maintaining accuracy across vast documents and complex, multi‑step workflows.
This expanded capacity enables the model to analyse large codebases, regulatory filings, or research archives in a single pass—an ability already drawing interest from enterprise users.
Perhaps the most striking development is Anthropic’s progress in agentic systems. Claude Code and the company’s Cowork framework now support coordinated ‘agent teams’, allowing multiple Claude instances to collaborate on sophisticated engineering challenges.
In one internal experiment, a team of 16 Claude agents built a complete Rust‑based C compiler capable of compiling the Linux kernel—producing nearly 100,000 lines of code with minimal human intervention.
Agentic shift
This agentic shift is reshaping expectations around AI‑driven software development. Anthropic positions Opus 4.6 not merely as a tool but as a foundation for autonomous, multi‑agent workflows that can plan, execute, and refine complex tasks over extended periods.
The company highlights improvements in reliability, coding precision, and long‑running task stability as core differentiators.
With enterprise adoption already representing the majority of Anthropic’s business, Opus 4.6 signals a decisive step toward AI systems that operate as high‑level collaborators rather than assistants.
As markets digest the implications, one thing is clear: Anthropic is accelerating the transition from ‘AI that helps’ to AI that works alongside you—and sometimes, entirely on its own.
Legal profession
Anthropic is pushing aggressively into the legal domain, positioning Claude as a high‑precision research and drafting partner for firms handling complex regulatory workloads.
The latest models emphasise long‑context accuracy, allowing lawyers to ingest entire case bundles, contracts, or disclosure sets without losing coherence.
Anthropic has also expanded constitutional AI safeguards, aiming to reduce hallucinations in high‑stakes legal reasoning.
Early adopters report gains in due‑diligence speed, contract comparison, and regulatory interpretation, particularly in financial services and data‑protection work.
While not a substitute for legal judgement, Claude is rapidly becoming a force multiplier for teams managing heavy document‑driven tasks.
A new generation of artificial intelligence is taking shape, and at its centre sits OpenClaw — a fast‑evolving framework that embodies the shift from monolithic AI models to agile, task‑driven agents.
While large language models once dominated the conversation, the momentum has clearly moved toward systems that can reason, plan, and act with far greater autonomy. OpenClaw is emerging as one of the most intriguing examples of this transition.
Appeal
OpenClaw’s appeal lies in its modular design. Instead of relying on a single, all‑purpose model, it orchestrates multiple specialised components that collaborate to complete complex workflows.
This mirrors how real teams operate: one agent may handle research, another may draft content, and a third may evaluate quality or flag risks. The result is a system that behaves less like a tool and more like a coordinated digital workforce.
Defining trend
This shift is not happening in isolation. Across the industry, AI agents are becoming the defining trend. Companies are racing to build systems that can manage inboxes, run businesses, write and deploy code, or even negotiate with other agents.
The ambition is no longer to create a chatbot that answers questions, but an autonomous entity capable of executing multi‑step tasks with minimal human intervention.
OpenClaw stands out because it embraces openness and experimentation. Developers can plug in their own models, customise behaviours, and build agent ‘stacks’ tailored to specific industries.
Adoption
Early adopters in media, finance, and logistics are already exploring how these agents can streamline research, automate reporting, or coordinate supply‑chain decisions.
The promise is efficiency, but also creativity: agents that can generate ideas, test them, and refine them without constant supervision.
Of course, the rise of agentic AI brings challenges. Questions around safety, reliability, and accountability are becoming more urgent. An agent that can act independently must also be constrained responsibly.
Challenge
The industry is now grappling with how to balance autonomy with oversight, ensuring that these systems remain aligned with human goals and values.
Even with these concerns, the trajectory is unmistakable. OpenClaw and its peers represent a decisive step toward AI that is not merely reactive but proactive — capable of taking initiative, managing complexity, and collaborating with humans in more meaningful ways.
As these systems mature, they are likely to reshape not just how we work, but how we think about intelligence itself.
The meteoric rise of artificial intelligence (AI) stocks has captivated investors worldwide, but beneath the headlines lies a growing concern: are these valuations built on genuine fundamentals, or are they the product of collective psychology?
Increasingly, analysts point to the possibility that fear of missing out (FOMO) is a key driver of this rally, especially among retail traders drawn to AI names.
The European Central Bank recently warned that AI-related equities, particularly the so-called ‘Magnificent Seven’ tech giants—Alphabet, Amazon, Apple, Meta, Microsoft, Nvidia, and Tesla—are showing signs of ‘stretched valuations’.
Today, investors are piling into AI stocks not only because of their technological promise but also because they fear being left behind in what could be a transformative era.
Nvidia, now the world’s most valuable company, exemplifies this trend. Its dominance in AI chips has fuelled extraordinary gains, yet critics argue its valuation has raced far ahead of realistic earnings expectations.
The psychology is clear: when investors see others profiting, they rush in, often ignoring traditional measures of risk and return.
This dynamic creates a paradox. On one hand, AI undeniably represents a revolutionary force with vast potential across industries. On the other, the concentration of capital in a handful of firms raises systemic risks.
If expectations falter, the correction could be brutal, much like the dot-com crash that erased trillions in market value.
Ultimately, the AI boom may prove to be both a genuine technological leap and a speculative bubble. There are undeniably revolutionary technological advancements under way right now – but is it all happening too fast, too soon?
The challenge for investors is to distinguish between sustainable growth and hype-driven inflation—before it is too late.
The FOMO monster is definitely ‘artificially’ affecting the U.S. stock market – it will likely reveal itself soon.
The recent rebound in technology shares, led by Google’s surge in artificial intelligence optimism, offered a welcome lift to investors weary of recent market sluggishness.
Yet beneath the headlines lies a more troubling dynamic: the increasing reliance on a handful of mega‑capitalisation firms to sustain broader equity gains.
Breadth
Markets thrive on breadth. A healthy rally is one in which gains are distributed across sectors, signalling confidence in the wider economy. When only one or two companies shoulder the weight of investor sentiment, the picture becomes distorted.
Google’s AI announcements may well justify enthusiasm, but the fact that its performance alone can swing indices highlights a fragility in the current market structure.
This concentration risk is not new. In recent years, the so‑called ‘Magnificent Seven’ technology giants have dominated returns, masking weakness in smaller firms and traditional industries.
While investors cheer the headline numbers, the underlying reality is that many sectors remain subdued. Manufacturing, retail, and even parts of the financial industry are not sharing equally in the rally.
Over Dependence
Over‑dependence on high‑flyers creates two problems. First, it exposes markets to sudden shocks: if sentiment turns against one of these giants, indices can tumble disproportionately.
Second, it discourages capital from flowing into diverse opportunities, stifling innovation outside the tech elite.
For long‑term stability, investors and policymakers alike should be wary of celebrating narrow gains. A resilient market requires participation from a broad base of companies, not just the fortunes of a few.
Google’s success in AI is impressive, but true economic strength will only be evident when growth spreads beyond the marquee names.
Until then, the market remains vulnerable, propped up by giants whose shoulders, however broad, cannot carry the entire economy indefinitely.
Nvidia’s Q3 results show strength, but the real risk of an AI bubble may lie in the debt-fuelled data centre boom and the circular crossover deals between tech giants.
Nvidia’s latest quarterly earnings were nothing short of spectacular. Revenue surged to $57 billion, up 62% year-on-year, with net income climbing to nearly $32 billion. The company’s data centre division alone contributed $51.2 billion, underscoring how central AI infrastructure has become to its growth.
These figures have reassured investors that Nvidia itself is not the weak link in the AI story. Yet, the question remains: if not Nvidia, where might the bubble be forming?
Data centre roll-out
The answer may lie in the debt-driven expansion of AI data centres. Building hyperscale facilities requires enormous capital outlays, not only for GPUs but also for power, cooling, and connectivity.
Many operators are financing this expansion through debt, betting that demand for AI services will continue to accelerate. While Nvidia’s chips are sold out and cloud providers are racing to secure supply, the sustainability of this debt-fuelled growth is less certain.
If AI adoption slows or monetisation lags, these projects could become overextended, leaving balance sheets strained.
Crossover deals
Another area of concern is the crossover deals between major technology companies. Nvidia’s Q3 was buoyed by agreements with Intel, OpenAI, Google Cloud, Microsoft, Meta, Oracle, and xAI.
These arrangements exemplify a circular investment pattern: companies simultaneously act as customers, suppliers, and investors in each other’s AI ventures.
While such deals create momentum and headline growth, they risk masking the true underlying demand.
If much of the revenue is generated by companies trading capacity and investment back and forth, the market could be inflating itself rather than reflecting genuine end-user adoption.
Bubble or not to bubble?
This dynamic is reminiscent of past bubbles, where infrastructure spending raced ahead of proven returns. The dot-com era saw fibre optic networks built faster than internet businesses could monetise them.
Today, AI data centres may be expanding faster than practical applications can justify. Nvidia’s results prove that demand for compute is real and immediate, but the broader ecosystem may be vulnerable if debt levels rise and crossover deals obscure the true picture of profitability.
In short, Nvidia’s strength does not eliminate bubble risk—it merely shifts the spotlight elsewhere. Investors and policymakers should scrutinise the sustainability of AI infrastructure financing and the circular nature of tech partnerships.
The AI revolution is undoubtedly transformative, but its foundations must rest on genuine demand rather than speculative debt and self-reinforcing deals.
Anthropic has reportedly struck major deals with Microsoft and Nvidia. On Tuesday 18th November 2025, Microsoft announced plans to invest up to $5 billion in the startup, while Nvidia will contribute as much as $10 billion. According to reports, this brings Anthropic’s valuation to around $350 billion. Wow!
Google has unveiled its newest AI model, Gemini 3. According to Alphabet CEO Sundar Pichai, it will deliver desired answers with less prompting.
This update comes just eight months after the launch of Gemini 2.5 and is reported to be available in the coming weeks.
Money keeps flowing
Money keeps flowing into artificial intelligence companies but out of AI stocks
In what seems like yet another case of mutual ‘back-scratching’, Microsoft and Nvidia are set to invest a combined $15 billion in Anthropic, with the OpenAI rival agreeing to purchase computing power from its two newest backers.
Lately, a large chunk of AI news feels like it boils down to: ‘Company X invests in Company Y, and Company Y turns around and buys from Company X’.
That’s not entirely correct or fair. There are plenty of advancements in the AI world that focus on actual development rather than investments. Google recently introduced the third version of Gemini, its AI model.
Anthropic’s valuation has surged to around $350 billion, propelled by a landmark $15 billion investment from Microsoft and Nvidia.
Anthropic, the AI start-up founded in 2021 by former OpenAI employees, has rapidly ascended into the ranks of the world’s most valuable companies, more than doubling its worth from $183 billion just a few months earlier.
A valuation of $350 billion for a company only four years old is astounding!
The deal reportedly sees Microsoft commit up to $5 billion and Nvidia up to $10 billion. Anthropic has agreed to purchase an extraordinary $30 billion in Azure compute capacity and additional infrastructure from Nvidia.
This strategic alliance is not merely financial; it signals a deliberate diversification of Microsoft’s AI ecosystem beyond its reliance on OpenAI, while Nvidia strengthens its dominance in AI hardware.
Anthropic’s valuation has reached $350 billion, following the massive $15 billion investment from Microsoft and Nvidia, which positions the company among the most valuable in the world.
This astronomical figure reflects the scale of its partnerships, including $30 billion in Azure compute commitments and access to Nvidia’s cutting-edge hardware.
The valuation underscores both the intensity of the global AI race and the confidence investors place in Anthropic’s safety-conscious approach to artificial intelligence.
Yet it also raises questions about whether such figures reflect genuine long-term value, or merely the froth of an overheated market.
Hyperscalers keep pumping money into AI, but are they getting justified returns yet? Probably not, though the returns may come in time.
By then, however, it will be time to upgrade the systems as they develop, and so still more money will be pumped in.
Microsoft Azure experienced a widespread outage on 29th October, beginning around 16:00 UTC, which affected thousands of users and businesses globally.
The disruption stemmed from issues with Azure Front Door, Microsoft’s content delivery network, and cascaded into failures across Microsoft 365, Xbox, Minecraft, and numerous third-party services reliant on Azure infrastructure.
Major retailers such as Costco and Starbucks, as well as airlines including Alaska and Hawaiian, reported system failures that hindered customer access and internal operations.
Users struggled with authentication, hosting, and server connectivity, with DownDetector logging a surge in complaints from 15:45 GMT onwards.
Microsoft acknowledged the problem on its Azure status page, attributing the outage to a suspected configuration change.
Full service restoration was achieved by about 23:20 UTC, though the timing coincided awkwardly with Microsoft’s Q1 FY26 earnings report, where Azure was reportedly highlighted as its fastest-growing segment.
The incident underscores the critical dependence on cloud infrastructure and raises questions about resilience and contingency planning.
As businesses increasingly migrate to cloud platforms, the ripple effects of such outages become more pronounced, impacting not just productivity, but public trust in digital reliability.
It was just one week earlier, on Monday 20th October 2025, that Amazon Web Services (AWS) experienced a major outage that rippled across the digital world, disrupting operations for millions of users and businesses.
The incident, which originated in AWS’s US-East-1 region, was reportedly traced to DNS resolution failures affecting DynamoDB—one of AWS’s core database services.
This technical fault triggered cascading issues across EC2, network load balancers, and other critical infrastructure, leaving many services offline for hours.
The impact was immediate and widespread. Major consumer platforms such as Snapchat, Reddit, Disney+, Canva, and Ring doorbells went dark.
Financial services including Venmo and Robinhood faltered, while airline customers at United and Delta struggled to access bookings. Even British government portals like Gov.uk and HMRC were affected, underscoring the global reach of AWS’s infrastructure.
World leader
AWS is the world’s leading cloud provider, commanding roughly one-third of the global market—well ahead of Microsoft Azure and Google Cloud.
Millions of companies, from startups to multinational corporations, rely on AWS for everything from data storage and virtual servers to machine learning and content delivery.
Its services underpin critical operations in healthcare, education, retail, logistics, and media. When AWS stumbles, the internet itself feels the tremor.
20 Prominent Companies Affected by the AWS Outage (20th Oct 2025)
| Sector | Company Name | Impact Summary |
|---|---|---|
| E-commerce | Amazon | Internal systems and Seller Central offline |
| Social Media | Snapchat | App outages and delays |
| Streaming | Disney+ | Service interruptions |
| News | Reddit | Partial outages, scaling issues |
| Design Tools | Canva | High error rates, reduced functionality |
| Smart Home | Ring | Device connectivity issues |
| Finance | Venmo | Transaction delays |
| Finance | Robinhood | Trading disruptions |
| Airlines | United Airlines | Booking and check-in issues |
| Airlines | Delta Airlines | Reservation access problems |
| Telecom | T-Mobile | Indirect service disruptions |
| Government | Gov.uk | Portal access issues |
| Government | HMRC | Service delays |
| Banking | Lloyds Bank | Online banking affected |
| Productivity | Zoom | Meeting access issues |
| Productivity | Slack | Messaging delays |
| Education | Canvas | Assignment submissions disrupted |
| Crypto | Coinbase | User access failures |
| Gaming | Roblox | Server outages |
| Gaming | Fortnite | Gameplay interruptions |
This outage wasn’t the result of a cyberattack, but rather a technical fault in one of Amazon’s main data centres. Yet the consequences were no less severe.
Amazon’s own operations were disrupted, with warehouse workers unable to access internal systems and third-party sellers locked out of Seller Central.
Canva reported ‘significantly increased error rates’, while Coinbase and Roblox cited cloud-related failures.
The incident serves as a stark reminder of the risks inherent in centralised cloud infrastructure. As digital life becomes increasingly dependent on a handful of providers, the potential for systemic disruption grows.
A single point of failure can cascade across industries, affecting everything from classroom assignments to emergency services.
AWS has since restored normal operations and promised a detailed post-event summary. But for many, the outage has reignited questions about resilience, redundancy, and the wisdom of placing so much trust in a single cloud giant.
In the age of digital interdependence, even a brief lapse can feel like a global blackout.
The world’s largest contract chipmaker reported net income of NT$452.3 billion (£11.4 billion), far exceeding analyst expectations and marking a new high for the company.
Revenue climbed 30.3% year-on-year to NT$989.92 billion, driven by insatiable demand for high-performance chips powering artificial intelligence applications.
Tech giants including Nvidia, OpenAI, and Oracle have ramped up orders for TSMC’s cutting-edge processors, fuelling the company’s meteoric rise.
TSMC’s CEO, C.C. Wei, reportedly attributed the growth to ‘unprecedented investment in AI infrastructure’, noting that the company’s advanced nodes are now central to training large language models and deploying generative AI tools.
Despite global economic headwinds and ongoing trade tensions, TSMC’s strategic expansion—including a $165 billion global buildout across Arizona, Europe, and Japan—is positioning it as the backbone of next-gen computing.
The results also reflect a broader shift in the semiconductor landscape. As traditional consumer electronics plateau, AI-driven demand is reshaping supply chains and investment priorities.
Analysts suggest that AI chip spending could surpass $1 trillion in the coming years, with TSMC poised to capture a significant share.
For investors and industry observers, the message is clear: AI isn’t just a trend—it’s a fundamental shift. And TSMC, with its unparalleled fabrication expertise and global influence, is quietly shaping the future.
As the AI arms race accelerates, TSMC’s performance offers a glimpse into the future of tech: one where silicon, not software, defines the frontier.
The company’s latest earnings are not just a financial milestone—they’re a signal of where innovation is headed next.
Oracle Bets Big on AMD AI Chips, Challenging Nvidia’s Dominance
Oracle Cloud Infrastructure has announced plans to deploy 50,000 AMD Instinct MI450 graphics processors starting in the second half of 2026, marking a bold strategic shift in the AI hardware landscape.
The move signals a direct challenge to Nvidia’s long-standing dominance in the data centre GPU market, where it currently commands over 90% market share.
AMD’s MI450 chips, unveiled earlier this year, are designed for high-performance AI workloads and can be assembled into rack-sized systems that allow 72 chips to function as a unified engine.
This architecture is tailored for inferencing tasks—an area Oracle believes AMD will excel in. ‘We feel like customers are going to take up AMD very, very well’, Karan Batta, Oracle Cloud’s senior vice president, reportedly said.
The announcement comes amid a broader realignment in the AI ecosystem. OpenAI, historically reliant on Nvidia hardware, has recently inked a multi-year deal with AMD involving processors requiring up to 6 gigawatts of power.
If successful, OpenAI could acquire up to 10% of AMD’s shares, further cementing the chipmaker’s role in next-generation AI infrastructure.
Oracle’s pivot also reflects its ambition to compete with cloud giants like Microsoft, Amazon, and Google. With a reported five-year cloud deal with OpenAI potentially worth $300 billion, Oracle is positioning itself not just as a capacity provider but as a strategic AI enabler.
While Nvidia remains a formidable force, Oracle’s investment in AMD chips underscores a growing appetite for alternatives.
As AI demands scale, diversity in chip supply could become a competitive advantage—especially for enterprises seeking flexibility, cost efficiency, and innovation beyond the Nvidia ecosystem.
The AI arms race is far from over, but Oracle’s latest move suggests it’s no longer content to play catch-up. It’s aiming to redefine the rules.
U.S. stock markets are behaving like a mood ring in a thunderstorm—volatile, reactive, and oddly sentimental.
One moment, President Trump threatens a ‘massive increase’ in tariffs on Chinese imports, and nearly $2 trillion in market value evaporates.
The next, he posts that ‘all will be fine’, and futures rebound overnight. It’s not just policy—it’s theatre, and Wall Street is watching every act with bated breath.
This hypersensitivity isn’t new, but it’s been amplified by the precarious state of global trade and the towering expectations placed on artificial intelligence.
Trump’s recent comments about China’s rare earth export controls triggered a sell-off that saw the Nasdaq drop 3.6% and the S&P 500 fall 2.7%—the worst single-day performance since April.
Tech stocks, especially those reliant on semiconductors and AI infrastructure, were hit hardest. Nvidia alone lost nearly 5%.
Why so fickle? Because the market’s current rally is built on a foundation of hope and hype. AI has been the engine driving valuations to record highs, with companies like OpenAI and Anthropic reaching eye-watering valuations despite uncertain profitability.
The IMF and Bank of England have both warned that we may be in stage three of a classic bubble cycle. Circular investment deals—where AI startups use funding to buy chips from their investors—have raised eyebrows and drawn comparisons to the dot-com era.
Yet, the bubble hasn’t burst. Not yet. The ‘Buffett Indicator’ (total stock-market capitalisation divided by GDP) sits at a historic 220%, and the S&P 500 alone trades at 188% of U.S. GDP. These are not numbers grounded in sober fundamentals—they’re fuelled by speculative fervour and a fear of missing out (FOMO).
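For readers unfamiliar with the metric, the arithmetic behind the Buffett Indicator is straightforward: divide total stock-market capitalisation by GDP. A minimal sketch follows; the dollar figures are illustrative placeholders chosen only to reproduce the ~220% reading cited above, not official statistics.

```python
# Sketch of the 'Buffett Indicator': total market cap as a % of GDP.
# The figures below are illustrative placeholders, not official data.

def buffett_indicator(total_market_cap_tn: float, gdp_tn: float) -> float:
    """Return total market capitalisation as a percentage of GDP."""
    return total_market_cap_tn / gdp_tn * 100

# e.g. roughly $66tn of total US equity value against roughly $30tn of GDP
reading = buffett_indicator(66.0, 30.0)
print(f"Buffett Indicator: {reading:.0f}%")
```

Anything comfortably above 100% is historically regarded as expensive; a reading above 200% is what prompts the ‘stretched valuations’ language quoted earlier.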
But unlike the dot-com crash, today’s AI surge is backed by real infrastructure: data centres, chip fabrication, and enterprise adoption. Whether that’s enough to justify the valuations remains to be seen.
In the meantime, markets remain twitchy. Trump’s tariff threats are more than political posturing—they’re economic tremors that ripple through supply chains and investor sentiment.
And with AI valuations stretched to breaking point, even a modest correction could trigger a cascade.
So yes, the market is fickle. But it’s not irrational—it’s just balancing on a knife’s edge between technological optimism and geopolitical anxiety.
Influential figures and institutions are sounding the AI alarm—or at least raising eyebrows—about the frothy valuations and speculative fervour surrounding artificial intelligence.
Who’s Warning About the AI Bubble?
🏛️ Bank of England – Financial Policy Committee
View: Stark warning.
Quote: “The risk of a sharp market correction has increased.”
Why it matters: The BoE compares current AI stock valuations to the dotcom bubble, noting that the top five S&P 500 firms now command nearly 30% of market cap—the highest concentration in 50 years.
🏦 Jerome Powell – Chair, U.S. Federal Reserve
View: Cautiously sceptical.
Quote: Assets are “fairly highly valued.”
Why it matters: While not naming AI directly, Powell’s remarks echo broader concerns about tech valuations and investor exuberance.
🧮 Lisa Shalett – Chief Investment Officer, Morgan Stanley Wealth Management
View: Deeply concerned.
Quote: “This is not going to be pretty” if AI capital expenditure disappoints.
Why it matters: Shalett warns that 75% of S&P 500 returns are tied to AI hype, likening the moment to the “Cisco cliff” of the early 2000s.
🌍 Kristalina Georgieva – Managing Director, IMF
View: Watchful.
Quote: Financial conditions could “turn abruptly.”
Why it matters: Georgieva highlights the fragility of markets despite AI’s productivity promise, warning of sudden sentiment shifts.
🧨 Sam Altman – CEO, OpenAI
View: Self-aware caution.
Quote: “People will overinvest and lose money.”
Why it matters: Altman’s admission from inside the AI gold rush adds credibility to bubble concerns—even as his company fuels the hype.
📦 Jeff Bezos – Founder, Amazon
View: Bubble-aware.
Quote: Described the current environment as “kind of an industrial bubble.”
Why it matters: Bezos sees parallels with past tech manias, suggesting that infrastructure spending may be overextended.
🧠 Adam Slater – Lead Economist, Oxford Economics
View: Analytical.
Quote: “There are a few potential symptoms of a bubble.”
Why it matters: Slater points to stretched valuations and extreme optimism, noting that productivity projections vary wildly.
🏛️ Goldman Sachs – Investment Strategy Division
View: Cautiously optimistic.
Quote: “A bubble has not yet formed,” but investors should “diversify.”
Why it matters: Goldman acknowledges the risks while maintaining that fundamentals may still justify valuations—though they advise caution.
AI Bubble voices infographic October 2025
🧠 Julius Černiauskas and the Oxylabs AI/ML Advisory Board
🔍 View: The AI hype is nearing its peak—and may soon deflate.
Černiauskas warns that AI development is straining environmental resources and public trust. He’s pushing for responsible and sustainable AI practices, noting that transparency is lacking in how many models operate.
Ali Chaudhry, research fellow at UCL and founder of ResearchPal, adds that scaling laws are showing their limits. He predicts diminishing returns from simply making models bigger, and expects tightened regulations around generative AI in 2025.
Adi Andrei, cofounder of Technosophics, goes further: he believes the Gen AI bubble is on the verge of bursting, citing overinvestment and unmet expectations.
🧠 Jamie Dimon on the AI Bubble
🔥 View: Sharply concerned, more so than most, as widely reported.
Quote: “I’m far more worried than others about the prospects of a downturn.”
Context: Dimon believes AI stock valuations are “stretched” and compares the current surge to the dotcom bubble of the late 1990s.
📉 Key Warnings from Dimon
“Sharp correction” risk: He sees a real danger of a sudden market pullback, especially given how AI-related stocks have surged disproportionately—like AMD jumping 24% in a single day after an OpenAI deal.
“Most people involved won’t do well”: Dimon told the BBC that while AI will ultimately pay off—like cars and TVs did—many investors will lose money along the way.
“Governments are distracted”: He criticised policymakers for focusing on crypto and ignoring real security threats, saying: “We should be stockpiling bullets, guns and bombs”.
“AI will disrupt jobs and companies”: At a trade event in Dublin, he warned that AI’s ubiquity will shake up industries and employment across the board.
And so…
The AI boom of 2025 has ignited a speculative frenzy across global markets, with tech stocks soaring and investors piling into anything labelled “AI-adjacent.”
But beneath the euphoria, a chorus of high-profile warnings is growing louder. From the Bank of England and IMF to JPMorgan’s Jamie Dimon and OpenAI’s Sam Altman, concerns are mounting that valuations are dangerously stretched, capital is overconcentrated, and the narrative is outpacing reality.
Dimon likens the moment to the dotcom bubble, while Altman admits many will “lose money” chasing the hype. Analysts point to classic bubble signals: retail mania, corporate FOMO, and earnings divorced from fundamentals.
Even as AI’s long-term utility remains promising, the short-term exuberance may be setting the stage for a sharp correction.
Whether it’s a pullback or a full-blown crash, the mood is shifting—from uncritical optimism to wary anticipation.
The question now is not whether AI will change the world, but whether markets have priced in too much, too soon.
We have been warned!
The AI bubble will pop – it’s just a matter of when and not if.
There’s growing concern that parts of the AI boom—especially the infrastructure and monetisation frenzy—might be built on shaky foundations.
The term ‘AI house of cards’ is being used to describe deals like Oracle’s multiyear agreement with OpenAI, which has committed to buying $300 billion in computing power over five years starting in 2027.
That’s on top of OpenAI’s existing $100 billion in commitments, despite having only about $12 billion in annual recurring revenue. Analysts are questioning whether the math adds up, and whether Oracle’s backlog—up 359% year-over-year—is too dependent on a single customer.
Oracle’s stock surged 36%, then dropped 5% on Friday as investors took profits and reassessed the risks.
Some analysts remain neutral, citing murky contract details and the possibility that OpenAI’s nonprofit status could limit its ability to absorb the $40 billion it raised earlier this year.
The broader picture? AI infrastructure spending is ballooning into the trillions, echoing the dot-com era’s early adoption frenzy. If demand doesn’t materialise fast enough, we could see a correction.
But others argue this is just the messy middle of a long-term transformation—where data centres become the new utilities.
The AI infrastructure boom—especially the Oracle–OpenAI deal—is raising eyebrows because the financial and operational foundations look more speculative than solid.
Here’s why some analysts are calling it a potential house of cards:
⚠️ 1. Mismatch Between Revenue and Commitments
OpenAI’s annual revenue is reportedly around $10–12 billion, but it’s committed to $300 billion in cloud spending with Oracle over five years.
That’s $60 billion per year, meaning OpenAI would need to grow revenue 5–6x just to break even on compute costs.
CEO Sam Altman projects $44 billion in losses before profitability in 2029.
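The mismatch described above can be sanity-checked with back-of-the-envelope arithmetic, using only the figures reported in this piece:

```python
# Back-of-the-envelope check on the Oracle-OpenAI commitment,
# using the figures reported above.

TOTAL_COMMITMENT_BN = 300   # five-year Oracle cloud commitment, $bn
YEARS = 5
ANNUAL_REVENUE_BN = 12      # OpenAI's reported annual revenue, $bn (upper estimate)

annual_spend = TOTAL_COMMITMENT_BN / YEARS            # implied spend per year
revenue_multiple = annual_spend / ANNUAL_REVENUE_BN   # growth needed just to match it

print(f"Annual spend: ${annual_spend:.0f}bn "
      f"({revenue_multiple:.0f}x current revenue)")
```

That is $60 billion a year of committed spend against roughly $12 billion of revenue, before any other costs: the 5–6x figure quoted above falls straight out of the division.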
🔌 2. Massive Energy Demands
The infrastructure needed to fulfil this contract requires electricity equivalent to two Hoover Dams.
That’s not just expensive—it’s logistically daunting. Data centres are planned across five U.S. states, but power sourcing and environmental impact remain unclear.
AI House of Cards Infographic
💸 3. Oracle’s Risk Exposure
Oracle’s debt-to-equity ratio is already 10x higher than Microsoft’s, and it may need to borrow more to meet OpenAI’s demands.
The deal accounts for most of Oracle’s $317 billion backlog, tying its future growth to a single customer.
🔄 4. Shifting Alliances and Uncertain Lock-In
OpenAI recently ended its exclusive cloud deal with Microsoft, freeing it to sign with Oracle—but also introducing risk if future models are restricted by AGI clauses.
Microsoft is now integrating Anthropic’s Claude into Office 365, signalling a diversification away from OpenAI.
🧮 5. Speculative Scaling Assumptions
The entire bet hinges on continued global adoption of OpenAI’s tech and exponential demand for inference at scale.
If adoption plateaus or competitors leapfrog, the infrastructure could become overbuilt—echoing the dot-com frenzy of the early 2000s.
Is this a moment for the AI frenzy to take a breather?
Oracle Corporation has just staged one of the most dramatic rallies in tech history—catapulting itself into the elite club of near-trillion-dollar companies and reshaping the billionaire leaderboard in the process.
Founded in 1977 by Larry Ellison, Oracle began as a modest database software firm. Its first major boom came in the late 1990s, riding the dot-com wave as enterprise software demand exploded.
By 2000, Oracle’s market cap had surged past $160 billion, making it one of the most valuable tech firms of the era.
A second wave of growth followed in the mid-2000s, fuelled by aggressive acquisitions like PeopleSoft and Sun Microsystems, which expanded Oracle’s footprint into enterprise applications and hardware.
Boom
But its most recent boom—triggered in 2025—is unlike anything before. Oracle’s pivot to cloud infrastructure and artificial intelligence has paid off spectacularly. In its fiscal Q1 2026 report, Oracle revealed $455 billion in remaining performance obligations (RPO), a staggering 359% increase year-over-year.
This backlog, driven by multi-billion-dollar contracts with AI giants like OpenAI, Meta, Nvidia, and xAI, sent shockwaves through Wall Street.
Despite missing revenue and earnings expectations slightly—$14.93 billion in revenue vs. $15.04 billion expected, and $1.47 EPS vs. $1.48 forecasted—the market responded with euphoria.
Oracle’s stock soared nearly 36% in a single day, adding $244 billion to its market cap and pushing it to approximately $922 billion. Analysts called it ‘absolutely staggering’ and ‘truly awesome’, with Deutsche Bank reportedly raising its price target to $335.
Oracle Infographic September 2025
This meteoric rise had personal consequences too. Larry Ellison, Oracle’s co-founder and current CTO, saw his net worth jump by over $100 billion in one day, briefly surpassing Elon Musk to become the world’s richest person.
His fortune reportedly peaked at around $397 billion, largely tied to his 41% stake in Oracle. Ellison’s journey—from college dropout to tech titan—is now punctuated by the largest single-day wealth gain ever recorded.
CEO Safra Catz also benefited, with her net worth rising by $412 million in just six hours of trading, bringing her total to $3.4 billion. Under her leadership, Oracle’s stock has risen over 800% since she became sole CEO in 2019.
Oracle’s forecast for its cloud infrastructure business is equally jaw-dropping: $18 billion in revenue for fiscal 2026, growing to $144 billion by 2030. If these projections hold, Oracle could soon join the trillion-dollar club alongside Microsoft, Apple, and Nvidia.
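To put that forecast in perspective, the implied compound annual growth rate (CAGR) over the four fiscal years can be sketched from the two quoted figures alone:

```python
# Implied compound annual growth rate (CAGR) of Oracle's forecast:
# cloud infrastructure revenue of $18bn (FY2026) growing to $144bn (FY2030).

start_bn, end_bn = 18.0, 144.0
years = 2030 - 2026  # four fiscal years of growth

cagr = (end_bn / start_bn) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")
```

The forecast implies revenue compounding at roughly 68% a year for four consecutive years, which gives a sense of just how aggressive the projection is.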
From database pioneer to AI infrastructure powerhouse, Oracle’s evolution is a masterclass in strategic reinvention.
Oracle one-year chart 10th September 2025
And with Ellison now at the summit of global wealth, the company’s narrative is no longer just about software—it’s about legacy, dominance, and the future of intelligent computing.
As Big Tech poaches top AI talent, the companies left behind are stripped to the bone, their technical talent hollowed out!
In the race to dominate artificial intelligence, America’s tech giants are vacuuming up talent at an unprecedented pace.
But behind the headlines of billion-dollar acquisitions and flashy AI demos lies a quieter crisis: the creation of ‘zombie companies’, startups left staggering and soulless after their brightest minds are poached by Big Tech.
These zombie firms aren’t dead, but they’re no longer truly alive either. They continue to operate, maintain websites, and pitch to investors, yet their core innovation engine has stalled. The problem isn’t just brain drain — it’s brain decapitation.
When a startup loses its founding engineers, lead researchers, or visionary product designers to the likes of Google, Meta, or Microsoft, what remains is often a shell with no clear path forward.
The allure is understandable. Big Tech offers salaries that dwarf startup equity, access to massive compute resources, and the prestige of working on frontier models. But the downstream effect is corrosive.
Startups, once the lifeblood of AI experimentation, are now struggling to retain talent long enough to reach product maturity. Some pivot to consultancy, others limp along with outsourced development, and many quietly fold — their IP absorbed, their vision diluted.
This phenomenon is particularly acute in the U.S., where venture capital encourages rapid scaling but rarely protects against talent attrition. The result is a growing class of companies that exist more for optics than output — kept alive by inertia, legacy funding, or the hope of acquisition.
They clutter the innovation landscape, making it harder for truly disruptive ideas to gain traction.
Ironically, Big Tech’s hunger for talent may be undermining the very ecosystem it depends on. By stripping startups of their creative lifeblood, it risks turning the AI sector into a monoculture dominated by a handful of players, with fewer voices and less diversity of thought.
The solution isn’t simple. It may require new funding models, stronger incentives for retention, or even regulatory scrutiny of talent acquisition practices.
But one thing is clear: if the U.S. wants to remain the global leader in AI, it must find a way to nurture its startups — not just harvest them.
Otherwise, the future of innovation may be haunted by the walking dead.
Where is the standard for the tariff line? Is this fair to smaller businesses and consumers? Money buys a solution without fixing the problem!
Nvidia and AMD have struck a deal with the U.S. government: they’ll pay 15% of their China chip sales revenues directly to Washington. This arrangement allows them to continue selling advanced chips to China despite looming export restrictions.
Apple, meanwhile, is going all-in on domestic investment. Tim Cook announced a $600 billion U.S. investment plan over four years, widely seen as a strategic move to dodge Trump’s proposed 100% tariffs on imported chips.
🧩 Strategic Motives
These deals are seen as tariff relief mechanisms, allowing companies to maintain access to key markets while appeasing the administration.
Analysts suggest Apple’s move could trigger a ‘domino effect’ across the tech sector, with other firms following suit to avoid punitive tariffs.
⚖️ Legal & Investor Concerns
Some critics call the Nvidia/AMD deal a “shakedown” or even unconstitutional, likening it to a tax on exports.
Investors are wary of the arbitrary nature of these deals—questioning whether future administrations might play kingmaker with similar tactics.
Big Tech firms are striking strategic deals to sidestep escalating tariffs, with Apple pledging $600 billion in U.S. investments to avoid import duties, while Nvidia and AMD agree to pay 15% of their China chip revenues directly to Washington.
These moves are seen as calculated trade-offs—offering financial concessions or domestic reinvestment in exchange for continued market access. Critics argue such arrangements resemble export taxes or political bargaining, raising concerns about legality and precedent.
As tensions mount, these deals reflect a broader shift in how tech giants navigate geopolitical risk and regulatory pressure.