Huawei has unveiled a bold new AI chip cluster strategy aimed squarely at challenging Nvidia’s dominance in high-performance computing.
At its Connect 2025 conference in Shanghai, Huawei introduced the Atlas 950 and Atlas 960 SuperPoDs—massive AI infrastructure systems built around its in-house Ascend chips.
These clusters represent China’s most ambitious attempt yet to bypass Western semiconductor restrictions and assert technological independence.
The technical stuff
The Atlas 950 SuperPoD, launching in late 2026, will integrate 8,192 Ascend 950DT chips, delivering up to 8 EFLOPS of FP8 compute and 16 EFLOPS at FP4 precision. (Don’t ask me either – but that’s what the data sheet says).
It boasts a staggering 16.3 petabytes per second of interconnect bandwidth, enabled by Huawei’s proprietary UnifiedBus 2.0 optical protocol, which Huawei claims is ten times faster than current internet backbone infrastructure.
This system is reportedly designed to outperform Nvidia’s NVL144 cluster, with Huawei asserting a 6.7× advantage in compute power and 15× in memory capacity.
In 2027, Huawei reportedly plans to follow up with the Atlas 960 SuperPoD, nearly doubling the specs with 15,488 Ascend 960 chips, 30 EFLOPS of FP8 compute, and 34 PB/s of interconnect bandwidth.
These SuperPoDs will be linked into SuperClusters, with the Atlas 960 SuperCluster projected to reach 2 ZFLOPS of FP8 performance—enough, potentially, to rival even Elon Musk’s xAI Colossus and Nvidia’s future NVL576 deployments.
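For a sense of scale, here is a back-of-envelope sanity check on the figures above. All inputs are Huawei’s claimed numbers, none independently verified, and the derived per-chip and per-cluster values are my own arithmetic, not anything Huawei has published:

```python
# Huawei's claimed figures (from the announcement; not independently verified)
ATLAS_950_CHIPS = 8_192          # Ascend 950DT chips per Atlas 950 SuperPoD
ATLAS_950_FP8_EFLOPS = 8.0       # claimed FP8 compute per Atlas 950 SuperPoD
ATLAS_960_CHIPS = 15_488         # Ascend 960 chips per Atlas 960 SuperPoD
ATLAS_960_FP8_EFLOPS = 30.0      # claimed FP8 compute per Atlas 960 SuperPoD
SUPERCLUSTER_FP8_ZFLOPS = 2.0    # projected Atlas 960 SuperCluster total

# Implied per-chip FP8 throughput (1 EFLOPS = 1,000 PFLOPS)
per_chip_950_pflops = ATLAS_950_FP8_EFLOPS * 1_000 / ATLAS_950_CHIPS
per_chip_960_pflops = ATLAS_960_FP8_EFLOPS * 1_000 / ATLAS_960_CHIPS

# SuperPoDs needed to hit the 2 ZFLOPS projection (1 ZFLOPS = 1,000 EFLOPS)
pods_for_cluster = SUPERCLUSTER_FP8_ZFLOPS * 1_000 / ATLAS_960_FP8_EFLOPS

print(f"Implied Ascend 950DT: ~{per_chip_950_pflops:.2f} PFLOPS FP8 per chip")
print(f"Implied Ascend 960:   ~{per_chip_960_pflops:.2f} PFLOPS FP8 per chip")
print(f"Atlas 960 SuperPoDs per 2 ZFLOPS cluster: ~{pods_for_cluster:.0f}")
```

If the claims hold, each Ascend 960 would deliver roughly double the FP8 throughput of a 950DT (consistent with the doubling-per-generation roadmap), and a 2 ZFLOPS SuperCluster would need on the order of 67 Atlas 960 SuperPoDs, i.e. over a million chips.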
Huawei’s roadmap includes annual chip upgrades: Ascend 950 in 2026, Ascend 960 in 2027, and Ascend 970 in 2028, with each generation promising to double computing power. The chips will feature Huawei’s own high-bandwidth memory variants, HiBL 1.0 and HiZQ 2.0, designed to optimise inference and training workloads respectively.
Strategy
This strategy reflects a shift in China’s AI hardware approach. Rather than competing on single-chip performance, Huawei is betting on scale and system integration.
By controlling the entire stack—from chip design to memory, networking, and interconnects—it aims to overcome fabrication constraints imposed by U.S. sanctions.
While Huawei’s software ecosystem still trails Nvidia’s CUDA, its CANN toolkit is gaining traction, helped by Chinese regulators reportedly discouraging purchases of Nvidia’s AI chips.
The timing of Huawei’s announcement coincides with increased scrutiny of Nvidia in China, suggesting a coordinated push for domestic alternatives.
In short, Huawei’s AI cluster strategy is not just a technical feat—it’s a geopolitical statement.
Whether it can match Nvidia’s real-world performance remains to be seen, but the ambition is unmistakable.
The AI power race just got even hotter!