18/09/2025

Huawei’s Long Game: How the Chinese Champion Plans to Close the Gap on Nvidia




Huawei on Thursday pulled back the curtain on a sweeping multi-year plan to build home-grown AI chips, high-speed memory and massive “supernode” computing clusters — a blueprint designed to blunt Nvidia’s dominance in accelerated computing and to push China toward semiconductor self-reliance.
 
At the company’s annual Huawei Connect event, rotating chairman Eric Xu laid out a cadence of chip and system launches through 2028, a pledge to double compute performance with each yearly upgrade, and the public debut of proprietary high-bandwidth memory. The announcement — the most detailed disclosure of Huawei’s chip strategy since the company first signalled a move into silicon — crystallises a three-pronged approach China’s largest network and cloud vendor is using to challenge Nvidia: aggressive roadmap pacing, vertical integration of memory and interconnects, and scale via ultra-dense system designs.
 
A relentless product cadence
 
Huawei’s headline strategy is simple in concept and brutal in execution: ship new, more powerful Ascend AI processors on a one-year cycle and keep shrinking the performance gap generation by generation. The company said its Ascend roadmap will roll out major new models — the Ascend 950 in 2026, followed by the Ascend 960 in 2027 and the Ascend 970 in 2028 — and that each release will substantially exceed the prior generation’s compute density.
 
That cadence matters because, in the AI market, raw FLOPS and memory bandwidth are currency. By promising annual, predictable upgrades and offering variants tailored for training and inference, Huawei is signalling to cloud operators, telcos and large enterprise customers that it can meet evolving performance needs without the long, uncertain development cycles that have historically dogged domestic chipmakers.
 
The firm is pairing the Ascend roadmap with refreshes of Kunpeng server processors to provide a full stack — CPU, NPU and system software — that Chinese customers can buy from a single vendor. The pitch is clear: an integrated ecosystem that minimises the complexity of mixing components from multiple suppliers.
 
Building memory and interconnects — removing choke points
 
A key pillar of Huawei’s playbook is vertical integration. For years, high-bandwidth memory (HBM) and advanced packaging have been choke points for challengers to Nvidia and other Western suppliers. Huawei announced it has developed proprietary HBM technology and will use it in upcoming Ascend variants, a move that reduces dependence on overseas HBM suppliers and addresses one of the crucial bottlenecks in AI performance.
 
Beyond memory, Huawei emphasised interconnect architecture and system design. The company unveiled its next-generation “supernode” systems — the Atlas 950 and Atlas 960 — that are built to cluster thousands of Ascend NPUs with very high interconnect bandwidth. Huawei argues that these supernodes will deliver superior aggregate performance through dense packaging, optimised cooling and power distribution, and proprietary low-latency links between chips.
 
By controlling chips, memory and the interconnect stack, Huawei is pursuing a systems-first strategy: performance that does not rely solely on per-chip superiority but instead on engineering the entire rack and cluster to serve large AI workloads. That systems approach is designed to be compelling to customers who prioritise total throughput and cost per training run over single-chip benchmarks.
 
Scale and economics: supernodes and domestic demand
 
Huawei’s roadmap is explicitly calibrated to the Chinese market’s scale. The company has described Atlas superclusters capable of hosting hundreds of thousands — and eventually more than a million — Ascend processors across multiple SuperPoDs. That kind of scale feeds a second strategic aim: to create enormous internal demand that can amortise R&D and manufacturing investments and to keep domestic cloud, telecom and government customers within China’s technology ecosystem.
 
The economics matter. Large cloud and AI companies often make procurement decisions based on total cost of ownership, supply-chain risk and geopolitical considerations as much as raw performance. Huawei is selling a narrative that a domestically sourced stack, backed by aggressive product cycles and systems engineering, will offer a safer and more controllable option for Chinese customers concerned about export controls or supply interruptions.
 
Huawei’s timing and public posture also reflect the broader geopolitical context. The company’s push comes at a moment of heightened U.S.-China tensions over advanced semiconductors and concerns about national security. Chinese regulators have signalled pressure on firms to prioritise domestic suppliers in some procurement channels, and Beijing has stressed technological self-sufficiency as a strategic priority.
 
Huawei’s strategy therefore doubles as a political and economic message: demonstrate the capability to supply the most demanding AI workloads domestically, and reduce leverage that Western suppliers might exert through export controls or licensing restrictions. By showcasing in-house memory, a steady release cadence, and massive on-premises systems, Huawei is both reassuring Chinese customers and attempting to narrow the policy-induced gaps that have benefited incumbents like Nvidia.
 
What this means for Nvidia and global suppliers
 
Huawei’s approach does not guarantee immediate parity with Nvidia. Industry engineers and independent testers continue to find Nvidia’s chips strong on many AI workloads, and Nvidia’s ecosystem — including CUDA software, third-party optimisations and global manufacturing partners — remains a powerful moat. But Huawei is attacking the problem on multiple fronts simultaneously, and that multipronged assault could blunt Nvidia’s market share in segments where China’s domestic demand is concentrated.
 
If Huawei can deliver credible performance, supply at scale, and a compelling total-cost proposition, it could redirect a meaningful portion of domestic AI spend away from foreign GPUs. That would have ripple effects: third-party manufacturers such as TSMC and HBM vendors could see shifts in order flows, and Western AI software vendors might be compelled to prioritise multi-stack compatibility to retain access to Chinese customers.
 
Huawei’s play also raises choices for global cloud providers and chipmakers. Some may continue to rely on Nvidia for peak performance, while others — especially those operating large China-based infrastructure — might adopt Ascend-based supernodes to avoid regulatory risk and supply uncertainty.
 
Delivering on this strategy will not be straightforward. Scaling production, ensuring yield at advanced packaging nodes, and proving sustained software and systems performance across diverse workloads are non-trivial engineering tasks. Manufacturing constraints and access to advanced process tools remain a long-term impediment for many Chinese chipmakers, and Huawei will need to navigate these limits even as it expands system design and memory capabilities.
 
Moreover, convincing the broader software ecosystem to adopt a competing stack — including compilers, libraries and frameworks tuned for Ascend — will be essential for wide adoption beyond trial deployments. Huawei’s advantage in networking and its direct relationships with Chinese cloud operators provide a runway, but expanding beyond that sphere will require demonstrable economics and interoperability.
 
Markets and competitors reacted swiftly to Huawei’s unveiling. Chinese semiconductor and cloud stocks showed renewed investor interest as analysts reassessed the competitive landscape. For global observers, Huawei’s public roadmap removed uncertainty about the company’s ambitions and forced rivals to reassess product roadmaps and go-to-market plans for AI infrastructure.
 
Huawei’s playbook is thus a test case in modern industrial strategy: marry product cadence with vertical control and scale, and lean on domestic market dynamics to fund a sustained challenge. Whether that will translate into a material loss of Nvidia’s global dominance is an open question — but Huawei’s announcement makes clear that the contest for the future of AI silicon will be fought at the systems level as much as at the transistor level.
 
(Source: www.bloomberg.com)

Christopher J. Mitchell