New Chinese Exascale Supercomputer Runs 'Brain-Scale AI'

Back in October, reports surfaced that China had achieved exascale-level supercomputing capabilities on two separate machines. One of them, the Sunway "Oceanlite" system, is built entirely from Chinese components, from CPU to network.

While there have been few architectural details to date, a paper [PDF] published today outlines the compute, memory, and other aspects of the system, and shows off its capabilities with a system-spanning AI workload: a pre-trained language model with 14.5 trillion parameters, trained at mixed-precision performance of over one exaflop.

The system has "as many as 96,000 nodes," the paper reveals, built around Sunway SW26010-PRO compute units (manycore processors with built-in custom accelerators), a custom memory configuration, and a homegrown network fabric.

Although the system's exascale results on the supercomputing-standard Top500 benchmark were verified, though not published, it is important to note that this "brain-scale" workload is not itself running at full exascale capability. The standard for supercomputing performance measurements is generally 64-bit floating point (FP64), but this work was based on mixed precision. The new Sunway system can handle FP64, FP16, and BF16, and can trade those around during training for maximum efficiency.
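
To make that precision juggling concrete, here is a minimal sketch of the general technique using PyTorch's automatic mixed precision on a commodity GPU. Nothing below comes from the paper, and the Sunway software stack is custom; this simply shows low-precision compute doing the heavy lifting while the weight update stays in FP32, with a gradient scaler guarding FP16 against underflow (BF16's wider exponent range usually makes scaling unnecessary, part of its appeal for training).

```python
# Minimal mixed-precision training sketch (PyTorch, commodity GPU) --
# illustrative only; the Sunway stack is custom and not shown here.
import torch
import torch.nn as nn

device = "cuda"  # requires a CUDA-capable GPU
model = nn.Linear(1024, 1024).to(device)      # toy stand-in for a real model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()          # rescales FP16 grads to avoid underflow

for step in range(10):
    x = torch.randn(32, 1024, device=device)
    target = torch.randn(32, 1024, device=device)
    optimizer.zero_grad()
    # Eligible ops run in FP16 inside autocast; sensitive ops stay in FP32.
    with torch.autocast(device_type=device, dtype=torch.float16):
        loss = nn.functional.mse_loss(model(x), target)
    scaler.scale(loss).backward()   # backward pass on the scaled loss
    scaler.step(optimizer)          # unscales grads, then FP32 weight update
    scaler.update()                 # adjusts the scale factor for next step
```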

Even though mixed precision means this is not a true sustained exascale workload in traditional terms, the work does show evidence of some impressive hardware/software co-design thinking, especially as the supercomputing world wraps its collective head around how AI/ML is supposed to integrate with "old school" modeling and simulation.

The Chinese team provides chip- and node-level detail on tuning HPC systems for AI, including scheduling, memory, and I/O optimizations, plus a parallelization strategy that mixes several modes of parallelism to cut down on compute time and memory use. They also developed a distinct load balancer and a strategy for using mixed precision efficiently.
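
As a rough illustration of what mixing parallelism modes buys, the toy sketch below arranges ranks in a two-dimensional data x model parallel grid, so each rank holds only a slice of the weights and a slice of the global batch, dividing both memory and compute. The dimensions are made up, and the paper's actual scheme, including its load balancer, is not reproduced here.

```python
# Toy sketch of a 2-D hybrid parallel layout (hypothetical numbers, not
# the paper's scheme): ranks form a (data-parallel x model-parallel) grid.
DATA_PARALLEL = 4    # model replicas, each training on a different batch shard
MODEL_PARALLEL = 8   # ways each layer's weight matrix is split across ranks

def rank_assignment(rank, hidden=65536, global_batch=4096):
    """Return the (batch rows, weight columns) slice a given rank owns."""
    dp_group = rank // MODEL_PARALLEL    # which data-parallel replica
    mp_group = rank % MODEL_PARALLEL     # which slice of the weights
    rows = global_batch // DATA_PARALLEL
    cols = hidden // MODEL_PARALLEL
    batch_slice = (dp_group * rows, (dp_group + 1) * rows)
    weight_slice = (mp_group * cols, (mp_group + 1) * cols)
    return batch_slice, weight_slice

# Each of the 32 ranks stores 1/8 of the weights and sees 1/4 of the batch.
for r in (0, 7, 8, 31):
    batch, cols = rank_assignment(r)
    print(f"rank {r}: batch rows {batch}, weight cols {cols}")
```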

"This is an unprecedented demonstration of algorithm and system co-design on the convergence of AI and HPC," the paper's authors say.

The model and optimization set, called BaGuaLu, "enables decent performance and scalability on extremely large models by combining hardware-specific optimizations, hybrid parallel strategies, and mixed precision training," the team adds.

The authors, who include Alibaba employees in addition to academics from major Chinese universities, add that with current capabilities, training a 174-trillion-parameter model is within the realm of possibility.
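
A quick back-of-envelope check (our own arithmetic, not a figure from the paper) suggests why that claim is at least plausible: in BF16, 174 trillion parameters amount to roughly 348 TB of raw weight storage, or only a few gigabytes per node spread across 96,000 nodes, though optimizer state and activations would multiply the real footprint several times over.

```python
# Back-of-envelope parameter-memory check -- our own arithmetic, not the paper's.
params = 174e12          # parameters
bytes_per_param = 2      # BF16 storage, 2 bytes each
nodes = 96_000           # "as many as 96,000 nodes," per the paper

total_tb = params * bytes_per_param / 1e12
per_node_gb = params * bytes_per_param / nodes / 1e9
print(f"weights alone: {total_tb:.0f} TB total, ~{per_node_gb:.1f} GB per node")
# prints: weights alone: 348 TB total, ~3.6 GB per node
```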

Avid readers of the architecture-centric The Next Platform can be sure there will be a deep dive into the chewy bits of the architecture later today. Information about the machine has been light, but there is finally some detail to sink our teeth into. ®
