Last year, at the third HUAWEI Connect conference, Huawei announced two full-scenario artificial intelligence chips, the Ascend 910 and the Ascend 310, under the Chinese brand name Shengteng. At this year's HUAWEI Connect conference, Huawei announced the launch of a new artificial intelligence training acceleration cluster, the Atlas 900.
The Atlas 900 is composed of thousands of Ascend 910 processors, with an overall acceleration capability of approximately 256-1024 PFLOPS (FP16), making it the world's fastest AI computing cluster. It is expected to be used in fields such as astronomical exploration and energy exploration.
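The quoted cluster figure can be sanity-checked with simple arithmetic. A minimal sketch, assuming (this figure is not stated in the article) that each Ascend 910 delivers 256 TFLOPS of FP16 peak compute, the number Huawei quoted for the chip at launch:

```python
# Assumption (not in the article): per-chip FP16 peak of the Ascend 910.
TFLOPS_PER_CHIP = 256

def cluster_pflops(num_chips: int) -> float:
    """Aggregate FP16 peak, in PFLOPS, for a cluster of num_chips processors."""
    return num_chips * TFLOPS_PER_CHIP / 1000  # 1 PFLOPS = 1000 TFLOPS

# Under this assumption, the quoted 256-1024 PFLOPS range corresponds to
# roughly 1,000 to 4,000 Ascend 910 chips:
print(cluster_pflops(1000))  # 256.0
print(cluster_pflops(4000))  # 1024.0
```

This is consistent with the article's description of the cluster as "composed of thousands of Ascend 910 processors".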
In the ResNet-50 training benchmark, the Atlas 900 completed training in just 59.8 seconds, surpassing the roughly 2.2 minutes previously taken by Google's TPU Pod, built from third-generation TPUs. In astronomical exploration, for example, Huawei says that finding a star with specific characteristics among 200,000 stars would require approximately 169 days of computing time under current mainstream computing conditions; using the Atlas 900, this is reduced to about 10 seconds.
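The relative speedup implied by the ResNet-50 figures is straightforward to work out from the two numbers given in the article:

```python
# Benchmark figures as reported: Atlas 900 vs. Google's third-generation TPU Pod.
atlas_seconds = 59.8
tpu_pod_seconds = 2.2 * 60  # 2.2 minutes expressed in seconds

# Ratio of the two training times, i.e. how many times faster the Atlas 900 ran.
speedup = tpu_pod_seconds / atlas_seconds
print(round(speedup, 2))  # ~2.21x
```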
The Ascend 910 used in the Atlas 900 is built on TSMC's 7nm FinFET process. Its operating power consumption is higher than that of the NVIDIA Tesla V100 or Google's TPU, but it also delivers significantly higher data transmission bandwidth. It uses PCIe 4.0 for connectivity and supports mainstream AI computing frameworks, so it can be applied to a wider range of computing scenarios.
Huawei said that the Atlas 900 computing cluster has been deployed on Huawei Cloud and is expected to be opened to research and academic institutions worldwide in the future.
At the same conference, Huawei also outlined the future direction of its computing strategy, which includes building a new architecture, expanding its processor lineup, opening up to external partners, and growing the ecosystem to enable artificial intelligence computing across application scenarios.