During Computex 2025, NVIDIA announced NVLink Fusion, a semi-custom AI infrastructure program that allows third-party chip manufacturers to use its NVLink and NVLink-C2C interconnect technologies. This opens the possibility for other companies to combine their custom processors with NVIDIA's Blackwell GPUs, or even the next-generation Rubin GPUs, to create AI computing platforms better tailored to specific needs. Companies including Alchip, Marvell, Qualcomm, Cadence, and Synopsys have already established partnerships with NVIDIA.
In a further explanation at the Hot Chips event, NVIDIA stated that NVLink Fusion is not limited to connections with NVIDIA products. Chip manufacturers adopting this design can use it to link custom processors to any accelerated computing component, such as GPUs, NPUs, or XPUs, expanding the possibilities for heterogeneous computing applications and accelerating the adoption of NVIDIA's NVLink technology.
This move also means that NVLink Fusion becomes a new platform through which NVIDIA can extend its technology. Beyond attracting more companies to build heterogeneous computing platforms with NVLink, it also encourages companies to use NVLink to connect NVIDIA's own accelerated computing products, such as the current Blackwell-architecture GPUs or the Arm-based Grace CPU, thereby deepening cooperation with market players.
Flexible combination supports various CPU/XPU configurations
NVIDIA said that NVLink Fusion supports three combination modes: a custom CPU, a custom XPU, or a combination of CPU and XPU, where XPU refers to any computing element. This provides greater flexibility in system configuration.
If one end is a single XPU, it can be bridged through the open UCIe connection interface and linked to other computing components via NVLink; it is not limited to NVIDIA's own computing products.
For custom processors paired with NVIDIA GPUs, NVIDIA recommends integrating NVLink-C2C technology within the processor to achieve native high-speed interconnection. If both ends are third-party custom processors, NVIDIA has not yet specified the connection architecture, though interconnection via NVLink-C2C is expected to remain possible.
Heterogeneous computing demands drive chip interconnect innovation
As the demand for computing bandwidth in AI and HPC systems rapidly increases, high-speed inter-chip interconnects are becoming ever more important. NVLink Fusion's open approach allows chipmakers to connect custom processors to a wider range of heterogeneous computing components via NVLink. By leveraging market-proven NVLink technology, the resulting ecosystem can accelerate inter-chip computing and system integration.
Currently, NVLink Fusion partners are pursuing different application directions. MediaTek uses NVLink in its automotive chips and has cooperated with NVIDIA to create the GB10 computing platform; Qualcomm plans to use NVLink technology to re-plan its data center development, with related products expected to be announced at this year's Snapdragon Summit in Hawaii; and Fujitsu is collaborating with RIKEN (the Institute of Physical and Chemical Research) on the next-generation supercomputer "Fugaku Next", designed with NVIDIA GPUs connected via NVLink.
Open ecosystem accelerates rack-level system deployment
NVLink Fusion is built on the Open Compute Project (OCP) MGX modular open computing rack solution. Partners can leverage this ecosystem to build rack-scale computing systems with NVLink. For AI or HPC systems requiring high bandwidth and low latency, NVLink Fusion offers a scalable, flexible, and efficient chip interconnect.
Overall, NVLink Fusion's open strategy not only expands NVIDIA's own product ecosystem but also enables third-party chip manufacturers to quickly build high-performance systems in the era of heterogeneous computing, accelerating the large-scale deployment of AI and HPC platforms.