According to a report obtained by The Information, NVIDIA is undergoing an internal reorganization that will scale back DGX Cloud, its public cloud division. The unit was not eliminated outright; instead, it has been folded into the core engineering business group under Senior Vice President Dwight Diercks, and its main task will shift from external leasing to internal research and development.
From "selling shovels" to "operating mining farms," is this a strategic shift for NVIDIA?
In 2023, NVIDIA launched its DGX Cloud service with great fanfare. The original concept was to partner with cloud service providers such as Oracle, Microsoft Azure, and Google Cloud, placing NVIDIA's dedicated DGX server clusters inside these providers' data centers. Enterprise customers could then rent computing power (including top-tier GPUs such as the H100 and Blackwell) and a complete AI software stack directly from NVIDIA by subscription. NVIDIA touted the service as solving the pain point of enterprises being unable to buy GPUs, letting more companies quickly obtain massive accelerated computing resources without having to stockpile GPUs themselves.
This restructuring announcement indicates that NVIDIA intends to downplay the external operational aspect of its DGX Cloud business. The future DGX Cloud will primarily be used by NVIDIA's internal engineers for chip development and testing, as well as for training internal AI models such as Isaac (robotics) and Nemotron (language model).
Avoiding turf wars with major clients.
Why this shift? The consensus view is that the main reason is to eliminate conflicts of interest.
While NVIDIA is the arms dealer of the AI era, its largest customers are cloud service providers (CSPs) such as AWS, Microsoft, and Google. When NVIDIA steps in to operate its own DGX Cloud service, it is essentially competing with its major clients.
Reducing DGX Cloud’s external business will not only repair relationships with these cloud service providers, but also allow NVIDIA to focus its resources on the currently in-demand AI chip design, hardware platform delivery, and maintaining its strong CUDA software moat.
Analysis and viewpoint: return to core competencies and leave cloud operations to the professionals.
In my opinion, NVIDIA's move is a rather clever "strategic retreat".
Although Jensen Huang has consistently emphasized that NVIDIA is not just a chip company but a "platform company," in the fiercely competitive cloud services (IaaS) market, companies like AWS and Azure have already built high walls. Since NVIDIA is already making a fortune selling hardware, there's really no need for it to raise its own cow just to get milk, and even risk offending the distributors who sell its milk.
Converting DGX Cloud services for internal use could actually accelerate NVIDIA's own R&D iteration speed. After all, to design more powerful next-generation chips or smarter autonomous driving AI, NVIDIA itself is the super user that needs massive computing power the most. Keeping the best tools for internal development and handing over cloud services to partners may be the long-term strategy to maintain its AI dominance.