Beyond the annual barrage of technological announcements, 2026 also marks the 20th anniversary of CUDA, a cornerstone of NVIDIA's computing platform. In his keynote at GTC 2026, NVIDIA CEO Jensen Huang repeatedly recounted how this platform, which he calls the company's "core flywheel," evolved from a low-level parallel computing model used mainly in academia into infrastructure that permeates every cloud and every computer company worldwide, ultimately becoming indispensable to the modern artificial intelligence revolution.
Jensen Huang pointed out that as Moore's Law slows, "accelerated computing" has become the only viable path for AI development, and that path is expanding at unprecedented speed. He boldly predicted that as AI applications shift from model training to the "inference era," global demand for AI infrastructure will reach at least one trillion US dollars by 2027, and he frankly admitted: "This number may still be an underestimate." This is not merely a game of financial figures; it signals that a new era of computing, supported by the CUDA ecosystem, has arrived.
From cuDF to cuVS: CUDA redefines the data science workflow
To show how seamlessly GPU computing power can be integrated into the core of data processing, NVIDIA once again showcased the cuDF and cuVS libraries from its RAPIDS ecosystem at GTC 2026.
As a GPU-accelerated DataFrame tool, cuDF's biggest advantage is its ability to accelerate workloads originally written for CPU-based pandas or Apache Spark with zero code changes. For enterprises that process hundreds of millions of structured records daily, this can cut computation time from hours to minutes. After deploying cuDF, Snap reduced its daily data processing costs by 76% and can now complete analysis of 10 PB of data within 3 hours.
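The "zero code changes" claim refers to running ordinary pandas code through the RAPIDS `cudf.pandas` accelerator (enabled with `python -m cudf.pandas script.py`, or `%load_ext cudf.pandas` in a notebook), which transparently dispatches supported operations to the GPU. The sketch below is plain pandas with illustrative, made-up data; nothing in the code itself is cuDF-specific, which is the point:

```python
# Ordinary pandas code -- no cuDF-specific imports or API calls.
# Running it as `python -m cudf.pandas script.py` (or after
# `%load_ext cudf.pandas` in Jupyter) lets RAPIDS execute supported
# operations on the GPU, falling back to CPU pandas for anything else.
import pandas as pd

# Hypothetical structured records of the kind the article describes.
df = pd.DataFrame({
    "region": ["NA", "EU", "NA", "APAC", "EU", "NA"],
    "revenue": [120.0, 80.0, 200.0, 50.0, 95.0, 130.0],
})

# A typical analytics workload: aggregate revenue per region.
summary = df.groupby("region")["revenue"].sum().sort_values(ascending=False)
print(summary)
```

Because the accelerator intercepts pandas at import time rather than through a new API, existing scripts and notebooks need no rewriting to benefit.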
cuVS, meanwhile, plays a crucial role in retrieving unstructured data. As Retrieval-Augmented Generation (RAG) architectures become increasingly prevalent, cuVS's built-in advanced algorithms, such as CAGRA, can serve retrieval over billions of high-dimensional vectors with extremely low latency, providing fast and reliable knowledge memory for the next generation of AI agents.
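To make the retrieval task concrete, here is a CPU-only NumPy sketch of what such a vector search computes: given a query embedding, find the k most similar vectors in an index. This is deliberately not the cuVS API; CAGRA replaces this brute-force scan with a GPU graph-based approximate search so it scales to billions of vectors, and all data here is randomly generated for illustration:

```python
# Brute-force nearest-neighbor retrieval over synthetic embeddings.
# cuVS/CAGRA accelerates this same task on GPU with an approximate
# graph-based index instead of an exhaustive scan.
import numpy as np

rng = np.random.default_rng(42)
dim, n_vectors, k = 128, 10_000, 5

# Hypothetical document embeddings and a query embedding.
index_vectors = rng.standard_normal((n_vectors, dim)).astype(np.float32)
query = rng.standard_normal(dim).astype(np.float32)

# Cosine similarity = dot product of L2-normalized vectors.
index_norm = index_vectors / np.linalg.norm(index_vectors, axis=1, keepdims=True)
query_norm = query / np.linalg.norm(query)
scores = index_norm @ query_norm

# Indices of the k most similar vectors, highest similarity first.
top_k = np.argsort(scores)[::-1][:k]
print(top_k, scores[top_k])
```

In a RAG pipeline, the returned indices would map back to document chunks that are then fed to the language model as context.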
Ecosystems radiate outward: AI-RAN reshapes the telecommunications industry, and AI factories penetrate the medical field
CUDA's influence has long since transcended data centers and is beginning to reshape the infrastructure of physical industries. In telecommunications, NVIDIA is collaborating with companies such as T-Mobile and Nokia on AI-RAN (artificial intelligence radio access network) technology stacks, which transform 5G networks into distributed, high-performance edge AI computing platforms so that physical AI applications such as smart transportation and industrial safety can respond with millisecond-level latency. In one pilot, AI software company Linker Vision is working with the City of San Jose, using NVIDIA computing platforms deployed at the edge of T-Mobile's network to build a "city operations agent" that senses, simulates, and optimizes traffic signals, with the goal of cutting incident response time fivefold.
In the life sciences field, Roche has partnered with NVIDIA to build the largest AI factory in the pharmaceutical industry. This supercomputing deployment, spanning Europe and the Americas with more than 3,500 NVIDIA Blackwell GPUs, will empower the entire value chain, from drug discovery and clinical trials to diagnostic solutions. Roche is also using NVIDIA Omniverse to create a digital twin platform for its manufacturing facilities, simulating and optimizing complex production systems.
Open models and NemoClaw: spurring the development of "personal AI task systems"
In addition to hardware and basic software libraries, NVIDIA is also striving to expand its influence at the AI model level. To meet the needs of developers and enterprises for secure and trustworthy AI agents, NVIDIA launched the NemoClaw software stack, providing enterprise-grade security barriers and privacy sandboxes for the recently popular open-source AI agent platform OpenClaw.
Jensen Huang further elevated OpenClaw to the status of a "personal AI operating system," while NemoClaw fills the gap in the infrastructure layer needed to ensure the secure operation of these "digital employees." At GTC 2026, attendees could even use NVIDIA DGX Spark workstations or GeForce RTX laptops to deploy their own dedicated, always-online AI assistant with a single click through the "Build-a-Claw" event.
At the same time, Adobe also announced an expanded strategic partnership with NVIDIA. Adobe will leverage NVIDIA's accelerated computing technology and Omniverse library to develop next-generation Firefly generative AI models and explore combining the NVIDIA Agent Toolkit and Nemotron open models to drive more complex creative and marketing automation workflows.
Analysis
Looking back at GTC 2026, NVIDIA is no longer merely a hardware company supplying cutting-edge chips. Leveraging the expertise CUDA has accumulated over the past 20 years, it is building a comprehensive ecosystem spanning data science, telecommunications, medicine, robotics, and space computing.
The "one trillion dollar" opportunity Huang described rests on this ecosystem. As AI moves from "generation" to the "agent" stage, where it can autonomously plan and execute, each complex task consumes thousands of times more tokens than a traditional dialogue, creating what he calls a "computational vacuum." NVIDIA's role is that of the gold-rush merchant selling shovels and work clothes, and even building the mines themselves. From the Vera Rubin superchip in data centers, Jetson Thor for edge inference, and the NemoClaw software stack for data security, to edge 5G computing nodes built with T-Mobile, NVIDIA is trying to fill every inch of the AI computational gap, from cloud to edge, from the virtual world to the physical one.
On the 20th anniversary of CUDA's debut, NVIDIA has successfully established itself as a standard setter for global AI infrastructure. In the future, a company's competitiveness will no longer depend solely on the amount of data or models it possesses, but rather on its ability to efficiently transform computing power into productivity within this vast ecosystem called "NVIDIA".