Amidst the ongoing impact of US export controls on the global AI chip supply chain, NVIDIA is employing a two-pronged strategy to re-enter the Chinese market. After CEO Jensen Huang confirmed that production of the export-compliant H200 chip has been restarted, NVIDIA is also reportedly preparing a Groq-based AI inference chip for sale in the Chinese market, which sources say could be ready as early as May this year. (Reuters)
From the H200 restart to the Groq strategy
At the GTC 2026 conference currently being held in San Jose, California, Jensen Huang stated that production of the H200 AI chip has resumed, after NVIDIA obtained export licenses from the Trump administration and received purchase orders from Chinese customers. The chip, based on the previous-generation Hopper architecture, is one of the few high-end AI chips NVIDIA can legally sell to China under the strict export restrictions.
Reuters, citing two sources familiar with the matter, reported that NVIDIA is preparing a Groq-based AI inference chip for the Chinese market. Late last year, in a deal reportedly worth up to $170 billion, NVIDIA acquired a key technology license from AI startup Groq, and at GTC 2026 it announced an LPU inference chip product lineup designed on Groq technology and manufactured using Samsung's processes. This is seen as a proactive move to position itself in a market that demands faster response times and lower operating costs.
Targeting the enormous opportunity in AI "inference"
NVIDIA plans to deploy the Groq chips for inference, the stage at which AI systems answer questions, write code, or perform tasks for users. In its product showcase at GTC, NVIDIA confirmed that it will pair the Groq chip with the Vera Rubin platform, expected to ship in the second half of this year, to create an AI computing platform with enhanced training acceleration and inference speed.
Jensen Huang said the inflection point he has long predicted has arrived and that market demand will continue to rise. He also forecast that the AI infrastructure market will exceed $1 trillion in value by 2027.
Competition in the inference market will be even fiercer than in the training market. Besides NVIDIA's Groq chip, Google already accelerates inference with its own TPUs, AWS does the same with its Inferentia chips, and Chinese companies such as Baidu and Huawei have their own inference chips as well. If NVIDIA were to compete in the AI inference market with the Groq chip alone, its position might therefore be relatively weak; it may instead pair the chip with its training-oriented GPUs to attract Chinese companies with a complete solution.
A non-downgraded China strategy
It is worth noting that sources told Reuters that the chip being prepared for the Chinese market is neither a downgraded version nor a "special edition" built specifically for China. This marks a departure from NVIDIA's past approach, when export controls forced it to release the lower-performance A800, H800, and H20 chips.
Sources indicate that the new Groq chip variant is adaptable, can work seamlessly with other systems, and is expected to be ready in May. NVIDIA has not commented on the report.



