Even as Meta pledged to continue purchasing millions of GPUs from NVIDIA, CEO Mark Zuckerberg has been actively pursuing a strategy to move away from a single vendor. Meta recently not only announced a multi-billion-dollar lease of Google's TPUs for model training, but also revealed a multi-generation strategic partnership with AMD on a scale of up to 6 GW, definitively signaling the arrival of a new era of fierce competition in the AI chip market.
According to a report by The Information, citing sources familiar with the matter, Meta has reached a long-term agreement with Google worth "billions of dollars." Over the next few years, Meta will lease Google's AI accelerator chips, TPUs (Tensor Processing Units), to develop entirely new AI models.
A market-shaking decision: using TPUs for "model training"
This agreement between Meta and Google has drawn so much market attention because Meta intends to use the TPUs for the most demanding stage of the pipeline: AI model training.
In the past, the industry generally believed that NVIDIA's absolute dominance, built on the CUDA software ecosystem and NVLink chip-interconnect technology, meant competitors such as AMD or Google could only capture a share of the "model inference" market, where ecosystem requirements are lower. Meta's move to shift training workloads directly onto TPUs breaks the market assumption that "training can only run on NVIDIA."
Behind Meta's decision lie several real-world factors: beyond the slower-than-expected progress of MTIA, Meta's self-developed AI training chip, there were also the production ramp-up problems caused by technical glitches and hardware complexity during the large-scale deployment of NVIDIA's latest Blackwell chips last year. These setbacks made Meta realize the urgency of establishing a "second option" to diversify risk.
Teaming up with AMD: an epic 6 GW deployment spanning customized MI450 GPUs and EPYC processors
While embracing Google's TPUs, Meta is also expanding its hardware procurement with another chipmaker, AMD.
AMD and Meta jointly announced an unprecedented 6 GW (gigawatt) infrastructure deployment agreement. This multi-year, generational collaboration will comprehensively cover AMD's Instinct GPUs, EPYC CPUs, and rack-mount AI systems.
Key highlights of this massive project include:
• Customized chip debut: The first gigawatt-scale systems are expected to begin shipping in the second half of 2026, featuring custom AMD Instinct MI450 GPUs optimized for Meta workloads.
• Elevated strategic importance of CPUs: As AI infrastructure grows more complex, CPU coordination and scalability become more critical. Meta has confirmed it will be a major customer for AMD's 6th-generation EPYC processors (codenamed Venice) and the next-generation "Verano".
• System-level integration: The deployment will be based on the AMD Helios rack architecture announced at the OCP Global Summit, with deep integration of the ROCm software ecosystem.
• Deep equity binding: To align interests, AMD issued performance-based warrants allowing Meta to purchase up to 1.6 million shares. The warrants unlock gradually as Meta achieves specific purchase milestones (such as the initial 1 GW of shipments) and as AMD's share price crosses set thresholds.
Google's plan: turning the TPU into the next money-printing machine
Returning to Google, securing Meta as a major client is undoubtedly a significant victory for its TPU externalization strategy.
To compete head-on with NVIDIA, Google is actively pushing the commercialization of TPUs. Beyond leasing them directly to Meta, Google has even borrowed the "special purpose vehicle" (SPV) model from the finance industry: partnering with large investment institutions to set up joint ventures that purchase TPUs with financing and sublease them out, aiming to turn the TPU business into a new engine contributing billions of dollars in revenue.
Analysis
Mark Zuckerberg is well aware that while NVIDIA is currently the world's leading AI arms dealer, allowing Jensen Huang to monopolize the market would mean a complete loss of future bargaining power. He is therefore working three fronts at once: maintaining the partnership with NVIDIA (by continuing to purchase millions of GPUs), courting Google to fill the training-compute gap, and binding AMD closely through heavy investment and equity incentives to cultivate a powerful ally capable of challenging NVIDIA at the hardware level.
The biggest variable in this computing-power war will ultimately be foundry capacity allocation, mainly at TSMC. With NVIDIA's GPUs, AMD's Instinct accelerators, Google's TPUs, and even Meta's potential future self-developed chips all vying for TSMC's most advanced CoWoS packaging and leading-edge process capacity, the battle is about to enter its most intense and brutal phase.



