OpenAI, which has long relied on Microsoft Azure cloud resources, has now officially announced an expansion of its cloud partnerships: for the first time, OpenAI has adopted Google Cloud to support the enormous computing needs of its ChatGPT and API services. This move not only marks a major shift in OpenAI's cloud deployment strategy, but also suggests that its relationship with Microsoft is quietly being adjusted.
Although Microsoft remains an important partner of OpenAI and retains exclusive API licensing rights and the right of first refusal to supply, OpenAI has now also begun to simultaneously use the infrastructure of cloud providers including Google Cloud, CoreWeave, and Oracle to meet the growing user demand and generative AI computing pressure.
Google Cloud will provide cloud services to OpenAI in the United States, Japan, the Netherlands, Norway, and the United Kingdom. For Google, this is more than just a single partnership; it symbolizes its breakthrough and increased competitiveness in the cloud market. This is particularly true given the long-standing dominance of AWS and Azure. While Google Cloud is relatively small, it has rapid growth momentum. This successful partnership with OpenAI will also help promote its proprietary hardware solutions, such as the Tensor Processing Unit (TPU).
In fact, OpenAI CEO Sam Altman publicly stated in April of this year that with the rapid adoption of AI applications, OpenAI faced a severe computing bottleneck and even sought additional GPU computing power. In March of this year, OpenAI signed a five-year partnership agreement with cloud service provider CoreWeave worth nearly $4 billion, demonstrating a strong intention to expand its infrastructure.
Some media outlets had previously reported that OpenAI was renting Google's TPU computing resources at scale, though at the time OpenAI said only that it was conducting preliminary tests. Now that it has officially chosen to partner with Google Cloud, its adoption of TPUs is likely to expand further, improving the deployment and training efficiency of its AI models overall.
Overall, OpenAI's shift to a multi-vendor computing resource model not only improves service stability and flexibility, but also allows for greater initiative in cloud capacity scheduling. For Microsoft, while it loses its exclusive cloud position, it retains its core licensing advantage. For Google, this partnership offers another opportunity to gain a foothold in the generative AI development race and strengthen its cloud computing presence.