According to a Financial Times report, OpenAI plans to begin mass production of its self-developed AI chips in 2026. The chips are being developed in collaboration with Broadcom, with the goal of meeting enormous demand for AI computing while reducing the company's heavy dependence on NVIDIA GPUs.
Broadcom CEO Hock Tan recently revealed that the company has received a chip design order worth up to $100 billion. Although the customer was not explicitly disclosed, sources indicated that it was OpenAI, and that the customized chips are intended for internal use only, with no plans for external sale.
Building its own compute core to ease computing pressure
As early as 2023, OpenAI CEO Sam Altman publicly complained that GPU supply shortages were limiting the speed and stability of the company's API services. At the time, reports indicated that OpenAI was actively exploring the possibility of developing its own chips and was in talks with companies such as Broadcom and TSMC about potential cooperation.
With the launch of GPT-5 driving a rapid increase in computing demand, OpenAI not only plans to double the scale of its computing servers within five months, but also hopes to offset potential future GPU shortages by producing its own chips, while also reducing long-term hardware costs.
The roles of Broadcom and TSMC
It is not clear whether TSMC will continue to participate in this collaboration, but given Broadcom's experience in network communication chip design and TSMC's leading position in advanced process mass production, they are still considered two key partners for OpenAI to establish chip autonomy.
AI chip market competition
OpenAI's self-developed chip plan not only responds to short-term supply issues, but will also reshape the landscape of the AI semiconductor industry.
NVIDIA remains the leader in the AI chip market, reporting 56% year-over-year revenue growth in its second-quarter earnings report. Even with H20 chip shipments held back by U.S. export restrictions on China, NVIDIA still demonstrated strong growth momentum.
With Google, Amazon, and Microsoft all investing in developing their own AI chips, OpenAI's move to build its own chips would further highlight the market's accelerating trend toward customized "XPU-like" heterogeneous computing architectures, which are intended to improve the efficiency of AI workloads and significantly reduce reliance on a single supplier.
A new strategy for self-sufficiency
From OpenAI's perspective, investing in its own chips isn't about challenging NVIDIA's market share, but rather ensuring the stability of its AI models and services, avoiding constraints like supply bottlenecks or cost spikes. Making its own chips also means OpenAI can more tightly integrate AI model features with the hardware architecture, optimizing for energy efficiency and latency.
As mass production approaches next year, the success of OpenAI's self-developed chips will not only impact the company's growth momentum, but will also likely further drive the trend of autonomous AI infrastructure, allowing the global AI technology competition to move from "algorithm competition" to a new stage of "hardware and software integration."



