OpenAI has formalized a strategic partnership with Broadcom to design and manufacture its own artificial intelligence processors, confirming reports that had circulated in the press since September.
According to the joint press release, OpenAI will design its own AI accelerators, while Broadcom will handle the development, production, and integration of the hardware systems. The two companies will co-develop complete “racks” combining OpenAI’s chips with Broadcom’s networking, optical, and PCIe solutions.
Deployment will begin in the second half of 2026, with a phased rollout through the end of 2029. The goal is a total capacity of 10 gigawatts of accelerators, roughly equivalent to the electricity consumption of more than 8 million American households.
The Race for AI Infrastructure
OpenAI says it intends to embed in these chips “lessons learned from developing its models” to optimize performance and energy efficiency for its future systems. This approach underscores the company’s aim to control the entire technology stack, from software models to hardware infrastructure.
With Broadcom, OpenAI takes a new step by internalizing chip design, joining cloud giants like Google (Alphabet), Amazon, and Microsoft, which have developed their own architectures (TPU, Graviton, Maia, etc.).
However, this strategy carries real risks. Several companies, including Meta and Microsoft, have run into technical difficulties or delays in developing their own chips, without managing to match the performance of Nvidia's GPUs. Analysts believe Nvidia's dominance of the AI market is unlikely to be challenged in the short term.
The press release notes that the systems developed for OpenAI will rely exclusively on Broadcom's Ethernet networking technologies, offering an alternative to Nvidia's InfiniBand. In September, Broadcom had already disclosed a $10 billion order for AI chips from an unnamed customer, which several analysts suspected was OpenAI.
No Financial Details Disclosed
Neither OpenAI nor Broadcom disclosed the investment amount or the financing terms. The project, however, represents a significant scale-up, involving the construction or modernization of large compute and energy-storage infrastructures.
The alliance allows the maker of ChatGPT to secure its computing power amid the global GPU shortage, while optimizing cost per query through tight integration between hardware and software.
While the project's success depends on the two partners delivering competitive and reliable systems on time, it confirms an underlying trend: next-generation artificial intelligence will rely as much on software advances as on mastery of hardware.
Before Broadcom, OpenAI had struck a string of major deals in recent months, including an agreement with AMD to provide 6 GW of AI capacity, with an option for OpenAI to take an equity stake in the chipmaker, as well as a commitment from Nvidia, announced in early October, to invest up to $100 billion and supply 10 GW of data-center systems.