OpenAI has taken another decisive step in the intensifying global race for artificial intelligence dominance by signing a massive long-term computing agreement with Cerebras Systems, a U.S.-based chipmaker often described as one of Nvidia’s most serious challengers. The deal, valued at more than $10 billion over its lifetime, reflects how central raw computing power has become to the future of advanced AI models, especially as systems grow more capable, more complex, and far more demanding to run at scale.
At the heart of this agreement is OpenAI’s plan to purchase up to 750 megawatts of computing capacity from Cerebras over the next three years. To put that number into perspective, 750 megawatts is a power draw on the order of an entire industrial zone or a small city. For OpenAI, however, this scale is no longer extraordinary. It is increasingly the baseline required to serve the hundreds of millions of users who rely on tools like ChatGPT for real-time answers, reasoning, and creative work.
Unlike earlier AI infrastructure deals that focused heavily on training new models, this partnership is designed primarily around inference. Inference is the process that happens after a model is trained, when it responds to user queries, reasons through problems, and generates output. As AI systems move toward reasoning-based architectures that pause, evaluate, and think before answering, inference has become both more valuable and more computationally expensive. OpenAI’s leadership has made it clear that faster response times and smoother performance are now as critical as raw intelligence.

“Integrating Cerebras into our mix of compute solutions is all about making our AI respond much faster,” OpenAI said in a post on its website.
Cerebras entered discussions with OpenAI last August after demonstrating that some of OpenAI’s open-weight models could run more efficiently on its proprietary hardware than on conventional graphics processing units. This efficiency claim matters in an industry where marginal gains in speed and energy use can translate into billions of dollars saved or earned. After months of technical evaluation and negotiation, the two companies settled on a structure in which Cerebras will provide cloud services powered by its own chips, rather than simply selling hardware.
Under the terms of the agreement, Cerebras will either build or lease data centers filled with its wafer-scale engines, while OpenAI will pay to access this computing capacity through Cerebras’ cloud platform. The rollout will happen in stages, with capacity coming online in multiple phases through 2028. This gradual approach allows both companies to adjust to demand, technological improvements, and broader market conditions without committing everything upfront.
The deal highlights a deeper shift taking place across the AI sector. For years, Nvidia’s GPUs have been the default foundation for both training and inference. While Nvidia remains dominant, companies like Cerebras are betting that specialized architectures can outperform general-purpose chips for specific AI workloads. Cerebras’ wafer-scale engines are among the largest chips ever built, designed to handle massive models with fewer communication bottlenecks. In theory, this allows for faster computation and lower latency, two qualities that matter immensely for real-time AI applications.
From Cerebras’ perspective, the partnership carries strategic importance beyond revenue. The company has been working to reduce its reliance on a small number of major customers, including UAE-based technology firm G42, which has been both an investor and a significant source of business. Securing OpenAI as a long-term customer not only diversifies Cerebras’ income but also strengthens its credibility ahead of a planned return to public markets.
Founded in 2015, Cerebras has spent nearly a decade positioning itself as a serious alternative in the AI hardware ecosystem. Its ambitions are well known, and so are its challenges. The company previously filed for an initial public offering in 2024, only to delay and eventually withdraw the plan later that year amid market uncertainty. Recent reports suggest Cerebras is preparing once again for an IPO, potentially targeting a listing in the second quarter of this year. A high-profile agreement with OpenAI could significantly bolster investor confidence.
The relationship between the two companies is also shaped by personal ties. OpenAI chief executive Sam Altman is an early investor in Cerebras, a connection that has fueled both optimism and scrutiny. While such overlaps are common in Silicon Valley, they inevitably raise questions about governance, transparency, and long-term alignment. Still, the technical rationale for the deal appears strong, particularly as OpenAI looks to diversify its computing stack rather than depend on a single hardware supplier.
OpenAI’s appetite for compute shows no signs of slowing. The company is reportedly laying the groundwork for its own eventual IPO, with valuations discussed as high as $1 trillion. Altman has previously stated that OpenAI is committed to spending an astonishing $1.4 trillion to develop 30 gigawatts of computing resources, a figure large enough to power roughly 25 million homes in the United States. These numbers underscore how AI development is no longer just a software story but an infrastructure challenge on a national, even global, scale.
At the same time, the scale of investment has prompted growing unease among investors and industry observers. Comparisons to the dotcom era have become increasingly common, with critics warning that exuberant spending and soaring valuations could outpace real-world returns. The need for ever-larger data centers, massive energy consumption, and specialized hardware raises questions about sustainability, regulation, and long-term profitability.
Still, for OpenAI, standing still is not an option. Competition from rival labs, rising user expectations, and the rapid evolution of AI capabilities mean that access to reliable, high-performance compute can determine who leads and who falls behind. The Cerebras deal reflects a calculated bet that specialized hardware and diversified partnerships will provide an edge in this environment.