Why does OpenAI need six giant data centers?
1 day ago / About an 18-minute read
Source: Ars Technica
OpenAI's newly announced $400 billion expansion plan reveals both growing AI demand and circular investments.


Credit: OpenAI

On Tuesday, OpenAI, Oracle, and SoftBank announced plans for five new US AI data center sites for Stargate, their joint AI infrastructure project, bringing the project to nearly 7 gigawatts of planned capacity and over $400 billion in investment over the next three years.

The massive buildout aims to handle ChatGPT's 700 million weekly users and train future AI models, although critics question whether the investment structure can sustain itself. The companies said the expansion puts them on track to secure, by the end of 2025, the full $500 billion, 10-gigawatt commitment they announced in January.

The five new sites will include three locations developed through an OpenAI and Oracle partnership: Shackelford County, Texas; Doña Ana County, New Mexico; and an unspecified Midwest location. These sites, along with a 600-megawatt expansion near the flagship Stargate site in Abilene, Texas, can deliver over 5.5 gigawatts of capacity, which means the computers on site will be able to draw up to 5.5 billion watts of electricity when running at full load. The companies expect the sites to create over 25,000 onsite jobs.

Two of the sites will be developed through a partnership between SoftBank and OpenAI. One site in Lordstown, Ohio, where SoftBank has broken ground, is on track to be operational next year. The second site in Milam County, Texas, will be developed with SB Energy, a SoftBank Group company. These two sites may scale to 1.5 gigawatts over the next 18 months.

The new sites will join the flagship Stargate campus in Abilene, Texas. Oracle began delivering Nvidia hardware to that site in June, and OpenAI has already begun training (building new models) and inference (running ChatGPT) using the data center.

Here's a rundown of those announced Stargate sites so far:

  • Abilene, Texas: Flagship campus, already operational with Nvidia GB200 racks, plus planned 600-megawatt expansion
  • Shackelford County, Texas: New Oracle-developed site
  • Doña Ana County, New Mexico: New Oracle-developed site
  • Midwest location (undisclosed): New Oracle-developed site
  • Lordstown, Ohio: New SoftBank-developed site, operational next year
  • Milam County, Texas: New SoftBank/SB Energy site

The July agreement between OpenAI and Oracle to develop up to 4.5 gigawatts of additional Stargate capacity represents a partnership worth over $300 billion between the two companies over five years. The companies say they selected the five new sites after reviewing over 300 proposals from more than 30 states in a nationwide process launched in January.

Why OpenAI wants massive computing power

The kind of numbers OpenAI and its partners throw around—10 gigawatts here, $500 billion there—are staggering in scope for a layperson who might not be familiar with the massive scale of Internet infrastructure. For example, 10 gigawatts is roughly equivalent to the output of 10 large nuclear reactors, which is enough electricity to power millions of homes. But what does OpenAI really need those data centers for? It all goes back to OpenAI CEO Sam Altman's dream of providing intelligence as a service to billions of people.
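The reactor and household comparisons above can be sanity-checked with quick arithmetic. The figures below for reactor output (about 1 gigawatt) and average US household draw (about 1.2 kilowatts) are rough, commonly cited assumptions, not numbers from the announcement:

```python
# Back-of-envelope check on the 10-gigawatt comparison.
# Assumed approximate figures, not numbers from the Stargate announcement.
STARGATE_WATTS = 10e9   # 10 gigawatts of planned capacity
REACTOR_WATTS = 1e9     # ~1 GW output per large nuclear reactor (assumption)
HOME_WATTS = 1.2e3      # ~1.2 kW average US household draw (assumption)

reactors = STARGATE_WATTS / REACTOR_WATTS
homes = STARGATE_WATTS / HOME_WATTS

print(f"~{reactors:.0f} reactors, ~{homes / 1e6:.1f} million homes")
# → ~10 reactors, ~8.3 million homes
```

Under these assumptions, 10 gigawatts works out to roughly 8 million homes, consistent with the "millions of homes" framing.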

"AI can only fulfill its promise if we build the compute to power it," Altman said in the announcement. "That compute is the key to ensuring everyone can benefit from AI and to unlocking future breakthroughs."

Altman's statement reflects optimism about the usefulness of future AI systems, but despite warnings of an AI bubble and criticism of the underlying technology, there is still actual unmet demand for generative AI capacity today. ChatGPT serves 700 million weekly active users (more than double the US population), who regularly use the AI assistant to develop software, get personal advice, and compose or edit correspondence and reports. While the outputs may be imperfect at times, people apparently still want them.

OpenAI regularly faces severe capacity constraints to produce those outputs, which leads to limits on how often its users can query the chatbot. ChatGPT Plus subscribers frequently encounter these usage limits, particularly when using more compute-intensive features like image generation or simulated reasoning models. Free users, who represent a gateway for future subscriptions, face even stricter limitations. OpenAI lacks the computing capacity to meet current demand, let alone room for future growth.

Training next-generation AI models compounds the problem. On top of running existing AI models like those that power ChatGPT, OpenAI is constantly working on new technology in the background. It's a process that requires thousands of specialized chips running continuously for months.
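To see why a training run ties up thousands of chips for months, a hypothetical back-of-envelope estimate helps. Every figure below (total training compute, chip count, per-chip throughput, utilization) is an illustrative assumption, not a number published by OpenAI:

```python
# Hypothetical wall-clock estimate for a large training run.
# All figures are illustrative assumptions, not published specs.
TOTAL_FLOPS = 5e25       # assumed total training compute (FLOPs)
CHIPS = 10_000           # assumed number of accelerator chips
FLOPS_PER_CHIP = 1e15    # assumed peak throughput per chip (FLOP/s)
UTILIZATION = 0.4        # assumed fraction of peak actually achieved

seconds = TOTAL_FLOPS / (CHIPS * FLOPS_PER_CHIP * UTILIZATION)
months = seconds / (60 * 60 * 24 * 30)
print(f"~{months:.1f} months of continuous training")
# → ~4.8 months of continuous training
```

Even with ten thousand chips running around the clock at these assumed rates, a single run occupies the hardware for months, which is capacity that can't simultaneously serve ChatGPT users.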

The circular investment question

The financial structure of these deals between OpenAI, Oracle, and Nvidia has drawn scrutiny from industry observers. Earlier this week, Nvidia announced it would invest up to $100 billion as OpenAI deploys Nvidia systems. As Bryn Talkington of Requisite Capital Management told CNBC: "Nvidia invests $100 billion in OpenAI, which then OpenAI turns back and gives it back to Nvidia."

Oracle's arrangement follows a similar pattern, with a reported $30 billion-per-year deal where Oracle builds facilities that OpenAI pays to use. This circular flow, in which infrastructure providers invest in AI companies that then become their biggest customers, has raised questions about whether the deals represent genuine economic investment or elaborate accounting maneuvers.

The arrangements are becoming even more convoluted. The Information reported this week that Nvidia is discussing leasing its chips to OpenAI rather than selling them outright. Under this structure, Nvidia would create a separate entity to purchase its own GPUs, then lease them to OpenAI, which adds yet another layer of circular financial engineering to this complicated relationship.

"NVIDIA seeds companies and gives them the guaranteed contracts necessary to raise debt to buy GPUs from NVIDIA, even though these companies are horribly unprofitable and will eventually die from a lack of any real demand," wrote tech critic Ed Zitron on Bluesky last week about the unusual flow of AI infrastructure investments. Zitron was referring to companies like CoreWeave and Lambda Labs, which have raised billions in debt to buy Nvidia GPUs based partly on contracts from Nvidia itself. It's a pattern that mirrors OpenAI's arrangements with Oracle and Nvidia.

So what happens if the bubble pops? Even Altman himself warned last month that "someone will lose a phenomenal amount of money" in what he called an AI bubble. If AI demand fails to meet these astronomical projections, the massive physical data centers won't simply vanish. When the dot-com bubble burst in 2001, fiber optic cable laid during the boom years eventually found use as Internet demand caught up. Similarly, these facilities could potentially pivot to cloud services, scientific computing, or other workloads, but at what might be massive losses for investors who paid AI-boom prices.