OpenAI, Oracle and Meta in the race for the largest gigawatt supercomputers
The Stargate joint venture, hyped by the US government, has reportedly not signed any contracts yet. Oracle and OpenAI are building without Softbank.
OpenAI and Oracle's construction site in Abilene, Texas. The first two building complexes are in the foreground; six more are under construction behind them.
(Image: OpenAI)
Oracle and OpenAI are building huge supercomputers for training AI models. A data center currently under construction in Abilene, Texas, is scheduled for completion in 2026 and will then draw 1.2 gigawatts of electrical power. At the same time, OpenAI and Oracle are planning further data centers with an additional power requirement of 4.5 gigawatts. In total, the partners aim to deploy more than two million accelerators.
All of this is apparently taking place outside the "Stargate Project" joint venture, i.e. without Softbank, even though OpenAI and its CEO Sam Altman refer to the data centers as "Stargate Sites". The first of these is a 200-megawatt data center on the Lancium Clean Campus in Abilene. It is being built with partners that were previously active mainly in crypto mining: CoreWeave, for example, is building the server infrastructure that connects the myriad components and will receive 11.9 billion US dollars over the next few years. The two closely associated companies Crusoe and Lancium are taking care of the power supply.
Oracle has booked so-called Remaining Performance Obligations (RPO) totaling 138 billion US dollars through the summer of 2026. A large share of this is likely attributable to the joint data centers with OpenAI. OpenAI, meanwhile, wants to break away from Microsoft, whose servers it has been using to date.
Cooperation so far without Softbank
Meanwhile, the Stargate Project itself is off to a slow start. In January, the partners caused a stir with plans to build huge AI data centers worth 500 billion US dollars. More than half a year later, however, there is still no significant progress; apparently, there are not even any concrete construction contracts yet.
That is according to the Wall Street Journal, whose sources say the Stargate Project now plans to build only a single small data center by the end of the year. Back in June, analysts at Semianalysis wrote that they were not aware of any progress on the joint venture. In March, Oracle CTO Larry Ellison admitted at an analyst conference that he had not yet signed any contracts.
Hundreds of thousands of accelerators
The first completed phase of the Lancium Clean Campus comprises two building complexes that will draw 200 megawatts of electrical power at full load. At completion, 50,000 Nvidia GB200 boards will be operating in each of them, i.e. a total of 100,000 Grace CPUs and 200,000 Blackwell accelerators. Constructing the buildings took less than a year.
Due to the sheer number of accelerators, the system is likely to outclass every supercomputer on the Top500 list. There, computing power is measured using 64-bit floating-point numbers (FP64). With perfect scaling, the Abilene data center would achieve eight exaflops, i.e. eight quintillion (8 × 10¹⁸) computing operations per second. Even under real conditions without optimal scaling, its computing power should exceed that of the Top500 leader El Capitan, which has a peak performance of just over 2.7 exaflops (1.7 exaflops sustained).
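A back-of-envelope check of that figure, as a minimal Python sketch: the value of roughly 40 FP64 teraflops per Blackwell GPU is an assumption drawn from Nvidia's published specifications, not a number from the article, and perfect scaling is assumed throughout.

```python
# Rough FP64 estimate for the first Abilene phase.
GB200_PER_BUILDING = 50_000       # from the article
BUILDINGS_PHASE_1 = 2
BLACKWELL_PER_GB200 = 2           # one GB200 board: 1 Grace CPU + 2 Blackwell GPUs
FP64_PER_GPU = 40e12              # assumed ~40 TFLOPS dense FP64 per GPU

gpus = GB200_PER_BUILDING * BUILDINGS_PHASE_1 * BLACKWELL_PER_GB200   # 200,000
total_fp64 = gpus * FP64_PER_GPU                                      # 8e18 flops

print(f"{total_fp64 / 1e18:.0f} exaflops FP64 with perfect scaling")  # -> 8
print(f"{total_fp64 / 2.7e18:.1f}x El Capitan's 2.7-exaflops peak")   # -> ~3.0x
```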
For AI training, however, more compact data formats such as INT8 and FP4 are sufficient; with those, this many Blackwell accelerators could reach the zettaflops range.
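The same kind of rough estimate for the low-precision formats, again only a sketch: the assumed throughput of about 10 dense FP4 petaflops per Blackwell GPU is in the ballpark of Nvidia's GB200 figures and is not stated in the article, which only says the zettaflops range is within reach.

```python
# Rough FP4 estimate for the 200,000 Blackwell GPUs of phase one.
GPUS_PHASE_1 = 200_000
FP4_PER_GPU = 10e15               # assumed ~10 PFLOPS dense FP4 per GPU

total_fp4 = GPUS_PHASE_1 * FP4_PER_GPU
print(f"{total_fp4 / 1e21:.0f} zettaflops FP4 with perfect scaling")  # -> 2
```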
And many more on the horizon
By mid-2026, OpenAI, Oracle and their partners want to complete the second phase with six more identical building complexes. That would give the Abilene data center a total of 400,000 GB200 boards, or 800,000 Blackwell accelerators, for which those responsible estimate an electrical power draw of 1.2 gigawatts. The further capacities with a 4.5-gigawatt power requirement come on top of that.
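Dividing the article's own figures gives the average electrical budget per board; how that budget splits between chips, cooling and networking is not broken out, so the following sketch only shows the average.

```python
# Average power budget per GB200 board at full Abilene buildout,
# derived from the stated 1.2 GW and 400,000 boards; facility
# overhead such as cooling and networking is included in the total.
TOTAL_POWER_W = 1.2e9
GB200_BOARDS = 400_000

print(f"{TOTAL_POWER_W / GB200_BOARDS / 1e3:.0f} kW per board on average")  # -> 3
```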
In addition to local wind energy, Crusoe and Lancium are relying on gas generators to produce electricity, backed by Chevron. They are investing venture capital in Energy No.1, which has secured options on seven of GE Vernova's currently most powerful gas turbines, the so-called 7HA.
Lancium is also working with the Texas government to stabilize the power grid, which has already proved susceptible to outages in extreme weather conditions.
Meta wants in on the action
Meanwhile, OpenAI and Oracle are racing other hyperscalers. Meta in particular is said to have accelerated its own plans considerably of late. According to Semianalysis, part of a new building has been torn down because the power supply in Meta's old blueprint is unsuitable for modern AI data centers.
One new building is to be designed for one gigawatt; a second, announced for 2027, for two gigawatts. Meta CEO Mark Zuckerberg likes to emphasize that its footprint is comparable to Manhattan's.
(mma)