Tension between Nvidia and OpenAI

Nvidia CEO Jensen Huang reportedly dislikes OpenAI's business strategy. OpenAI is said to be looking for alternatives. Nvidia's stock is falling.

Nvidia CEO Jensen Huang on stage with Blackwell accelerators. (Image: Nvidia)

Things don't seem to be running smoothly between partners Nvidia and OpenAI at the moment. Recent rumors prompted OpenAI CEO Sam Altman to share a statement via X: “We love working with Nvidia; they make the best AI chips in the world. We hope to remain a gigantic customer for a long time. I don't understand where all this madness is coming from.”

However, there are indications of underlying issues. A deal worth 100 billion US dollars, announced in September 2025, has yet to materialize. Nvidia CEO Jensen Huang recently said that while he would like to invest a lot of money in OpenAI during a funding round, it would be “nothing like” 100 billion dollars. Internally, Huang is said to have dismissed the September agreement as non-binding.

Shortly after a Wall Street Journal report about the paused deal, the Reuters news agency reported that OpenAI is allegedly unhappy with Nvidia's inference accelerators. These chips run pre-trained AI models, for example answering ChatGPT requests or generating code for OpenAI's Codex agents.

Inference accelerators require less computing power than chips for training AI models. Instead, they benefit even more from tightly integrated, fast memory. OpenAI is therefore reportedly looking for alternatives that primarily integrate a lot of memory, such as SRAM, directly into the chips. This reduces latency compared to accelerators like those from Nvidia, which primarily rely on external memory modules like GDDR7 or HBM.
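The latency argument can be made concrete with a back-of-the-envelope calculation: autoregressive text generation is typically memory-bandwidth-bound, because producing each token requires streaming the model weights from memory once. The sketch below illustrates this in Python; all figures (model size, bandwidths) are round-number assumptions chosen for illustration, not specifications of any Nvidia, AMD, or Cerebras product.

```python
# Illustrative estimate of why on-chip memory helps inference throughput.
# All numbers below are assumed, order-of-magnitude placeholders.

def tokens_per_second(model_bytes: float, bandwidth_bytes_per_s: float) -> float:
    """Memory-bound decoding: each generated token reads the weights once,
    so throughput is roughly bandwidth divided by model size."""
    return bandwidth_bytes_per_s / model_bytes

MODEL_BYTES = 70e9   # assumed: a 70B-parameter model at 1 byte per parameter (8-bit)

hbm_bw = 8e12        # assumed: ~8 TB/s aggregate HBM-class external memory bandwidth
sram_bw = 100e12     # assumed: ~100 TB/s aggregate on-chip SRAM bandwidth

print(f"External memory: ~{tokens_per_second(MODEL_BYTES, hbm_bw):.0f} tokens/s")
print(f"On-chip SRAM:    ~{tokens_per_second(MODEL_BYTES, sram_bw):.0f} tokens/s")
```

Under these assumptions the SRAM-heavy design is faster by exactly the bandwidth ratio; real chips differ in capacity, batching, and interconnect, but the proportionality is why low-latency inference designs pull memory onto the die.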


Reports of a switch from Nvidia to competitors could now be interpreted as a defiant reaction from OpenAI, which isn't getting its 100 billion dollars. However, OpenAI already finalized a concrete deal with AMD for inference hardware in early October 2025. Those discussions must have run in parallel with the Nvidia negotiations.

According to the deal, OpenAI will purchase AMD accelerators with a total capacity of six gigawatts over five years, starting with the Instinct MI400-series this year. The specifications are not yet entirely clear, but AMD already integrates larger caches than Nvidia today, which is beneficial for inference.

In January 2026, a partnership with Cerebras followed. OpenAI is purchasing its so-called Wafer Scale Engines (WSE), massive chip constructs with abundant integrated memory. The current WSE-3 comes with 44 GB of SRAM. The agreement extends until 2028 and covers a capacity of 750 megawatts.

OpenAI is also said to have negotiated with Groq about its inference-specialized accelerators. However, Nvidia preempted OpenAI with a 20-billion-dollar deal: Nvidia has been licensing Groq's technology since December 2025 and has taken over large parts of the design team as part of the partnership. Nvidia could thus strengthen its inference offering in the coming years. For OpenAI, an agreement with Groq no longer makes sense.

Low-latency accelerators in particular are expected to account for around ten percent of OpenAI's entire inference fleet in the future, Reuters reports, citing its sources. They are intended for well-paying customers who are to receive the fastest possible responses. Nvidia hardware could therefore continue to make up the majority of OpenAI's hardware.

The friction is meanwhile reflected in the stock market. While most semiconductor stocks are slightly to significantly up this week, Nvidia's stock is down several percent. Since the beginning of the year, it has stagnated.


(mma)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.