OpenAI redefines its mission: Less AGI, more power question

OpenAI CEO Sam Altman is rewriting his company's principles. It's no longer just about the future, but increasingly about the present.

OpenAI logo on the glass facade of an office building against a blue sky with clouds.

(Image: Prathmesh T / Shutterstock.com)


A true Artificial General Intelligence (AGI) that surpasses humans in many respects should benefit everyone – that is how OpenAI put it in its 2018 charter. Now, in 2026, the company is maturing: the tone CEO Sam Altman strikes in the Principles published on Sunday is more personal, more conciliatory, but also more political. The new paper clearly aims not only to address fears of a future AGI; it also tackles issues for which OpenAI and other AI developers already face criticism today.

In the new paper, OpenAI positions itself against the concentration of power in a few companies, speaking instead of democratization and decentralization. At the same time, Altman concedes that OpenAI wields far more influence in the world today than it did just a few years ago.

He nevertheless defends the massive expansion of the company's own infrastructure. The construction of huge AI data centers, high energy consumption, and multi-billion-dollar hardware investments may seem at odds with a decentralized future at first glance, Altman suggests. From the company's perspective, however, they are a prerequisite for making powerful AI broadly available at all.

It is also striking how strongly OpenAI emphasizes the economic benefits of AI. The new Principles promise "universal prosperity" – a future in which productivity gains from AI are meant to benefit as many people as possible. This shifts the company's focus more strongly onto social and economic questions: Who benefits from automation? How is newly created value distributed? And what role should states play? These are also questions raised by the boycott campaign QuitGPT, which criticizes OpenAI's close ties to US politics and calls for consequences.


Compared to the 2018 charter, the focus shifts. Back then, safety, long-term risks, and the responsible development of AGI stood in the foreground. Now OpenAI describes AI more as a tool that is already changing societal structures today – from work to education to administration. How far AI as a tool already reaches is shown, for example, by GPT-5.5, which OpenAI explicitly positions as an agent-like model for software development, research, and data analysis. AGI as the holy grail is thus rhetorically defused. At the same time, OpenAI signals that it wants to play a greater role in shaping the rules of today's AI era, moving beyond the purely technological sphere toward a socio-political one. Internally, this is also reflected in the restructured leadership, with which Altman ties research and global growth more closely together.

For critics, however, the new paper may still leave questions open. After all, OpenAI has itself become one of the most powerful companies in the industry, closely tied to major investors and dependent on enormous computing power. Its commitment against power concentration can therefore also be read as a reaction to growing political pressure and looming regulation. Altman had previously spoken out in favor of international AI regulation.

(mki)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.