OpenAI: From non-profit to profit-oriented company

Sam Altman is said to have told shareholders that OpenAI should become a profit-oriented company – without the control of the non-profit board.

The OpenAI logo on the facade of the office building in San Francisco.

(Image: Shutterstock/ioda)

This article was originally published in German and has been automatically translated.

The company OpenAI could soon be realigned – towards more profit and less charity. CEO Sam Altman has reportedly discussed this with shareholders. According to these reports, OpenAI could, for example, be converted into a "for-profit benefit corporation" – a profit-oriented company that is still committed to serving society and the public. Other models are also under discussion.

OpenAI is currently divided into OpenAI Inc. and OpenAI Nonprofit. The former is a registered Limited Liability Company (LLC), which houses the research and development of AI systems, meaning that the actual work takes place under this umbrella. However, this LLC is subordinate to OpenAI Nonprofit, the non-profit organization – a "public charity", as it is called in the USA. The non-profit controls and monitors the research and is meant to ensure that OpenAI maximizes the public benefit.

Under the proposed transformation, the non-profit organization would therefore primarily lose control over research and development. When asked by Reuters, an OpenAI spokesperson said: "We remain focused on developing AI that benefits everyone. The nonprofit is core to our mission and will continue to exist." Of course, the fact that the organization will continue to exist does not mean that it will retain the same control.

There has clearly been a long-standing dispute at OpenAI about the direction of the company. The dismissal of Sam Altman in the fall of 2023 is also said to have been about differing views. Former employees have already publicly complained that the company is solely concerned with profit and that safety plays no role in the development of AI models and in the work on Artificial General Intelligence (AGI).

In fact, OpenAI disbanded its safety team after its leaders left the company. Ilya Sutskever, one of the co-founders of OpenAI, and Jan Leike, a safety researcher, resigned a few weeks ago. Sam Altman then took the helm of a newly established safety committee. Sutskever and Leike did not comment further on the reasons for their departure.

The situation is different for Helen Toner and Tasha McCauley: the two former board members sharply criticize OpenAI and Sam Altman. They accuse Altman in particular of a difficult and manipulative management style; Toner even speaks of a "toxic atmosphere". In the TED AI podcast, Toner also points to Altman's CV, which she says is not straightforward and reflects his leadership qualities. Altman allegedly tried to force Toner off the board and spread lies about her – all after he disliked a paper she had written. She explains the fact that employees threatened to resign en masse after Altman's dismissal by saying they were worried that OpenAI's importance would dwindle without him. Microsoft, a major investor on whose funds the AI company depends, immediately offered Altman a job.

Toner and McCauley are calling for state regulation instead of self-regulation of AI companies.

(emw)