OpenAI: Next big AI model in training, new committee to examine safety

Several members of OpenAI's top management are to form a new committee that will examine whether the company is doing enough for safety.

ChatGPT app on a smartphone

(Image: Tada Images/Shutterstock.com)

This article was originally published in German and has been automatically translated.

Shortly after disbanding its Superalignment team, OpenAI has set up a new safety and security committee. The US company announced this in a blog post. The group, led by board chair Bret Taylor together with Adam D'Angelo, Nicole Seligman and CEO Sam Altman, all of them members of the Board of Directors, will develop proposals for critical decisions relating to these two areas. The aim is to minimize the risks associated with the further development of AI technology; OpenAI says it wants a "robust debate at this important time". In the blog post, OpenAI also revealed that it has "recently" begun training its next major AI model. This is expected to one day replace GPT-4 and, OpenAI promises, will take the technology a further step towards Artificial General Intelligence (AGI).

OpenAI's Superalignment team was actually supposed to handle the control and monitoring of a future superintelligence, but it was disbanded around two weeks ago. The researchers leading it had resigned from their posts, and the employees working on the task were assigned to other teams. As a result, OpenAI no longer had a dedicated team responsible for safety in the face of ever-improving AI technology. Those responsible had complained that obstacles had been placed in their way, for example that they had been denied the computing power needed for their work. OpenAI had explained that the disbanded team was only the one dealing with the most distant form of AI, and that other teams remained responsible for more specific threats. Nevertheless, action has now been taken and the new committee has been set up.

The new committee has been given 90 days to review and further develop OpenAI's processes and safeguards, details of which the company only made public a few days ago. Once this work is complete, the results will be presented to the full Board of Directors and, after they have been discussed there, made public as well. There are no further details on this, nor on the GPT-5 training currently underway. News on this had been expected at the recent product presentation, but OpenAI made it clear in advance that there was nothing more to announce for the time being. The company then made headlines over allegations by US actress Scarlett Johansson and had to use internal documents to show that there was no truth to them.

(mho)