Artificial General Intelligence: OpenAI dissolves AGI consulting team

OpenAI has dissolved its advisory board for Artificial General Intelligence. Miles Brundage, outgoing team leader, warns about AI safety.

The OpenAI logo on the facade of the office building in San Francisco.

(Image: Shutterstock/ioda)


OpenAI has disbanded its "AGI Readiness" team, which was tasked with advising the company on how to handle increasingly powerful Artificial General Intelligence (AGI) internally and what its impact on society might be. This is according to a report by CNBC. OpenAI gave no reasons for dissolving the team. Its outgoing head, Miles Brundage, warns of the dangers of AGI in a statement published on Substack.

"Neither OpenAI nor any other frontier lab is ready, and neither is the world," writes Brundage in his post.

Above all, he takes a stand against OpenAI's push to become profit-oriented while paying less attention to safety in the development of AGI.


OpenAI had already disbanded its superalignment team in May, which had been working on assessing the long-term risks of AI. Jan Leike, who headed that team, criticized at the time that at OpenAI "the safety culture and processes had to take a back seat to polished products".

Three executives then left the company in September: CTO Mira Murati, Head of Research Bob McGrew, and Vice President of Research Barret Zoph. OpenAI co-founder Ilya Sutskever and senior researchers Andrej Karpathy and John Schulman had already departed earlier, having criticized OpenAI CEO Sam Altman's approach, in some cases severely. Altman had already faced headwinds last fall, which culminated in his brief dismissal after the board lost confidence in him.

Brundage now also sees a dangerous tendency among AI companies to prioritize financial concerns over safety in the development of AI. Among other things, he criticizes AI companies' push for lighter regulation in order to avoid oversight. Developing safe AGI, he argues, requires deliberate decisions by governments, companies, and society. "It is unlikely that AI will be as safe and beneficial as possible unless we make a concerted effort to make it so," he writes.

Brundage plans to found a non-profit organization, or join an existing one, that focuses on AI research and advocacy. The former members of the AGI Readiness team will be given other roles within the company, according to his statement. Many former OpenAI employees have moved to competitor Anthropic, which positions itself around the safe and trustworthy development of AI. Former CTO Mira Murati is currently raising money for a new start-up, and Ilya Sutskever has founded his own company, Safe Superintelligence.

OpenAI has stated that it supports Brundage's decision, CNBC writes, and that it hopes to learn from his independent work in the future.

(olb)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.