AGI: OpenAI hampers work on controlling a superintelligence

The work of OpenAI's superalignment team on monitoring a future AGI became increasingly difficult. These are the reasons behind Sutskever's and Leike's departures.



This article was originally published in German and has been automatically translated.

OpenAI's Superalignment team, which was responsible for controlling and monitoring a future Artificial General Intelligence (AGI), has been disbanded. The team's tasks, which according to the departing senior scientists Ilya Sutskever and Jan Leike were increasingly undermined, will now be distributed across other departments. According to several reports, the ChatGPT developer no longer has a dedicated team responsible for the safety of an AGI.

After the reason for Leike's resignation was initially unclear, he later explained in several tweets that, among other things, OpenAI no longer provided the computing power the superalignment team needed for its safety research. Leike is therefore concerned that OpenAI is no longer on the right track to assess or guarantee safety, security, trust (in an AGI), or the impact on society.

"Over the past few months my team has been sailing against the wind," writes Leike on X (formerly Twitter). According to the Financial Times, his team is committed to 20 percent of computing resources to ensure that AI is aligned with human interests, even as it becomes exponentially more powerful. However, requests for a fraction of the promised benefits have often been turned down, preventing Sutskever and Leike's team from doing their work, TechCrunch reports.

These problems are difficult to solve in general, and Leike, a former researcher at Google's DeepMind, believes that "not enough attention is being paid to the safety and societal impact of more powerful AI models". He is also concerned that we are not on the right track: "Building machines that are more intelligent than humans is inherently a dangerous endeavor." Elon Musk, who co-founded OpenAI in 2015 and later left the company, is of the same opinion, having warned against AI as "the greatest risk of our time" back in 2017.

This brings us full circle to the reasons why co-founder Ilya Sutskever voted to dismiss CEO Sam Altman in November 2023. Just one day later, however, negotiations to bring Altman back began, and three days after they started, he was back in his old post.

According to the report, the now defunct super-AI team was founded in July of last year. Before the departure of the two leading scientists, at least six other researchers had already left the team since November of last year, Wired reports; they had previously worked on the Alignment team, from which the Superalignment team emerged.

Risk research into more powerful AI models will in future be led by John Schulman, who heads the team at OpenAI responsible for fine-tuning AI models after training. According to Wired, the superalignment team was also not the only team working on AI control. The now disbanded team, which has no solution for controlling a super AI, was publicly portrayed as the main team working on the most advanced AI and its potential impact.

(bme)