"Reckless": New criticism of how OpenAI & Co. deal with security concerns

For weeks, OpenAI employees have been painting the picture of a company acting ruthlessly in its bid to dominate the AI industry. Now they are making demands.

Robot with red eyes (Image: Shutterstock/Usa-Pyon)

Former and current employees of OpenAI and Google DeepMind are publicly warning of the risks posed by the development of AI technology and calling for more transparency and openness to criticism. A total of 13 people from the industry have set this out in an open letter, which was also signed by six anonymous OpenAI employees. The group demands that risk-related concerns can be raised anonymously and without fear of reprisals. In addition, the companies should foster a culture of constructive criticism and refrain from taking action against those who go public, provided internal channels have been tried and have failed. The letter reinforces the image of an industry in which risks are far too readily ignored.

The public criticism of how the AI industry handles safety concerns follows weeks of outright turmoil at OpenAI and is unlikely to calm things down. Three weeks ago, the company behind ChatGPT disbanded its so-called superalignment team, which was meant to work on controlling and monitoring a future superintelligence. This was accompanied by prominent departures and further resignations. And even though the demands in the open letter are formally directed at all AI companies, the focus is likely to fall primarily on OpenAI once again, as the letter was signed almost exclusively by its former and current employees. Concerns that the company is ignoring safety issues are therefore likely to keep growing.

Signatories of the open letter told the New York Times that they believe OpenAI is putting profits and growth above everything else while working flat out on superintelligence, or Artificial General Intelligence (AGI). "OpenAI is really excited about building AGI, and they are recklessly racing to be the first there," the newspaper quotes former OpenAI researcher Daniel Kokotajlo as saying. The group he organized points out that the industry has strong financial incentives to avoid effective oversight. Companies are under no real obligation to share information about risks with governments or the public. As long as that remains the case, the public has to rely on insiders, which is why they need protection.

In a statement, OpenAI said it is proud of its track record of providing the most capable and safest AI systems, believes in its scientific approach to addressing risks, and supports the debate and will continue to take part in it. Google, two of whose DeepMind employees also signed the open letter, declined to comment, according to the New York Times. Unlike OpenAI, the company has not so far been the focus of criticism. And now OpenAI in particular is likely to find itself at the center of attention once again. So far, the AI company has not managed to calm the debate, quite the opposite. Even the establishment of a safety and security committee has apparently done little to allay concerns.

(mho)

This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.