OpenAI founds independent body: "Safety Board"

Safety and security at OpenAI will in future be monitored by an (almost) independent body.

The OpenAI logo on the facade of the office building in San Francisco.

(Image: Shutterstock/ioda)


OpenAI's former Safety and Security Committee is being transformed into an independent body called the "Board Oversight Committee". If the new committee has safety concerns about an AI model, it will have the power to halt its release, with 90 days to act.

As OpenAI writes in its blog post, the committee includes well-known AI experts: Adam D'Angelo, Nicole Seligman and Paul Nakasone. It is headed by Zico Kolter, a professor at Carnegie Mellon University who joined the OpenAI Board of Directors a month ago. D'Angelo, a school friend of Mark Zuckerberg, joined Facebook early on as CTO; he has founded and invested in several tech start-ups and has sat on OpenAI's board since 2018. Nicole Seligman is a lawyer who worked at Sony before joining OpenAI. Paul Nakasone is a former US Army general and is considered an expert in cybersecurity.

Since all of the aforementioned committee members also sit on the board, it is not entirely clear how the newly created committee is supposed to be independent. The members are said to have already reviewed the current AI model o1 before its release. They will also be briefed regularly on the status of OpenAI's developments. The same applies to US authorities: in future, OpenAI must make new models available to them for review before release.

The issue of safety appears to be a constant source of controversy at OpenAI. Sam Altman's brief dismissal last fall is also said to be related to safety concerns at the company. Several employees and board members complained about the conditions, and senior scientists left OpenAI for competitor Anthropic, which advertises a responsible approach to AI.


Meta also set up an independent body, the Oversight Board, years ago following criticism of the safety of its platforms. That board, however, is made up of people who hold no other positions at Meta. Meta itself can consult the Oversight Board when decisions about the handling of content are disputed, and it must follow the board's rulings. The Oversight Board is currently examining how Facebook and Instagram deal with AI-generated deepfakes.

(emw)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.