OpenAI: Whistleblowers ask securities regulator to review contracts
OpenAI is grappling with internal turmoil. Employees are asking regulators to review the company's contracts.
Several whistleblowers have written to the US Securities and Exchange Commission (SEC) in the hope that it will review OpenAI's employment contracts. At issue are the company's highly restrictive non-disclosure agreements, which reportedly apply even when serious risks are involved. OpenAI has faced repeated public criticism over apparent inconsistencies in its handling of safety.
The company is said to prohibit its employees from warning supervisory authorities about the serious risks its technology could pose to humanity. The signatories say this ban is unlawful: under the agreements, employees who turned to federal authorities would forfeit the whistleblower protections that US law provides for such cases. In addition, they are contractually obliged to obtain OpenAI's approval before contacting a federal authority, which, according to the signatories, violates the federal Whistleblower Protection Act.
The letter also criticizes OpenAI's employment and severance agreements. The seven-page document, which the Washington Post has obtained, was submitted to the agency at the beginning of July. The signatories call on the SEC to act quickly, impose penalties on OpenAI, and inform employees of their rights.
An OpenAI spokeswoman told the Washington Post that the company wants to protect its employees' rights: "In addition, we believe that rigorous debate about this technology is essential and have already made important changes to our departure process to remove non-disparagement terms." Such clauses prohibit former employees from making negative statements about the company.
OpenAI and the safety concerns
The disputes over safety and the direction of the company, which was originally founded as a non-profit, have been public since Sam Altman's brief dismissal. Since then, several former employees have confirmed that tensions are simmering. Safety researcher Jan Leike and co-founder Ilya Sutskever have left OpenAI of their own accord. A former board member speaks of a toxic working environment, and Helen Toner says that Sam Altman himself withheld information from the board and misrepresented developments.
Most recently, there were allegations, also reported by the Washington Post, that safety testing of GPT-4o, the so-called omni model, lasted only one week. The safety team is said to have been put under intense pressure to meet a deadline set by management. According to insiders, the new model was celebrated internally even though the tests had not yet been completed. An OpenAI spokeswoman said that while the period had been stressful for employees, the safety processes had not been cut short.
Meanwhile, a security incident has already come to light: attackers were able to read chats.
(emw)