AI access only with verification: OpenAI restricts use for corporate customers

Companies will have to verify themselves with OpenAI to access AI models and additional functions. This is intended to protect against misuse.


(Image: Camilo Concha/Shutterstock.com)


OpenAI has introduced a new verification process for enterprise customers who want to access upcoming AI models via the company's own interface. The US software company announced this in a support document. With the verification, OpenAI aims to "curb unsafe use of AI" and enforce its terms of use, which a small number of developers had previously violated. Language models that are already available are not affected by the change.

Verification unlocks access to new AI models and additional features within the platform; OpenAI does not specify which. To verify, a representative of the organization must present an official government-issued identification document from their country, such as an identity card or passport. Only one verification per person is possible within 90 days. According to the document, verification is not yet available to all organizations; OpenAI says it is currently supported in 200 countries, which it does not name.

iX-Workshop: Deep Dive into the OpenAI API: Integrating AI into your own applications

A workshop for experienced developers on integrating the OpenAI API into their own projects to build innovative AI solutions. Participants learn how to authenticate with the API, use the official SDKs, develop system prompts, and apply the Assistants API to specific use cases.

Registration and dates at https://heise.de/s/2AjNj
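Authentication against the OpenAI API, as covered in the workshop, boils down to sending the API key as a bearer token with every request. A minimal sketch using only the Python standard library; the model name and prompt are placeholders, and `OPENAI_API_KEY` must hold a valid key for the request to actually be sent:

```python
import json
import os
import urllib.request

# The Authorization header carries the API key as a bearer token.
api_key = os.environ.get("OPENAI_API_KEY", "sk-placeholder")

payload = {
    "model": "gpt-4o-mini",  # placeholder; use any model your account can access
    "messages": [{"role": "user", "content": "Hello"}],
}

# Build an authenticated request for the chat completions endpoint.
request = urllib.request.Request(
    "https://api.openai.com/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# Actually sending requires a valid key and network access:
# with urllib.request.urlopen(request) as response:
#     print(json.load(response)["choices"][0]["message"]["content"])
```

The official SDKs wrap exactly this pattern; `openai.OpenAI()` reads the same `OPENAI_API_KEY` environment variable and attaches the same header for you.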

As OpenAI recently reported in its own analysis, the company is confronted with malicious use of its language models. It found that ChatGPT had been used to generate job-application materials for supposed IT workers from North Korea. Accounts with potential links to the North Korean government were also found to have used ChatGPT to research and debug code for attacking remote desktop connections.

While OpenAI restricts corporate customers' access to its own language models, the company is demanding fewer restrictions from the EU when dealing with artificial intelligence, for example in terms of data protection. OpenAI is calling for AI laws to be simplified and rules to be harmonized within the EU. There should also be more investment in infrastructure, such as the construction of data centers, fiber optic networks and renewable energy plants. Funding is also needed for AI research and training.

(sfe)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.