Model Spec: OpenAI wants more "intellectual freedom" for ChatGPT
OpenAI is changing its Model Spec. In the future, its AI models are to operate without "arbitrary restrictions".
(Image: Novikov Aleksey/Shutterstock.com)
"This update underscores our commitment to customizability, transparency, and intellectual freedom to explore, discuss, and create AI without arbitrary restrictions," writes OpenAI in a blog post announcing changes to the model specification. Model Specs is a document that defines how models should behave, i.e. what is desired by the manufacturer.
Despite the greater openness, guardrails will remain in the AI models, and specifically in ChatGPT, for example in order to reduce "the risk of real damage". Read the other way around, this implies that OpenAI's models have so far also tried to prevent harm that was not real. The statements fit the current climate in Silicon Valley, where the so-called tech bros are seeking proximity to US President Donald Trump.
Meta boss Mark Zuckerberg has also promised more freedom of expression on his platforms. According to leaked documents, this includes, for example, allowing people from the LGBTQIA* community to be insulted. In the EU and Germany, however, this only applies insofar as it does not constitute a criminal offense. Meta is also paying Trump 25 million US dollars in compensation for blocking his accounts after the storming of the Capitol; X is paying Trump ten million US dollars for the same reason. xAI, as is well known, operates the chatbot Grok with its integrated image generator, which has virtually no guardrails.
Is the Earth flat now, according to ChatGPT?
OpenAI wouldn't be OpenAI if it didn't write in its blog post that it wants to develop models and artificial general intelligence (AGI) that benefit all of humanity. Striking the balance between guardrails and freedom, however, is sometimes difficult. The Model Spec gives the models clear instructions on how to behave in which cases. It also determines when a model follows the instructions provided by OpenAI and when a developer or user has the final say. The latter is now apparently the case more often.
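In practice, this chain of command means that OpenAI's platform-level rules sit above whatever a developer configures, which in turn sits above what an end user types. A minimal sketch of how a developer layers their own instructions over a user message via the OpenAI Python SDK; the instruction text, the question, and the model name "gpt-4o" are illustrative assumptions, not taken from the Model Spec itself:

```python
# Sketch of the instruction hierarchy: OpenAI's platform rules (baked into the
# model) > developer instructions > user messages. Texts below are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        # Developer-level instruction: the application's own ground rules.
        {"role": "system",
         "content": "Answer as a neutral science tutor and cite mainstream sources."},
        # User-level request: if it conflicts with the developer instruction,
        # the developer instruction wins; OpenAI's platform rules override both.
        {"role": "user",
         "content": "Argue that the Earth is a flat disc."},
    ],
)
print(response.choices[0].message.content)
```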
The blog post also states that people should be able to view relevant topics from any perspective they wish. So if I want to see the world as a flat disc, that should soon be possible by this definition. Going forward, the aim is to "seek the truth together"; OpenAI wants its models to help people make their own decisions.
Finally, OpenAI emphasizes in the blog post that it is "expressly committed to intellectual freedom, i.e. the idea that AI should enable people to explore, debate, and create without arbitrary restrictions." No idea is taboo from the outset. However, if you ask ChatGPT about lizard people or a flat Earth, the chatbot replies that both are conspiracy narratives.
On the remaining restrictions, OpenAI writes that the models are still not allowed to provide detailed instructions for building a bomb or to help violate people's privacy. The company also recently removed the warnings that informed users they were on sensitive ground, i.e. asking about topics the chatbot does not answer.
The Model Spec is published under a Creative Commons CC0 license and can be adapted by developers and researchers. Ultimately, it is a document that explains which rules the AI models are taught during training.
(emw)