ChatGPT gets Canvas, OpenAI offers new APIs

OpenAI presented new features for developers at its DevDay event. There is also a new chatbot interface: Canvas.

The OpenAI logo on the facade of the office building in San Francisco.

(Image: Shutterstock/ioda)


OpenAI has just received a fresh cash injection – and is immediately announcing new features. Among them is a new user interface: Canvas is intended to make working with ChatGPT more intuitive, though it applies to text and code in particular, not, as the name might suggest, to images. There are also new tools for developers: a real-time API, vision fine-tuning, and prompt caching.

Canvas is a separate window that can be opened in ChatGPT – and that opens automatically when ChatGPT thinks it might be helpful. The feature is meant to make collaborating with ChatGPT on text and code smoother. Until now, you could only ask a question and ask again if you were not satisfied with the answer. In Canvas, you can select individual passages and change them directly, and ChatGPT can also give feedback on specific passages on request – with the entire project in mind.

A number of shortcuts are available – for example, to suggest changes and linguistic improvements, to make a text longer or shorter, to adjust the reading level, and to insert emojis. The latter should make the often already AI-generated LinkedIn posts even easier to produce. Additional shortcuts are available for coding.

As usual, it is an early beta version, based on GPT-4o, that can now be tested. According to OpenAI's blog post, the model was post-trained for the collaboration task without additional human-created data; instead, distillation was used to incorporate outputs from OpenAI's o1-preview. Users with ChatGPT Plus and Team subscriptions can access Canvas now; Edu and Enterprise customers will follow in the next few days. Canvas will only become freely available once it is out of the beta phase.

At DevDay, a developer event in San Francisco, OpenAI also presented new functions for its APIs. A real-time API makes it possible to build multimodal applications with low latency; it works with the preset voices of the Advanced Voice Mode.

Model distillation is a way of transferring the knowledge of larger models into smaller ones and is now available for GPT-4o mini. Prompt caching, according to OpenAI, gives developers a 50 percent discount and faster processing times on reused input tokens.
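Prompt caching of this kind typically works on the longest shared prefix of a request, so developers benefit by keeping static content (system prompt, examples) at the front and the variable user input at the end. A minimal, hypothetical sketch of that request structure – model name and prompt text are placeholder assumptions, and no API call is made:

```python
# Hypothetical sketch: structure requests so a prefix-based prompt cache
# can apply. The static system prompt comes first; only the user turn varies.
STATIC_PREFIX = [
    {"role": "system",
     "content": "You are a support assistant. " + "Policy text... " * 50},
]

def build_request(user_question: str) -> dict:
    """Build a chat request whose leading tokens are identical across calls,
    so repeated requests can reuse cached input tokens."""
    return {
        "model": "gpt-4o-mini",  # placeholder model name
        "messages": STATIC_PREFIX + [{"role": "user", "content": user_question}],
    }

req_a = build_request("How do I reset my password?")
req_b = build_request("What is your refund policy?")

# Everything before the final user turn is identical across the two requests:
assert req_a["messages"][:-1] == req_b["messages"][:-1]
```

The design point is simply ordering: anything that changes per request should come after the stable prefix, otherwise the cache prefix match ends early.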

GPT-4o can now also be fine-tuned with images. This can improve image understanding for specific tasks, for example.
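Training data for such vision fine-tuning is supplied in a JSONL chat format in which a user turn can contain an image reference alongside text. A hypothetical sketch of a single training example – the question, image URL, and answer are made-up placeholders:

```python
import json

# Hypothetical sketch of one vision fine-tuning example in JSONL chat format:
# the user turn mixes text and an image URL, the assistant turn is the label.
example = {
    "messages": [
        {"role": "user", "content": [
            {"type": "text", "text": "What traffic sign is shown?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/sign.jpg"}},
        ]},
        {"role": "assistant", "content": "A stop sign."},
    ]
}

# Each training example becomes one line of the .jsonl file.
line = json.dumps(example)
assert json.loads(line)["messages"][1]["role"] == "assistant"
```

Many such lines, one per example, make up the training file uploaded for the fine-tuning job.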

Just a few days ago, OpenAI raised 6.6 billion US dollars from investors in the form of a convertible bond, corresponding to a new company valuation of 157 billion US dollars. However, the investors are demanding a change to the corporate structure: as previously reported, the non-profit status is to be dropped and OpenAI is to become a for-profit company. Only then can investors expect larger returns – though OpenAI would first have to earn more money. Insiders currently predict a loss of five billion US dollars by the end of the year.

(emw)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.