AI model: Google releases Gemini 2.0 Flash

According to Google, the successor to Gemini 1.5 brings advances in multimodality and tool use – for CEO Sundar Pichai, a step towards the "universal assistant".

Google lettering on a white wall (Image: testing/Shutterstock.com)


Google introduces "Gemini 2.0 Flash", the first model of the next Gemini generation. The still experimental model can already be selected in the web-based Gemini app and will soon be added to the smartphone app, Google announced on Wednesday.

In Google's AI line-up, the Flash models are the ones designed for speed. The new version builds on the previous Gemini 1.5 Flash model and adds features such as multimodal input and output, Google said. Gemini 2.0 is also set to arrive in other Google products at the beginning of next year.

The model can be fed text, image and audio data and can now generate images and audio in addition to text. Gemini 2.0 Flash can also call tools such as Google Search and execute user-defined functions or code.
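
What tool calling looks like on the developer side is not spelled out in the announcement; as a rough sketch, the google-generativeai Python SDK lets you register an ordinary Python function as a tool and have it executed automatically when the model decides to call it. The model identifier "gemini-2.0-flash-exp" and the get_weather helper below are assumptions for illustration, not details confirmed by Google.

```python
# Sketch: letting a Gemini model call a user-defined Python function as a tool.
# Assumes: pip install google-generativeai, an API key from Google AI Studio,
# and "gemini-2.0-flash-exp" as a placeholder model identifier.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

def get_weather(city: str) -> str:
    """Hypothetical helper the model may invoke as a tool."""
    return f"Sunny, 8 degrees Celsius in {city}"

# Register the function as a tool; with automatic function calling enabled,
# the SDK runs it and feeds the result back to the model.
model = genai.GenerativeModel("gemini-2.0-flash-exp", tools=[get_weather])
chat = model.start_chat(enable_automatic_function_calling=True)

response = chat.send_message("What is the weather like in Berlin right now?")
print(response.text)
```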

For developers, the new version is available via the Gemini API in Google AI Studio and Vertex AI. The multimodal version is initially available only to a select group of developers, but will be open to everyone in January.
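
For orientation, a minimal text request against the Gemini API could look like the sketch below. It assumes the google-generativeai Python SDK, an API key created in Google AI Studio and the experimental model identifier "gemini-2.0-flash-exp"; none of these specifics come from Google's announcement itself.

```python
# Sketch: a basic text generation call via the Gemini API's Python SDK.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # key created in Google AI Studio

model = genai.GenerativeModel("gemini-2.0-flash-exp")
response = model.generate_content("Summarize the new features of Gemini 2.0 Flash in two sentences.")
print(response.text)
```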

Google CEO Sundar Pichai speaks of a "new era of agents": "With Gemini 2.0, we are introducing our most powerful model yet," says Pichai. "With advances in multimodality – such as native image and audio generation – and in the use of tools, we can develop new AI agents that bring us closer to the goal of a universal assistant."

(vbr)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.