Google I/O: AI subscription for 250 US dollars and a better Gemini

This is already the third year in which AI has dominated Google I/O. From agents to luxury subscriptions, there are numerous new services.

Sundar Pichai on stage at Google I/O

(Image: Screenshot)


More intelligent, "agentic" and personalized: these are the new attributes for AI that Google CEO Sundar Pichai announced at the start of the Google I/O developer conference. The general trend behind them: AI should answer questions better, more accurately and in more detail, but also complete tasks on its own and always act in the user's interest. And one day, as Demis Hassabis, CEO of Google DeepMind, envisions, there will be a world model that can do everything, like a real brain.

According to Hassabis, Google's most powerful models, Gemini 2.5 Flash and Pro, are already laying the foundations for this new AI world. Google wants to improve them further and make them accessible to more people. Gemini 2.5 Flash is moving into the Gemini app and is available to developers via Google AI Studio and Vertex AI; 2.5 Pro is set to follow soon, and a pre-release version is already available to developers. According to Google, Gemini 2.5 Pro leads the WebDev Arena with an Elo rating of 1420 and tops all LMArena categories. In such arenas, AI models compete head-to-head on user prompts and are ranked by the resulting ratings.
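To give a sense of what an Elo rating of 1420 means: arena leaderboards such as LMArena use the standard Elo formula, under which a rating gap translates into an expected win probability. A minimal sketch (the formula is the standard one from chess ratings; the specific matchup is made up for illustration):

```python
def elo_expected_score(r_a: float, r_b: float) -> float:
    """Expected win probability of player A (rating r_a) against player B
    (rating r_b) under the standard Elo model used by arena leaderboards."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

# A model rated 1420 facing one rated 1320: a 100-point gap corresponds
# to roughly a 64% expected win rate in head-to-head comparisons.
print(round(elo_expected_score(1420, 1320), 2))  # 0.64
```

Equal ratings yield an expected score of exactly 0.5, which is why small rating gaps at the top of a leaderboard still imply near-coin-flip matchups.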

Another new feature in Gemini 2.5 is Deep Think, a so-called reasoning mode. Reasoning means that the model should not merely reproduce content, but also link it logically. Both models will also soon be able to generate audio natively, with the tone of voice, accent and speaking style of a speaker configurable: whispering, dramatic delivery, everything is possible.

Gemini 2.5 Flash and Pro are also suited to acting as agents that can control the browser. Google has been working on this for some time under the name Project Mariner, and it is now being incorporated into the Gemini API and Vertex AI. Google also supports the Model Context Protocol (MCP) developed by Anthropic. Thanks to MCP, Gemini can collect information and compile lists, for example of nearby laundromats. The protocol makes external tools and data sources "readable" for agents.


Project Mariner is already available for testing to customers of Google's new AI Ultra subscription in the USA. Google calls this a "VIP pass", and it is priced like one: 250 US dollars a month. It includes the highest usage limits for all AI services, and subscribers also get first access to features such as Deep Think, Flow and Whisk, the latter of which can turn images into short animations. The subscription also includes 30 TB of storage for photos and other documents. The existing AI Premium plan is being renamed AI Pro; existing subscribers keep it and also gain additional access, for example to Flow.

In the USA, Search is being expanded to include AI Mode, which was previously available in a Google Labs test environment. AI Mode is also backed by reasoning capabilities; the aim is to answer more complex questions in Search. To do so, Google draws on the web and its own Knowledge Graph, the knowledge Google has collected in a gigantic database, which also contains shopping results. This is how Google Shopping gains AI functions via AI Mode, including the ability to try on clothes virtually, something Google had already tested previously, as well as an agent-based checkout.

Imagen 4 and Veo 3 are Google's image and video generators. Veo 3 will be able to generate audio natively in future. With Flow, it should become possible to create longer films, for example through better control over characters and styles. Google seemed keen to prove this with a video right at the start of I/O: zoo animals overrun a classic Wild West town, parrots fly around, a T-Rex made of bricks roars, and dust swirls, while the balloon-like letters welcoming attendees to I/O shine cleanly in the blue sky.

Canvas, Google's AI drafting tool, and Deep Research, the assistant for research tasks, are getting updates, as are Gmail and Google Meet, including emails that Gemini drafts as you write them and simultaneous translation in Meet. Initially, however, these features are only available for a few languages.

Google Beam is new: an AI-based 3D communication platform. Here, too, conversations between two people can be translated simultaneously, not in a computer voice, but in the voice of the respective speaker. Android XR will bring Gemini to smart glasses and headsets. Gemini Live, the feature that lets you talk to the AI in real time about what you see on screen or through the camera, is also available on Android and iOS devices.

(emw)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.