Google introduces Gemini Intelligence for Android

At the Android Show, Google unveiled the next stage of its AI assistant for smartphones and other products in the ecosystem: Gemini Intelligence.


Gemini Intelligence for Android.

(Image: Google)


With Gemini Intelligence, Google is giving Android new AI features that are intended to make the mobile operating system an “intelligent system.” The innovations are based on the first agentic capabilities that Google had announced for the Galaxy S26 and Pixel 10 series for the US market. These have been refined and expanded with new features.

According to Google, Gemini Intelligence will be rolled out gradually over the summer, “starting with the latest Samsung Galaxy and Google Pixel smartphones.” Over the course of this year, the new functions will be available on all Android devices, including smartwatches, cars, smart glasses, and laptops. Google confirmed to heise online that the AI functions will also be provided in Germany – however, the company did not provide a date.

After an initial teaser, there was speculation that Google might be orienting its design language towards Apple's Liquid Glass. Android ecosystem chief Sameer Samat quickly dismissed this. Gemini Intelligence does, however, introduce a revised design language based on Material 3 Expressive, which Google introduced with Android 16.

Initial screenshots and animations do suggest that Google is adding a bit more transparency and glass accents. Google describes the new visual system as appealing and functional; targeted animations are meant to reduce distractions so users can focus entirely on the task at hand.


Gemini Intelligence is intended to help users “automate tedious tasks so you can focus on what matters,” Google writes in its announcement. The multi-stage automation features, which were first available on the Galaxy S26 and Pixel 10 for popular food and ride-sharing apps, have been optimized “to ensure every interaction feels seamless.”

These agentic capabilities will soon not only be usable for food orders and ride services, but also for other things: The company cites examples such as booking a spot for sports classes (specifically: a spot in the front row of a spinning class) or finding a course schedule in Gmail and adding the necessary books to the shopping cart.


App automation becomes even more powerful when supplemented with screen or image context. Instead of “manually switching between apps and copying data, Gemini can turn visual context into instant actions,” explains Google. For example, by long-pressing the power button, you can instruct Gemini to add a long shopping list from the notes app to the shopping cart and prepare it for delivery. It should also be possible to take a photo of a printed travel brochure and instruct Gemini to “search for a similar tour on Expedia for a group of six people.” You can then follow the progress of the search in real-time in the notifications. Google also clarifies that you remain in control: “Gemini only acts on your command and stops the moment the task is complete. All that's left for you is the final confirmation.”

What Nothing can do, Google can apparently do even better: With Gemini Intelligence, the company is taking its first steps towards generative user interfaces. User-generated widgets are the starting point. With the “Create My Widget” function, users can specify in natural language what the widget should do, and the AI will do the rest.

Gemini Intelligence enables the creation of widgets by voice.

(Image: Google)

According to Google, with the function “you can create fully customized widgets by simply describing what you want in natural language.” As an example, Google cites a widget that suggests “three protein-rich recipes for meal prep” to users weekly. Cyclists can also have a widget created that displays wind speed and probability of rain. The generated widgets can be resized. The widgets can be used not only on Android smartphones but also on Wear OS watches with support for Gemini Intelligence.

Gemini Intelligence is also intended to help polish spoken text. While users can already convert speech to text relatively quickly and accurately with Gboard on Android, the spoken input sometimes needs to be optimized afterwards to remove “uhs,” other filler words, and repetitions.

Rambler: The Gemini Intelligence feature polishes voice input.

(Image: Google)

With the new “Rambler” function, spoken input is automatically refined: “Rambler captures the important parts and puts them together into a concise message,” explains Google. The function shows users when it is activated; the spoken input is only used for real-time transcription and is not stored.

Rambler also recognizes multiple languages simultaneously. This is achieved using Gemini's multilingual model, allowing Rambler to seamlessly switch between languages in a single message. “Whether you're blending English with Hindi or any other combination, Rambler understands the context and the nuance, ensuring your message sounds exactly like you – only more polished,” it says.

Also new, or rather improved, is the “Autofill with Google” function: using the Gemini feature “Personal Intelligence,” introduced a few months ago in some countries (but not in the EU), Android will in future be able to automatically fill in even more text fields in apps and in Chrome. Just last November, Google announced an expansion of the autofill function for Chrome.

Gemini Intelligence is intended to improve autofill for forms.

(Image: Google)

According to Google, relevant information from connected apps can be used to fill out forms on users' behalf. Connecting Gemini to “Autofill with Google” is entirely optional: you decide whether and when to establish the connection, and you can activate or deactivate it at any time in the settings.

Gemini Intelligence - Chrome gets smarter.

(Image: Google)

New Gemini functions are also being integrated into Chrome for Android: users can use them to research, summarize, and compare content on the internet. An automatic browser search in Chrome will also take over everyday tasks, “whether it's making an appointment or reserving a parking space,” the company says. Given that previous Gemini functions for Chrome are not yet offered in Germany, it is questionable whether the new ones will reach Europe soon.

In an additional post, Google explains that it guarantees the “highest level of data protection” for Gemini Intelligence, based on three core principles. Users will always retain full control over how and when the AI acts. Among other things, functions such as automated form filling or app automations are strictly opt-in, and users can activate or deactivate individual components at any time in the settings. Furthermore, Gemini only performs tasks in approved apps when instructed and requires confirmation before purchases. Users actively decide for themselves whether data is shared.

Google promises transparency and security for Gemini Intelligence.

(Image: Google)

In addition, according to Google, isolation technologies are used: for “proactive” functions (such as Magic Cue, which is not available in Germany), technologies such as “Private Compute Core, Private AI Compute, and protected virtual machines (KVM)” are used, the company explains. The architecture also includes hardware, process, and server isolation to prevent data leaks. Safeguards against modern attacks such as prompt injection are integrated directly into Android, Google assures.

Gemini Intelligence: Notifications for ongoing automations cannot be dismissed.

(Image: Google)

Users should also be able to keep track while Gemini automates an app interface. They can follow actions live with the “View progress” function, and a non-dismissable notification indicates when the AI is active in the background. Furthermore, the Android Privacy Dashboard will show which AI assistants have been active in the past 24 hours and which apps they have used. In addition, important parts of the security architecture are open source and audited by third parties to independently verify the security promises.

Gemini Intelligence will be integrated with Android 17, which is scheduled for release in June. Google has not yet revealed the exact date for the major update.

(afl)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.