Google's Jeanine Banks on Gemma, Gemini, and her favorite AI tools

"Not everyone will be able to program thanks to AI": Jeanine Banks, VP and General Manager for Developers at Google, in an interview with heise online.


Jeanine Banks at Google I/O Connect in Berlin.

(Image: emw)

13 min. read
This article was originally published in German and has been automatically translated.

Jeanine Banks is VP and General Manager for Developer X and Head of Developer Relations at Google. She held the developer keynote at Google I/O in Mountain View, California, in May 2024, and she announced the launch of Gemma 2 on stage at Google I/O Connect in Berlin. heise online talked to Banks about the future of AI, how AI can help developers, and where its limits lie.

Everybody has been talking about AI since ChatGPT launched, but you have been in the field of AI for much longer. What has changed for you since this big hype began?

It's true. You know, Google has been an AI-first company since 2016, and so, in some ways, it's not new for us. We've been building models, building tools, and providing them to developers for a number of years now. When you think of the original Transformer paper, which formed the foundation for the large language models we see available today, it came from Google. And we have also been building machine learning ecosystems, including the leading TensorFlow framework, the JAX framework, as well as TPU infrastructure that offers best-in-class performance and scale for machine learning workloads. So it's been really nice to see that the advancement of the field we've been committed to is really growing and becoming more pervasive, beyond those efforts.

And what we've been doing since then is integrating AI into our products. We've had AI integrated into Google Search with our Search Generative Experience, into the Pixel with amazing capabilities, and into the Android OS with things like AICore. We're just going to continue to push the field forward, and some of the announcements that we've made at Google, including our talk here today at I/O Connect Berlin, show how we've been integrating AI into our tools and platforms, from Google Cloud and Vertex AI to Firebase. But it's still early days.

We certainly expect that there are so many new ideas that developers will be able to invent, and that's why it's critically important that we invest in accessibility: bringing the most capable AI models, whether through APIs like the Gemini API or through models that are available in the open, like our Gemma family of open models, so that developers can get their hands on the latest technology and have the most choice to implement their ideas and use cases.

Google is doing both: Gemini is closed, and there's Gemma, which is open source. Can you tell me the difference between them?

We like to talk about it as the Gemini era. We say that to signify how we've introduced the Gemini family, which is our most capable family of AI models. When we first introduced Gemini, we had Nano, our smallest model for on-device applications, for where you need the smallest footprint but with great performance on device. Pro was our most cost-effective performance model for a wider set of use cases and applications. And then we introduced our most advanced model as well, with Ultra. Since then, we've expanded our family of models.

There is also some great news here at I/O Connect Berlin with the availability of Gemini 1.5 Pro as well as 1.5 Flash. These models continue to push the edge in terms of cost efficiency and lower latency, as well as being multimodal from the ground up. And so we think that that's unleashing the power for developers to create even more interesting and differentiated kinds of applications.
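
For readers who want to try this themselves: calling one of these models through the Gemini API takes only a few lines of Python. A minimal sketch, assuming the google-generativeai package is installed and the placeholder key is replaced with your own:

```python
# Minimal sketch of calling Gemini 1.5 Flash via the Gemini API using
# the google-generativeai Python package. "YOUR_API_KEY" is a placeholder.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

model = genai.GenerativeModel("gemini-1.5-flash")
response = model.generate_content("Summarize what makes a model multimodal.")
print(response.text)
```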

Gemma, our open models, on the other hand, are really designed for developers to have the flexibility to tune models to fit their own data and their specific use cases, while giving them the flexibility and choice to deploy those models where they see fit. They're especially designed in smaller sizes: we originally introduced the 2B model, and we've now expanded to nine-billion and 27-billion parameter versions. That is still in the smaller class, where it's easier to use, but it's still centered on helping developers fine-tune on their own data and deploy with choice, in the cloud or even locally on device.
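
To illustrate the local-deployment point she makes: here is a minimal sketch of running the instruction-tuned Gemma 2 9B weights on your own hardware with Hugging Face Transformers. The model ID and library usage are our illustration, not part of the interview, and assume you have accepted the Gemma license on Hugging Face:

```python
# Minimal sketch: running Gemma 2 9B (instruction-tuned) locally with
# Hugging Face Transformers. Assumes the transformers, torch, and
# accelerate packages are installed and Hugging Face access is set up.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2-9b-it"  # assumed Hugging Face model ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to fit consumer GPUs
    device_map="auto",           # place layers on available devices
)

inputs = tokenizer("Write a haiku about open models.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```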

Do you think everybody will be able to code in a few years, or still just a few people? And will it change the Internet as we see it now?

Yeah, well, I love this question. It's something I think about often. Will everybody be able to code? I think of people like my mom. I don't know about that. But in general, I believe we will see at least two trends. Professional developers who do know how to code are suddenly so much more productive, because now they can leverage AI in their workflow to complete tasks that really aren't the most fun part of their jobs and are quite repetitive sometimes, such as writing unit tests or writing certain regular boilerplate code in your code base. Generative AI can do that.

As a developer, you are able to focus on more complex tasks, tasks that require more creative thinking and problem solving: architecture, design questions, the evolution of my code base and the overall health of my system. Those are areas where developers are still going to need those skill sets. They will still be coding. Now, that's the first trend.

I think the second trend is that we do think the barrier to entry for people to learn to code and learn to build software is going to go down. I don't know in what timeframe that will happen, but we already see signs that newer developers are able to onboard into their roles much more effectively and quickly because they have that support and assistance. And one of the interesting things is that it's not just AI writing code; there are other things it can do that actually help junior developers ramp up and become better developers over time. For example, you can ask the AI to explain code. So if I'm a developer onboarding to a team and a code base I've never seen before, I can now ask Gemini to explain the code and help me learn what has been written by other developers on the team. In the past, you'd have to read for weeks and study the code base for a long time, and it would still be hard to start contributing and being productive. So we think those are the opportunities for more people to be able to code, and we're looking forward to that.

And do you think that will change the internet, if there are maybe more webpages, with everybody hosting one, and maybe even more bad actors?

Well, you know, even today, if we think about the past decade, there are millions of webpages. That's why we believe Google's mission is even more imperative and more needed now than ever before: to organize the world's information and make it universally accessible and useful. We think that remains imperative, and it is a role that we can play to help organize the information on the Internet so that people can continue to find what they need, can continue to explore and learn and read and live their lives leveraging the Internet.

The other thing I want to mention is that, personally, I became a developer, actually a web developer, back when the HTML 4.01 specification was open for comment. It wasn't even a published standard yet. And I was given the opportunity, through an internship with Brookhaven National Laboratory in Upton, New York, to get my hands on compute servers, to get my hands on HTML, to learn, to practice. I was given some documentation. And isn't that still true? Fast forward to now, and that's how developers are still able to build and contribute meaningfully to the internet. So I think that Google's role is to see to it that developers still have the tools they need to create amazing web content and applications that are discoverable on the internet.

We do that, for example, with one of the latest advancements we launched in the Chrome browser: Console Insights in Chrome DevTools, which is powered by Gemini. What you can do as a developer is easily identify, with help from Gemini, issues or errors in your website that are causing it not to load correctly, not to display well, or not to perform well, and solve those problems very quickly, so that you can ultimately have very high-quality web content. By doing things like that, we think we're going to equip developers to continue to create a healthy internet that has more kinds of content.

Google I/O Connect Berlin

(Image: emw)

Do you see any limits? Limits where AI might even be useless?

At Google I/O this year, our CEO, Sundar Pichai, said something that I personally felt was pretty profound, even though it was sort of a simple idea: making AI helpful for everyone. And so, really, that's sort of a guiding point for us to think about how people can make use of this technology in meaningful ways. A key way we've been doing that is integrating it into Google's products. So far, from what I hear from companies that I talk to across different industries, from banking to healthcare to retail, they're finding really interesting ways to apply this technology that are helping them get a lot of value back into their businesses. For example, a large insurance company that has been implementing the Gemini API in their application on Google Cloud has been able to significantly reduce call times for their customer service representatives, because now they can get that support from a chatbot. When you see those kinds of applications, I think it's really promising what's possible.

What is your favorite AI moment, or what are you using every day?

I take a lot of pictures. I recently came back from a trip to Japan, where I traveled with my husband and one of my sons, our youngest. It was a blast. We took so many amazing pictures, and Google Photos has just been super awesome to use. I use it extremely regularly to organize my photos and create photo albums. I love when, even if I'm not looking at Google Photos or don't have the app open, it suggests memories or a collage of photos from a point in time in the past and brings them back to my mind, which is always really nice.

(emw)