Instead of picture puzzles: Google introduces QR code challenge against AI bots

Google expands reCAPTCHA to "Cloud Fraud Defense," a platform against fraud and abuse that also detects AI agents.

A QR code for verification is displayed on a checkout screen.

(Image: Google)


Google is expanding reCAPTCHA into a broader platform against fraud and abuse on the web. At its Next ’26 cloud conference, the company introduced “Google Cloud Fraud Defense.” The platform is no longer intended to distinguish only human users from classic bots, but also to detect AI agents. Google calls the offering the “next evolution” of reCAPTCHA and positions it as a trust platform for an “agentic web” – that is, for applications in which autonomous software agents perform tasks for their users.

reCAPTCHA was originally known primarily as a CAPTCHA and bot-defense service. In recent years, however, Google has significantly broadened the product and now markets it as risk and fraud protection, for example for logins, account creation, or payment processes. Fraud Defense builds on this. According to Google, existing customers neither need to migrate nor adjust their site keys, integrations, or contracts.

The announcement centers on the assumption that web traffic will no longer consist primarily of humans and simple scripts. Google expects significantly more activity from AI agents that independently retrieve information, prepare decisions, and initiate entire processes. One example is shopping assistants that compare products, fill shopping carts, and initiate purchases on behalf of their users. While such systems may be desirable, from a security perspective they open up new attack surfaces.

One of the most important innovations is therefore a dashboard for measuring agent activity. It is intended to show operators which AI agents and other automated systems are accessing their websites. Google aims to identify, classify, and analyze this traffic and link the identities of agents and users to better assess risks. What is particularly interesting technically about this is the approach of no longer treating automated access as bot traffic across the board, but distinguishing it by trustworthiness, type, and identity.

To this end, Google is also relying on new protocols and emerging standards. The announcement mentions, among others, Web Bot Auth and SPIFFE (Secure Production Identity Framework for Everyone). The idea behind these standards: legitimate agents should no longer assert their origin and identity solely through easily falsifiable characteristics such as user-agent strings or IP addresses, but prove them cryptographically. A verified shopping agent could then be treated differently than a scraper that merely disguises itself as a regular browser.
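The core idea can be sketched in a few lines. Note the heavy simplification: real schemes such as Web Bot Auth build on HTTP Message Signatures with asymmetric keys, whereas this illustration uses a shared-secret HMAC; the agent registry, key, and request fields are invented for the example.

```python
import hmac
import hashlib

# Hypothetical registry of known agents and their keys. Real protocols use
# asymmetric signatures so the site never holds the agent's private key;
# HMAC is used here only to keep the sketch stdlib-only.
AGENT_KEYS = {"shopping-agent-v1": b"demo-secret"}

def sign_request(agent_id: str, method: str, path: str, key: bytes) -> str:
    """Sign the request details so the server can verify who sent them."""
    msg = f"{agent_id}\n{method}\n{path}".encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify_agent(agent_id: str, method: str, path: str, signature: str) -> bool:
    """Accept the request only if the signature matches a registered agent."""
    key = AGENT_KEYS.get(agent_id)
    if key is None:
        return False  # unknown agent: treat like any other bot
    expected = sign_request(agent_id, method, path, key)
    return hmac.compare_digest(expected, signature)

sig = sign_request("shopping-agent-v1", "GET", "/products", b"demo-secret")
print(verify_agent("shopping-agent-v1", "GET", "/products", sig))       # True
print(verify_agent("shopping-agent-v1", "GET", "/products", "forged"))  # False
```

Unlike a user-agent string, the signature cannot be copied from one request to another without the key, which is what makes the identity claim verifiable.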

As a second central element, Google is introducing a policy engine. This allows companies to define rules for different phases of a session – from registration and login to payment and order completion. The decisions are based, among other things, on risk values, automation types, and the identity of an agent. In practice, a verified AI agent could query product data and availability, but be subject to stricter rules when accessing a customer account or initiating a payment.
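Such phase-dependent rules might look like the following sketch. Google has not published its policy schema, so the rule names, thresholds, and decision values here are purely illustrative.

```python
from dataclasses import dataclass

# Hypothetical per-phase rules: stricter limits as the session approaches
# payment. None of these names or values come from Google's product.
POLICIES = {
    "browse":  {"max_risk": 0.9, "allow_verified_agents": True},
    "login":   {"max_risk": 0.5, "allow_verified_agents": True},
    "payment": {"max_risk": 0.2, "allow_verified_agents": False},
}

@dataclass
class Session:
    risk_score: float        # e.g. supplied by a risk-analysis backend
    is_verified_agent: bool  # cryptographically proven agent identity

def decide(session: Session, phase: str) -> str:
    """Return an action for this session in the given phase."""
    policy = POLICIES[phase]
    if session.is_verified_agent and not policy["allow_verified_agents"]:
        return "challenge"  # e.g. require a human confirmation step
    if session.risk_score > policy["max_risk"]:
        return "block"
    return "allow"

agent = Session(risk_score=0.1, is_verified_agent=True)
print(decide(agent, "browse"))   # allow: agents may query product data
print(decide(agent, "payment"))  # challenge: payments need a human
```

This mirrors the example from the announcement: the same verified agent is allowed to read product data but hits stricter rules the moment money is involved.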

In addition, there is a so-called “AI-resistant challenge.” This is a verification mechanism via QR code that requires human confirmation for suspicious activities. Unlike classic picture or text puzzles, this challenge is intended to make automated attacks economically unattractive. For example, an application might display a QR code during a risky order process that the user must scan with their smartphone to prove their presence.


Google justifies the restructuring with a changed threat landscape. The risks are shifting from classic bot automation and invalid traffic to more complex attacks, such as the takeover of agent identities or large-scale fraud with synthetic identities. By synthetic identities, Google means accounts or profiles that consist partly of real and partly of invented characteristics, making them appear legitimate at first glance.

The company is backing the announcement with far-reaching performance promises. Fraud Defense uses the same global threat data that secures Google's own ecosystem. According to the company, the underlying network protects half of the Fortune 100 companies and more than 14 million domains. Furthermore, Google states that account takeovers are reduced by an average of 51 percent when operators correlate risks across the entire session.

More information about Google Cloud Fraud Defense and reCAPTCHA can be found in the blog post.

(fo)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.