Google explains its outrageous AI answers and promises improvement

Overall, Google is satisfied with the introduction of AI answers in the USA. In view of certain results, however, the company wants to make improvements.

(Image: testing/Shutterstock.com)

This article was originally published in German and has been automatically translated.

After a series of outrageous responses went viral, Google has announced that it will restrict the selection of sources for its AI-supported search function. In a blog post, the US company explained that it has built better detection mechanisms so that content from satirical and humorous sites is no longer considered for the "AI Overview". In addition, user-generated content, for example from Reddit, is to be drawn on less frequently. In cases where the AI technology is not very helpful, it is to be deactivated more quickly. The company has also assured that "strong guardrails" have been put in place for search queries on topics such as news and health. Overall, Google says, the AI answers, which are initially only available in the USA, are very accurate.

Google's statement is a reaction to screenshots of real and partly manipulated AI answers that went viral. One answer, for example, suggested sticking the cheese to a pizza with glue; another claimed it is healthy to eat at least one stone a day. Users have also found search queries for which the AI-generated answer explained that Batman is a police officer, that dogs have played in US professional sports leagues, and that the second US President John Adams, who died in 1826, graduated from university a total of 21 times between 1934 and 2003. According to Google, these are queries for which there are too few results, which is why satire sites or Reddit comments were drawn on. This has since been improved.

While Google's head of search, Liz Reid, is now promising improvements and explaining the background, she also asserts that the AI results are just as accurate as the text snippets that are sometimes displayed above the list of results. Because the AI Overview works differently from other AI text generators, she says, it typically does not invent ("hallucinate") anything; errors have other causes. These can occur, for example, when search queries are "misunderstood". Given the billions of search queries made every day, it is normal for some "strange" or incorrect results to appear. Violations of the company's own guidelines were found in only one in seven million searches.

In the blog post, Reid also addresses another concern. She assures readers that the AI-generated answer can serve as a starting point for visiting the page from which the underlying information originates. These clicks are "of higher quality", meaning users stay on the pages longer because the search was more helpful. With AI-generated answers, there are fears that a considerable proportion of the traffic Google delivers to news sites, for example, could be lost in the future. There are no figures on this, and Reid does not provide any either. Her explanations do suggest, however, that Google's AI answers are in fact sending fewer people to other sites overall.

(mho)