AI chatbots can change political preferences more than election advertising
Experiments before several elections suggest that more people change their minds after conversations with AI chatbots than after viewing election advertising.
AI chatbots may be able to shift people's political views far more effectively in short conversations than traditional election advertising can. That, at least, is suggested by extensive analyses that have now been published. Large-scale surveys and experiments were conducted and evaluated in the United States, Great Britain, and Poland ahead of the respective elections. In some cases, preferences shifted by up to 10 percentage points, Cornell University summarizes. This was achieved not through psychological manipulation but through a wealth of “factual claims” – which, however, were not necessarily correct.
A lot of information, plenty of untruths
For one of the two studies now published, the researchers surveyed thousands of people about political topics and their preferences ahead of a democratic election and then had them converse with AI chatbots. The chatbots were set up to argue for one of two competing candidates. Afterwards, preferences had changed measurably, and to a greater extent than is known from video advertising. The analysis suggested that the AI models had cited “relevant facts and evidence,” “but not all of them were correct.” AI models arguing for candidates from the right-wing spectrum made more false statements.
AI researcher Hendrik Heuer from the Center for Advanced Internet Studies (CAIS) in Bochum told the Science Media Center that he finds the study very impressive. He considers it reassuring that information density matters so much for persuasiveness, since this could improve political discourse. Felix Simon from the University of Oxford takes a similar view in his statement to the organization, but points to the crucial catch: “The approaches that increase persuasiveness systematically reduce factual accuracy.” In the most extreme configuration, 30 percent of the statements were false. He also notes that changed political preferences do not automatically translate into different voting behavior.
Christian Hoffmann, an expert in political communication at the University of Leipzig, is even more reserved. The percentage changes described as substantial in the study are, he says, comparatively small and therefore difficult to interpret: “I would not venture to claim that study participants have really significantly changed their minds here.” People are quite willing to adjust their knowledge on controversial topics, but not necessarily their attitudes. The unreliability of the information provided by AI chatbots, on the other hand, is worrying. The studies have been published in the journals Science and Nature.
(mho)