AI-generated content and digitalization terms pose a challenge for many

Many people are unable to distinguish increasingly realistic AI-generated content from human-made content. 40 percent do not know what AI is.

Robot hands operating a computer keyboard

Many people can no longer tell whether content was created by an AI or by a human.

(Image: Shutterstock)


Artificial intelligence is generating ever more convincing content. A study conducted in Germany, China and the USA shows that many people can no longer tell whether an AI or a human was at work. At the same time, according to another survey, many people do not seem to engage with the topic at all: more than a third of people in Germany have never even heard of "deepfakes".

The Ruhr University Bochum, Leibniz University Hannover, TU Berlin and the CISPA Helmholtz Center for Information Security have conducted a study confirming what some people may have already noticed about themselves: distinguishing AI content from human-made content is a major challenge. Just over 3,000 people took part in the online survey, and around 2,600 responses were included in the analysis (USA: 822, Germany: 875, China: 912). The results were presented this week at the 45th IEEE Symposium on Security and Privacy in San Francisco. Between June and September 2022, the research team showed the participants half human-generated and half AI-generated content: depending on the randomly assigned group, news texts, photorealistic portrait images or audio excerpts from literature. Participants also provided information on their socio-demographic background, their knowledge of AI-generated media and other factors.

The results surprised the scientific team. According to Thorsten Holz, professor at the CISPA Helmholtz Center for Information Security, across all media types and countries the participants largely failed to recognize AI-generated content as such and instead classified it as human-made. It should be noted that the development of artificial intelligence has gained massive momentum since the survey was conducted.

Participants also answered questions about their media literacy, general trust, cognitive reflection, political orientation and holistic thinking. How well people recognized the AI content does not appear to depend on their level of education or age. "We were surprised to find that there are very few factors that can be used to explain whether people are better at recognizing AI-generated media or not," says Holz. "Even across different age groups and factors such as educational background, political views or media literacy, the differences are not very significant."


A survey by Bitkom e. V. showed that 34 percent of people aged 16 and over in Germany do not know what deepfakes are, i.e. digitally altered or AI-produced images, videos or audio. Another 34 percent had heard of the term but did not know what exactly it meant. 22 percent said they could explain the term. Deepfakes have made headlines after public figures such as German Chancellor Olaf Scholz and musician Taylor Swift appeared in fabricated videos without their consent.

According to the representative Bitkom study of 1,004 respondents, a similar share of people have never heard of ransomware (36 percent) or at least do not know what it is. Another 22 percent believe they can explain the term malware. Other terms that puzzled participants were metaverse (70 percent), blockchain (65 percent), cryptocurrency (61 percent) and chatbot (54 percent). Cookies (74 percent could explain the term), 5G (67 percent) and artificial intelligence (60 percent) were better known.

(are)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.