AI fraud: Germans overestimate their ability to detect deepfakes
Cybersecurity Monitor reveals dangerous knowledge gaps in recognizing AI manipulations and online investment fraud.
(Image: tete_escape / Shutterstock.com)
Artificial intelligence has arrived in the everyday lives of German citizens, but awareness of the risks is lagging technical development. As a special evaluation of the Cybersecurity Monitor 2026 reveals, there is a large gap in the population between self-assessment and actual knowledge. While almost half of the surveyed internet users in Germany claim to be able to identify AI-generated content as such, in practice very few look closely.
AI-generated images and videos are normal for a long time, according to the results of the representative survey commissioned by the Federal Office for Information Security (BSI) and the Police Crime Prevention Office (ProPK) among more than 3000 people: Seven out of ten respondents have already encountered such content online. Among those under 30, it is even nine out of ten.
47 percent of respondents believe they can recognize the fakes, but when it comes to concrete verification measures, there is reluctance: a third of Germans have never used any of the common verification methods. Only 28 percent specifically looked for graphical inconsistencies such as faulty shadows or deformed limbs. Only 19 percent checked the reliability of the source.
Deepfakes promote crypto investments
BSI President Claudia Plattner urges that it is now essential for consumers to identify AI content. Only then can they recognize risks and misinformation early on. The BSI is therefore increasingly focusing on awareness and offers orientation aids to strengthen media literacy in dealing with generative AI.
The need for this is great, as many technically feasible fraud scenarios are still considered impossible by the public. For example, only 38 percent of respondents consider the manipulation of an AI agent to disclose personal data to be realistic. The danger posed by invisible, malicious instructions in documents that can trick AI language models when summarizing is also only known to a minority.
Fraud in the area of investments takes on particularly insidious forms. According to ProPK chairwoman Stefanie Hinz, fraud related to online trading is a criminal offense that occurs increasingly frequently in police work. Criminals use AI for deepfakes of prominent personalities, for example, who advertise lucrative cryptocurrencies in deceptively real videos.
Videos by heise
Call for mandatory labeling
The statistics underscore the danger: 15 percent of respondents have invested in cryptocurrencies. Of these, almost one in three has fallen for a fraudulent offer. In most cases, victims were made aware of the scams through targeted advertising on the internet.
Meanwhile, trust in state protection mechanisms is high. A broad majority of the population wishes for consistent intervention by the authorities. At the top of the wish list are swift police action against fraudulent websites and a mandatory labeling requirement for all content created or modified with AI.
(wpl)