AI swarm attacks: How simulated majorities threaten democracy
A research team warns of coordinated AI agents that could manipulate public opinion through feigned consensus and social dynamics.
The era of clumsy social media bots, which flooded the internet with obvious copy-and-paste patterns, is drawing to a close. An international research team warns in the journal Science of a new level of escalation: AI swarms. These are fleets of AI-controlled personas that maintain consistent identities and have a digital memory. The agents do not act in isolation but coordinate their behavior autonomously. In this way, they could create an artificial reality that is hardly distinguishable from genuine human interaction.
Social data researcher David Garcia from the University of Konstanz, a co-author of the article, describes these systems as highly adaptable. By merging large language models (LLMs) such as GPT, Gemini, or Claude with multi-agent systems, "malicious AI swarms" emerge that authentically imitate social dynamics. They infiltrate groups, engage in discussions with real users, and react to events in real time. This chorus of seemingly independent voices creates the illusion of broad public consensus while deliberately spreading disinformation.
Illusion of the majority as a psychological weapon
According to the scientists, the danger lies less in individual false reports than in the gradual shift of social norms through an "artificial consensus." When users encounter a multitude of seemingly independent profiles all expressing the same opinion, social pressure arises. The false impression that "everyone is saying it" massively influences beliefs. Co-author Jonas Kunst from BI Norwegian Business School sounds the alarm: the basis of democratic discourse, independent voices, could collapse if a single actor controls thousands of AI profiles.
According to the analysis, the threat is far-reaching: in the long term, such swarms could manipulate the language, symbols, and identities of communities. There is also the threat of "contamination" of the digital environment. As AI swarms flood the internet with fake claims, this manipulated data flows into the training of future AI models, indirectly extending the swarms' influence to established AI platforms. Studies suggest that early forms of such tactics are already in use.
Low hurdles for complex manipulation
The technological hurdle is frighteningly low, as powerful language models are often freely accessible. Techniques such as “chain-of-thought prompting” could be misused to construct human-like chains of reasoning for falsehoods. Other researchers have already demonstrated that AI-generated misinformation is often rated as more credible than human texts. Since these swarms require minimal supervision and operate across platforms, classic moderation of individual posts seems doomed to fail.
Garcia therefore calls for a paradigm shift: the collective behavior of large groups of AI agents must be studied with methods from behavioral science. Only then will dangers become apparent that arise solely from the interaction of many AI actors. The sheer volume and variance of the content simply overwhelm traditional fact-checking systems.
Strategies for a resilient digital democracy
The researchers advocate for protective measures that focus on coordinated behavior rather than individual content. Algorithms should be trained to detect statistically improbable patterns of coordination. Another pillar would be distributed observation centers that collect evidence of AI influence. In addition, privacy-preserving verification options for real users are needed to make it easier to distinguish humans from machines.
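The article does not specify how such coordination detection would work. A minimal illustrative sketch of one possible statistical signal is accounts that post in the same short time windows far more often than chance would suggest; the function names, windowing scheme, and threshold below are all hypothetical, not from the paper:

```python
from collections import defaultdict
from itertools import combinations

def coordination_scores(posts, window=60):
    """Score account pairs by how often they post within the same
    short time window -- a crude proxy for coordinated behavior.
    `posts` is a list of (account_id, unix_timestamp) tuples.
    Returns {(a, b): overlap score in [0, 1]} for co-active pairs.
    """
    buckets = defaultdict(set)              # window index -> active accounts
    windows_per_account = defaultdict(set)  # account -> windows it posted in
    for account, ts in posts:
        w = int(ts // window)
        buckets[w].add(account)
        windows_per_account[account].add(w)

    shared = defaultdict(int)
    for accounts in buckets.values():
        for a, b in combinations(sorted(accounts), 2):
            shared[(a, b)] += 1

    # Overlap coefficient: shared windows / activity of the less active account
    return {
        (a, b): n / min(len(windows_per_account[a]), len(windows_per_account[b]))
        for (a, b), n in shared.items()
    }

def flag_coordinated(posts, window=60, threshold=0.9):
    """Return account pairs whose posting windows overlap suspiciously often."""
    return [pair for pair, s in coordination_scores(posts, window).items()
            if s >= threshold]

# Two accounts posting in lockstep stand out against an ordinary user:
posts = [("bot1", 0), ("bot2", 5), ("bot1", 120), ("bot2", 125),
         ("bot1", 300), ("bot2", 310), ("human", 45), ("human", 4000)]
print(flag_coordinated(posts))  # [('bot1', 'bot2')]
```

Real detection systems would of course combine many such signals (content similarity, network structure, account age) rather than rely on timing alone, which is exactly why the researchers emphasize behavior-level rather than post-level analysis.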
Ultimately, according to the team, economic levers are also needed: the monetization of fake interactions must be cut off, and the accountability of operators of AI infrastructure must be increased. Only a combination of technical detection, independent monitoring, and regulatory guardrails can prevent artificial swarms from drowning out genuine diversity of opinion.
(mki)