Before the election: Meta and X accept ads containing hate and incitement

Meta and X failed a test of whether they would remove ads with illegal content, such as calls for the killing of migrants, from their platforms.


Ads that call for the imprisonment and murder of immigrants and for mosques to be burned down, that use dehumanizing language and that equate migrants with animals and pathogens: Meta and X initially saw no problem with any of it. That is the result of a test conducted by the international consumer organization Ekō shortly before the German parliamentary elections. The association, which aims to curb corporate power, submitted ten ad banners containing such blatant examples of extremist hate speech and incitement to violence to Meta and X last week. Although the operators of the social media platforms would actually be obliged to block such ads, most of them were approved.

Most of the content would likely be recognizable as illegal in Germany. The ads were accompanied by images generated with artificial intelligence (AI) systems that depicted violent scenes, including burning synagogues and interior shots of a gas chamber. According to the researchers, Meta approved half of the submitted ad banners within twelve hours, as reported by Euractiv. X approved all of the submitted ads for publication. However, the Ekō testers removed the banners before they went live, so they were never shown to the platforms' users.

The ads were set to be geographically limited to Germany and to appear in German. Meta's platforms Instagram and Facebook are signatories to the EU Code of Conduct on countering illegal hate speech online, as is X. The EU Commission revised this code together with industry stakeholders last month and integrated it into the framework of the Digital Services Act (DSA). The voluntary commitment requires signatories to take proactive measures against potentially illegal hate speech and incitement and to block access to relevant content. The extended agreement is thus not merely reactive, aimed at removing such content after the fact; rather, the code is also intended to push the participating service providers to do more to prevent and anticipate threats.


According to the report, Ekō's experts have already submitted the results of their investigation to the Commission. The Commission is already scrutinizing whether Meta and X comply with the DSA's requirements. Under the regulation, operators of very large platforms must carry out risk assessments and mitigate identified threats to, for example, democracy, public safety, fundamental rights and the protection of minors.

On Friday, the Commission published an "election toolkit" to facilitate the implementation of the election guidelines under the DSA. The toolkit is aimed at the Digital Services Coordinators of national regulatory authorities, such as Germany's Federal Network Agency. It summarizes best practices for mitigating risks on very large online platforms during elections, focusing on risks such as hate speech, online harassment and the manipulation of public opinion. It also covers AI-generated content and identity theft.

(nie)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.