Meta opposes the EU's AI plans

Meta, the US company behind Facebook and Instagram, is rejecting the EU's voluntary code of practice for AI providers. The company sees it as overregulation and a brake on innovation.

By Dorothee Wiegand

Shortly before another part of the EU AI Regulation ("AI Act") comes into force, Meta, the US company behind Facebook and Instagram, has decided not to sign the European Commission's voluntary code of practice for providers of general-purpose AI. The company criticizes the code as legally uncertain, excessive, and hostile to innovation.

In a LinkedIn post, Joel Kaplan, Chief Global Affairs Officer at Meta, writes: "Europe is heading down the wrong path on AI. We have carefully reviewed the European Commission's Code of Practice for general-purpose AI (GPAI) models and Meta won't be signing it." According to Kaplan, the code hampers European AI innovation: it could slow down the development of advanced AI models and limit opportunities for start-ups.

Meta's openly confrontational stance is a remarkable step. The company wants to push its own AI services more strongly in the EU, such as its Llama 3 language model. Going forward, these are to be used both on Meta's own platforms and in cooperation with cloud and hardware providers; for example, Meta has announced plans to bring its AI to smartphones with Qualcomm chips and to its Ray-Ban smart glasses.

Screenshot of Joel Kaplan's post on LinkedIn. (Image: LinkedIn)

The code of practice presented by the EU Commission at the beginning of July is not binding. Among other things, it calls for transparent documentation of the AI models offered, the exclusion of copyrighted material from training data, and compliance with deletion requests from rights holders.

The AI Act classifies AI systems according to their level of risk. Applications such as translation software or simple chatbots pose a "minimal risk", while systems such as generative AI pose a "limited risk" and are subject to transparency obligations, such as the labeling of content. Applications in human resources, education, and product safety count as "high risk" under the AI Act and face stricter requirements. Applications with "unacceptable risk", including social scoring systems or manipulative behavioral control, are prohibited.


The European AI Act is intended to regulate the use of artificial intelligence in Europe as a whole. It aims to make the use of AI safe, transparent, and ethical while protecting the fundamental rights of individuals and supporting innovation. The Act came into force in August 2024, but its individual parts only take effect gradually.

From August 2, 2025, the AI Act's obligations for providers of so-called "general-purpose AI" (GPAI) apply. This important chapter of the Act covers systems such as language models, image generators, and music generators. Almost all providers of such models are based in the USA, for example OpenAI (GPT), Anthropic (Claude), Google (Gemini), and Meta (Llama).

These companies will be hit with the full force of the regulation and had therefore called for a postponement of the AI Act. The EU, however, decided against a later entry into force and stuck to the agreed timetable. While OpenAI, Mistral, and other providers, including Microsoft, at least officially welcome the code of practice as sensible and pragmatic, Meta now rejects it and is taking a confrontational stance toward the EU Commission. Even though the code is voluntary for now, it is unclear what this will mean for Meta's future access to the European market.

(dwi)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.