X requires AI labeling for war videos – but only if monetized
X introduces a labeling requirement for AI-generated content for the first time, but with important limitations.
The microblogging service X has introduced a new rule: Users who publish AI-generated videos of an "armed conflict" without labeling them as such will be excluded from the monetization program for 90 days. Repeated violations risk permanent exclusion from the program. This was announced by Product Lead Nikita Bier yesterday in a post on X.
According to the post, affected videos will be flagged in one of two ways: automatically, by the system detecting AI-related metadata, or by users themselves via Community Notes.
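Bier does not say how that automatic detection works. A plausible approach is scanning uploads for standard provenance markers that many generators already embed, such as C2PA manifests (stored in JUMBF boxes labeled "c2pa") or the IPTC `DigitalSourceType` value `trainedAlgorithmicMedia`. The following is a minimal illustrative sketch along those lines, not X's actual implementation; the marker list is an assumption:

```python
# Hypothetical sketch: flag a media file as AI-generated if it contains a
# known provenance marker. A naive byte scan, not a real metadata parser.

AI_PROVENANCE_MARKERS = [
    b"c2pa",                      # label of C2PA manifest JUMBF boxes
    b"trainedAlgorithmicMedia",   # IPTC DigitalSourceType for generated media
]

def looks_ai_generated(path: str) -> bool:
    """Return True if the file contains any known AI-provenance marker."""
    with open(path, "rb") as f:
        data = f.read()
    return any(marker in data for marker in AI_PROVENANCE_MARKERS)
```

A real pipeline would parse the container format properly (and verify C2PA signatures) rather than scanning raw bytes, and such metadata is trivially stripped, which is why the rule also leans on Community Notes.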
The rule is motivated by the current military conflicts between the USA, Israel, and Iran. "In times of war, it is crucial that people have access to reliable information on the ground," writes Bier. With today's AI technologies, it is trivial to generate content that can mislead people.
Because the penalty only concerns monetization, however, the vast majority of users are unaffected: anyone outside the monetization program can continue to publish misleading AI videos about the conflict, or any other topic, without consequences.
X is apparently working on expanding the labeling requirement
This is the first time X has explicitly introduced a labeling requirement for AI-generated content. For now, however, it applies only to "armed conflicts," and Bier's post does not define what exactly falls under that term. The new rule is also not yet listed in the official platform guidelines.
Because AI slop has increased sharply on X, the rule could be expanded in the future. An app developer discovered in February, for example, that the platform is working on a user-side labeling requirement for AI-generated content. These plans have not been confirmed, however.
X already automatically watermarks images and videos created with its own AI chatbot Grok. That has not prevented problematic content, however: in recent months, the chatbot has made headlines primarily for sexualized deepfakes generated with it, which prompted the EU to initiate proceedings against X. The case shows how difficult it is for X to effectively control AI-generated content.
(mki)