YouTube: More protection from deepfakes and unauthorized cloned voices planned

YouTube is working on more ways for artists to protect themselves from AI-generated content, such as unauthorized voice clones or deepfakes.

YouTube is working on more control options for artists and content creators to protect against AI copies. According to YouTube, the new technologies are intended to help protect copyrights and personal rights while promoting the "creative potential of AI". Content creators are also to get more say in how third-party companies are allowed to use their content on the platform. Details are to follow later this year.

To provide greater protection for its users' content, YouTube is currently working on a way to recognize cloned voices. The planned function is to be integrated into "Content ID", the system launched in 2007 that creators use to identify and manage their copyrighted content on YouTube. At the creators' request, the content to be protected is stored in a database of audio and image files and compared against newly uploaded videos.
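
YouTube has not disclosed how Content ID matching or the planned voice-detection feature work internally. Purely to illustrate the general idea described above, comparing fingerprints of uploaded audio against a reference database of protected material, here is a minimal Python sketch; the fingerprinting scheme, function names and threshold are all invented for this example and are not YouTube's actual method.

```python
# Hypothetical sketch of database-backed audio matching; not YouTube's implementation.
import hashlib

def fingerprint(samples: list[float], window: int = 4096) -> set[str]:
    """Reduce an audio signal to a set of coarse per-window hashes."""
    prints = set()
    for start in range(0, len(samples) - window + 1, window):
        chunk = samples[start:start + window]
        # Quantize each window so small encoding differences can still match.
        quantized = bytes(min(15, max(0, int((s + 1.0) * 8))) for s in chunk)
        prints.add(hashlib.sha1(quantized).hexdigest())
    return prints

def match_against_database(upload: list[float],
                           reference_db: dict[str, set[str]],
                           threshold: float = 0.3) -> list[str]:
    """Return IDs of protected works the upload overlaps with noticeably."""
    upload_prints = fingerprint(upload)
    claims = []
    for work_id, ref_prints in reference_db.items():
        overlap = len(upload_prints & ref_prints) / max(len(ref_prints), 1)
        if overlap >= threshold:
            claims.append(work_id)  # a claim could then be raised on the upload
    return claims
```

In a real system the fingerprints would be far more robust to pitch shifts, re-encoding and background noise; the sketch only shows the store-and-compare pattern the article describes.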

At the beginning of 2025, the tool is expected to be able to recognize cloned singing voices, initially in a pilot phase. If a match is detected, "a Content ID claim will be made on the matching video", explains YouTube. Measures such as blocking or monetizing the content can then be initiated.

YouTube is also developing a technology that can be used to detect and manage AI-generated deepfakes on YouTube. Back in November 2023, YouTube had already called for stronger action against AI-generated music that clones artists' voices on the basis of existing recordings. The rapid development of generative AI music tools had raised fears among musicians regarding plagiarism and copyright infringement. In an open letter earlier this year, over 200 artists called for the responsible development of AI in order to protect artists' livelihoods.

Another tool in the works at YouTube is designed to recognize deepfakes of faces "from creatives and actors to musicians and athletes" on the platform. The system is still in active development, and YouTube has not yet indicated when it is expected to launch.

YouTube has also announced that it will take tougher action against those who scrape content from the platform to develop AI tools. Accessing creators' content without authorization violates the terms of use. Nevertheless, numerous companies such as Nvidia, OpenAI and Anthropic have trained their AI systems on thousands of scraped YouTube videos. Planned safeguards include blocking scrapers and investing in systems that detect content scraping.
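
YouTube has likewise not detailed how its anti-scraping safeguards work. As a purely hypothetical illustration of what a simple scraping-detection heuristic could look like, the following Python sketch flags clients that fetch an unusually large number of videos per minute; the class, method names and threshold are invented for this example.

```python
# Hypothetical rate-based scraping heuristic; not YouTube's actual detection system.
from collections import defaultdict, deque
import time

class ScrapeDetector:
    """Flags clients whose video-fetch rate looks automated."""

    def __init__(self, max_videos_per_minute: int = 30):
        self.max_rate = max_videos_per_minute
        self.requests = defaultdict(deque)  # client_id -> timestamps of recent fetches

    def record_fetch(self, client_id: str, now: float | None = None) -> bool:
        """Record one video fetch; return True if the client exceeds the rate limit."""
        now = time.time() if now is None else now
        window = self.requests[client_id]
        window.append(now)
        # Keep only fetches from the last 60 seconds.
        while window and now - window[0] > 60:
            window.popleft()
        return len(window) > self.max_rate
```

Production systems would combine many more signals (login state, request headers, IP reputation); the sketch only illustrates the "detect and block" approach the article mentions.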

(mack)

This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.