YouTube's free identity tool to protect celebrities from deepfakes
YouTube is making its deepfake detection more widely available, responding to the increasing prevalence of manipulated videos featuring well-known faces.
Deepfakes of celebrity faces could soon become rarer on YouTube.
(Image: Mijansk786 / Shutterstock.com)
Late last year, YouTube introduced a tool designed to detect likenesses of individuals in uploaded videos, intended to let users protect their identities from unauthorized AI impersonations. After an eighteen-month testing phase, the video platform initially opened the program to well-known YouTube personalities and selected politicians. Now actors, musicians, and athletes are also gaining free access to YouTube's “Likeness Detection” to better protect their digital identities; a dedicated YouTube channel is not required.
System searches for faces and reports matches
As YouTube explained to The Hollywood Reporter, the service is aimed specifically at individuals whose public image has economic value. The video platform is responding to the rapidly evolving capabilities of generative AI. Earlier this year, for example, ByteDance's Seedance 2.0 caused an outcry in Hollywood: the AI video generation model churned out copyrighted material nonstop, prompting cease-and-desist letters from companies like Disney and Paramount Skydance.
YouTube's digital identity protection works similarly to the well-known Content ID system for copyrights. Those who wish to participate upload reference material of their face, such as a national ID card or driver's license. The platform then searches uploaded videos for matches and flags potentially problematic content for review.
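The matching pipeline described above can be illustrated with a minimal sketch: a reference face embedding is compared against embeddings extracted from uploaded video frames, and frames above a similarity threshold are flagged for review. This is an assumption about the general technique, not YouTube's actual implementation; the embeddings here are toy vectors standing in for real face-recognition output.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag_matches(reference, frame_embeddings, threshold=0.9):
    """Return indices of frames whose embedding resembles the reference face."""
    return [i for i, emb in enumerate(frame_embeddings)
            if cosine_similarity(reference, emb) >= threshold]

# Toy embeddings standing in for real face-recognition model output.
ref = np.array([1.0, 0.0, 0.0])
frames = [np.array([0.98, 0.10, 0.00]),   # close to the reference face
          np.array([0.00, 1.00, 0.00]),   # a different face
          np.array([1.00, 0.05, 0.02])]   # close to the reference face
print(flag_matches(ref, frames))  # → [0, 2]
```

In a real system, the flagged indices would feed a human review queue rather than trigger automatic removal, which matches the review step the article describes.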
According to YouTube, affected individuals can then decide whether to tolerate a video or request its removal. This does not apply universally, however: parodies and satirical content are intended to remain online as long as they do not violate community guidelines. The system is aimed primarily at deceptive replicas that can cause economic or personal harm. For now, no feature is planned that would give those affected a share of the revenue generated by reported content.
(joe)