YouTube: Deepfake protection for politicians and journalists
YouTube is giving creators, politicians, and journalists a tool to find and report deepfakes of themselves.
(Image: Claudio Divizia/Shutterstock.com)
"Likeness detection" is a tool to find similarities, as the name suggests. Initially, however, it only refers to clear similarities to well-known personalities. These should be able to find and report deepfakes of themselves on YouTube.
The tool works similarly to Content ID, which lets rights holders search the platform for content matching their own; here, the focus is on faces. Not everyone can start such a search for deepfakes of themselves, however. Creators from the YouTube Partner Program were the first to test "likeness detection" and have been doing so for several months. Now politicians and journalists can also register for the test phase, provided they verify their identity.
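YouTube has not published how likeness detection works internally. As a rough illustration only, systems of this kind typically compare face embeddings (numeric vectors produced by a trained face-recognition model) against a reference and flag candidates above a similarity threshold. The sketch below assumes such embeddings already exist; the vectors, function names, and threshold are hypothetical, not YouTube's actual method:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def flag_matches(reference, candidates, threshold=0.9):
    """Return indices of candidate embeddings close enough to the
    reference to warrant human review (hypothetical threshold)."""
    return [i for i, cand in enumerate(candidates)
            if cosine_similarity(reference, cand) >= threshold]

# Toy embeddings; real systems use high-dimensional model outputs.
reference = [0.1, 0.8, 0.3]
candidates = [[0.1, 0.79, 0.31],   # near-duplicate of the reference
              [0.9, 0.1, 0.0]]     # unrelated face
print(flag_matches(reference, candidates))  # → [0]
```

Note that a match here would only trigger a report and review, not an automatic takedown, consistent with YouTube's described process.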
Affected individuals can then report deepfakes they find on the platform; videos that violate YouTube's guidelines will be deleted. Not every use of someone else's face automatically constitutes a violation, however. Satirical content, for example, may be permitted. "YouTube has a long tradition of protecting free expression and content of public interest - including parodies and satire, even when used to criticize heads of state or government or influential figures," the blog post says. Takedown requests are reviewed very carefully.
AI imitations of people that are used to influence or even manipulate others are not permitted on the platform. Naturally, this works particularly well with well-known and potentially powerful figures, such as politicians.
Deepfakes necessitate legal changes
In the blog post, YouTube also reaffirms its strong support for the so-called "No Fakes Act," a US bill intended to regulate protection against the unauthorized use of a person's voice or likeness. The bill has bipartisan support.
In Germany, there are also efforts to adapt the law. The focus is on pornographic deepfakes, which primarily depict women and children. Federal Minister of Justice Stefanie Hubig wants to tighten criminal law, adapt it to digital image manipulation, and make it easier for victims to report cases. A draft bill is expected in the spring.
So far, there is no law dealing specifically with deepfakes. At the European level, the Digital Services Act (DSA) stipulates, for example, that deepfakes must be labeled if the content could mistakenly be taken as genuine. That obligation applies to providers of AI tools and platforms, but arguably not to the individuals who create the content.
Affected individuals can invoke their personality rights. Claims for injunctive relief or damages are asserted under the German Civil Code (BĂĽrgerliches Gesetzbuch).
(emw)