As advanced AI-generation tools such as deepfakes become increasingly powerful and ubiquitous, concerns about their potential misuse are escalating rapidly. Celebrities and ordinary users alike risk having their likenesses tampered with or imitated.
To this end, YouTube announced last year that it would launch such a feature, and it has now officially begun rolling out a new "likeness detection tool" to some creators, aiming to combat unauthorized deepfake content on the platform and allow creators to request that such videos be removed.
Initially available only to YouTube Partner Program users
In the initial stage of the rollout, this feature will only be available to members of the YouTube Partner Program (YPP). The strategy makes sense: creators who qualify for monetization tend to have higher fame and exposure, making their facial features more likely targets for deepfake imitation.
You need to submit your ID and selfie video to build a comparison model
According to YouTube, creators must first go through a verification process to enable this feature. Specifically, users need to submit a "government ID" and a "short video selfie" to YouTube.
This requirement serves a dual purpose: first, to verify that applicants really are who they claim to be and prevent the tool from being abused; second, to give the AI system enough source material to build an accurate facial feature model for subsequent scanning and comparison.
It works similarly to Content ID, but does not yet support AI voice matching.
Once verified, the system will operate much like YouTube's existing Content ID copyright detection system, automatically scanning newly uploaded videos for AI-modified content that matches the creator's facial features.
When the system detects a possible match, it notifies the creator, who can review the matches themselves and, if they confirm infringement, flag the video and request YouTube remove it.
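YouTube has not disclosed how its matching actually works, but detection systems of this kind commonly compare face embeddings against a reference model and surface anything above a similarity threshold for human review. The sketch below is purely illustrative, under that assumption; every name (`flag_possible_matches`, the toy 3-dimensional "embeddings") is hypothetical, and real face embeddings would have hundreds of dimensions and come from a trained model.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def flag_possible_matches(reference_embedding, upload_embeddings, threshold=0.85):
    """Return IDs of uploads whose face embedding is close enough to the
    creator's reference embedding to warrant a human review."""
    return [
        video_id
        for video_id, emb in upload_embeddings.items()
        if cosine_similarity(reference_embedding, emb) >= threshold
    ]

# Toy example: 3-dimensional vectors standing in for real face embeddings.
reference = [0.9, 0.1, 0.4]
uploads = {
    "video_A": [0.88, 0.12, 0.41],  # near-duplicate of the reference face
    "video_B": [0.10, 0.90, 0.20],  # a clearly different face
}
print(flag_possible_matches(reference, uploads))  # ['video_A']
```

Note that in this design the threshold only produces candidates; as the article describes, the final decision to flag a video and request removal stays with the creator, not the automated system.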
However, the feature currently covers only cases where a person's face has been altered by AI. When the voice has been cloned but the image remains unchanged, the system cannot yet reliably detect it, meaning AI-powered voice impersonation remains an open challenge for YouTube.