YouTube recently announced a new rule requiring creators to disclose whether their videos contain content generated by artificial intelligence, aimed at countering fake videos, deepfakes, and audio tracks that infringe on artists' copyrights. The decision was made in an effort to curb the spread of falsified content and to ensure that viewers have a clear understanding of a video's authenticity.
YouTube's wording is as follows:
We will require creators to disclose when they have created realistic altered or synthetic content, including through the use of AI tools. When creators upload content, we will provide new options for them to indicate that it contains realistic altered or synthetic material. For example, this could be an AI-generated video that realistically depicts an event that never happened, or content showing someone saying or doing something they didn't actually do.
Under the new rules, creators will see new options when uploading videos to indicate whether the video contains realistic synthetic material, such as content generated with AI tools. Specifically, if a video contains realistic synthetic content, creators will need to add a label to the video description indicating that the content has been altered or digitally generated. Creators must also specifically label videos that deal with sensitive topics, such as elections, ongoing conflicts, violence, public health issues, or well-known public figures.
Creators who fail to comply could face a variety of penalties, including, but not limited to, content removal and suspension from the YouTube Partner Program. In a statement, YouTube VPs of Product Jennifer Flannery O'Connor and Emily Moxley warned that those who persistently fail to label their content could face serious sanctions.
YouTube says it has tens of thousands of human reviewers around the world who help identify potentially violating content, working in combination with AI technology. This includes using generative AI to train and improve the classifiers reviewers rely on, so they can better spot undisclosed synthetic material.
In an effort to balance content creation and moderation, YouTube is encouraging creators to disclose transparently whether their videos have been digitally altered. The features supporting this new rule will be rolled out progressively over the coming months and into 2024, to maintain transparency as the volume of AI-generated content in the media continues to grow.