YouTube unveils AI-generated labelling tool to address misinformation

YouTube has launched a new tool within its Creator Studio that requires creators to disclose when they upload AI-generated content.

YouTube creators will now be required to check a box when the content of their upload “is altered or synthetic and seems real”, to avoid misinformation.

When the box is checked, a marker will appear on the video clip, highlighting to viewers that the footage is not real.

In a statement, YouTube said: “The new label is meant to strengthen transparency with viewers and build trust between creators and their audience. Some examples of content that require disclosure include using the likeness of a realistic person, altering footage of real events or places, and generating realistic scenes.”

However, the video platform added that not all AI use will require disclosure.

It said: “We’re not requiring creators to disclose content that is clearly unrealistic, animated, includes special effects, or has used generative AI for production assistance.”

The news comes as the platform announced its approach to responsible AI innovation in November last year, which included disclosure requirements and labels on all AI products and features, alongside an updated privacy request process.

“Creators are the heart of YouTube, and they’ll continue to play an incredibly important role in helping their audience understand, embrace, and adapt to the world of generative AI,” the statement continued.

“This will be an ever-evolving process, and we at YouTube will continue to improve as we learn. We hope that this increased transparency will help all of us better appreciate the ways AI continues to empower human creativity.”
