YouTube says creators will have to declare when they use AI in their content

  • YouTube has put the responsibility of declaring when content features AI-generated visuals into the hands of creators themselves.
  • Creators won’t need to declare content is made with AI if it meets a small set of exceptions.
  • The platform says it will label content that regularly uses AI but doesn’t declare it.

With the release of OpenAI’s Sora, which can generate video from text prompts, our eyes are no longer as trustworthy as we once believed them to be. Until now, believable AI-generated video has been a pipe dream, but as AI models improve, so too does their ability to fool us.

To that end, YouTube has outlined how it plans to tackle AI-generated content on its platform, and the plan is poor.

“Generative AI is transforming the ways creators express themselves – from storyboarding ideas to experimenting with tools that enhance the creative process. But viewers increasingly want more transparency about whether the content they’re seeing is altered or synthetic. That’s why today we’re introducing a new tool in Creator Studio requiring creators to disclose to viewers when realistic content – content a viewer could easily mistake for a real person, place, or event – is made with altered or synthetic media, including generative AI,” the YouTube team wrote in a blog on Monday.

Creators won’t have to declare that AI has been used if the content is clearly “unrealistic, animated, includes special effects, or has used generative AI for production assistance”.

But self-classification seems like the recycling craze of the 90s. Big companies, including YouTube’s fellow Alphabet subsidiary Google, have spent billions developing AI, only to now put the responsibility of declaring when this technology is used into the hands of their users.

While we’re sure that some folks will declare when they’ve used AI, we don’t expect bad actors to do so, which is the problem here. YouTube boasts over 800 million users and the potential for spreading misinformation on the platform is massive. Hell, it’s already happening.

One doesn’t need to scroll through YouTube, or any other social media platform really, for very long to find a deepfake of a big celebrity selling a fake app, a cryptocurrency or some other scam.

While stilted speech and visuals that have permanent citizenship in the uncanny valley may tip off some folks, many more will be conned by just how believable these videos can be.

YouTube’s policy then is pathetic at best, given that the dangerous folks using AI simply won’t label their content appropriately. Worse, beyond the reliance on self-reporting, the labels indicating that content is AI-generated will mostly live in a video’s description, which viewers must expand to see.

Did somebody at YouTube cobble this policy together five minutes before it was launched? This is awful and below the bare minimum we expect from a platform of this size.

YouTube says that it will enforce this policy but how effective that will be is unclear.

“In some cases, YouTube may add a label even when a creator hasn’t disclosed it, especially if the altered or synthetic content has the potential to confuse or mislead people,” the platform said.

This makes it seem as if YouTube has measures in place to detect AI-generated content, but it could also simply mean that YouTube will respond to reports that content isn’t real.

While we understand that YouTube isn’t perfect, this policy is terrible and we expected better, especially in an election year.

Of course, these types of policies tend to be iterated upon and adjusted over time, so we live in hope that YouTube will amend this one.

We’d love to have seen a system similar to YouTube’s copyright check, which scans a video for infringing content before it’s published. Perhaps that is asking a bit much of YouTube, given how much content is uploaded daily and how much compute power AI detection would require.

For now then, don’t trust your eyes, and be sure to check video descriptions for declarations from creators that AI was used in their content.
