
YouTube is figuring out that humans, not algorithms, are key in the misinformation fight

One of YouTube’s biggest strengths is also its biggest weakness – its almighty algorithm.

While the algorithm is very good at keeping users on YouTube fervently consuming content, it is far less adept at moderating that content.

Over time, and given enough data, YouTube’s moderation tools can learn which content needs to be addressed, but of late misinformation has been moving faster than those tools can learn.

“For a number of years, the misinformation landscape online was dominated by a few main narratives – think 9/11 truthers, moon landing conspiracy theorists, and flat earthers. These long-standing conspiracy theories built up an archive of content. As a result, we were able to train our machine learning systems to reduce recommendations of those videos and other similar ones based on patterns in that type of content,” explains YouTube’s chief product officer, Neal Mohan.

“But increasingly, a completely new narrative can quickly crop up and gain views. Or, narratives can slide from one topic to another—for example, some general wellness content can lead to vaccine hesitancy. Each narrative can also look and propagate differently, and at times, even be hyperlocal,” Mohan adds.
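
To make the approach Mohan describes a little more concrete, below is a minimal, hypothetical sketch of how an archive of labelled narratives could be used to train a text classifier whose score demotes, rather than removes, borderline recommendations. The example titles, threshold, and scikit-learn pipeline are our own illustrative assumptions and not YouTube’s actual system.

# Toy illustration (not YouTube's system): train a text classifier on an
# archive of known misinformation narratives, then use its score to demote
# rather than remove borderline videos in a ranking step.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled archive: 1 = known conspiracy narrative, 0 = benign.
titles = [
    "The moon landing was staged in a studio",
    "Proof the earth is flat",
    "How to bake sourdough bread at home",
    "Beginner yoga routine for back pain",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(titles, labels)

def adjusted_rank_score(title: str, engagement_score: float) -> float:
    """Demote a candidate video's ranking score when the classifier
    flags it as likely borderline content (threshold is an assumption)."""
    p_borderline = model.predict_proba([title])[0][1]
    demotion = 0.1 if p_borderline > 0.5 else 1.0
    return engagement_score * demotion

print(adjusted_rank_score("New proof the earth is flat", engagement_score=0.9))

The weakness Mohan points to follows directly from this setup: a brand-new or hyperlocal narrative has no archive of labelled examples, so a classifier trained this way has nothing to recognise until the content has already spread.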

The CPO goes on to say that YouTube tries to connect viewers to authoritative videos where it can, but sometimes there just isn’t an authoritative source.

Perhaps the more concerning problem for YouTube is what it calls borderline content: content that skirts the line between what is acceptable and what isn’t. The trouble is that while YouTube can lower recommendations on its own platform, that doesn’t stop the video from being recommended on other platforms.

One potential solution to this problem that Mohan highlights is an interstitial. YouTube already uses these to warn viewers that the content they are about to watch is violent or graphic. These could work to warn users that they may be about to watch misinformation, but again, how does YouTube decide what is and what isn’t misinformation?

This question is particularly important outside of the US because, as Mohan points out, state broadcasters aren’t always the bastion of truth they are seen to be in the US and UK.

“Countries also show a range of content within their news and information ecosystem, from outlets that demand strict fact-checking standards to those with little oversight or verification. And political environments, historical contexts, and breaking news events can lead to hyperlocal misinformation narratives that don’t appear anywhere else in the world,” says Mohan.

“Beyond growing our teams with even more people who understand the regional nuances entwined with misinformation, we’re exploring further investments in partnerships with experts and non-governmental organizations around the world. Also, similar to our approach with new viral topics, we’re working on ways to update models more often in order to catch hyperlocal misinformation, with capability to support local languages,” adds the CPO.

We are cautiously optimistic about YouTube growing its teams. As many social media platforms have learned, you can’t be a global company without a global team. As an example, while South Africa makes uttering the K-word an offense, one doesn’t have to look far to see it being flung around on the likes of TikTok and Twitter. The word can be used without consequence on those platforms because they don’t understand its context or indeed its meaning.

At the very least, YouTube should be drawing on talent from around the world to help hone its models and misinformation solutions. Moderation teams are also important, though we understand this is a costly undertaking. With that said, YouTube’s reliance on machines created and trained by folks from one area of the planet to police its halls must come to an end.

[Image – CC 0 Pixabay]
