Pandora’s Box is open, DeepFake detection is vital

Have you seen how good AI has become of late? Seriously, it’s terrifying how easy it is to create a clone of just about anybody, especially somebody constantly in the public eye. But you can quite easily clone yourself or somebody you know with just a few easily accessible tools.

There is no denying that this is cool, but it’s also terrifying when you consider that anybody, even those with bad intentions, can use this technology.

That means that bad actors in South Africa can use it as well, and Boland Lithebe, security lead for Accenture Africa, warns that local organisations need to invest in deepfake and other AI detection tools.

“In South Africa, where misinformation already spreads rapidly on social media, deepfakes could amplify political unrest, financial fraud, and reputational damage. The ease with which fake videos can be created makes it difficult for the public to distinguish between real and false information, further eroding trust in institutions and media,” Lithebe warns.

A study into how effective people are at identifying AI-generated people revealed that we’re terrible at it most of the time. When asked to identify whether an image was real or generated by AI, participants in a University of Waterloo study only picked out the AI image 61 percent of the time. Even that figure flatters us, because the truth of the matter is that AI is only going to get better as time goes on, as it is fed more data, and as it is asked to do new things.

This should be a five-alarm fire in a country like South Africa, where digital literacy is incredibly low and many don’t believe AI is capable of what it can already do.

Most concerning is the fact that Africa is a booming market for disinformation campaigns. The Africa Center for Strategic Studies reports that such campaigns have grown four-fold since 2022 and show no signs of slowing, and no country is immune to these attacks.

“A well-executed fake video of a bank CEO announcing financial distress could trigger panic and economic instability. Imagine a fake video showing a business leader or celebrity making offensive statements. By the time the truth is revealed, the damage is already done. Businesses must prepare for this new wave of digital threats to their reputation,” says Lithebe.

We use the AI to fight the AI

So how do you detect AI in a video or image? Quite simply, you use AI.

This isn’t a perfect solution and AI-powered AI detection can still be hit or miss. However, a super-powered tool is likely to do a better job than a human most of the time and it may spot things you don’t.
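To give a rough sense of the kind of signal an automated detector might weigh up, here is a toy sketch that measures how much of an image’s energy sits in high spatial frequencies, where generated images sometimes leave artefacts. This is purely illustrative: real detection tools learn far subtler cues from large training sets rather than hard-coding a single heuristic like this one.

```python
import numpy as np

def high_freq_energy_ratio(image: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of an image's spectral energy above a radial frequency cutoff.

    Takes a 2D greyscale image; an unusually high ratio *might* hint at
    synthetic artefacts, though this alone proves nothing.
    """
    # 2D power spectrum, shifted so the zero frequency sits at the centre
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    # Normalised radial distance of each frequency bin from the centre
    radius = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)
    return float(spectrum[radius > cutoff].sum() / spectrum.sum())

# A smooth gradient has little high-frequency energy; noise has a lot
smooth = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
noisy = np.random.default_rng(0).random((64, 64))
print(high_freq_energy_ratio(smooth), high_freq_energy_ratio(noisy))
```

A production system would replace this hand-rolled measure with a trained classifier, but the principle is the same: machines can quantify statistical oddities in pixels and audio that a human eyeballing a video would never notice.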

“Deepfakes are here to stay, but South Africa can take steps to mitigate their impact. Companies and media houses must use AI-powered verification tools to analyse videos and detect manipulations before they spread. The public must be educated on how to identify deepfakes. Schools, businesses, and government agencies should run awareness campaigns to help people critically assess digital content,” Accenture’s security lead says.

This is easier said than done, though. All too often we see folks sharing obvious AI images on social media believing they’re real. Hell, you can see it in action today as people fall for April Fool’s pranks on a day when pranks are expected.

This doesn’t mean we shouldn’t try to educate people though.

There are some signs that content may be AI-generated. One is inconsistency: while the real world is messy and inexact, AI interprets that messiness oddly, and you may be able to spot extra limbs or fingers in a deepfake. Faked voices can also give themselves away, as the cadence may be unnatural or words mispronounced.

Anna Collard, SVP Content Strategy & Evangelist at KnowBe4 Africa, suggests that fighting AI-powered disinformation needs to focus on giving people the tools to identify it. This starts, unfortunately, with distrust. Sadly, we can no longer trust our eyes or our ears because anything could be faked. Just recently, US Vice President JD Vance had to fend off allegations that arose on the back of a deepfake audio clip.

“The battle against disinformation isn’t just a technical one – it’s psychological. In a world where anything can be faked, the ability to pause, think clearly, and question intelligently is a vital layer of security. Truth has become a moving target. In this new era, clarity is a skill that we need to hone,” says Collard.

The toothpaste is out of the tube and we now need to do our best to protect ourselves from falling prey to AI-powered misinformation.

[Image – Tom from Pixabay]
