Deepfakes are only going to get more convincing

Earlier this year, Google and OpenAI signed a statement calling for caution when developing artificial intelligence (AI) platforms. Admirable as the gesture was, the single-sentence statement deals with humanity's potential extinction at the hands of AI and doesn't address the problems the technology presents right now, such as deepfakes.

Talk of a Terminator-like uprising of the machines may stoke some fears, but right now AI poses a more immediate threat to jobs, artists, writers, and even ordinary folks scrolling through TikTok.

This week, James Donaldson, better known as MrBeast, warned his followers about a deepfake advert some internet denizens had been served.

There are a number of giveaways that the advert is fake, aside from the stilted speech. For one, the hands – long a tricky area for AI to recreate – feature fingers that disappear and reappear. The facial expressions are firmly planted in the uncanny valley, and then there's the audio glitch at the end.

Of course, you can only spot these things if you know what to look for. In a world where groups like QAnon can thrive and gain traction, it's not a stretch to see some people falling for the deepfake video above and becoming victims of cybercrime.

But the fact of the matter is that deepfake technology is only improving as machine learning models grow more capable and have more data to train on.

Trend Micro reported in 2020 that a UK energy firm was tricked into transferring $260,000 to an attacker who used deepfake audio technology to impersonate the firm's CEO.

“Because of the potential malicious use of AI-powered deepfakes, it is therefore imperative for people to understand how realistic these can seem and just how they can be used maliciously. Ironically, deepfakes can be a useful tool for educating people on their possible misuses,” the firm wrote.

That education, however, is the tricky part given the pace at which AI platforms are improving.

While the creators of these platforms, such as OpenAI, try to build in guardrails, even OpenAI admits that filtering out toxic data and responses is tricky. What's more, cybercriminals are creating their own malicious AI platforms, such as WormGPT, which will do, well, anything you want.

We're already seeing Meta launch personal assistants that use the likenesses of celebrities, and you can bet your bottom dollar that bad actors will inevitably abuse this technology. One of those chatbots is, ironically, MrBeast.

Tricking AI into doing what you want is rather simple, even with guardrails in place. Granted, the image it generated looks nothing like MrBeast, but the fact that it complied after initially refusing is concerning.

The trouble, as we see it, is that there is little to no reason for the likes of OpenAI or Google to slow the development of their AI platforms. The market is unregulated, and lawmakers are slow to take action when it comes to anything with a binary heartbeat. Instead, innovation is being driven by the pursuit of profit and Silicon Valley's insatiable hunger for infinite growth.

According to Bloomberg Intelligence, the AI market will be worth $1.3 trillion by 2032. The good news is that one of the biggest growth areas, per that research, is AI-based cybersecurity where spending is expected to grow 109 percent between now and 2032.

But even then, regular people who aren't extremely online are the most at risk because, quite frankly, they aren't seeing how AI is evolving and how quickly. Less than a year ago we were in awe of ChatGPT's ability to write full essays; now AI is singing songs, and AI-generated art is everywhere, including our header image.

As it stands, we're all going to have to exercise a lot more skepticism when it comes to things we see online, especially content featuring well-known faces.
