SA business not immune to AI impersonation

  • Luno was recently the target of a deepfake scam.
  • Thankfully red flags were raised before any damage was done.
  • Unfortunately, the incident shows that AI-driven cybercrime has now reached South Africa.

Recently, a Luno staff member was targeted in a scam that leveraged deepfake audio. Thankfully, the staff member was able to spot the scam before it spun out of control.

While that is good news, the bad news is that South African organisations are now being targeted by deepfake attacks. In these attacks, cybercriminals use generative artificial intelligence to imitate a person's voice or appearance in order to trick the target into giving up sensitive information.

The red flag the employee spotted related to the nature of the request made in the deepfake audio. Thankfully, for now, there are ways to spot deepfakes, although audio deepfakes are harder to assess.

When watching a video, look for unnatural facial expressions and odd morphing. AI still struggles to emulate subtle cues such as blinking, although it is improving quickly, and before long deepfake video will be as convincing as the real thing.

When it comes to audio, listen carefully for a lack of emotion and tonal variation. An absence of background noise can also give deepfake audio away, but it isn't a foolproof tell.

“Fraud is a concern for crypto service providers, as well as across the financial services industry in South Africa. To stay safe, remember that if it’s too good to be true, it probably is. Be extra careful (even paranoid) and verify the person or organisation you are dealing with by doing extensive online research and asking them specific questions about their business. Fraudsters often rush you into making hasty decisions. They are very persuasive! Take five minutes to think things through. Don’t be afraid to terminate a conversation and walk away,” advises Johan Hetzel, global head of compliance and anti-financial crime at Luno.

The big problem on the horizon is that AI is only getting better, and eventually telling fact from fiction will be nigh impossible. We will have to rely on tools that can detect deepfakes, but even those aren't reliable yet and can often only flag content created with a specific platform.

For now, keep your wits about you and never take anything at face value.
