Cybersecurity expert shares her concerns about how AI is being used

A few years ago, if you mentioned artificial intelligence (AI), the response would more than likely involve fears of AI taking over nuclear weapons and the like.

These days, AI has become something most of us use every day without even thinking about it, whether it's AI enhancing your photos or AI silently helping you navigate morning traffic.

Unfortunately, AI is also being used in ways it shouldn't be, and this isn't only applicable to cybercrime. Make no mistake, cybercriminals are using AI, but there are other areas of the technology that need addressing, particularly bias.

We’ve spoken about bias and how it affects tech development before. The crux of the matter is that no one person from one sphere of life can understand what billions of users want or need.

Bias is a major problem and it needs addressing fast, according to Anna Collard, senior vice president of content strategy at KnowBe4 Africa.

“According to a UNESCO report, only 12 percent of artificial intelligence researchers and six percent of software developers are women. Women of color are even less represented. The field is predominantly white, Asian and male. These white middle-class men simply cannot be aware of the needs of all of humanity. And the tech that they develop is inevitably biased towards white middle-class men,” writes Collard.

You can find UNESCO's 2019 report, I'd Blush if I Could, here.

This bias can often seem small, but when applied to machine learning and AI it can have major consequences. To solve this, emphasis must be put on greater representation of women, people of colour and non-binary people in positions relevant to AI development. This includes having a diverse set of people leading policy development around AI. The best way to prevent bias in AI systems is to make sure that bias never makes it into the applications or models in the first place.

Your boss wrote this sub-head, no really

Okay, so maybe your boss didn't write that sub-head, but AI is being used in a concerning sort of fakery: deepfakes.

“Deep fakes have a great future in the film industry, for instance in reproducing a shot without the actual actor having to be flown in every time. Or in the medical field, recreating someone’s voice if they lost it,” explains Collard.

“But deep fakes can also be used for more nefarious goals. In 2020, the FBI already warned about a combination of deep fakes as an addition to the highly successful social engineering attack form called Business Email Compromise. Effectively leveraging AI to add credibility to an attack by creating a deep fake audio message impersonating a legitimate requestor, such as the CEO of a company authorising a fraudulent money transfer,” the SVP adds.

Deepfakes are one of the more concerning forms of cyber threat because the more they are used, the better the models get and the more convincing the fakery becomes.

Even more concerning is the fact that, with all of the virtual briefings and events taking place, there are multiple sets of data for cybercriminals to draw on when creating a deepfake. Worse, those spreading misinformation and disinformation can use deepfakes to fake a politician's voice and make it seem as if they said something they didn't.

“These clips may then have a powerful and damaging impact when making the rounds on WhatsApp, Telegram or other chat apps. These platforms lend themselves to spread disinformation because they are not easily monitored and people are used to trusting voice notes from their groups. This means that the political views of potentially millions of voters could be negatively influenced. South Africa’s government stepped in to try to stop the spread of misinformation by introducing legislation that made spreading fake information a prosecutable offence, but how they will enforce this remains to be seen,” says Collard.

This also extends to businesses, where the voice of the CEO or another high-ranking official could be faked to authorise fraudulent transactions.

While AI and machine learning can be hugely beneficial technologies, there are clear and present dangers regarding their application in the real world.

The good news is that the fight against deepfakes is underway, but we're far from having tangible defences. Unfortunately, that means having to keep your wits about you online and questioning whether shocking news is indeed legitimate.

We've entered a murky area of technology and, unfortunately, those with ill intentions are taking advantage of the situation.

[Image – CC 0 Pixabay]
