
AI being used to plug security gap as AI’s use in cybercrime rises

  • While ChatGPT may be fun to use, generative AI is also being used by cybercriminals.
  • Last week Meta highlighted how ChatGPT is being used as a lure to download malware.
  • However, cybersecurity teams could also use AI to assist in guarding the border.

The battle is on to see who can create the most widely used artificial intelligence platform. That’s a sentence we never thought we’d type, but with the likes of Google and Microsoft vying for more users on their generative AI platforms, it’s our current reality.

Whether you’re on the AI boat or you see it as the next fad after the “metaverse” and Web3 failed to keep our attention, the technology is making a real impact on the world and folks are using the tech for everything from university work to cooking dinner. Unfortunately, the fervour with which the technology has been adopted has caught the eye of ne’er-do-wells.

Last week, Meta’s chief information security officer, Guy Rosen, detailed how the firm’s analysts had spotted AI being used as a lure to download malware.

“Since March alone, our security analysts have found around 10 malware families posing as ChatGPT and similar tools to compromise accounts across the internet. For example, we’ve seen threat actors create malicious browser extensions available in official web stores that claim to offer ChatGPT-related tools. In fact, some of these malicious extensions did include working ChatGPT functionality alongside the malware. This was likely to avoid suspicion from the stores and from users,” wrote Rosen.

Much like the days of Limewire, cybercriminals have found a way to bypass the need for phishing or complex threat campaigns, because folks will seemingly download anything ChatGPT- or Bard-related.

While Rosen says that Meta’s teams have blocked 1 000 malicious URLs peddling this malware from being shared on its platforms, Meta is just one company. In addition, cybercriminals are a wily bunch, and threats can evolve faster than security teams can.

However, AI could be of assistance here as well.

Speaking to Axios, the field CTO of applied research at Sophos, Chester Wisniewski, said that AI could assist in the cybersecurity field as well.

“We can train [generative AI] to do some pretty incredible things to enable less-skilled security practitioners to up their game in being able to analyze data more quickly, more accurately,” said Wisniewski.

This is important as there is a dearth of cybersecurity skills. Last year Tata Consultancy Services published a report from a study it conducted into the state of cybersecurity.

The report revealed that a lack of cybersecurity professionals was one of the biggest threats facing businesses in the future.

“As businesses look to keep up with rapidly evolving complexities in cybersecurity, the talent gap is widening. Demonstrating a serious commitment to cybersecurity by sustained attention from senior leadership, funding, and process changes will be vital to recruiting and retaining top talent,” said Bob Scalise, managing partner for Risk and Cyber Strategy, TCS.

This also requires something of a balancing act, as AI could eventually replace humans if it gets good enough. That is a big if: AI in its current ChatGPT-esque form is simply regurgitating existing information in a contextual manner, but with enough data and training, it could eventually get good enough to replace humans.

For now though, AI may just be the helping hand cybersecurity professionals need to guard the door as AI storms it.
