How AI is changing security and what you can do about it

  • AI platforms like ChatGPT may be proving useful for all manner of tasks, but they are also empowering cybercriminals.
  • ESET Southern Africa has outlined how cybercriminals are making use of AI to enhance their attacks.
  • There are some ways that businesses can guard against these attacks.

From ideas for recipes to apology letters for significant others, ChatGPT is the latest toy the world is obsessed with. However, while it may be quirky and fun, the platform is also being used by cybercriminals for more nefarious purposes.

Despite the benefits ChatGPT may hold for businesses, those same businesses need to be cognisant of how its capabilities can undermine their digital security.

“ChatGPT raises concerns due to its natural language processing capabilities, which could be used to create highly personalised and sophisticated cyberattacks,” explains sales and marketing director at ESET Southern Africa, Steve Flynn.


Given how easy it is to access ChatGPT, there is potential for more sophisticated cyberattacks to be launched that are harder to detect.

One way AI can be used in cybercrime is to automate spear-phishing attacks. Not only can these attacks be launched at scale, they can also be more convincing, given the vast amount of data the AI can draw on.

ChatGPT could also be deployed in social engineering attacks through fake social media accounts or chatbots that deceive users by mimicking human behaviour. Taking this a step further, AI can be used to spread misinformation and propaganda on a massive scale.

Most concerning is the ability of AI to enhance the development of malware.

While it sounds dire, there are ways businesses can protect themselves.

“However, as with any other tool, the use (or misuse) depends on the hand that wields it. Organisations like OpenAI are visibly committed to ensuring their technology is used ethically and responsibly and have implemented safeguards to prevent misuse. Businesses can do the same. To protect their digital assets and people from harm, it is essential to implement strong cybersecurity measures, and to develop ethical frameworks and regulations to ensure that AI is used for positive purposes and not for malicious activities,” writes Flynn.

Eight ways to guard against AI-enhanced attacks

Good security practices can go a long way to help protect against cyberattacks, AI-enhanced or otherwise.

These include establishing a strong culture of cybersecurity within an organisation and among employees.

Some ways that businesses can batten down the hatches, as outlined by ESET Southern Africa, include:

  1. The implementation of Multi-Factor Authentication (MFA): MFA adds an extra layer of security, requiring users to provide multiple forms of identification to access their accounts. This can help prevent unauthorised access, even if an attacker has compromised a user’s password (a minimal TOTP sketch appears after this list).
  2. Educating users about security dos and don’ts: Continuous awareness training about cybersecurity best practices, such as avoiding suspicious links, updating software regularly, and being wary of unsolicited emails or messages, can help prevent people from falling victim to cyberattacks.
  3. Leveraging advanced machine learning algorithms: Machine learning can be used to detect and prevent attacks that leverage OpenAI tools such as ChatGPT, identifying patterns and anomalies that traditional security measures might miss (a minimal anomaly-detection sketch appears after this list).
  4. Implementing Network Segmentation: Network segmentation involves dividing a network into smaller, isolated segments, which can help isolate the spread of an attack if one segment is compromised.
  5. Developing ethical frameworks for the use of AI: Developing ethical frameworks and regulations can help ensure that ChatGPT is used for positive purposes and not for malicious activities.
  6. Increasing monitoring and analysis of data: Regular monitoring and analysis of data can help identify potential cybersecurity threats early and prevent attacks from unfolding.
  7. Establishing automated response systems: Automated detection and response can contain attacks quickly, minimising damage (an illustrative sketch appears after this list).
  8. Updating security software regularly: Ensuring that security software is up to date can help protect against the latest cybersecurity threats.
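
To make the MFA recommendation concrete, here is a minimal sketch of time-based one-time passwords (TOTP), the mechanism behind most authenticator apps, using the open-source pyotp library in Python. The account name, issuer and the way the secret is handled are placeholders for illustration, not a description of any specific product.

```python
import pyotp

# Generate a per-user secret once at enrolment; in production it would be
# stored encrypted server-side and shared with the user's authenticator app.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Provisioning URI the user scans as a QR code (names are placeholders).
print(totp.provisioning_uri(name="user@example.com", issuer_name="ExampleCorp"))

# At login, the code the user submits is checked alongside their password.
submitted_code = totp.now()  # stand-in for the code typed in by the user
if totp.verify(submitted_code, valid_window=1):
    print("Second factor accepted")
else:
    print("Second factor rejected")
```

Even a second factor this simple means a stolen password alone is no longer enough to take over an account.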
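For the machine-learning recommendation, the sketch below trains scikit-learn's IsolationForest on a handful of toy login records and flags events that deviate from the learned baseline. The features, the sample data and the contamination rate are assumptions chosen for the example.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy login records: [hour_of_day, failed_attempts, megabytes_transferred].
# In practice these would come from authentication and network logs.
normal_activity = np.array([
    [9, 0, 12], [10, 1, 8], [14, 0, 20], [16, 0, 15], [11, 0, 10],
    [13, 1, 18], [15, 0, 9], [9, 0, 11], [10, 0, 14], [17, 1, 16],
])

# Train on what "normal" looks like; contamination is the expected
# fraction of outliers and is an assumption here.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(normal_activity)

# Score new events: -1 means the model considers the event anomalous.
new_events = np.array([
    [10, 0, 13],   # ordinary working-hours login
    [3, 7, 950],   # 3 a.m., repeated failures, large transfer
])
for event, label in zip(new_events, model.predict(new_events)):
    status = "anomalous - investigate" if label == -1 else "normal"
    print(event, "->", status)
```

In production the same idea would run over real authentication and network telemetry, with anomalous events routed to analysts or to an automated response.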
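Monitoring and automated response (points six and seven) go hand in hand. The sketch below is purely illustrative: it assumes failed-login counts have already been aggregated from logs, and the threshold and blocklist file are invented for the example; a real deployment would feed a firewall, EDR or SOAR tool rather than a text file.

```python
from collections import Counter
from datetime import datetime

FAILED_LOGIN_THRESHOLD = 10       # assumption chosen for the example
BLOCKLIST_FILE = "blocklist.txt"  # placeholder for a real enforcement point

def respond_to_failed_logins(failed_logins_by_ip: Counter) -> list[str]:
    """Record source IPs whose failed-login count exceeds the threshold."""
    blocked = []
    for ip, failures in failed_logins_by_ip.items():
        if failures >= FAILED_LOGIN_THRESHOLD:
            with open(BLOCKLIST_FILE, "a") as blocklist:
                blocklist.write(f"{datetime.now().isoformat()} {ip} ({failures} failures)\n")
            blocked.append(ip)
    return blocked

# Example counts, as if aggregated from authentication logs.
observed = Counter({"203.0.113.7": 42, "198.51.100.23": 2})
print("Blocked:", respond_to_failed_logins(observed))
```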

AI, in whichever form it takes – whether that be ChatGPT, AI voice mimicry or deepfake video – needs to be factored into your disaster management and recovery plans.

“By leveraging the power of AI technology, businesses and individuals can drive innovation, improve productivity and business outcomes with powerful new solutions. However, it is important to balance the potential benefits of AI technology with the potential risks and ensure that AI is used ethically and responsibly. By taking a proactive approach to AI governance, we can help minimise the potential risks associated with AI technology and maximise the benefits for business and humanity. As AI technology evolves, so too must our cybersecurity strategies,” Flynn concludes.
