advertisement
Facebook
X
LinkedIn
WhatsApp
Reddit

Cybersecurity watchdog highlights the risk of using AI in your business

  • AI and the large language models they use are the latest fashion in technology but their implementation should be approached with caution.
  • Very little is known about the ultimate capabilities of this technology and APIs in use today may not exist in a year or two.
  • Worse still, the threats that bad actors could leverage are very much an unknown at this stage and this could be dangerous in businesses that deal with sensitive data or even money.

The fervour surrounding artificial intelligence (AI) is unlike anything we’ve seen since, well cryptocurrency or the metaverse. Unlike those technologies – which have largely fallen to the wayside – AI is quickly being incorporated into a wide array of business processes.

This excitement to be at the bleeding edge of technology may do more harm than good as outlined by the National Cyber Security Centre in the UK.

In a blog post published Wednesday, the Technical Director for Platforms Research at the NCSC going by David C, urges businesses to exercise caution when implementing large language models (LLMs) into their processes.

“As a rapidly developing field, even paid-for commercial access to LLMs changes rapidly. With models being constantly updated in an uncertain market, a startup offering a service today might not exist in 2 years’ time. So if you’re an organisation building services that use LLM APIs, you need to account for the fact that models might change behind the API you’re using (breaking existing prompts), or that a key part of your integrations might cease to exist,” writes David.

What should ring alarm bells for businesses, however, is how little we know about AI. Even the creators of ChatGPT, OpenAI, and Google are concerned about what AI is capable of. So much so that both firms signed the Center for AI Safety’s statement calling for caution when developing AI. Not that this has stopped research or the rollout of products.

Of concern is the possibility that malicious prompts could be used by an attacker to glean information they shouldn’t have or make use of prompt injection attacks. You can read more about the various types of prompt injection attacks here but essentially this attack boils down to a malicious individual crafting a prompt for a LLM that leads to bad news for the company using the LLM.

The trouble here is as AI and LLMs are still new technologies, there are very few ways to mitigate this sort of attack. As such, David C advises folks incorporate LLMs into their business processes with caution.

“One of the most important approaches is ensuring your organisation is architecting the system and data flows so that you are happy with the ‘worst case scenario’ of whatever the LLM-powered application is permitted to do. There is also the issue that more vulnerabilities or weaknesses will be discovered in the technologies that we haven’t foreseen yet,” says the technical director.

It’s best then to consider LLMs and our current exploration with AI in a beta phase. While it may be useful for honing your website text or helping you make your writing a bit better, it may not be a great idea to incorporate LLMs into your withdrawal process at a bank. At least not yet.

[Image – Emiliano Vittoriosi on Unsplash]

advertisement

About Author

advertisement

Related News

advertisement