Do your customers know you’re using their data to train your business AI?

  • With all the hubbub around AI, business leaders may be tempted to dive into the technology, but care is required.
  • Willem Conradie, chief technology officer at PBT Group, highlights the ethical and security considerations that decision makers must account for.
  • The technology is moving faster than legislation, but legislation will eventually catch up.

When ChatGPT first hit the internet, folks spent a lot (and we mean a lot) of time trying to break the chatbot. Eventually, users discovered ways to trick ChatGPT into bypassing its guardrails through a series of well-selected prompts. One of these is the Grandma Exploit, in which ChatGPT was tricked into revealing recipes for napalm or sharing Windows activation codes.

While this trickery grabs headlines, it also offers insight into how ChatGPT was trained. That is to say, when you use the entirety of the internet as an information source, eventually folks will try to access that information through your platform.

We’ve now reached a stage where individual businesses are harnessing the capabilities of large language models and deploying these models into their own solutions. Whereas ChatGPT uses a wide range of sources, bespoke artificial intelligence platforms can be trained on specific data to assist in business processes and operations.

The benefits of deploying AI within a business are tempting, but as Willem Conradie, chief technology officer at PBT Group, notes, decision makers should take a considered approach to AI.

“Responsible AI involves integrating privacy, security, inclusivity, transparency, and accountability from the outset. AI is not purely a technology. Instead, it is an organisational shift that requires structural adjustments within companies if they are to manage AI responsibly,” says Conradie.

One of the growing threats in the cybersecurity space is the infostealer. This malware captures passwords and logs keystrokes, and the stolen credentials are then sold on to other cybercriminals or used to conduct attacks under the guise of a legitimate user. Training an AI model also requires as much data as possible, which often means collating that data into one or a few locations the model can access.

As such, securing that data means securing the business as a whole because, as we know, breaches can happen in an instant.

Permission pending

While as a custodian of data you may be inclined to use it as you see fit, it’s important to behave ethically when training AI. This means that if a business is using customer data for training, it should first seek express permission from those customers to use that data.
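
In practice, honouring that permission can be enforced in the data pipeline itself by filtering training records on an explicit consent flag. The sketch below is a minimal illustration in Python; the CustomerRecord structure and the consented_to_ai_training field are hypothetical stand-ins rather than any standard, and a real pipeline would also need audit logging and a way to handle consent being withdrawn.

```python
from dataclasses import dataclass


@dataclass
class CustomerRecord:
    """Hypothetical customer record; field names are illustrative only."""
    customer_id: str
    feedback_text: str
    consented_to_ai_training: bool  # set via an explicit opt-in, not a buried T&Cs clause


def build_training_set(records: list[CustomerRecord]) -> list[str]:
    """Keep only text that customers have expressly agreed may be used for training."""
    return [r.feedback_text for r in records if r.consented_to_ai_training]


if __name__ == "__main__":
    records = [
        CustomerRecord("c-001", "Great service, fast delivery.", True),
        CustomerRecord("c-002", "Please update my delivery address.", False),
    ]
    # Only c-001's text reaches the model; c-002 never opted in.
    print(build_training_set(records))
```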

“The local regulatory environment must still catch up with AI. At the moment, AI adoption is faster than any of the previous phases of big disruption in the industry – and currently there is no set of comprehensive legislation to govern the adoption and use of AI and machine learning in the country. But that is not to say that businesses couldn’t still find themselves in hot water with the Information Regulator and other stakeholders if their AI deployment is not compliant with already enforceable data protection regulations such as POPIA and GDPR at a minimum. I anticipate some interesting scenarios where things will go wrong, and regulations will adopt and adapt. Until then, responsible AI deployment comes down to keeping with sound business ethics,” says Conradie.

This is a consideration that may be overlooked by decision makers within a business. An entire AI deployment could come undone if, in a version of the future, the Information Regulator insists that express consent must be obtained from customers to use their data for training AI. Eventually, legislation will catch up with AI as it has in the European Union. Beyond that, if AI can be tricked into revealing Windows activation keys, it could reveal sensitive customer information, no breach necessary.

Moreover, the implementation of AI needs to be driven by level heads. We’ve seen time and time again how biased training data can impact groups of people when they are placed before AI-driven systems.

As the International Association of Privacy Professionals notes, “While the benefits are great, the potentially discriminatory impact of machine learning necessitates careful oversight and further technical research into the dangers of encoded bias, or undue opacity in automated decisions.”

Perhaps then it’s best to consult with minds that are well-versed in matters of data security and privacy when considering a jump onto the AI bandwagon.

[Image – aymane jdidi from Pixabay]
