OpenAI report reveals how bad actors are trying to use its tech

  • Following an investigation, OpenAI disrupted five operations misusing its platform.
  • The operators were from Russia, China, Iran and Israel and used OpenAI tech to generate and translate content that was posted across the web.
  • OpenAI says that its enhanced detection means bad actors are identified in days rather than weeks or months.

For as long as ChatGPT has been publicly available, there have been examples of bad actors abusing the platform, but now we have a closer look at those activities thanks to OpenAI itself.

On Thursday, OpenAI detailed how it had disrupted what it calls covert influence operations. IOs, as OpenAI refers to them, seek to manipulate public opinion and influence the outcomes of elections.

“In the last three months, we have disrupted five covert IO that sought to use our models in support of deceptive activity across the internet. As of May 2024, these campaigns do not appear to have meaningfully increased their audience engagement or reach as a result of our services,” OpenAI writes.

The firm says that these IOs use artificial intelligence platforms to generate comments and articles in a range of languages, make up names and bios for social media accounts, conduct research, debug code, translate and even proofread.

The five operations disrupted were:

  • Bad Grammar from Russia – Used OpenAI models to debug code for running a Telegram bot and to generate short political comments in Russian and English.
  • Doppelganger from Russia – Used OpenAI models to generate comments in English, French, German, Italian and Polish that were posted to X and 9GAG, translate and edit articles which were then posted to related websites and convert news articles into Facebook posts.
  • Spamouflage from China – Used OpenAI models to research public social media activity, generate texts in languages including Chinese, English, Japanese and Korean that were then posted across platforms including X, Medium and Blogspot, and debug code for managing databases and websites, including a previously unreported domain.
  • International Union of Virtual Media from Iran – Used OpenAI tech to generate and translate long-form articles, headlines and website tags that were then published on a website linked to this Iranian threat actor.
  • STOIC/Zero Zeno from Israel – OpenAI disrupted the operations of STOIC, a commercial entity, rather than the company itself. The operation in question, Zero Zeno, used AI-generated articles and comments that were then posted across multiple platforms, notably Instagram, Facebook, X, and websites associated with this operation.

“The content posted by these various operations focused on a wide range of issues, including Russia’s invasion of Ukraine, the conflict in Gaza, the Indian elections, politics in Europe and the United States, and criticisms of the Chinese government by Chinese dissidents and foreign governments. So far, these operations do not appear to have benefited from meaningfully increased audience engagement or reach as a result of our services,” OpenAI writes.

OpenAI says that its intent to introduce friction into less savoury uses of its technology is having the desired effect. The company says that in many instances it observed its tools refusing to generate the text or images that IOs requested. However, given that operations were disrupted, reading between the lines, there were some successes. Bypassing the guardrails firms like OpenAI put around their solutions has become a skill, and it's clear that these five operations were able, at least for a period, to bypass OpenAI's protections.

The company says that it is getting better at detecting malicious use of its tools. The investigation that led to the disruptions outlined above, for example, took days rather than weeks or months.

“We are committed to developing safe and responsible AI, which involves designing our models with safety in mind and proactively intervening against malicious use. Detecting and disrupting multi-platform abuses such as covert influence operations can be challenging because we do not always know how content generated by our products is distributed. But we are dedicated to finding and mitigating this abuse at scale by harnessing the power of generative AI,” the firm concludes.

The use of AI in cybercrime is only going to get more popular. Right now, the compute requirements and the associated costs are inadvertently keeping the malicious use of AI in check, but in the future that could change. We just hope that firms like OpenAI are considering that future and not just the misuse of the platform right now.
