
OpenAI launches bug bounty program

  • OpenAI has launched a bug bounty program on Bugcrowd.
  • Rewards range from $200 for low-severity discoveries up to $20 000 for “exceptional discoveries”.
  • Tricking the likes of ChatGPT into breaking its own rules – among other things – won’t be rewarded.

With artificial intelligence platforms from Google and OpenAI tussling for top position in business and consumer applications, there are many eyes on the technology. Unfortunately, this includes the miscreants of the world who are hoping to take advantage of the hype to harm others.

In a bid to help guard against any potential vulnerabilities in its platform or solutions, OpenAI has launched a bug bounty program. Through this program, ethical hackers will be able to disclose vulnerabilities and flaws they discover in return for money.

“OpenAI’s mission is to create artificial intelligence systems that benefit everyone. To that end, we invest heavily in research and engineering to ensure our AI systems are safe and secure. However, as with any complex technology, we understand that vulnerabilities and flaws can emerge,” OpenAI said in an announcement.

“We believe that transparency and collaboration are crucial to addressing this reality. That’s why we are inviting the global community of security researchers, ethical hackers, and technology enthusiasts to help us identify and address vulnerabilities in our systems. We are excited to build on our coordinated disclosure commitments by offering incentives for qualifying vulnerability information. Your expertise and vigilance will have a direct impact on keeping our systems and users secure,” the firm added.

The program will be run through the Bugcrowd platform, where discoveries can be submitted. So far, 14 vulnerabilities have been discovered over the last three months, with an average payout of $1 287.50. Rewards range from $200 for low-severity findings up to $20 000 “for exceptional discoveries”.

Rather importantly, this program doesn’t allow for public disclosure, which means any bugs discovered will need to remain private.

Importantly, OpenAI has outlined a few things that won’t earn you a reward, all of which relate to its AI models. These exclusions include:

  • Jailbreaks/Safety Bypasses
  • Getting the model to say bad things to you
  • Getting the model to tell you how to do bad things
  • Getting the model to write malicious code for you
  • Getting the model to pretend to do bad things
  • Getting the model to pretend to give you answers to secrets
  • Getting the model to pretend to be a computer and execute code

Should you want to provide feedback regarding an AI model’s behaviour, you should head to this web page.

[Image – Levart_Photographer on Unsplash]
