
OpenAI is working on detection tools to tell if its AI made an image or video

  • OpenAI is developing a classification tool that can detect if DALL-E 3 created an image.
  • The tool correctly identifies roughly 98 percent of DALL-E 3 images, but it only works for images created by DALL-E 3.
  • OpenAI has invited researchers to help develop the tool further.

The advances in generative artificial intelligence over the last few years have made it harder to tell real art from images created by a bot. As the technology improves, distinguishing fake from real will only get more difficult, something OpenAI is acutely aware of.

The company has announced that it is working on a tool that will be able to detect whether generative AI created an image. The caveat here is that OpenAI is only developing this tool to detect whether its own DALL-E 3 model was used to create the image, and right now the detection tool isn’t 100 percent accurate.

“The classifier correctly identifies images generated by DALL·E 3 and does not trigger for non-AI generated images. It correctly identified ~98% of DALL·E 3 images and less than ~0.5% of non-AI generated images were incorrectly tagged as being from DALL·E 3. The classifier handles common modifications like compression, cropping, and saturation changes with minimal impact on its performance. Other modifications, however, can reduce performance,” writes OpenAI.

The modifications that have an impact on this performance include adjusting the hue of an image and introducing “moderate amounts of Gaussian Noise”. For the most part, though, the classifier is quite accurate, even if the occasional real image is mislabelled as having been created by DALL-E 3. Of course, we should point out that OpenAI has been baking detection information into DALL-E 3 since earlier this year.
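To give a rough sense of what testing a detector against these modifications might involve, here is a minimal sketch, assuming a tester has a local image and some classifier endpoint to query. It applies the kinds of changes mentioned above (JPEG compression, cropping, a hue shift and Gaussian noise) using Pillow and NumPy. The image path and the commented-out detect call are hypothetical placeholders, not OpenAI’s API.

```python
# A minimal sketch (not OpenAI's code) of applying the modifications named above
# to an image before re-running it through a detection classifier.
import io

import numpy as np
from PIL import Image


def jpeg_compress(img: Image.Image, quality: int = 60) -> Image.Image:
    """Re-encode the image as JPEG at the given quality."""
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).copy()


def centre_crop(img: Image.Image, fraction: float = 0.8) -> Image.Image:
    """Crop the central `fraction` of the image."""
    w, h = img.size
    cw, ch = int(w * fraction), int(h * fraction)
    left, top = (w - cw) // 2, (h - ch) // 2
    return img.crop((left, top, left + cw, top + ch))


def shift_hue(img: Image.Image, degrees: float = 30.0) -> Image.Image:
    """Rotate the hue channel by roughly `degrees` (0-360 mapped to 0-255)."""
    hsv = np.array(img.convert("HSV"), dtype=np.uint16)
    hsv[..., 0] = (hsv[..., 0] + int(degrees / 360 * 255)) % 256
    return Image.fromarray(hsv.astype(np.uint8), mode="HSV").convert("RGB")


def add_gaussian_noise(img: Image.Image, sigma: float = 10.0) -> Image.Image:
    """Add zero-mean Gaussian noise with standard deviation `sigma`."""
    arr = np.array(img.convert("RGB"), dtype=np.float32)
    noisy = arr + np.random.normal(0.0, sigma, arr.shape)
    return Image.fromarray(np.clip(noisy, 0, 255).astype(np.uint8))


if __name__ == "__main__":
    original = Image.open("sample_dalle3_image.png")  # hypothetical test image
    variants = {
        "jpeg": jpeg_compress(original),
        "crop": centre_crop(original),
        "hue": shift_hue(original),
        "noise": add_gaussian_noise(original),
    }
    for name, variant in variants.items():
        variant.save(f"variant_{name}.png")
        # score = detect(variant)  # placeholder for the classifier under test
```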

OpenAI says it’s opening applications for access to this image detection classifier to research labs, journalists, and other testers. The goal is to enable independent research to assess the tool’s effectiveness and help improve it. Those interested in testing the classifier can apply through OpenAI.

As mentioned, the tool isn’t designed for other image generators, but OpenAI says the classifier flagged between 5 and 10 percent of images generated by other models in its dataset.

“In addition to our investments in C2PA [Coalition for Content Provenance and Authenticity], OpenAI is also developing new provenance methods to enhance the integrity of digital content. This includes implementing tamper-resistant watermarking – marking digital content like audio with an invisible signal that aims to be hard to remove – as well as detection classifiers – tools that use artificial intelligence to assess the likelihood that content originated from generative models. These tools aim to be more resistant to attempts at removing signals about the origin of content,” the AI company wrote.
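As a rough intuition for what an “invisible signal” can mean in practice, here is a toy sketch that assumes nothing about OpenAI’s actual approach: a low-amplitude pseudorandom pattern keyed by a secret seed is added to an audio signal and later checked for by correlation. Every name and parameter here is illustrative, and real tamper-resistant watermarking schemes are far more sophisticated.

```python
# A toy illustration (not OpenAI's method) of invisible watermarking:
# embed a keyed pseudorandom signal, then detect it later by correlation.
import numpy as np

SEED = 1234        # hypothetical secret key shared by embedder and detector
STRENGTH = 0.005   # amplitude of the hidden signal, kept small relative to the audio


def watermark_pattern(length: int, seed: int = SEED) -> np.ndarray:
    """Deterministic pseudorandom +/-1 pattern derived from the secret seed."""
    rng = np.random.default_rng(seed)
    return rng.choice([-1.0, 1.0], size=length)


def embed(audio: np.ndarray, seed: int = SEED) -> np.ndarray:
    """Add the keyed pattern to the audio at very low amplitude."""
    return audio + STRENGTH * watermark_pattern(audio.size, seed)


def detect(audio: np.ndarray, seed: int = SEED, threshold: float = 0.0025) -> bool:
    """Correlate against the keyed pattern; a clearly positive score
    suggests the watermark is present."""
    score = float(np.mean(audio * watermark_pattern(audio.size, seed)))
    return score > threshold


if __name__ == "__main__":
    clean = np.random.default_rng(0).normal(0.0, 0.1, 48_000)  # 1s of fake audio
    marked = embed(clean)
    print(detect(clean))   # False: no watermark in the original
    print(detect(marked))  # True: watermark signal found by correlation
```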

While this is good news, the obvious question is whether this detection tool is coming too late. Artificial intelligence is already being abused by bad actors in many ways, and while tools like this can help, their impact depends on how well they work and how quickly they are developed and deployed.

OpenAI is developing a similar tool for its video generator, Sora.

[Image – Jonathan Kemper on Unsplash]
