OpenAI targets cheating students with ChatGPT watermark

  • OpenAI has shared an update on its tools for identifying AI-generated content from ChatGPT.
  • The company specifically highlighted a new watermarking detection method.
  • The feature could be used to more easily spot essays written by ChatGPT.

Only a few days after OpenAI made ChatGPT publicly available in late 2022, those in the education sector had a new problem on their hands: cheating.

While OpenAI has long championed its generative AI platform as an effective research tool, as well as an aid for non-native English speakers, more work still needs to be done on detecting AI-generated content in academic settings.

The startup’s answer was shared over the weekend in an update to a blog post originally published in May: a text watermark for ChatGPT has now been added to the mix.

“Our teams have developed a text watermarking method that we continue to consider as we research alternatives,” OpenAI explained in the updated blog post.

“While it has been highly accurate and even effective against localized tampering, such as paraphrasing, it is less robust against globalized tampering, like using translation systems, rewording with another generative model, or asking the model to insert a special character in between every word and then deleting that character – making it trivial for bad actors to circumvent,” it added.

As such, it may prove a useful tool for detecting cheating in academia, but it is far from a perfect solution at this stage.
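
OpenAI has not published the technical details of its method, but academic work on text watermarking suggests how such a scheme can operate: generation is subtly biased towards a pseudo-randomly chosen “green” subset of words, and a detector re-derives that subset and tests whether it appears more often than chance. The Python sketch below is a toy illustration of that statistical idea, not OpenAI’s actual implementation; the word-pair hashing and the z-score test are assumptions made for the example.

```python
import hashlib
import math

GREEN_FRACTION = 0.5  # assumed share of words favoured during generation

def in_green_list(prev_word: str, word: str) -> bool:
    """Pseudo-randomly assign `word` to the green list, seeded on the
    previous word. A watermarking sampler would favour green words;
    a detector re-derives the exact same partition."""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] < GREEN_FRACTION * 256

def watermark_z_score(text: str) -> float:
    """Count green words and compare the hit rate against what
    unwatermarked text would produce by chance (a binomial null)."""
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    n = len(words) - 1
    hits = sum(in_green_list(p, w) for p, w in zip(words, words[1:]))
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - n * GREEN_FRACTION) / std

# A large z-score (say, above 4) would suggest watermarked output;
# ordinary human-written text should score close to zero.
print(round(watermark_z_score("the quick brown fox jumps over the lazy dog"), 2))
```

The sketch also makes the quoted weakness concrete: translating or rewording the text replaces the very word pairs the detector hashes, wiping out the statistical signal without leaving any trace.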

“Another important risk we are weighing is that our research suggests the text watermarking method has the potential to disproportionately impact some groups. For example, it could stigmatize use of AI as a useful writing tool for non-native English speakers,” OpenAI continued.

Among the alternatives to the ChatGPT watermark that OpenAI is exploring, highly detailed metadata could prove a more effective option moving forward.

“Our teams are also researching how metadata could be used as a text provenance method. We are still in the early stages of exploration, so it is too early to gauge how effective the approach will be, but there are characteristics of metadata that would make this approach particularly promising,” shared OpenAI.

“For example, unlike watermarking, metadata is cryptographically signed, which means that there are no false positives. We expect this will be increasingly important as the volume of generated text increases. While text watermarking has a low false positive rate, applying it to large volumes of text would lead to a large number of total false positives,” its update concluded.
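
OpenAI has not said what this metadata would contain, but the “no false positives” property follows from how cryptographic signing works: a tag either verifies against the signer’s key or it does not, with no statistical grey area in between. Below is a minimal sketch using an HMAC from Python’s standard library; the key, payload and helper names are hypothetical, and a production provenance system would more likely rely on asymmetric signatures.

```python
import hashlib
import hmac

# Hypothetical signing key, held only by the AI provider.
SECRET_KEY = b"provider-held-signing-key"

def sign_text(text: str) -> str:
    """Produce a provenance tag: an HMAC over the exact generated text."""
    return hmac.new(SECRET_KEY, text.encode(), hashlib.sha256).hexdigest()

def verify_text(text: str, tag: str) -> bool:
    """Verification is exact: text that was never signed with this key
    fails, so unlike a statistical watermark there are no false
    positives. Any edit to the text also invalidates the tag."""
    expected = hmac.new(SECRET_KEY, text.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

essay = "This paragraph was produced by a generative model."
tag = sign_text(essay)
print(verify_text(essay, tag))               # True: provenance confirmed
print(verify_text(essay + " Edited.", tag))  # False: the tag no longer matches
```

The trade-off is the mirror image of watermarking: verification is exact, but the tag has to travel with the text, so retyping the words without it leaves nothing to check.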

With the academic world still grappling with the impact that generative AI platforms have had over the past 18 months, these technologies appear to be evolving faster than many educators and regulators can adapt.

[Image – Photo by Solen Feyissa on Unsplash]
