Let CNET’s mistake serve as a lesson to publishers wanting to “hire” an AI writer

  • CNET recently made use of an AI platform to write roughly 77 stories published on its site.
  • Of those 77 stories, 41 needed corrections or edits, with plagiarism also an issue.
  • The website has since come clean about the errors, showing that AI writers still have some ways to go.

This week tech publisher CNET had to address controversy regarding a series of articles written by an AI platform on its website, after its AI writer was found to have made mistakes in the majority of the stories it published.

More specifically, 41 of the 77 reported stories written and published by the AI writer featured some form of error. Mistakes are commonplace when writing for a news publication, which is what the subediting and copy editing process is there for, but it was also found that the AI writer plagiarised some of its work.

Here CNET editor-in-chief Connie Guglielmo unpacked what the internal review of the AI-written stories revealed, with a number of sentences and phrases needing to be rewritten as they featured language that was not “entirely original” in the publication’s view.

“We identified additional stories that required correction, with a small number requiring substantial correction and several stories with minor issues such as incomplete company names, transposed numbers or language that our senior editors viewed as vague,” explained Guglielmo.

“Trust with our readers is essential. As always when we find errors, we’ve corrected these stories, with an editors’ note explaining what was changed. We’ve paused and will restart using the AI tool when we feel confident the tool and our editorial processes will prevent both human and AI errors,” she added.

While CNET espouses its belief in trust, it should also be pointed out that the website published numerous stories for months with the aid of an AI writer, as discovered by Futurism earlier this month. The publication also cited several errors in said stories, and after breaking the news, CNET began putting a disclaimer on its AI written stories.

Given that this information was only disclosed after an investigation by another publication, CNET’s parent company Red Ventures has since placed a pause on any such content, as The Verge points out. Whether it will spin the project up again remains to be seen, but it is clear that the publication’s credibility has taken a dent.

“In a handful of stories, our plagiarism checker tool either wasn’t properly used by the editor or it failed to catch sentences or partial sentences that closely resembled the original language,” continued Guglielmo.

“We’re developing additional ways to flag exact or similar matches to other published content identified by the AI tool, including automatic citations and external links for proprietary information such as data points or direct quotes. We’re also adding additional steps to flag potential misinformation,” she noted when it comes to the steps it is taking moving forward.

While our opinion on the subject may seem biased, especially as an AI writing news potentially threatens our own human writers’ futures, it is more the way that CNET went about this project that concerns us. The fact that it chose not to disclose what it was doing to the public is what has truly created all this controversy.

A far better tack would be the route that local gaming site NAG took with its Pixel AI writer. Having published a single story to date, the use of Pixel has been fully disclosed by the publication, which has also noted that its use does not impact the future of its human writers.

With the number of AI writers only set to increase in the coming months and years, any publication that does plan to make use of one would do well to go the NAG route instead of the CNET one.

[Image – Photo by Markus Winkler on Unsplash]
