
CNET outlines its editorial AI policy

  • Technology news publication CNET has issued an update regarding its editorial policy on the use of AI.
  • The website’s in-house AI is called Responsible AI Machine Partner (RAMP).
  • CNET says it will use RAMP either to augment a journalist’s research or to write generic information like pricing or specifications.

It is not often that we write about a fellow technology news publication outside of citing it as a source for a story, but CNET is in the conversation again regarding how it implements artificial intelligence to write stories.

In January, CNET courted controversy for not disclosing its use of AI on a number of stories, many of which also featured mistakes or inaccuracies. The publication said it would refine its process to be more responsible, and now five months later we finally have an AI policy.

The publication says AI will be put to use in two ways on its website.

On the first, the company says: “Every piece of content we publish is factual and original, whether it’s created by a human alone or assisted by our in-house AI engine, which we call RAMP. (It stands for Responsible AI Machine Partner.) If and when we use generative AI to create content, that content will be sourced from our own data, our own previously published work, or carefully fact-checked by a CNET editor to ensure accuracy and appropriately cited sources.”

“Creators are always credited for their work. The use of our AI engine will include training on processes that prioritize accurate sourcing and include standards of citation,” it adds regarding the second.

While it is indeed promising to see the publication lay out a proper editorial plan for how it wishes to implement AI, those in the industry still have a few questions. As Engadget points out, these come mainly from current CNET employees who have signalled their intent to unionise under the Writers Guild of America, East.

The union responded to the editorial AI policy, stating that, “Before the tool rolls out, our union looks forward to negotiating”. It also raised a few follow-ups: “How & what data is retrieved; a regular role in testing/reevaluating tool; right to opt out & remove bylines; a voice to ensure editorial integrity.”

As such, the plan is not without its questions, and it remains to be seen whether the publication will address these further concerns.

With writers in the US facing a critical period right now, specifically when it comes to the use of AI in screenwriting and their ability to negotiate with producers wanting more content to push onto streaming platforms, this complex issue is still far from over.

As such, how it plays out will serve as a guideline for other publications and industries looking to leverage AI, particularly when it comes to writing.
