
Here’s an interesting approach to using LLMs in academic writing

  • Martin Bekker, a lecturer at the University of the Witwatersrand, has proposed a tiered approach to using LLMs in academic writing.
  • While two of the tiers cover the extremes of banning and fully embracing LLMs, the others consider ways in which LLMs could help researchers without tainting the research.
  • The lecturer says that use of LLMs should be guided by the principles of ownership and transparency.

The large language models (LLMs) that we have collectively decided to dub artificial intelligence can be rather useful.

When presented with large bodies of text, an LLM can summarise that text and highlight what it believes are the key points. However, ChatGPT and Google’s Gemini (formerly Bard) can also be used to generate text. Once ChatGPT launched, it was used to write essays, and suddenly there was a market for anti-cheating software that could detect when text was written by an LLM.

However, banning ChatGPT and other LLMs from academia would have about as much effect as banning the use of illicit substances at Burning Man. To that end, Martin Bekker, a lecturer at the School of Electrical and Information Engineering at the University of the Witwatersrand in South Africa, has proposed a tiered approach to using LLMs in academia.

This approach was detailed in the article Large language models and academic writing: Five tiers of engagement, published in the South African Journal of Science on 30th January.

In Bekker’s view, the use of LLMs should be considered against five tiers, namely:

  1. Ban
  2. Proofing
  3. Editing
  4. Co-creating
  5. All

In each tier Bekker outlines the most obvious benefits and risks of using LLMs in academic writing.

Tiers 1 and 5 sit at the extreme ends of the scale, with Ban meaning no use of LLMs at all. The obvious risk here is that people will flout these rules, as cheaters have for centuries. At Tier 5, the most obvious risk is that authorship of the paper becomes unclear, and while research could be published faster, this may devalue that research given the tendency of LLMs to hallucinate and fabricate information.

What is most interesting, however, is not the extremes but the middle ground Bekker presents. For instance, using LLMs for proofing is a boon for academics, as this process can be time-consuming and costly. Editing can also be useful, especially if the author tends to drone on and needs to stick to a word count without omitting important details. There are risks here, such as researchers becoming lazier and writing losing some of its flavour. What Bekker does well is highlight that LLMs are neither all bad nor all good; they are simply tools, and we need to consider how we use them.

In addition to these tiers, Bekker also says that the use of LLMs in academia should hinge on two principles. The first is ownership: if a human author uses an LLM, they assume responsibility for any errors or hallucinations that may appear in their work.

The second principle is transparency, and this is somewhat trickier. Authors using an LLM should “show their work”, so to speak. How this would be accomplished is for brighter minds than ours to determine, but Bekker suggests a repository where academics can share the prompts and interactions that produced the content generated by the LLM.

We highly recommend giving Bekker’s article a read using the link above. It’s incredibly thought-provoking and by far the most level-headed approach to the use of LLMs we’ve seen so far.

[Image – Dean Moriarty from Pixabay]
