It’s been a busy few months for OpenAI. The creators of ChatGPT recently signed a deal with Microsoft, reportedly worth an unconfirmed $10 billion, to integrate the AI platform into Microsoft’s Azure cloud services.
The free platform has seen its popularity explode since its launch in November last year, thanks to OpenAI’s decision to make the chatbot completely free and easily accessible to anyone.
All someone needs to talk to ChatGPT is an email address and a device like a PC or even a smartphone. You can use it to write near-authentic essays, summarise research papers, answer questions well enough to pass exams and even generate code to build applications.
Because the chatbot is free and easy to access, educational institutions around the world have grown concerned about students using it to cheat.
Academics and the international research community are also fretting that students and even scientists could use the platform to deceitfully pass off ChatGPT’s text as their own.
The problem is that ChatGPT is built on a large language model (LLM): it doesn’t just “come up” with its own words. It was trained on vast amounts of text gathered from the internet and elsewhere, all written by other people.
LLMs use this training data to produce patterns of words based on statistical associations within that data and the prompts they are given.
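To make “statistical association” concrete, here is a toy sketch of the simplest possible language model: a bigram model that predicts each word from the one before it. Real LLMs use neural networks trained on billions of documents, but the underlying principle, predicting likely continuations from patterns in training text, is the same. The tiny corpus and function names below are invented purely for illustration.

```python
from collections import Counter, defaultdict

# Hypothetical miniature training corpus; a real LLM trains on
# billions of documents, not a few sentences.
corpus = (
    "the cat sat on the mat . "
    "the cat sat on the rug . "
    "the dog chased the cat ."
).split()

# Count which word follows which: the statistical associations
# a bigram "language model" learns from its training data.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    """Return the most frequently observed continuation of `word`."""
    return follows[word].most_common(1)[0][0]

print(most_likely_next("the"))  # → cat ("cat" follows "the" most often)
print(most_likely_next("sat"))  # → on  (the only observed continuation)
```

The model never invents anything: every prediction is an echo of its training text, which is exactly why questions of plagiarism and accuracy arise at scale.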
Apart from the fact that this process can be seen as plagiarism, it can also introduce errors and incorrect information, which would decrease the value of the research.
While using ChatGPT and other LLMs to help with the often tedious process of writing research papers isn’t necessarily frowned upon, citing the chatbot as a source is not permitted in academic writing, according to the academic journal Nature.
Instead, Nature has come up with a set of rules that it says other scientific publishers will probably follow or adapt in the future.
Firstly, LLM tools like ChatGPT cannot be credited as authors on a research paper, so you can’t copy and paste a section directly from ChatGPT and credit the platform for it.
“That is because any attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility,” Nature writes.
Secondly, researchers who do use LLM tools should document that use in an appropriate section of the research paper, for example if they used ChatGPT to summarise paragraphs or to shorten an introduction.
This could be in the Methods or Acknowledgements sections, or wherever else they may feel it appropriate.
These rules from Nature firmly position ChatGPT and similar platforms as nothing more than research aids, much like AI-powered software such as Otter.ai, which makes transcribing audio interviews less tedious.
If academic publishers do adopt these rules across the board, it will give some insight into how other AI technology will be credited in future research and academic work.
But the risk of people abusing AI tools to deceive institutions remains as long as LLMs can produce near-authentic work. That is why AI-detection websites like GPTZero and similar are popping up.
These platforms exploit the fact that AI writing tends to be bland, uniform and statistically predictable to discern whether a text was written by a person or not. But LLMs are only going to become more powerful and capable of producing more authentic material as their training is further refined.
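Detectors in this vein typically score a text by how statistically predictable it is (often reported as “perplexity”): text where each word is close to what a language model would have guessed next gets flagged as likely machine-written. Below is a minimal sketch of that idea using a crude word-frequency model in place of a real LLM; the reference text, sample sentences and function names are all invented for illustration, and actual detectors are far more sophisticated.

```python
import math
from collections import Counter, defaultdict

# Hypothetical reference text standing in for a language model's
# training data; a real detector queries an actual LLM's probabilities.
reference = ("the report shows the results are clear and the results "
             "are consistent with the earlier report").split()

# Learn word-to-next-word counts from the reference text.
bigrams = defaultdict(Counter)
for prev, nxt in zip(reference, reference[1:]):
    bigrams[prev][nxt] += 1

def surprise(text):
    """Average negative log-probability per word (a crude perplexity).
    Lower values mean more predictable text, which detectors treat
    as a signal of machine-generated writing."""
    words = text.split()
    total = 0.0
    for prev, nxt in zip(words, words[1:]):
        counts = bigrams[prev]
        # Add-one smoothing so unseen continuations get a small probability.
        p = (counts[nxt] + 1) / (sum(counts.values()) + len(set(reference)))
        total += -math.log(p)
    return total / max(len(words) - 1, 1)

predictable = "the results are consistent with the earlier report"
surprising = "purple equations whispered beneath the committee stapler"
print(surprise(predictable) < surprise(surprising))  # → True
```

The predictable sentence closely follows patterns in the reference text, so it scores a lower surprise than the unusual one. This also hints at why such detectors degrade as models improve: the more human-like the statistics of generated text become, the weaker the signal.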
Eventually, it is possible that LLMs will be able to produce academic work that is indistinguishable from that created by humans. Will this set of rules be maintained even then?