ChatGPT barred from authorship of research papers
The artificial-intelligence chatbot ChatGPT cannot be credited as an author, Springer Nature, the world’s largest academic publisher, has announced.
ChatGPT is a large language model (LLM) that generates sentences by imitating the statistical patterns of language in a database of text collated from the Internet. Universities around the world are questioning use of the bot, especially for essays and research papers.
The respected scientific journal Nature, along with all other journals published by Springer Nature, will follow two rules that are expected to influence editorial policy around the world.
First, LLM tools will not be “accepted as a credited author on a research paper”. Any attribution of authorship carries an element of accountability for the work, and AI tools cannot take such responsibility. Second, researchers using LLM tools should “document this use in the methods or acknowledgements sections”.
At the moment the biggest problem is detecting whether an essay has been generated by an LLM. Under careful inspection, ChatGPT’s output can often be identified, especially when more than a few paragraphs are involved or the subject relates to scientific work. “This is because LLMs produce patterns of words based on statistical associations in their training data and the prompts that they see, meaning that their output can appear bland and generic, or contain simple errors,” Nature has said.
ChatGPT and earlier large language models have already been named as authors in a small number of published papers, preprints, and scientific articles, according to The Verge. According to Nature, Alex Zhavoronkov, chief executive of Insilico Medicine, an AI-powered drug-discovery company in Hong Kong, credited ChatGPT as a co-author of a perspective article in the journal Oncoscience last month. He says that his company has published more than 80 papers produced by generative AI tools, and that the latest paper discusses the pros and cons of taking the drug rapamycin, framed through the philosophical argument known as Pascal’s wager.
“We would not allow AI to be listed as an author on a paper we published, and use of AI-generated text without proper citation could be considered plagiarism,” said Holden Thorp, editor-in-chief of the Science family of journals in Washington DC.