
The AIntelligent Researcher

Gina Kolata
Posted on 29 Jul 2025, 10:10 AM

Scientists know it is happening, even if they don’t do it themselves. Some of their peers are using chatbots like ChatGPT to write all or part of their papers.

In a paper published recently in the journal Science Advances, Dmitry Kobak of the University of Tübingen in Germany and his colleagues report that they found a way to track how often researchers are using artificial intelligence chatbots to write the abstracts of their papers. The AI tools, they say, tend to use certain words — like “delves”, “crucial”, “potential”, “significant” and “important” — far more often than human authors do.

The group analysed word use in more than 15 million biomedical abstracts published between 2010 and 2024, allowing them to spot the rising frequency of certain words.


The findings tap into a debate in the sciences over when it is and is not appropriate to use AI helpers for writing papers.

After ChatGPT was introduced in November 2022, a collection of words began showing up with unusual frequency. Those words, the investigators report, were used far less often before the chatbot's release, and they infer that the shift in word usage is a telltale sign of AI.

In 2024, the researchers report, a total of 454 words were being used excessively, a signature of chatbot writing. Based on the frequency of those AI-favoured words, Kobak and his team calculate that at least 13.5 per cent of all biomedical abstracts appeared to have been written with the help of chatbots. In a few less selective journals, as many as 40 per cent of abstracts by authors from some countries appeared to be AI-generated.

Those numbers, said Adam Rodman, director of AI programs at Beth Israel Deaconess Medical Center in Boston, US, “are almost certainly a lower bound”, because they don’t account for humans editing what the chatbot wrote, or chatbots editing what humans wrote. Rodman was not involved in the study.

In an interview, Kobak said he was “somewhat surprised” to see so much use of AI in abstracts, the summaries of a paper’s results and conclusions that are often the only part people read.

“I would think for something as important as writing an abstract of your paper, you would not do that,” he said.

In the academic sciences, some researchers have grown wary of even a whiff of AI assistance in their publications.

Computer scientists are aware that AI favours certain words, although it’s not clear why, said Subbarao Kambhampati, a professor of computer science at Arizona State University, US, and the past president of the Association for the Advancement of Artificial Intelligence. Some scientists, he said, have been deliberately refraining from using words like “delve” for fear of being suspected of using AI as a writing tool.

Other scientists seem blasé about the risk of being caught using chatbots.

Kambhampati gives some examples, like a case report in a radiology journal that includes: “I’m very sorry, but I don’t have access to real-time information or patient-specific data, as I am an AI language model.”

The journal Nature recently surveyed more than 5,000 researchers, asking when, if ever, it is okay to use AI to write a paper.

There was no consensus.

Opinions varied depending on whether AI was used to write an abstract or the entire paper, and on whether it was used to edit or summarise.

For the situation analysed in the new paper — writing an abstract — just 23 per cent of the Nature respondents said it was okay to use AI without acknowledging the assistance. Some 45 per cent said it was acceptable only if the researcher reported using AI, and 33 per cent said it was never acceptable.

“It’s all very ambiguous right now,” said Dr Jonathan H. Chen, director for medical education in artificial intelligence at Stanford University in the US. “We’re in this grey zone. It’s the Wild West.”

Sometimes it is difficult to see the hand of AI, an issue that raises the question of whether an AI-generated submission to a journal should be rejected simply because there is no human author.

Keith Humphreys, professor of psychiatry and behavioural science at Stanford, said he was once tricked by a letter to the editor of the journal Addiction.

Humphreys, who is deputy editor of the journal, said he thought the letter about a recently published paper made reasonable points. As is the journal’s custom, he sent the letter to the authors of the paper to give them a chance to reply.

They told him that they had never heard of the authors of the letter, who were purportedly from China, and that “our field isn’t that big and no one has ever heard of these people”, he said.

It seemed quite likely that someone had run journal articles through a chatbot and asked it to generate letters to the editors.

NYTNS
