
Plug the gaps

It is high time to frame clear guidelines on how AI can be used in academic research. This responsibility falls on the University Grants Commission. The sooner it acts, the better


Angshuman Kar
Published 29.04.25, 07:54 AM

A recent incident has highlighted the need to decide how Artificial Intelligence should be used in academic research. It showed how a mistranslation by AI can do irreparable damage to academic work, and it revealed the risks of unchecked automation in academia.

The controversy erupted when researchers began noticing a strange term — vegetative electron microscopy — appearing in multiple scientific papers. At first glance, the phrase seemed like a technical term. However, upon closer examination, experts realised that the term is nonsensical.


The anomaly was initially flagged on PubPeer, an online research forum, by a Russian chemist using a pseudonym. However, it was Alexander Magazinov, a software engineer, who ultimately traced the origin of the error. His investigation led him back to a single AI mistranslation of a 1959 scientific paper. The original phrase used in that paper, electron microscopy of vegetative structures, refers to a well-established method for studying plant tissues. Unfortunately, because the paper's text was laid out in multiple columns, the AI inadvertently jumbled words across them during its inaccurate interpretation, creating an entirely new — and nonsensical — term.

Alarmingly, this error managed to slip through the peer review system. It went unnoticed by reviewers and was subsequently repeated in nearly two dozen published papers. This has raised serious concerns about the reliability of present-day academic review processes. Some critics blamed peer reviewers, arguing that their failure to detect such a glaring mistake points to the declining standards of scrutiny in academic publishing. Others defended the reviewers, pointing out that their expertise is often limited to specific aspects of a study and that such errors can be difficult to catch, especially when AI-generated text is involved.

While AI has undoubtedly transformed research — streamlining data analysis and accelerating new discoveries — this incident underscores a significant downside: the dangers of blind trust in AI-generated content. As academic institutions increasingly integrate AI into their research workflows, rigorous human oversight of research has become an imperative. Without stringent quality-control measures, such errors could proliferate, ultimately eroding the integrity of scientific literature and undermining public trust in academic research.

What about the uses of AI in research in the humanities and social sciences? Today, it is possible for a student to generate an entire term paper or research paper using AI. The problem is that there are no specific rules in our country to decide whether an AI-generated paper can be considered legitimate work or whether it should be treated as plagiarism. Some research journals now state in their calls for submissions that they will not accept AI-generated papers. However, as of now, the University Grants Commission has not issued any specific circular on the use of AI in research and PhD work.

Currently, the UGC regulations require plagiarism checks before the submission of a PhD thesis. However, checking for AI-generated content is not yet mandatory. Some plagiarism-detection software can identify AI-generated content, and some universities have already started implementing such checks. But until the UGC officially includes AI-generated content under the definition of plagiarism, can it truly be considered as such?

There is another question. If researchers use AI to correct grammatical and syntactical errors in their theses, should the text be considered AI-generated content, or should it be treated as an act of plagiarism? Various software tools are available for correcting grammar, and many researchers and educators use them. Is this a crime?

It is high time to frame clear guidelines on how AI can be used in academic research. This responsibility falls on the UGC. The sooner it acts, the better.

Angshuman Kar is Professor, Department of English and Culture Studies, and Director, Centre for Australian Studies, The University of Burdwan
