In academia, citations have historically served as a measure of credibility: the more frequently a researcher's work is cited, the more valuable it is perceived to be. This conviction has moulded careers, influenced promotions, dictated funding prospects, and burnished institutional standings. Recent debate over the Stanford-Elsevier 'Top 2% Scientists' list shows that equating high citation counts with research quality is far from straightforward. Widely regarded as a global ranking of the most influential scientists, the list purports to identify the top 2% of scientists worldwide on the basis of citation data from Scopus. Its methodological flaws, however, compromise its credibility; the list amounts to a superficial celebration of citations as a substitute for genuine excellence.
Citations are intended to recognise the use and significance of earlier work; they ought to reflect creativity, intellectual contribution, or methodological rigour. When converted into inflexible metrics, however, they are easily manipulated. The Stanford-Elsevier database is riddled with irregularities that underscore this problem. Nobel laureates such as Katalin Karikó and Drew Weissman, scientists whose contributions have transformed medicine and human evolutionary research, are absent. Meanwhile, journalists from The British Medical Journal rank above those laureates, and even deceased researchers appear on the strength of published legacies spanning generations. Such anomalies expose the deficiencies of a database that treats citations as the sole criterion.
The prevalence of hyper-prolific authors is equally concerning. Some generate more than 100 publications a year, with self-citation rates above 97%. Others oversee extensive networks of hundreds of collaborators, co-authoring papers at a pace no one could rationally defend. Such tactics reveal a system gamed for visibility. Citation mills, outfits that sell references to inflate citation counts, have grown into a flourishing industry, further muddying what the metrics mean.
The misuse of citation-based rankings has become a significant issue in India. Universities prominently advertise their academics featured in the top 2%, presenting the honour as proof of institutional superiority. This fosters a false impression of high-quality research environments among students and parents. Yet many of these celebrated scholars lack access to sophisticated laboratories or research facilities; they mass-produce review articles, frequently co-authored with numerous partners across the globe, with each manuscript padded heavily with self-citations.
The competition for citations is driven by incentives. Rankings such as the National Institutional Ranking Framework, Quacquarelli Symonds, and Times Higher Education place heavy emphasis on publication and citation measures. Promotions, funding opportunities, and institutional reputation are tied to citation metrics. Private universities frequently subsidise open-access fees for faculty, since open-access journals generally yield higher citation rates. Senior professors may demand co-authorship from junior colleagues or students as a condition for mentorship, while visiting professorships are occasionally extended to international researchers on the condition that they list the institution's name on all publications. The outcomes are evident: in 2023, India ranked third worldwide in retractions, with 2,853 papers retracted in a single year. The countries that produce the most top 2% scientists also have the highest retraction rates.
A business model lies beneath the exaltation of citation counts. Open-access publishing has become a profitable sector, frequently bolstered by ranking systems that prioritise quantity over quality. Publishers profit from the steady influx of articles, while universities allocate funds to cover publication fees. The Stanford-Elsevier database operates within this framework and reinforces the cycle. Even the architects of these systems are not immune to their contradictions. Professor John Ioannidis, the creator of the Stanford-Elsevier list, is consistently ranked among the world's top 50 scientists by his own methodology. He published 71 papers in 2023, 73 in 2024, and 51 in 2025. His prolific output mirrors the hyper-productivity his system rewards, raising questions about potential conflicts of interest.
The misuse of citation metrics is a textbook example of Goodhart's Law: "when a measure becomes a target, it ceases to be a good measure". Citations were originally intended as a crude indicator of how far research influenced subsequent work. Today they more often reflect visibility, networks, and publishing strategy than genuine advances. Review articles, which consolidate existing knowledge rather than present new findings, attract disproportionate numbers of citations. Predatory journals with inadequate peer review inflate citation counts just as much as high-quality publications do. In this environment, genuine innovation risks being overshadowed.
Citations are not without value, though. They help trace influence and identify concentrations of active scholarship. But they cannot substitute for quality. Quality in research is not reducible to quantitative metrics; it demands originality, rigour, integrity, and societal relevance. It is measured by how one's work shapes the field, solves problems, or guides the next generation of academics. Treating citations as synonymous with quality pushes researchers to prioritise short-term visibility over long-term contributions, to churn out papers rather than pursue risky but potentially transformative ideas, and to chase popular topics rather than neglected but socially vital ones. It penalises those who invest time in substantial work that may not generate citations immediately but could have an enduring impact.
The academic community must re-evaluate how it measures excellence. Thoughtful assessment by specialists and qualitative peer review remain the most effective ways to judge the novelty, rigour, and overall impact of research. Research should also be judged on how effectively it tackles important problems in health, the environment, technology, and equity. Equally important is recognising scientists' contributions to building ethical research cultures and to educating future academics. Quantitative indicators can augment evaluation, but they must always be contextualised and can never serve as absolute measures of quality.
Policymakers are beginning to recognise these challenges. Recent studies highlight growing concern about metric gaming and its erosion of public trust in science. To remain credible, academia must relinquish its obsession with citations and return to the fundamentals of research: curiosity, rigour, and responsibility. Until then, lists and rankings will stand as monuments not to excellence but to the distortions of a system that confuses visibility with merit.
Biju Dharmapalan is the Dean, Academic Affairs, Garden City University, Bengaluru, and adjunct faculty at the National Institute of Advanced Studies, Bengaluru