Times Higher Education

Protecting the integrity of university rankings

Addressing rare cases of attempted manipulation is built into the DNA of THE’s rankings. But we will go further, says Phil Baty

Published on October 3, 2025
Last updated October 3, 2025
Image: magnifying glass and file icons (source: iStock/tadamichi)

Success in global rankings brings great rewards for universities: it makes them more attractive to top global talent; it opens up new opportunities for partnerships and collaborations with other ranked universities; it can attract investment, unlock scholarships and – in many countries – provide performance-based funding boosts.

Earning a place at the top international table for knowledge creation and innovation can also be a tremendous source of national pride, winning approval from politicians and starting a virtuous circle in which success drives further success.

But as a responsible ranker, 探花视频 has an obligation – a duty – to ensure that success in our rankings is built on solid, long-term foundations; that it is built on reality, for the good of the institution and the nation it represents, with lasting, real-world impact.

We need to make sure that gains in the rankings cannot be achieved by “gaming” the system, and that we weed out perverse incentives to manipulate the data. While the vast majority of universities worldwide are built on the ideals of evidence and truth (Harvard’s very motto is, after all, “Veritas”), a tiny minority of institutions, or individuals within them, sadly seem ready and willing to attempt to massage and manipulate their way to rankings gains – even though such short-term, superficial, cosmetic success serves no one’s interests, least of all the universities’ own.

So THE can confirm today that we are stepping up our ongoing “rankings integrity” reviews for our World University Rankings, to openly explore the growing range of ways in which academics and institutions might, sadly, attempt to “game” the rankings, and how we can respond to potential abuses.

First, we should be clear that we believe the THE World University Rankings is the most rigorous and robust ranking system in the world – that’s why it is so widely trusted by students, academics, university leaders and governments worldwide. Our balanced approach, with a range of data sources and a comprehensive basket of 17 separate performance indicators ranging across the teaching environment, research quality, international outlook and industry relations, makes manipulation particularly tricky and highly unlikely.

We use a combination of three distinct data sources: data from THE’s invitation-only, statistically representative annual academic reputation survey; some core institutional data provided by the universities themselves, such as staff and student numbers and financial information; and bibliometric data (18 million outputs with 170 million citations for the 2026 edition of the World University Rankings) from our partner Elsevier’s Scopus database.

In this post, we focus on the bibliometric data for the world rankings, provided by Elsevier.

First we must be clear: our methodology rewards quality over quantity when it comes to research publication data. We do have a metric that looks at “research productivity” (a simple papers-per-faculty ratio), but its weighting (5.5 per cent) is far lower than the combined weight of the basket of research metrics that disregard the quantity of published research outputs in favour of measures of their quality and impact (a combined weighting of 30 per cent).
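
To make that arithmetic concrete, here is a minimal sketch of how a small productivity weight combines with a larger quality basket. The indicator names and scores are hypothetical; only the 5.5 per cent and 30 per cent weights come from the description above, and the full methodology combines many more indicators.

```python
# Illustrative only: a small productivity weight versus a larger quality basket.
# Indicator names and scores are invented; the 5.5% and 30% weights are the
# figures quoted in the paragraph above.

def weighted_score(indicators: dict[str, float], weights: dict[str, float]) -> float:
    """Combine normalised indicator scores (0-100) using fractional weights."""
    return sum(indicators[name] * weights[name] for name in weights)

weights = {
    "research_productivity": 0.055,   # papers per faculty member
    "research_quality_basket": 0.30,  # quality- and impact-based research metrics
}

# A hypothetical institution that doubles its output but lets quality slip
before = {"research_productivity": 50.0, "research_quality_basket": 70.0}
after = {"research_productivity": 90.0, "research_quality_basket": 55.0}

print(weighted_score(before, weights))  # 23.75
print(weighted_score(after, weights))   # 21.45 -- volume alone does not pay off
```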

This guards against incentives to churn out high volumes of lower-quality papers – by, for example, salami-slicing findings, publishing fraudulent results or abusing artificial intelligence to cut production corners – and helps discourage predatory publishers exploiting academics’ desire to publish.

We should also say upfront that we do not use journal-based metrics, such as Journal Impact Factors, in any of our assessments and that we normalise all our bibliometric data to take account of the huge variations in citation volumes between different subjects and indeed, publication types and publication years.
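
As a rough illustration of what such normalisation does, the sketch below divides each paper's citation count by the average for papers sharing its subject, publication year and document type. This shows the general principle only, not THE's actual implementation, and the sample data are invented.

```python
# Minimal sketch of subject/year/document-type normalisation: each paper's
# citations are divided by the mean citation count of its peer group.
# Sample data are invented for illustration.

from collections import defaultdict

papers = [
    {"id": 1, "subject": "oncology",    "year": 2022, "doc_type": "article", "citations": 40},
    {"id": 2, "subject": "oncology",    "year": 2022, "doc_type": "article", "citations": 10},
    {"id": 3, "subject": "mathematics", "year": 2022, "doc_type": "article", "citations": 8},
    {"id": 4, "subject": "mathematics", "year": 2022, "doc_type": "article", "citations": 2},
]

# Mean citations per (subject, year, doc_type) cell
totals, counts = defaultdict(float), defaultdict(int)
for p in papers:
    key = (p["subject"], p["year"], p["doc_type"])
    totals[key] += p["citations"]
    counts[key] += 1
baselines = {k: totals[k] / counts[k] for k in totals}

for p in papers:
    key = (p["subject"], p["year"], p["doc_type"])
    p["normalised_impact"] = p["citations"] / baselines[key]
    print(p["id"], round(p["normalised_impact"], 2))
# Paper 3 (8 citations in a low-citation field) scores the same as paper 1
# (40 citations in a high-citation field): both sit at 1.6x their field baseline.
```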

However, it is a matter of record that research metrics manipulation is a mounting issue, given the links between bibliometric metrics, personal reputation and success with grants and careers, as well as in global rankings. The latest in a welcome list of initiatives to attract attention is the Research Integrity Risk Index produced by Lokman Meho at the American University of Beirut. It was set up “in response to growing concerns about how global university rankings incentivise volume- and citation-based publishing at the expense of scholarly integrity” and has generated widespread interest.

It focuses on three areas that could signal a risk of abuse: the percentage of an institution’s publications appearing in journals that were recently removed or suspended from Scopus or Web of Science for failing to meet quality or publishing standards; the number of retracted articles per 1,000 publications, capturing evidence of serious methodological, ethical or authorship violations; and the percentage of citations of an institution’s articles that originate from the same institution, highlighting citation practices aimed at artificially boosting institutional metrics rather than reflecting genuine scholarly influence.
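
For illustration, the sketch below computes those three signals for a single institution from toy data. The field names, data structures and sample values are assumptions made for the example; the index itself defines its own sources, definitions and weightings.

```python
# Rough sketch of the three risk signals described above, computed per
# institution. All field names and sample values are hypothetical.

def integrity_signals(pubs: list[dict], citations: list[dict]) -> dict:
    """pubs: one dict per publication; citations: one dict per citation link."""
    n_pubs = len(pubs)
    delisted_share = sum(p["journal_delisted"] for p in pubs) / n_pubs * 100
    retractions_per_1000 = sum(p["retracted"] for p in pubs) / n_pubs * 1000
    self_cites = sum(c["citing_institution"] == c["cited_institution"] for c in citations)
    self_citation_share = self_cites / len(citations) * 100
    return {
        "pct_in_delisted_journals": delisted_share,
        "retractions_per_1000_papers": retractions_per_1000,
        "pct_institutional_self_citations": self_citation_share,
    }

pubs = [
    {"journal_delisted": False, "retracted": False},
    {"journal_delisted": True,  "retracted": False},
    {"journal_delisted": False, "retracted": True},
    {"journal_delisted": False, "retracted": False},
]
cites = [
    {"citing_institution": "U1", "cited_institution": "U1"},
    {"citing_institution": "U2", "cited_institution": "U1"},
]
print(integrity_signals(pubs, cites))
# {'pct_in_delisted_journals': 25.0, 'retractions_per_1000_papers': 250.0,
#  'pct_institutional_self_citations': 50.0}
```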

At THE, we take very seriously our responsibility to disincentivise poor practices.

In terms of Meho’s three areas of integrity risk, what is THE’s position?

Discontinued journals

In the case of academic journals being removed or suspended from the Scopus database for failing to meet the standards required, our position is simple: we have always excluded them and will continue to do so for all of our rankings analyses.
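
In practice this is a straightforward exclusion rule. The sketch below assumes a list of source identifiers for titles removed or suspended from the index and drops any publication that matches it; the column names and sample data are invented for illustration.

```python
# Minimal sketch of the exclusion rule for delisted journals. Scopus publishes
# a list of discontinued sources; the identifiers and columns here are invented.

import pandas as pd

pubs = pd.DataFrame({
    "paper_id":  [101, 102, 103],
    "source_id": ["S1", "S2", "S3"],   # journal/source identifiers
})
delisted_sources = {"S2"}              # sources removed for quality failures

clean = pubs[~pubs["source_id"].isin(delisted_sources)]
print(f"Removed {len(pubs) - len(clean)} paper(s) published in delisted journals")
```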

Retracted articles

With regard to retracted articles, our position is also very simple and very clear: they are not included in our analysis.

When an academic publisher issues a retraction notice, this is indexed in Scopus as an “erratum” document type and linked to the original article. The original publication in Scopus remains in the database but is marked as “retracted”, making it easy to filter out of the rankings analysis.

This process ensures transparency and maintains the integrity of the publication record while clearly indicating articles that have been retracted.

THE analyses five document types for the world rankings – articles, reviews, conference proceedings, books and book chapters – with all retracted documents filtered out.
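
A minimal sketch of those two filters, keeping only the five counted document types and dropping anything flagged as retracted, might look like the following. The column names and flags are assumptions about how an export could be labelled, not the actual Scopus schema.

```python
# Sketch of the document-type and retraction filters. Column names and sample
# records are invented for illustration.

import pandas as pd

COUNTED_TYPES = {"Article", "Review", "Conference Paper", "Book", "Book Chapter"}

records = pd.DataFrame({
    "eid":          ["e1", "e2", "e3", "e4"],
    "doc_type":     ["Article", "Editorial", "Review", "Article"],
    "is_retracted": [False, False, False, True],
})

in_scope = records[records["doc_type"].isin(COUNTED_TYPES) & ~records["is_retracted"]]
print(in_scope["eid"].tolist())  # ['e1', 'e3'] -- editorial and retracted article excluded
```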

Self-citations

The issue of self-citations is interesting because, while some people may indeed cite their own work or the work of colleagues in their own institution as a way to boost their citation profile, self-citations are often an entirely legitimate signal of incremental, innovative and ground-breaking research.

Research is, of course, often an iterative process, with small steps of discovery made in the same field or related fields. Many researchers, not least those in pioneering new arenas, build on their previous research to push the boundaries of discovery. So it can be entirely legitimate for researchers to cite themselves or the colleagues who carried out the previous iteration of the research in question – indeed, it can be a very healthy practice when there are clear signals of research being gradually augmented through a sequence of publications.

Self-citations extend beyond researchers citing their own work; they also appear through citation networks, commonly known as citation cartels, or when researchers from the same institution cite each other at a rate that becomes difficult to justify.

To help ensure that abusive self-citations are not rewarded in the rankings, THE developed a new metric – “research influence” – which examines not just the citations to each paper but the quality of the papers from which those citations come.
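
The sketch below illustrates the general idea of weighting citations by the standing of the citing paper rather than counting every citation equally. It is emphatically not THE’s “research influence” formula: the discount applied to institutional self-citations here is a purely hypothetical device for the example.

```python
# Loose illustration of quality-weighted citations: a citation from a highly
# cited paper contributes more than one from a paper that is itself rarely
# cited. The 0.5 self-citation discount is a hypothetical illustration only.

def influence(citing_papers: list[dict]) -> float:
    """citing_papers: papers citing the target, each with its own
    field-normalised impact and a flag for institutional self-citation."""
    score = 0.0
    for p in citing_papers:
        weight = p["normalised_impact"]   # standing of the citing paper
        if p["same_institution"]:
            weight *= 0.5                 # hypothetical discount, for illustration
        score += weight
    return score

cites = [
    {"normalised_impact": 2.0, "same_institution": False},
    {"normalised_impact": 0.2, "same_institution": True},
    {"normalised_impact": 0.2, "same_institution": True},
]
print(influence(cites))  # 2.2 -- two low-impact self-citations add little
```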

Some suggest that all self-citations should be removed entirely from the rankings’ analyses to prevent abuse, but that would arbitrarily exclude some of the most innovative and impactful research in the field. We prefer a more nuanced approach, one that recognises great science, but we will be consulting over the coming months to explore whether we need to do more to further reduce any perverse incentives to self-cite – or, indeed, to indulge in any other forms of manipulation.

We are working closely with our data partners at Elsevier to analyse and monitor the full population of ranked universities for integrity risks and red flags – those with unusually high proportions of retracted articles, self-citations or publications in discontinued journals, or indeed, those with high combinations of these elements. We can pick out all outliers for special scrutiny and investigation.
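
One simple way to surface such outliers, sketched below, is to compute z-scores for each risk signal across the ranked population and flag institutions that sit far above the mean on any one signal, or moderately above on several at once. The thresholds, column names and toy values are illustrative assumptions, not THE’s actual screening rules.

```python
# Minimal sketch of outlier screening across a population of institutions.
# Thresholds and data are illustrative assumptions only.

import pandas as pd

signals = pd.DataFrame({
    "institution":           ["U1", "U2", "U3", "U4", "U5"],
    "pct_delisted_journals": [0.2, 0.3, 0.1, 6.0, 0.4],
    "retractions_per_1000":  [0.5, 0.4, 0.6, 3.0, 0.5],
    "pct_self_citations":    [12.0, 15.0, 11.0, 34.0, 13.0],
})
cols = ["pct_delisted_journals", "retractions_per_1000", "pct_self_citations"]

# z-score each signal, then flag: one extreme value, or several elevated ones
z = (signals[cols] - signals[cols].mean()) / signals[cols].std()
signals["flagged"] = (z > 2).any(axis=1) | ((z > 1.5).sum(axis=1) >= 2)

print(signals.loc[signals["flagged"], "institution"].tolist())  # ['U4'] in this toy data
```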

We are delighted that the vast majority of universities display high-integrity publication behaviour. But we will remain highly vigilant.

We will be exploring all these issues – best practices, methodological developments to reduce perverse incentives and opportunities for “gaming”, and indeed potential punitive measures – in open forums such as the THE World Academic Summit and our world summit series, ahead of work on the 2027 edition of the World University Rankings, due to be published in autumn 2026.

Watch this space.

Phil Baty is chief global affairs officer at Times Higher Education.
