Last month, I noted my strong suspicions that a manuscript of mine had been reviewed by AI. However, it turns out that AI is not only reviewing papers. As the associate editor of a scientific journal, I am also seeing mounting evidence that it is writing papers too – or, rather, that it is plagiarising already published ones.
In recent months, I have handled more than 20 manuscripts that, at first glance, appear to present new research but that closer inspection reveals to be AI-generated rewrites of previously published studies. The structure, methods and even figures and tables closely mirror those of the originals, but the papers actually have abnormally low text similarity scores. While genuine manuscripts typically show about 10–15 per cent similarity in iThenticate, these fakes register only around 2–5 per cent, as if they were deliberately designed to evade detection.
No doubt those low scores are partly explained by the awkward or vague phrasing these papers contain, which replaces commonly accepted scientific terms with less precise expressions. Moreover, many references appear to be inserted at random, and some are unrelated to the study’s topic.
In nearly every case I have seen, the submission is attributed to a single author (or, at most, two authors), who uses a non-institutional email address and lists a well-known university as their affiliation. However, when I try to verify the author’s identity, no such person exists. Indeed, in one case, a reviewer from the country where the study was conducted noted that foreign researchers are not permitted to collect environmental samples alone in that country, as the paper claimed that they did. These are not careless mistakes; they are deliberate acts.
But what is driving them is mysterious. Traditionally, plagiarism is motivated by desperation to pad out a CV in search of a job or a promotion – and is often conducted with the help of a standard for-profit paper mill. But why would such a person submit a paper under a fake name? How would that help them?
That is why I think something more systematic and more malicious may be occurring. I wonder if it is a coordinated effort to flood journals with misleading science, create confusion, and erode trust in specific fields or publishers. This may sound far-fetched, but given the current information landscape, it cannot be ruled out.
Alternatively, I strongly suspect, though I cannot yet prove, that someone may be developing an AI program and using these submissions to test its performance. Can the AI fool journal editors and reviewers into thinking the paper is new and genuine?
It seems likely that it can – at least some of the time. The cases I have encountered were relatively easy to detect because they involved the duplication of a single paper and were rejected immediately. However, if a manuscript is constructed using fragments of multiple papers, or a review article is assembled by mixing content from dozens of sources, detection becomes far more complicated.
Editors and reviewers are skilled at assessing the scientific quality of manuscripts, not at detecting AI-generated fakes. Without training in the latter, they will not necessarily find it easy – especially under heavy editorial workloads or when they lack deep familiarity with the topic in question. The burden falls particularly heavily on already overextended peer reviewers, who work on a voluntary basis.
We need help. After all, the consequences of failing to detect AI-written papers are serious. Journals themselves risk damage to their reputations and even removal from citation and other scientific databases, affecting not only the journal but also its authors, editors and affiliated institutions.
Moreover, academic publications are not just a measure of individual or institutional achievement; they form the foundation for policy decisions and public trust. If fake research slips through and accumulates in the scientific record, the consequences go beyond individual misconduct and threaten the integrity of evidence-based decision-making across society.
We need structural solutions. Publishers should implement stronger safeguards at the submission stage, including AI-authorship detection tools and blacklists of known fraudulent identities. It is also important to introduce identity verification procedures, such as requiring authors’ ORCID IDs, to ensure their credibility. And editorial teams need better training to recognise warning signs.
It is now abundantly evident that AI is disrupting the flow of scientific publishing in multiple ways. There are even reports of researchers embedding hidden instructions in their manuscripts, asking AI reviewers for positive reviews. The research community must strengthen its collective vigilance. Raising awareness of confirmed cases of AI fraud and actively sharing information among editors, reviewers and researchers is crucial to building a culture of continuous monitoring and rapid response.
Failure to adopt such measures will ultimately undermine the foundation of scientific trust.
Seongjin Hong is a full professor at Chungnam National University, South Korea.