
My paper was probably reviewed by AI – and that’s a serious problem

Our paper was rejected on the basis of reviewer comments that were vague, formulaic, often irrelevant and occasionally inaccurate, says Seongjin Hong

June 24, 2025
A robot reading, illustrating AI peer reviewing
Source: PhonlamaiPhoto/iStock

As an environmental scientist with over 15 years of experience and more than 150 peer-reviewed publications, I am familiar with the ups and downs of academic publishing. But there was something distinctly odd about the rejection decision that I received from a prominent international journal last month.

After an initial major revision decision, we had carefully addressed each of the reviewers’ concerns and submitted a thoroughly revised manuscript. The first-round comments were reasonable, and we responded in detail to further improve the clarity and scientific rigour of the work. Yet our paper was ultimately rejected, primarily because of one reviewer’s unexpectedly negative second-round report.

What troubled me was not just the tone, but the nature of the critique. The reviewer introduced entirely new concerns that had not been previously raised. Moreover, the comments were formulaic, vague, often irrelevant and occasionally inaccurate, with little engagement in the actual content of our manuscript. Remarks such as “more needed” and “needs to be validated” lacked technical rationale or data-based feedback.

Our study is in the field of environmental chemistry, focused on the field application of a novel environmental analysis method. However, the reviewer criticised it for failing to provide a “comprehensive ecological assessment” and for “not examining the effects on animal behaviours such as feeding or mating” – as if it were a behavioural ecology paper. The reviewer also claimed that “repeatability of chemical analysis isn’t fully explained” even though this was addressed in multiple sections.

Moreover, the review even contradicted itself. It began by acknowledging that “the authors replied to the questions raised”, but then concluded, without coherent reasoning, that “I cannot recommend this work.”

At that moment, I began to suspect that the review had been written, at least in part, by an AI tool such as ChatGPT. As an associate editor for an environmental science journal myself, I am seeing an increasing number of reviews that appear to be written by AI – though this is rarely disclosed upfront. They often sound superficially articulate, but they lack depth, context and a sense of professional accountability.

Specifically, in my experience, AI-generated reviews often suffer from five key weaknesses. They rely on vague, overly general language. They misrepresent the paper’s scope through abstract criticisms. They flag issues that have already been addressed. They exhibit inconsistent or contradictory logic. And they lack the tone, empathy, or nuance of a thoughtful human reviewer.

To confirm my suspicions, I compared the reviewer’s comments to a sample review that I generated with a large language model. The similarity was striking. The phrasing, once again, was templated and disengaged from the actual content of our manuscript. And, once again, the review contained keyword-driven summaries, baseless assertions and flawed reasoning. It felt less like a thoughtful peer review and more like the automated response that it was.

As an editor, I also know how difficult it can be to recruit qualified reviewers. Many experts are overburdened, and the temptation to use AI tools to speed up the process is growing. But superficial logic is no substitute for scientific judgement. So I raised my concerns with the editor-in-chief of the journal, providing detailed rebuttals and supporting evidence.

The editor replied courteously but cautiously: “It is highly unlikely the reviewer used AI,” they said. “If you can address all concerns, I recommend resubmitting as a new manuscript.” After three months of effort invested in revision and response, we were back at the starting line.

The decision – and the possibility that it was influenced by inappropriate use of AI – left me deeply disappointed. Some might dismiss it as bad luck, but science should not depend on luck. Peer review must be grounded in fairness, transparency and expertise.

This is not a call to ban AI from the peer review process entirely. These tools can assist reviewers and editors by identifying inconsistencies, spotting plagiarism or improving presentation. However, using them to produce entire peer reviews risks undermining the very purpose of the process. Their use must be transparent and strictly secondary.

Reviewers should not rely uncritically on AI-generated text, and editors must learn to recognise reviews that lack substance or coherence. Publishers, too, have a responsibility to develop mechanisms for detecting AI-generated content and to establish clear disclosure policies. Nature’s announcement on 16 June that it will begin publishing all peer review comments and author responses alongside accepted papers represents one potential path forward for publishers to restore transparency and accountability.

If peer review becomes devalued by undisclosed and substandard automation, we risk losing the trust and rigour that scientific credibility depends on. Science and publishing must move forward with technology, but not without responsibility. Transparent, human-centred peer review remains essential.

Seongjin Hong is a full professor at Chungnam National University, South Korea.


Reader's comments (24)

You need evidence to make such claims. Any experienced academic has had manuscripts rejected based on much less than you describe. We can't blame undefined "AI" for everything!
I agree. Suspicion is not enough really. The article says "probably" and the journal is not named. In which case, this piece should not have been published by THES in my view.
Further: is it "his" paper or "our" paper? "Associate editor" or "editor"? Was this written by AI and not fact-checked?
Good point Graff, too much supposition and smear here for my liking. Either make the allegation or shut up in my view.
AI has advanced significantly, but at least for now, it still falls short compared to human reviewers. Reviewers and editors must take greater responsibility and should not accept AI-generated feedback uncritically.
Thank you for this insightful piece. It’s a timely reminder of the importance of recognizing both the role and limits of AI in scientific publishing.
I believe that more voices need to speak out about both the light and the shadow sides of this emerging trend. While AI has undoubtedly brought us many advantages, we must not overlook the potential harms and unintended consequences it can also bring. Some may question this article by asking, “Is there concrete proof that the review was generated by AI?” Of course, evidence based on facts is important, but I also believe that insights gained through years of experience are equally valuable and should not be dismissed. There is a reason we call such individuals veterans in their field. Thank you for this thoughtful piece. It reminded me of the importance of using AI tools with greater caution, transparency, and responsibility.
Comment withdrawn
I disagree with this piece only to the extent that I think it is absolutely the case that AI should be nowhere near the peer review process and should absolutely and 100% be banned. Of course the difficulty is how would one enforce that? Publishing reviews (and reviewer names) might help, but it's not a complete solution. I often wonder why someone would bother to use AI to review a paper. If you don't want to do it yourself, just say no. The idea (hinted at, if not quite stated here) that it was the editor, inventing a reviewer, rather than a human reviewer, using AI had not even occurred to me. All I can say is that a journal whose editors use AI to review will not stay a top journal for long. I hope the author appealed above the head of the handling editor they are working with.
"Publishing reviewers (and reviewer names) might help, but it's not a complete solution. " Peer review is anonymous and for very good reasons.
Comment withdrawn
Sounds like it will soon make an excellent peer reviewer!
Well yes like that 'Murderbot' character on AppleTV. He is very good at those sort of things.
I am watching that as well. It seems a rather uncanny analogy for my own Department.
"Reviewers should not rely uncritically on AI-generated text, and editors must learn to recognise reviews that lack substance or coherence." This is the main point. People in general should not rely uncritically on AI-generated text. The key is looking critically at tasks needing a human eye. An AI review will write what is most likely to be said about an article, which is not helpful, seeing as the review the authors received was, like many existing reviews probably are, "vague, formulaic, often irrelevant and occasionally inaccurate". Critical thinking is time-consuming and costly, but delivers a worthwhile result.
Is this not the trend though? From students cheating on essays, to editors producing reviews with AI, to people writing bits of grant applications they consider unimportant or boilerplate, individuals are trying to use AI to produce outputs without time consuming and costly critical thinking, when in each case the critical thinking is the point and the output is not.
Comment withdrawn.
As Captain Mainwaring used to say, "I think we are getting within the realms of fantasy now"
Comment withdrawn. Legal reasons
"Our paper was rejected on the basis of reviewer comments that were vague, formulaic, often irrelevant and occasionally inaccurate, says Seongjin Hong" Hmmmmm. Case proven! I think not M'Lud!!
The journal claims 'peer review'. An AI tool is not a peer. End of story. The journal is committing academic fraud - please name the cheat so we can all avoid it in future.
You should not make a serious allegation such as this without evidence. There is no evidence here, just a suspicion, and the journal has rejected the allegation that the person used AI in this case. Please do be careful. If the author wishes to make the charge of academic fraud publicly then he should do so. No-one else is in a position to make this allegation but him or his co-authors. At the moment he is having his cake and eating it.
Roger you really should not accuse someone of being a "cheat" in your position. Let the author of the article make this allegation if he feels justified, but there is no real evidence only a suspicion based on a few verbal phrases and expressions. You are making a serious allegation based on someone else's comments (hearsay) which are at best tendentious and which you are certainly not in a position to substantiate unless you were one of the co-authors.
Well yes exactly, note the weasel words in the article title, "My paper was probably reviewed". "Probably" i.e "as far as one knows or can tell". I am surprised that THES would allow this tbh. "Probably" doth butter no parsnips but might evade a legal action from the journal in question.