My paper was probably reviewed by AI – and that’s a serious problem

Our paper was rejected on the basis of reviewer comments that were vague, formulaic, often irrelevant and occasionally inaccurate, says Seongjin Hong

June 24, 2025
[Image: a robot reading, illustrating AI peer review. Source: PhonlamaiPhoto/iStock]

As an environmental scientist with over 15 years of experience and more than 150 peer-reviewed publications, I am familiar with the ups and downs of academic publishing. But there was something distinctly odd about the rejection decision that I received from a prominent international journal last month.

After an initial major revision decision, we had carefully addressed each of the reviewers’ concerns and submitted a thoroughly revised manuscript. The first-round comments were reasonable, and we responded in detail to further improve the clarity and scientific rigour of the work. Yet our paper was ultimately rejected, primarily because of one reviewer’s unexpectedly negative second-round report.

What troubled me was not just the tone, but the nature of the critique. The reviewer introduced entirely new concerns that had not been previously raised. Moreover, the comments were formulaic, vague, often irrelevant and occasionally inaccurate, with little engagement with the actual content of our manuscript. Remarks such as “more needed” and “needs to be validated” lacked technical rationale or data-based feedback.

Our study is in the field of environmental chemistry, focused on the field application of a novel environmental analysis method. However, the reviewer criticised it for failing to provide a “comprehensive ecological assessment” and for “not examining the effects on animal behaviours such as feeding or mating” – as if it were a behavioural ecology paper. The reviewer also claimed that “repeatability of chemical analysis isn’t fully explained” even though this was addressed in multiple sections.


Moreover, the review even contradicted itself. It began by acknowledging that “the authors replied to the questions raised”, but then concluded, without coherent reasoning, that “I cannot recommend this work.”

At that moment, I began to suspect that the review had been written, at least in part, by an AI tool such as ChatGPT. As an associate editor for an environmental science journal myself, I am seeing an increasing number of reviews that appear to be written by AI – though this is rarely disclosed upfront. They often sound superficially articulate, but they lack depth, context and a sense of professional accountability.


Specifically, in my experience, AI-generated reviews often suffer from five key weaknesses. They rely on vague, overly general language. They misrepresent the paper’s scope through abstract criticisms. They flag issues that have already been addressed. They exhibit inconsistent or contradictory logic. And they lack the tone, empathy and nuance of a thoughtful human reviewer.

To confirm my suspicions, I compared the reviewer’s comments to a sample review that I generated with a large language model. The similarity was striking. The phrasing, once again, was templated and disengaged from the actual content of our manuscript. And, once again, the review contained keyword-driven summaries, baseless assertions and flawed reasoning. It felt less like a thoughtful peer review and more like the automated response that it was.

As an editor, I also know how difficult it can be to recruit qualified reviewers. Many experts are overburdened, and the temptation to use AI tools to speed up the process is growing. But superficial logic is no substitute for scientific judgement. So I raised my concerns with the editor-in-chief of the journal, providing detailed rebuttals and supporting evidence.

The editor replied courteously but cautiously: “It is highly unlikely the reviewer used AI,” they said. “If you can address all concerns, I recommend resubmitting as a new manuscript.” After three months of effort invested in revision and response, we were back at the starting line.


The decision – and the possibility that it was influenced by inappropriate use of AI – left me deeply disappointed. Some might dismiss it as bad luck, but science should not depend on luck. Peer review must be grounded in fairness, transparency and expertise.

This is not a call to ban AI from the peer review process entirely. These tools can assist reviewers and editors by identifying inconsistencies, spotting plagiarism or improving presentation. However, using them to produce entire peer reviews risks undermining the very purpose of the process. Their use must be transparent and strictly secondary.

Reviewers should not rely uncritically on AI-generated text, and editors must learn to recognise reviews that lack substance or coherence. Publishers, too, have a responsibility to develop mechanisms for detecting AI-generated content and to establish clear disclosure policies. Nature’s announcement on 16 June that it will begin publishing all peer review comments and author responses alongside accepted papers represents one potential path forward for publishers to restore transparency and accountability.

If peer review becomes devalued by undisclosed and substandard automation, we risk losing the trust and rigour that scientific credibility depends on. Science and publishing must move forward with technology, but not without responsibility. Transparent, human-centred peer review remains essential.


Seongjin Hong is a full professor at Chungnam National University, South Korea.


Reader's comments (19)

You need evidence to make such claims. Any experienced academic has had manuscripts rejected based on much less than you describe. We can't blame undefined "AI" for everything!
I agree. Suspicion is not enough really.
Further: is it "his" paper or "our" paper? "Associate editor" or "editor"? Was this written by AI and not fact-checked?
AI has advanced significantly, but at least for now, it still falls short compared to human reviewers. Reviewers and editors must take greater responsibility and should not accept AI-generated feedback uncritically.
Thank you for this insightful piece. It’s a timely reminder of the importance of recognizing both the role and limits of AI in scientific publishing.
I believe that more voices need to speak out about both the light and the shadow sides of this emerging trend. While AI has undoubtedly brought us many advantages, we must not overlook the potential harms and unintended consequences it can also bring. Some may question this article by asking, “Is there concrete proof that the review was generated by AI?” Of course, evidence based on facts is important, but I also believe that insights gained through years of experience are equally valuable and should not be dismissed. There is a reason we call such individuals veterans in their field. Thank you for this thoughtful piece. It reminded me of the importance of using AI tools with greater caution, transparency, and responsibility.
"There is a reason we call such individuals veterans in their field." Well I am not sure that the author would be exactly delighted with this description of their status. I think we are a bit over the top comparing a bad review fora research paper with someone who has served in the armed forces and risked their lives. There is a danger that we really do take ourselves a bit too seriously, in my view. As Kissinger famously said, the reason academic disputes were so bitter is because the stakes are so low.
I disagree with this piece only to the extent that, in my view, it is absolutely the case that AI should be nowhere near the peer review process and should be absolutely and 100% banned. Of course the difficulty is: how would one enforce that? Publishing reviews (and reviewer names) might help, but it's not a complete solution. I often wonder why someone would bother to use AI to review a paper. If you don't want to do it yourself, just say no. The idea (hinted at, if not quite stated here) that it was the editor inventing a reviewer, rather than a human reviewer using AI, had not even occurred to me. All I can say is that a journal whose editors use AI to review will not stay a top journal for long. I hope the author appealed above the head of the handling editor they were working with.
"Publishing reviewers (and reviewer names) might help, but it's not a complete solution. " Peer review is anonymous and for very good reasons.
Yes, AI is excellent but not quite there just yet. But it will be very soon, believe you me. The 'super-intelligent' versions are on their way. So all these glitches will be ironed out in due course, believe you me. Remember, the great Prof Stephen Hawking predicted what AI will be capable of undertaking in just a few short years from now.
Sounds like it will soon make an excellent peer reviewer!
Well yes like that 'Murderbot' character on AppleTV. He is very good at those sort of things.
I am watching that as well. It seems a rather uncanny analogy for my own Department.
"Reviewers should not rely uncritically on AI-generated text, and editors must learn to recognise reviews that lack substance or coherence." This is the main point. People in general should not rely uncritically AI-generated text. The key is looking critically at tasks needing a human eye. An AI review will write what is most likely to be said about an article, which is not helpful seeing as most existing reviews probably include the authors received review " vague, formulaic, often irrelevant and occasionally inaccurate". Critical Thinking is time consuming and costly, but delivers a worthwhile result.
Is this not the trend though? From students cheating on essays, to editors producing reviews with AI, to people writing bits of grant applications they consider unimportant or boilerplate, individuals are trying to use AI to produce outputs without time-consuming and costly critical thinking, when in each case the critical thinking is the point and the output is not.
Well you have put your finger on it exactly. But what is at stake in the larger view, I think, is the concept and practice of what we think of as 'critical thinking'. My concern is that the use of generative AI will actually change or transform (not necessarily in a good way) the very object of knowledge itself and the critical (and indeed the creative) process itself. Rather than an assistant, it is becoming, or will become, a form of "co-author", especially as it is able to access more and more data from academic publishers and the wider world. As I understand it, the AI also "learns" from this process. I think it will become more obvious in the creative practices and creative industries first, as that is where the money is and notions of originality are complex.
As Captain Mainwaring used to say, "I think we are getting within the realms of fantasy now".
"The key is looking critically at tasks needing a human eye." As the AI becomes more sophisticated then the number of those tasks will become fewer, at least if we believe the alarmists.
"Our paper was rejected on the basis of reviewer comments that were vague, formulaic, often irrelevant and occasionally inaccurate, says Seongjin Hong" Shock Horror!! Get over it Seongjin!
