ChatGPT drives rise of ‘flowery language’ in journal abstracts

Increased use of words such as ‘delves’, ‘underscores’ and ‘showcasing’ identified since popularisation of large language models

July 2, 2025

ChatGPT has had an “unprecedented” impact on scientific writing, leading to a marked increase in “flowery” language, a new paper has found.

To determine the extent of usage of large language models (LLMs), researchers from the University of Tübingen and Northwestern University analysed more than 15 million biomedical abstracts from the PubMed library, drawn from before and after the launch of ChatGPT in November 2022. They discovered that LLMs have resulted in certain words featuring much more regularly.

These were predominantly verbs, such as “delves”, “underscores” and “showcasing”, all of which saw much sharper increases in usage compared with previous years.

Previously this so-called excess vocabulary had mainly been seen in content words. For example, during the Covid-19 pandemic, nouns such as “respiratory” or “remdesivir” appeared in studies much more regularly.
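The before-and-after frequency comparison the researchers describe can be sketched in a few lines. This is a hedged illustration only: the toy corpora, the add-one smoothing for previously unseen words, and the doubling threshold are all invented for demonstration, not the study’s actual data or statistical method.

```python
from collections import Counter
import re

def word_freqs(abstracts):
    """Count lowercased words across a corpus; return counts and total."""
    counts = Counter()
    total = 0
    for text in abstracts:
        words = re.findall(r"[a-z]+", text.lower())
        counts.update(words)
        total += len(words)
    return counts, total

def excess_words(pre, post, min_ratio=2.0):
    """Flag words whose relative frequency after the cutoff is at least
    min_ratio times their frequency before it. Words unseen before the
    cutoff get a floor count of 1 (add-one smoothing) so genuinely new
    vocabulary can still be flagged. A crude stand-in for the paper's
    excess-vocabulary measure, not a reproduction of it."""
    pre_counts, pre_total = word_freqs(pre)
    post_counts, post_total = word_freqs(post)
    flagged = []
    for w, c in post_counts.items():
        post_f = c / post_total
        pre_f = max(pre_counts.get(w, 0), 1) / pre_total
        if post_f >= min_ratio * pre_f:
            flagged.append(w)
    return sorted(flagged)

# Toy corpora, invented for illustration only.
pre_2023 = ["we examine the data and we report the results in detail",
            "this study examines risk factors and reports the outcomes"]
post_2023 = ["this study delves into the data and delves into the results",
             "our chapter delves into risk factors, showcasing the outcomes"]
print(excess_words(pre_2023, post_2023))  # → ['delves', 'into']
```

Against 15 million real abstracts, a robust version of this comparison would also need to model the pre-2023 trend for each word rather than a single baseline year, which is part of why the study attributes only a lower bound (13.5 per cent) of abstracts to LLM processing.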

One example of “flowery language” highlighted in the paper, from a 2023 study, reads: “By meticulously delving into the intricate web connecting [...] and [...], this comprehensive chapter takes a deep dive into their involvement as significant risk factors for [...].”

The results, published in the journal Science Advances, also show that the adoption of LLMs coincided with abrupt changes in both the quality and quantity of research papers. The analysis suggested that at least 13.5 per cent of abstracts published last year were processed with LLMs – about 200,000 papers in total.

“We show that LLMs have had an unprecedented impact on scientific writing in biomedical research, surpassing the effect of major world events such as the Covid pandemic,” said Ágnes Horvát, co-author of the study and a professor at Northwestern’s School of Communication.

The paper, which did not use LLMs for any writing or editing, said that the use of ChatGPT to improve grammar, rhetoric and overall readability could have broader implications for scientific writing.

“LLMs are infamous for making up references, providing inaccurate summaries, and making false claims that sound authoritative and convincing. While researchers may notice and correct factual mistakes in LLM-assisted summaries of their own work, it may be harder to spot errors in LLM-generated literature reviews or discussion sections.”

Warning that LLMs risk making academic papers less diverse and less novel than human-written text, the researchers said such homogenisation may degrade the quality of scientific writing.

And they called for a reassessment of current policies and regulations around the use of LLMs for science in light of the findings.

patrick.jack@timeshighereducation.com

Reader's comments (5)

The THES has the bit in its teeth over this story it seems!! But yes, let's use data and evidence to pursue this. Is this a scandal uncovered or a damp squib? Let's hear more!! Could be a scoop, if so well done THES!! But ....
I think "underscores" is OK. I use words like "underpins" a lot. I might use "delves" and would not think it was either here nor there to be honest. I would not use "showcasing" in the usual run of things but then I guess I might without thinking. I will now religiously avoid the use of those terms (as will Chat GB or whatever it's called). None of these are "flowery" in any real sense, but what does that adjective mean? Keats uses the adjective a lot I know. Much of this lexical analysis is s.... tbh. Can we get the serious computer-assisted linguistics and stylistics people involved in this and produce some firm data? This really is just amateur hour as it stands.
Yeah get the experts in to run the software and provide an evidential basis and then sort out the problem if there is one.
Well, the key thing here, in my opinion, is that if these serious allegations have any real merit (and I am not convinced they do on the evidence presented, which is quite weak), are we talking about the odd rotten apple in the barrel, as it were, or something more systemic and routine? If the latter then it is a terrible indictment of our profession and not a good look. We are severely criticising our students (calling them cheats in some articles) yet it seems we may be the ones who are cheating. If so then, as a profession, we are, in my view, the lowest of the low with little integrity. So it is a bit of a quandary.
When I contemplate my fellow academics and their enormous hypocrisy I think of Max Beckmann's comment that these days he managed to keep down more than he vomited. We arraign the students over cheating with AI and at the same time apparently are cheating ourselves with the same technology. Disgusting.
