
To respond to AI, we may need to live with lower student satisfaction

Reversing some recent trends in teaching and learning is sure to generate mixed responses from students. Regulators must step in, says Ian Pace

Published on September 12, 2025
Last updated September 12, 2025
A student gives a thumbs-down gesture. Source: iStock/AntonioGuillem

To turn our backs on generative artificial intelligence (AI) is a tempting but futile exercise. The technology is here to stay, and it certainly can facilitate plenty of things. But it also creates many problems, and solving those is where our focus should lie.

The situation for text-centred academia is particularly critical. During this year’s marking season, many academics noted greatly increased numbers of student assessments that they are sure were artificially generated. And, sure enough, one recent survey found 92 per cent of students using AI tools for their assessments.

However, as has long been true of contract cheating, proving this to the satisfaction of university disciplinary investigations is very difficult. We are left in the depressing position of giving lower marks to those essays most obviously written by the student.

Commentators have proffered several categories of solutions, from straight-out AI bans (almost impossible to police) and a return to handwritten exams (difficult for students so unused to writing by hand) to teaching students AI prompt engineering (a modest skill) and leaving them to it. Common suggestions between these extremes involve embracing AI while requiring students to critically assess the results – and to document that process. The trouble is that it is easy for that documentation to be artificially generated, too.

One academic, for example, gets her students to summarise class discussions (but this can be done by feeding the commonly provided transcript or slide deck into AI), has them create blogs (which multiple AI programmes can do) and makes assessments “more personal and creative” (I asked ChatGPT to do this in her field of developmental economics and it did so successfully).

Personally, I have wondered whether a focus on non-online data, such as archives, knowledge generated through fieldwork or (to a limited extent) subscription-only databases, would represent a way forward. Certainly, the skills required for this type of work are highly valuable in themselves and have many transferable aspects. But adopting it would create significant logistical difficulties, requiring many health and safety checks, and would also necessitate increased levels of dedication and application from undergraduates (although there are exceptions).

I conclude that written assessments in any format will need to be used less and less. Oral presentations that go beyond reading a prepared text (which could be AI-generated) and involve questions and other live interactions could still be valuable – and at least they demonstrate some speaking and presentation skills. Other assignments, perhaps involving role-playing, could help with building teamworking skills.

Some argue that the priority has now shifted towards teaching “human skills” – managing interactions and relationships with others, teamwork, empathy, creativity. But AI can already simulate some of these, especially creativity, and will soon be able to simulate others; and a bigger issue is to what extent existing types of degrees (and disciplines) might provide these skills.

Indeed, hard questions need to be asked about the continued viability of at least some degrees. If they were honest, many academics would admit that significant numbers of graduates have learned only to comprehend and produce a vaguely critical synthesis of existing scholarship on a subject, perhaps applying this to some new data. Such skills may still be valuable, but the professional demand for them may be limited as employers increasingly turn to AI to economise resources. If, instead, we need to teach a level of critical analysis exceeding that achievable by AI, would this not amount to asking undergraduates – in a mass HE system – to do what we have previously only expected of postgraduates?

There is no silver bullet for universities. All solutions to the challenges thrown up by AI will be provisional – not least because AI will continue to develop for now. They will need to be tried out in an experimental fashion, and some will prove failures. But my suggestions would include a new emphasis on that which cannot easily be rationalised or quantified, forms of “creativity” a long way from the commodified and functionalised understandings of this term, and perhaps also more rote learning and memorisation, to retrain students in effort, persistence and dedication.

The trouble is that such experiments will inevitably run up against universities’ interest in maximising measurable student satisfaction. The contemporary economics of higher education encourages strategies to attract as many students as possible and ensure they cannot fail. Over a long time – and accelerated by Covid – universities have pursued that imperative by rationalising and standardising the study and learning process (undermining academics’ agency in the process).

They have made most materials that students need available online. They have sought to make required methods and processes for writing an essay as transparent as possible. Reasonable adjustments for neurodivergence have become assessment norms. And the amounts of required reading, self-directed study and more have been progressively reduced, sometimes legitimised by arguments about students needing to also take jobs – in essence, that they should be awarded a full-time degree on the basis of part-time study.

Responses to AI that roll back some of this are sure to generate mixed responses from students – especially from those now used to obtaining high marks merely by writing a reasonable AI prompt. But quality concerns cannot be sacrificed on the altar of student satisfaction. The regulator needs to step in.

The Office for Students’ (OfS) quality requirements for English universities currently mention essay mills and plagiarism but not generative AI. The OfS did, in June, acknowledge the need to engage with AI, but its recommendations are otherwise relatively general.

I would not want to try to pre-empt the precise measures the OfS might demand. But it seems clear to me that if we want to prevent institutions from taking easy options to maintain student satisfaction, significantly increased quality regulation specific to how institutions manage student use of generative AI may well be required.

This is not a moral judgement on AI. It is just an honest recognition of the extent of an assessment problem that is already out of hand.

Ian Pace is professor of music, culture and society and university adviser: interdisciplinarity at City St George’s, University of London. He is also secretary of the London Universities’ Council for Academic Freedom. He is writing here in a personal capacity.

Reader's comments (15)

I read in the press that our MPs are now increasingly using AI tools to write their speeches and to formulate their interventions in our legislature, duly recorded for the benefit of posterity in that august publication Hansard. This is a source of concern for some, but realistically, when our legislators are using these tools, it becomes increasingly difficult for us to "police" their use in academic work and invoke outmoded notions of plagiarism and cheating. I am afraid, to use another cliche, the genie is well out of the bottle now and the use of these tools is normalised. The "Quill Pen" protocol will not work and will end up making us look like even bigger fools than we do already.
In the last instance it's simple. Whatever can be done by AI can be done by anybody. Graduateness has to be about what else you can do. Discuss! (without the use of AI).
Spot on.
Yep, lecturers need to adapt their assessment strategies: rather than setting boring essays, they need to recognise AI exists and then set work where AI cannot really help out that much. I have made sure my MCQ tests are done in class and set problem sets that I know AI can't help out on.
Back to the Medieval U and the use of oral exams - were they called ‘disputations’ and ‘responsions’?
No. Those came much much later. In "medieval universities," almost everything was oral. There were books. Reading and writing were limited. Composition was collective. Pace has no understanding of universities at any point in time.
Why can't Pace strive for an era with MORE "student satisfaction" WITH AI used properly?
I think he is rather describing what they call a double bind here, a contradiction that is hard to resolve.
Yes, and not just the MPs! Now we also hear that "the governor of the Bank of England uses Copilot, the AI software owned by Microsoft, for official speeches, the Bank has confirmed. It is understood that Bailey uses AI as an editing tool, inputting the draft of a lengthy speech and asking the software to cut it down to the right length before delivering it." You see, I have argued with the students that their ability to shape their materials, edit them and hit a word length is all part of the academic discipline and represents skills they are being assessed on. They need to structure their essays and develop an argument, which is a challenging task. So many of them are unable to edit and focus their writing. And of course you cannot say getting AI to do their editing is plagiarism, but you might claim it is cheating.
I agree with Professor Pace for the most part here. I also read that the big tech companies, Google, Apple etc, are now working on a universal AI translator and are getting close to achieving this. I think the implications for the text-based humanities subjects are very serious and we are rather flailing about at the moment.
Yes, I think this is an honest and thoughtful (even courageous) piece from Prof Pace. When he argues that "hard questions need to be asked about the continued viability of at least some degrees. If they were honest, many academics would admit that significant numbers of graduates have learned only to comprehend and produce a vaguely critical synthesis of existing scholarship on a subject, perhaps applying this to some new data", I think he is correct. At the higher end of the student intake there is still some excellent work with real critical reading, thinking and writing, but for the majority what he says is probably, sadly, accurate. When all is said and done, in a mass HE system we need the students, though, so they have to be "educated" somehow. I will be leaving the profession soon, I hope, so I won't have to worry about it for much longer.
Teach the students how to use AI responsibly in all of their courses. Then make the assessments unseen exams or vivas to test understanding of the subject. I can't understand why anyone would still be setting coursework essays if they have spent more than 5 minutes using GenAI. With ever better updates that remove made-up references and link to online resources more reliably, any college drop-out kid can input an essay title into ChatGPT or Copilot and get a 2:1 these days. But they will inevitably fall flat on their faces when they have to sit a written exam.
(From Ian Pace) Yes, re essays - but they remain the mainstay of many a degree course. Quite a few will get degrees next year mostly from feeding in prompts. This is why the situation is urgent
Excellent essay by Prof Pace, highlighting the tension between academic standards and student satisfaction, not to mention staff workload. Oral exams for a class of 200 are just not possible. And (as I've posted before), it's not just humanities essays. Many numerical problems in STEM are routinely solved by ChatGPT and cut and pasted into student answers and it can also write perfect computer code.
