AI easier to deal with if accepted as a ‘wicked problem’

There is no correct solution to the challenge of assessment in the age of generative AI, and academics must be given 'permission' to tailor their approaches, Australian researchers say

Published on September 8, 2025
Last updated September 7, 2025
Image: Burning a witch (Source: iStock)

The threat to assessment from generative artificial intelligence is a “wicked problem” with no “silver bullets”, and universities should approach it “not as a problem to be solved but as a condition to be navigated”.

A study by Deakin University researchers has found that assessment in the age of generative AI meets all 10 criteria for a “wicked problem”, according to a framework developed more than 50 years ago.

This means there is no “correct solution” and people cannot even agree on what the problem is. Attempts to resolve it will require trade-offs and will only work in certain circumstances, for limited periods.

The Deakin team grilled 20 academics responsible for running subjects at an unnamed Australian university. The interviews revealed evidence for all 10 characteristics of wicked problems, as defined by Berkeley design theorists Horst Rittel and Melvin Webber in 1973.

Wicked problems cannot be conclusively defined, and the way they are described determines the possible approaches to solving them. Yet the potential solutions are infinite, cannot be tested and cannot be identified through experimentation, because the consequences are too costly. The solutions are never correct – only better or worse – and there are no clear criteria for determining whether they work. Even so, failure to solve wicked problems is considered intolerable. Wicked problems are in essence unique and can always be described as a symptom of other problems.

Assessment in the world of generative AI has all of these characteristics, the study finds. This means it requires a different approach from “tame” problems, which have clear definitions and measurable solutions.

Rather than chasing “the elusive ‘right answer’ to generative AI in assessment” – a quest that “will exhaust their educators while failing their students” – universities should allow academics to “compromise”, “diverge” and “iterate” in a relentless process of adapting their assessment techniques.

“Permission” to do these things will relieve staff of the endless stress of trying out remedies, and getting blamed – including by themselves – when the remedies fail.

“While…wicked problems do not have correct solutions, they do have better and worse responses,” the researchers write in the journal Assessment & Evaluation in Higher Education. “The path forward requires abandoning the search for silver bullets in favour of developing adaptive capacity.

“This means creating institutional structures that support educator decision-making rather than mandating uniform responses. Removing the spectre of ‘finding the perfect solution’ just might help teachers navigate AI related challenges in more sustainable, healthy and effective ways.”

Lead author Thomas Corbin said universities would “push back” against the notion that their assessment problem had no real answer, but he said risks from generative AI would persist whether universities heeded the advice or not.

“The current risk-averse approach just isn’t working,” Corbin said. “It’s not like…management has a solution that’s great, and we’re offering something that’s better but higher risk. At the moment, we’re kind of in a position where teachers have to come up with shit [solutions] or just do whatever they’re told, which is pretty shit as well.”

Some people regarded assessment as a wicked problem “even before the rise of generative AI”, the paper notes. Corbin said it was misguided to imagine that the sector had been able to fully substantiate graduates’ skills before AI came along.

“No university has ever been able to guarantee…that we have perfect insight into what students know, think, are capable of or what they’ll do in the real world,” he said. “That’s always been a myth.”

john.ross@timeshighereducation.com

Reader's comments (2)

No. There are plural, not one, “correct solutions”. Just as there is no one AI. Nor is there a single use. Higher education's learning curve is failing.
Well, do you know, I have been reading more about this. It seems now that there is a small but increasingly vocal community which is championing the rights of AI avatars etc. It seems they are subject to a great deal of abuse in all areas of life and as "intelligent beings" (albeit digital) their rights should also be protected. So we do need to be careful and we should be reformulating our EDI policies appropriately.