For some, the apparent capacities of artificial intelligence (AI) call into question the very existence of universities. Indeed, a narrative of inevitable digital replacement has recently come to colour strategic discussions of the already beleaguered financial position of the UK’s major universities, radically dividing the educational and academic communities.
Opinions vary: some see AI’s incursion into personal creativity and learning as acutely destabilising to human purpose, as well as undermining fundamental rights and freedoms. Others take the opposite view, seeing AI as simply the next step in the evolution of digital improvement in human society, replacing costly human inadequacies with faster, more efficient and more tailored responses to the customer bases of governments, corporations and educational institutions.
These debates over whether AI is a “good thing” are only set to become more heated. However, they overlook a fundamental and practical issue. The past 30 years have been marked by overhyped speculation about supposedly transformative innovations. Whether it be the dot-com bubble, derivatives financing or subprime mortgage selling, all have been accompanied by the same messianic enthusiasm that presently attends AI advocacy, and all have proved to be gateways to large-scale financial failure at a societal level.
Before risking the systematic adoption of AI at a national or institutional level, therefore, universities should ask serious questions about the economic and technical conditions of the present AI boom – the volatile venture capital gold rush buoying the industry up and setting prices artificially low, the critical information ecology involved in AI design, and the potential negative consequences of mass AI adoption. Moreover, attention must now be paid to the very purpose of higher education, whose artificial distortions have been highlighted by the arrival of AI. Individually and in combination, these issues call into question both the “myth of human replacement” that is commonplace in AI debates and, more substantially, the very sustainability of the AI industry itself.
Here, universities need to be aware of the vital role they play as cultural brokers in the development of human use of AI. After all, AI did not emerge from the world of corporate capitalism, but rather from the higher education sector itself, largely through university spin-off companies in the US, UK and China – academic and scientific initiatives that only later attracted major venture capital. Universities therefore cannot collectively treat AI as a challenging or inconvenient “external factor of the world” to which they must simply capitulate or adapt.
If universities wish to secure both their own survival and their responsibilities to wider society, there is an urgent need for them to take a clear, informed and value-based stance on the AI question. Such a stance must resolve the tension between AI’s demonstrable usefulness in the fields of science, technology and medicine on the one hand, and its cognitively, socially and politically damaging potential on the other.
However, we need to be aware that we may presently misunderstand how AI works, and the conditions of its existence. Indeed, we may miss the fact that an AI-informed future will require more of what universities presently provide, not less.
The costs of AI
One of the more worrisome aspects of AI is its energy consumption, which places significant limits on its sustainability. Each of Google’s 9 billion AI-powered daily search requests is estimated to use 7-9 watt-hours of energy, equivalent to running a 100-watt light bulb for five and a half minutes. The AI industry is remarkably secretive about its electricity and freshwater consumption, making such estimates impossible to verify; but it is clear that the AI industry now consumes resources on a par with those of entire nation-states.
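For readers who want to check the comparison, a back-of-envelope calculation (a sketch in Python, using the estimated figures quoted above rather than verified industry data) runs as follows:

```python
# Back-of-envelope check of the lightbulb comparison (assumed, unverified figures).
energy_wh = 9        # upper estimate of energy per AI-powered search, in watt-hours
bulb_watts = 100     # an ordinary incandescent bulb

minutes = energy_wh / bulb_watts * 60
print(f"{energy_wh} Wh runs a {bulb_watts} W bulb for {minutes:.1f} minutes")  # ~5.4 minutes

# Scaled to the ~9 billion daily requests cited above (watt-hours to gigawatt-hours):
daily_gwh = 9e9 * energy_wh / 1e9
print(f"Roughly {daily_gwh:.0f} GWh per day at the upper estimate")
```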
While the environmental costs of this kind of energy usage are disquieting enough, the fact remains that little of this comes free. AI companies are presently at the forefront of venture capital investment, with $59.6 billion (£44.4 billion) being invested globally in the first quarter of 2025, including at least one gargantuan single deal. Such huge funding means that AI companies presently have little need to make AI pay for itself, but a strong need to ensure that it will soon.

Adopting AI for business processes therefore looks good now (when its services are almost free), but companies and institutions (including universities) that depend on AI for core processes must anticipate significant cost growth later, when venture capitalists seek a return on their investment.
Not all AI is equal, however. Practically, there are two basic types of AI available on the market: Narrow (or task-specific) AI and General AI. Narrow AI is designed to perform specific tasks that are time-critical or beyond the capabilities of ordinary humans: cancer diagnostics, drug discovery, facial recognition or AI’s wunderkind, DeepMind’s protein-folding prediction software AlphaFold, whose authors received the Nobel Prize in Chemistry in 2024.
By contrast, General AI programs are designed to attempt any formalisable problem. These include generative AI and large language models such as ChatGPT, Llama, Cohere and Gemini. These tend to be the AIs that students now (unwisely) use to write their essays, and that companies use to produce task-specific interfaces, including timetabling, task organisers and interview processing. Generally speaking, Narrow AI does the jobs humans probably couldn’t do, while General AI does the jobs that humans can and probably should do.
The difference between the two matters, because energy consumption is proportional to the size of the training database, not the nature of the problem. Narrow AIs use comparatively small databases, mining only the material relevant to the assigned task: even a large “narrow” AI such as AlphaFold used a training base of 147,000 previous protein-folding solutions. By comparison, General AIs have enormous databases (Meta’s Llama 4 Behemoth AI, presently in development, is estimated to use a training database many orders of magnitude larger).
Universities should therefore think critically not just about whether to adopt AI, but rather what kind of AI and for what purposes, and anticipate dramatic fluctuations in cost and availability as the venture capital window closes.
The needs of AI
The second sustainability problem lies in AI’s use of intellectual resources. There is a tendency to view AI as an autonomous third-party provider, external to the human population, one that can independently provide educational, organisational and analytical services that allow employers simply to replace expert human staff. This zero-sum mentality lies at the very heart of the supposed rivalry between machines and humanity that surrounds AI. But it involves a profound misunderstanding of how AI works. AI providers are not independent external suppliers: they repurpose material that humans themselves produce.

General AI models (particularly large language models, or LLMs) depend on vast, pre-constructed training databases of humanly produced data, created by you and me in books, articles, essays, news reports and social media posts. Sophisticated algorithms then map their tagged contents to identify nodes and relationships, which are treated as statistical probabilities and used to generate answers relevant to the user’s request, presented in natural language form.
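To make that mechanism concrete, here is a deliberately toy illustration of the statistical principle involved: counting which word follows which in a scrap of human-written text, then generating new text by sampling those observed probabilities. Real LLMs use neural networks trained on trillions of words rather than a simple word-pair table, but the dependence on human-produced source material is the same.

```python
# Toy illustration: a word-pair ("bigram") generator built from human-written text.
import random
from collections import defaultdict

corpus = "universities create knowledge and universities share knowledge with society".split()

# Record which words follow which in the human-written corpus.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

# Generate "new" text by repeatedly sampling a statistically plausible next word.
word, output = "universities", ["universities"]
for _ in range(6):
    candidates = following.get(word)
    if not candidates:          # no human example to draw on: generation stalls
        break
    word = random.choice(candidates)
    output.append(word)

print(" ".join(output))
```

Note that the generator can only ever recombine what humans wrote in the first place; starve it of human text and it has nothing to say.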
While most people are aware of these basic features of AI functionality, their policy ramifications are worth emphasising. While AI certainly transforms and augments the basic data that exists in its training databases to produce its results, it nonetheless only works within an informational ecology that requires high-quality human resources as sustenance. This means that AI is already cybernetic: if AI can be likened to a brain, then we as humans are its neurons, whose outputs are brought together by the executive functions of AI’s algorithms to address specific tasks. AI is not an “alien”; it is us, transformed and magnified.
This dependence on human product is ongoing, required every time an AI performs a calculation. To provide world-relevant responses, AI models must be fed a constant supply of up-to-date information, producing a huge demand for data. This demand is so voracious that it is beginning to challenge and compromise the legal order of the existing knowledge world. The UK government’s recent proposals to change copyright laws to render previously protected intellectual property available to AI data scrapes have proved fiercely controversial. Meta’s use of intellectual property from the LibGen “pirate” library in Russia was met with international condemnation by publishers’ guilds, writers, artists and performers. Since 2016, more than 30 countries have introduced laws specifically focused on AI, creating an increasingly unstable and fast-changing legal landscape. At a deeper social level, artists and writers are increasingly abandoning the digital space in order to protect the integrity of their own work.
The problem here is straightforward: the high-quality data that AI relies on to function effectively is produced by humans motivated by financial recompense, public recognition or personal expression. The tendency of AI to undermine those motivations does not simply compromise the legal order: it undermines the very conditions of AI’s own productivity.
Any serious national policy that seeks to maintain AI as part of a sustainable economic environment needs to consider not just the data needs of AI companies, but the human ecology by which that data is produced. If the latter fragments, the former will quickly stall.
AI’s Achilles heel
The third problem of AI sustainability emerges from its own tendency towards “error”. This is often referred to as the autophagy problem. In the view of many AI scientists and designers, autophagy constitutes an existential threat to the AI environment, particularly when the technology becomes commercially dominant.
Any AI calculation involves subtle but significant errors, many of which are integral to its performing effectively. This may seem counterintuitive, but even a generative AI producing spectacularly helpful and technically accurate answers is nonetheless engaged in a statistical and linguistic transformation of its data hinterland, rather than an actual replication of it. Just as a photograph of a landscape is not the same thing as the landscape it captures, AI’s synthetic results are not “the same” as the information it was trained on, carrying within them the hidden statistical “rules” of the algorithm itself.

Such “errors” are initially hardly noticeable, barely influencing AI’s effective accuracy. Their presence becomes important, however, as AI services are called upon to do ever-increasing proportions of our daily work. When this happens, an AI’s synthetic results inevitably get fed back into its newly updated database. Unfortunately, because AI’s synthetic results cannot be readily identified as such (a fact utilised by certain students), the next iteration of AI processes treats these synthetic results as though they were humanly produced. This means that the errors and transformations implicit in the original results become compounded, eventually geometrically, as generations of synthesis follow.
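The feedback loop can be sketched with a toy simulation: a “model” (here, nothing more than a normal distribution fitted to data) repeatedly retrained on a small sample of its own synthetic output. This is an illustrative analogue of the collapse phenomenon under assumed parameters, not a simulation of any real AI system.

```python
# Toy analogue of autophagy: each generation is fitted only to the previous
# generation's synthetic output, as if fresh human data had run out.
import random
import statistics

random.seed(0)

# The original "human" data: varied, with a spread of roughly 1.0.
human_data = [random.gauss(0.0, 1.0) for _ in range(1000)]
mean, spread = statistics.mean(human_data), statistics.stdev(human_data)
print(f"human data: spread = {spread:.2f}")

for generation in range(1, 31):
    synthetic = [random.gauss(mean, spread) for _ in range(5)]   # small synthetic sample
    mean, spread = statistics.mean(synthetic), statistics.stdev(synthetic)
    if generation % 5 == 0:
        print(f"generation {generation:2d}: spread = {spread:.2f}")

# Over many generations the spread tends to shrink: later outputs become
# progressively blander and less representative of the original data.
```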
The effects of autophagy have been tracked and debated with growing alarm by the AI research community itself. In the absence of reliable fresh data, free from synthetic products, AI models quickly tend towards “model collapse” under the weight of their compounded errors, much like audio feedback on a sound stage. Researchers at Rice University found that images reiteratively reproduced through AI progressively degraded, accumulating visual artefacts and distortions. Somewhat more disturbingly, text-based generative AI tended towards increasingly bland, monolithic narratives as it reproduced its own digital consensus. Another study applied the Montreal Cognitive Assessment (MoCA) test to a variety of AI programs, discovering significant “cognitive impairment” similar to that seen in humans in the early stages of dementia. This decline, moreover, occurred swiftly: what takes decades in a human took months for an AI – a function of the very speed that we value so much in the digital world.
As AI is pushed ever more insistently into our everyday lives, the pool of real-world data will become increasingly coloured by synthetic AI material. As administrators, students, authors and civil servants use AI to write more and more of their daily material, it becomes impossible to tell the human product from the synthetic. Combined with the prodigious data demands of training General AIs, the consequences of autophagy become increasingly unavoidable. Estimates place the time limit on AI’s access to high-quality, uncontaminated human data in terms of months rather than years, with low-quality data lasting only a few years longer.
Unlike most other technologies, the technical accuracy of artificial intelligence products may decrease with time and use, rather than improve.
The paradox of AI
If correct, the implications of these three sustainability problems are sobering. They suggest that we may be living right now in the heyday of AI, a unique but temporary apogee of venture capital support and largely uncontaminated data resources. But the best-before date on this sweet spot seems to be rapidly approaching, and the conditions for even the medium-term sustainability of AI use are neither in place nor even understood at a policy level.
As a wide variety of AI entrepreneurs have pointed out, coming to terms with AI’s relationship with human society is neither a technical issue nor a business one, but a social and cultural one. We need to face up to the possibility that AI is not creating problems in our social and economic organisation, but revealing tensions that already existed.
Here, then, is the peculiar paradox of AI: for better or for worse, AI cannot replace us because it depends so deeply on what we do, say, write and record. We are not its rivals any more than wheat crops or dairy livestock are rivals of humanity. More vitally, the quality of AI’s results improves with the quality of the data we supply it. Conversely, human overdependence on AI – and the cognitive offloading that comes with it – will render synthetic AI production poorer and eventually useless. If we want it to be better, we have to get better ourselves.
If this sounds like a win-win scenario, it may well be. However, at the moment, we seem to be heading for a lose-lose scenario, in which mass overdependence on AI does not simply threaten human jobs and skills, but artificial intelligence too.
The answer, it would seem, is straightforward: if AI cannot replace humans in the grand scheme of things, then we need to ask ourselves how to improve human creativity in order to improve AI’s dependent functionality.
The future purposes of education
The problem here seems to lie in how we have formulated higher education as a teaching and learning endeavour in the first place – in particular, the shift in education towards the demands of qualifications and employability, and away from the creation of autonomous human minds as authors. That focus on the written and performative end product of education has sharpened in a world where the end product can now increasingly be replaced by AI.
Taken to its logical conclusion, a teacher can ask ChatGPT to prepare a set of questions, students can use ChatGPT to answer them, whereupon the teacher can get ChatGPT to assess them and provide feedback – meaning that an entire course assessment can occur within a few minutes and without any education actually happening. Thus the rise of student plagiarism, essay mills and indeed AI has arguably become a rational response to an educational environment that has become progressively divorced from the development of autonomous, competent and creative subjects. Indeed, educational policy over the past few decades may well have created an avid but artificial market for uncritical student use of AI.
In all of this, the “end product” of education – the “learning outcome”, to use a common pedagogic term – has been privileged at the cost of the actual person being educated. Without trained and disciplined human subjectivity and creativity, AI models will not have the requisite training data to perform their functions and will collapse under the weight of their own self-consumed errors. AI’s cybernetic nature means it still needs human subjectivity. Again, if we want it to be better, we have to get better ourselves.
At all levels of human society, the development of a sustainable human-AI economy will hinge on three basic principles of governance:?first, promoting human subjectivity in order to support AI work; second, breaking the AI feedback loops that endanger its sustainability; and third, augmenting, not replacing, human knowledge, agency and subjectivity.

The first of these requires developing a value-focused culture of self-discipline that systematically resists the cognitive offloading of human skills on to AI, at both a personal and an institutional level. The second requires maintaining appropriate contractual and economic reward for the sources of human creativity: put simply, we will need to pay for AI, and AI companies will need to pay for human data. The third involves returning AI to the purposes it was designed for, that is, to augment human knowledge and action, not replace it. AI is for jobs that we cannot do on our own, not those we can – but couldn’t be bothered to – do. A sustainable AI economy is for protein-folding calculations, not organising your diary; for swifter cancer diagnoses, not writing your essay. Just as overuse of antibiotics renders them ineffective against disease, overuse of AI renders it useless to humanity.
For universities, the responsibilities here should be obvious: they must remain a key source of human agency, knowledge and creativity. Their human intellectual production is huge, especially if we include student essays, research drafts, articles, monographs and so on. Boasting significant populations of writers, scholars and scientists, they are already dedicated to the creation and dissemination of valid knowledge – that is, to human subjectivity. More than this, they promote and maintain disciplinary practices and regimes designed to ensure the quality and propriety of that knowledge output – to police plagiarism, to uphold ethical research and study, and to maintain the legal compliance requirements that protect intellectual property. They are, in other words, already practised in breaking the feedback loop. And finally, they are already adept at identifying those precise and specific purposes for which AI can be used to augment, not replace, human endeavour.
Thus universities could and should negotiate as collective bodies with AI providers to supply core training data. This, of course, requires that those universities take a particular disciplinary stance towards AI use itself: that they concentrate on the human production of scientific and scholarly knowledge and insight, at both teaching and research levels, and emphasise – very much contrary to the present trend – the actual value of human capacities. And this would not just be in STEM subjects: because the needs of AI are now so wide, this would require universities to take an equally broad-based and liberal approach to knowledge production, along the classic A-Z model of disciplines.
On the teaching front, universities should be equally radical. Staff should steer students and researchers away from learnt dependence on General AI and restrict use to Narrow AI programmes with specialised research purposes. Where AI use presents the possibility of undermining or replacing the goals of actual human education and subjectivity itself, it should be avoided completely.
At a practical level, this obviously implies both banning General AI use and, given the difficulties of detecting AI-written text, a global shift in the way educational establishments address the provenance of writing of all kinds. Rather than course tutors seeking out AI use post hoc through insufficient and legally weak detection software, students will need to provide authenticity reports in advance that demonstrate precisely by whom, when and how documents were produced (software for which – such as Grammarly’s Authorship or Google’s Brisk – is already emerging). Ironically, such environmentally and financially expensive digital solutions merely replicate the oversight conditions of “traditional” in-person, handwritten exams – methods that are now looking more and more like the most advanced and hack-proof technology available on the market.
None of this, however, can replace the deep need for education to refurbish the fundamental value of human subjectivity and authorship and reassess at a deep level the corrosive philosophy that it does not matter what people actually think and do, only what they produce.
Martin A. Mills is chair in anthropology at the University of Aberdeen.