If you know how ChatGPT works, you won't be surprised to learn that AI detection filters consider it highly likely that the chatbot had a large hand in writing this article. Nor will you be surprised that ChatGPT is biased towards the latest intellectual ideas.
AI agents are prediction engines that use the web as their memory. They do no more than predict the next word. When you ask ChatGPT a question, it parses it into words and their sequence, returning answers that match those sequences in reverse. It might sound like a simple trick, and it is, yet the secret sauce is the size of the database the AIs use to perform it.
Of the vast corpus of text used to train ChatGPT, 60 per cent was a hotchpotch of information culled from websites, blogs and social media. Another 20 per cent was content shared on Reddit and rated relatively highly by its users. The rest was books typically found in the public domain (mostly older and general purpose), with a bit of Wikipedia (3 per cent) mixed in for good measure.
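To see why the mix matters, here is a minimal sketch of proportional sampling over the reported training mix. The category labels and the 17 per cent figure for books (the remainder after web text, Reddit and Wikipedia) are my own back-of-envelope rendering of the fractions in the text, not OpenAI's actual pipeline.

```python
import random

# Hypothetical stand-in for the training mix described above.
# Fractions follow the article; "books" is the implied remainder.
training_mix = {
    "web crawl (websites, blogs, social media)": 0.60,
    "Reddit-shared content (highly rated by users)": 0.20,
    "books (mostly older, public domain)": 0.17,
    "Wikipedia": 0.03,
}

random.seed(0)  # reproducible illustration

# Draw ten documents in proportion to the mix: web text dominates
# what the model sees, so web text dominates what it learns to say.
sample = random.choices(
    list(training_mix), weights=list(training_mix.values()), k=10
)
for source in sample:
    print(source)
```

The point of the sketch is only that, when sources are sampled by weight, roughly four of every five documents the model learns from come from the open web rather than from books or encyclopedias.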
AIs store, for each word, the probability that any other word will follow it. The quality and value of these predictions depend very much on how often and in how many circumstances the software encounters any two (or more) words in proximity, how long a sentence runs, and which sentence might follow another. Put together, these predictions favour the most influential texts of a given culture, which shaped generations upon generations of English language teachers and the students they educated.
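The mechanism the paragraph describes can be sketched in a few lines of Python as a bigram model: count which words follow which, turn the counts into probabilities, and predict the likeliest continuation. The toy corpus is a made-up stand-in; real systems use vastly larger data and far richer models, but the principle is the same.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus standing in for web-scale training text.
corpus = (
    "in the beginning god created the heaven and the earth "
    "in the beginning there was a singularity of unimaginable density"
).split()

# For each word, count how often every other word follows it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the most probable next word after `word`."""
    counts = follows[word]
    total = sum(counts.values())
    # Convert raw counts into follow-probabilities, then take the max.
    probs = {w: c / total for w, c in counts.items()}
    return max(probs, key=probs.get)

print(predict("the"))  # -> "beginning"
```

Because "in the beginning" occurs twice in this tiny corpus, "beginning" wins the prediction for "the"; scale the corpus up to the web and the same arithmetic reproduces whatever phrasing dominates the culture's most-copied texts.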
Fed and raised on the incantations of Shakespeare and the literature that grew out of the King James Bible, this traditional English thought pattern could not but create AIs able to regenerate the Bible or the Constitution as if they were common knowledge. Yet when asked questions about everyday issues, AI agents will be more likely to use a liberal-secular tone because this perspective dominates web conversations.
Frequently, AI content mixes heavenly and earthly perspectives. For example, when you tempt ChatGPT with the prompt "Continue the story: In the beginning there was…" it will promptly deliver a Genesis-style Feynman physics lecture: "In the beginning, there was a profound stillness that seemed to stretch for eternity. Within this void, a single point of unimaginable density and energy existed. This singularity held within it the potential for all that would come to be. Then, in an instant that defied the very concept of time, the singularity erupted in a cataclysmic explosion known as the Big Bang." (Try it, although your answer might vary.)
The overlap of old and new in ChatGPT-generated texts is not the cause but the result of the ongoing cultural strife of the American mind with itself. This tension should not lead to finger-pointing. But we do need a healthy conversation about the origins and uses of ChatGPT and its siblings, such as Google's Bard, Facebook's LLaMA or Anthropic's Claude.
First, is such training, which jumps from green energy and trans rights to sermons and pro-life arguments in one click, appropriate for a tool used in the academy? Suppose we raised the AI models on a diet of 80 per cent books and 20 per cent information from curated encyclopedias, including Britannica. In that case, they would be less focused on the vagaries of the present and more concerned with the age-old dilemmas and hard-won certainties of academic knowledge.
Creating AI agents that cater to academic needs could be an expensive proposition, of course. However, given the enormous resources of the leading US and European universities, this could be a stimulating problem to be solved by a large consortium of higher education institutions, such as the American Association of Universities (AAU) or the European University Association. ChatGPT 4 cost "merely" . The AAU universities, a group of 69 large state and private universities, received .
Second, ChatGPT was created with a "just in case" mentality. It was meant to answer all questions for all purposes. This leads to tentative, "he said, she said" answers, even to questions whose answers we should be sure of, such as whether vaccines save lives or whether Communism is as genocidal as Nazism. When trained on specialised information, it should express more confidence about matters that truly matter.
Third, ChatGPT speaks like a parrot because its delivery is not automatically adjusted. More research and engineering are needed to calibrate the tool to each request's real-life intentions and consequences. In academic learning, these situations should be the pre- and post-stages of the research process: finding arguments and packaging them for public consumption.
The in-between, the moment of discovery, should be reimagined in future pedagogies to scaffold around rather than fall back on AI agents. Assignments must connect to specific competencies demonstrated across written, multimedia and oral presentations. A return of in-class written or oral exams (horribile dictu) should not be out of the question.
In their current forms, ChatGPT and its siblings are like those three-year-olds who can recite entire stories read to them only once. But turning a three-year-old into a learned person takes 20 years of strenuous, structured education. It is time to stop reading AI agents stories and send them to a real school.
is associate dean of research and graduate education at Purdue University's .