
How can universities meet the ethical challenges of AI?

Source: iStock

A panel at THE Live furthered the debate about the best applications of AI within the higher education sector

A two-day event with the remit to “change the story” about higher education, THE Live 2019 featured a range of panels that debated the challenges facing the sector.

To address the issues of delivering ethics in the era of artificial intelligence, THE’s digital editor, Sara Custer, was joined by Nathan Lea, senior research associate at the UCL Institute of Health Informatics, which works with Huawei, and Kate Devlin, senior lecturer in social and cultural artificial intelligence at King’s College London.

Dr Lea began the discussion by acknowledging that ethics committees have struggled to get to grips with computing and tech issues. Similarly, the departments responsible for tech and computing don’t necessarily understand the subject area they are handling data for.

Describing AI algorithms as autonomous (not yet sentient, but sophisticated), Dr Lea emphasised: “We are not programming something; we are educating something that is, to date, unpredictable.”

The challenge, both speakers acknowledged, is to account for potential bias in the data and potential “prejudice” in the decision-making algorithms, particularly in recruitment, where discriminatory factors can be embedded in those algorithms. Video surveillance and facial recognition are two other areas notably fraught with concerns around privacy and bias.

“We have to appreciate that engineering can dehumanise problems when it is breaking them down into manageable chunks,” Dr Lea added. “There are so many of those chunks now that we can’t manage them in the traditional paradigm.”

It was agreed that some of what is considered to be AI is, in fact, “fancy statistics”, but also that public understanding of the field is key. Many people, Dr Lea noted, get their knowledge via “the media, science fiction or entertainment”.

Kate Devlin added that AI is often overhyped and falls short of expectations, citing a number of “AI winters”, periods when progress towards potential breakthroughs slowed. Hype is an important consideration when it comes to public perception, but even more so in relation to businesses taking up AI uncritically, something that Dr Devlin said happens regularly.

In higher education specifically, the application of AI or machine learning is focused on learning analytics and tracking student progress, plagiarism detection, and testing hypotheses in research. The first category has a number of privacy implications, as Dr Devlin pointed out. For example, sensitive information about mental health could be collected and passed on throughout the student’s learning journey. Dr Devlin was particularly concerned about who was accountable for information gathered by AI, and she again mentioned the uncritical nature of AI take-up, referencing an example of a performance-tracking venture with no peer-reviewed papers.

Transparency and accountability are, of course, crucial in this area. Dr Lea made the point that GDPR is relatively new and is evolving. This underscored his belief that we have the frameworks to try to tackle the challenges that AI brings. He cited various examples, such as academic journals not generally publishing articles unless they have undergone an ethics review. “Don’t throw more oversight at the problem. We know we have one; we need to try to understand it to regulate it properly,” he said.

Europe has a strong reputation for the promotion of ethics within AI, and Dr Lea gave an example of how helpful information can be disseminated. It arose from an NHS review of information security, which found that neither good nor bad practice in the field of tech was being shared. “It’s a fundamental tenet of my work to say ‘share what you like’; you learn from it. It’s so important.”

No discussion of AI comes without mentioning threats to employment. Ms Custer flagged the THE 2019 survey of university presidents and AI experts, which suggested an expectation that AI would create jobs rather than steal them. Both speakers broadly agreed and pointed out that various other economic factors could cause uncertainty. Rather than let the prospect of a robot takeover paralyse us, Dr Lea echoed the sentiments of the THE report when he said: “We need to offer enriching jobs, careers and pathways for as many people as possible.”

Find out more about Huawei and higher education.

Join the THE Live mailing list for all the latest THE Live news and exclusive offers.

Brought to you by Huawei