Research and impact | By Simon Watts

AI: Existential threat or life-changing opportunity?

The sudden rise to prominence of artificial intelligence chatbot ChatGPT has shone a spotlight on how AI is rapidly transforming our lives.

 

Over the last year, the world seems to have woken up to the potential (and the potential threat) of Artificial Intelligence (AI). Its ability to revolutionise industries from healthcare to transportation has generated both excitement and alarm: excitement at its capacity for innovation and efficiency, and alarm at the pace of development, which is currently going largely unchecked by governments and regulators.

 

As a writer, I’ve viewed the rise of ChatGPT with scepticism about its abilities and a little trepidation about being replaced. So, in the interests of journalistic research, I asked ChatGPT to generate a 600-word article on the benefits and dangers of AI. Within 10 seconds I was surprised to see it return a very passable, if slightly generic, response. You can read that here.

 

Undeterred by those AI-generated results, this writer has pressed on to see what insight and expertise City academics can offer on a technology that is on the cusp of completely transforming our lives.

 

Improving healthcare diagnosis and treatment

 

One of the most exciting areas for AI is healthcare, where it can help speed up the diagnosis of diseases.

 

Research from a team including the Crabb Lab at City has used ‘deep learning’ (DL), a type of artificial intelligence, on thousands of images of the backs of the eyes of glaucoma patients to predict how much their vision has been affected by the disease.

 

Glaucoma – a group of eye diseases which cause progressive damage to the optic nerve – affects around 2 per cent of people over 40 and almost 10 per cent of those over 75, leading to more than a million hospital visits every year. Early detection and management are crucial, as sight lost to glaucoma cannot be restored.

The study mobilised and curated large volumes of data from more than 24,000 NHS patients. The findings of the study suggest that the AI method could play a role in tracking how glaucoma progresses in patients in the clinic and could also be used to optimise research trials investigating glaucoma.
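At a high level, this kind of study treats the task as image regression: a neural network is trained on retinal images, each paired with a measurement of visual-field loss, and learns to predict that measurement for patients it has not seen before. The sketch below is purely illustrative and assumes a PyTorch setup; the model choice (ResNet-18), the image size and the hypothetical FundusDataset loader are assumptions for demonstration, not details taken from the Crabb Lab study.

# Illustrative sketch only (not the Crabb Lab's actual pipeline): a generic
# image-regression setup in PyTorch. The FundusDataset loader, image size and
# ResNet-18 backbone are assumptions made for demonstration.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Dataset
from torchvision import models, transforms

class FundusDataset(Dataset):
    # Hypothetical dataset pairing a retinal image with one number
    # summarising how much vision has been lost (e.g. mean deviation in dB).
    def __init__(self, images, targets):
        self.images = images    # list of PIL images
        self.targets = targets  # list of floats
        self.tf = transforms.Compose([transforms.Resize((224, 224)),
                                      transforms.ToTensor()])
    def __len__(self):
        return len(self.images)
    def __getitem__(self, i):
        x = self.tf(self.images[i])
        y = torch.tensor([self.targets[i]], dtype=torch.float32)
        return x, y

# A standard CNN backbone with its classifier head replaced by a single
# regression output: the predicted severity of visual-field loss.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)

def train(dataset, epochs=10, lr=1e-4):
    loader = DataLoader(dataset, batch_size=32, shuffle=True)
    optimiser = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()  # regression: predicted vs. measured vision loss
    for _ in range(epochs):
        for images, targets in loader:
            optimiser.zero_grad()
            loss = loss_fn(model(images), targets)
            loss.backward()
            optimiser.step()

Once trained, a model of this kind can take an image of the back of the eye and return an estimate of vision loss in a fraction of a second, which is what makes the approach attractive for tracking glaucoma progression in the clinic.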

 

Dr Christina Malamateniou is the Postgraduate Programme Director for Radiography at City, and a strong advocate for the benefits of AI in radiography. In 2020 she launched the ‘Artificial Intelligence for Radiographers’ module, fully evaluated for students and staff. The course was the first of its kind in the EMEA region and attracted students from across the globe.

 

Dr Christina Malamateniou

Dr Malamateniou is also the Chair of the AI working and advisory groups at the Society and College of Radiographers (SCoR), and has been leading Europe-wide studies to understand the impact of AI on the professional roles and identity of radiographers. She has also conducted research on how responsible AI practice and AI education are central to its implementation.

 

She said:

 

“I want AI to succeed, I want it to be safe and for us all to be able to harness its huge potential. In the past, research emphasis has been on the development of AI tools and innovation; it is time we focused on usability driven by clinical needs and governance of these tools.”

 

“AI is only as good as the data we feed it with, so it has to be fed with well-curated data that is free from biases. AI has such massive reach that, if used correctly and ethically, it can make healthcare more equitable and inclusive by incorporating more demographics into the AI algorithms.”

 

Through her work with colleagues in the sector, Dr Malamateniou has contributed to the development of the first BSI standard for the ethical use of AI in healthcare, working with a multidisciplinary team of expert clinicians, academics, researchers, policy makers and industry representatives.

“This is a unique opportunity to make things better for patients and clinical practitioners and to reimagine healthcare for now and for the future.”

– Dr Christina Malamateniou, School of Health and Psychological Sciences

How AI will change the world of work

 

Writers and journalists are far from alone in fearing their jobs might be at risk. A significant number of operational jobs are likely to become automated through AI. While that might sound appealing, with proponents of AI suggesting this will give employees time to devote to ‘creative tasks’, the more likely reality is that many of these jobs will simply disappear. In one new e-commerce warehouse in Northampton, robot ants are being used to take on the majority of storage and packing duties.

 

Distinguishing truth from AI-fiction

 

Although already impressive, ChatGPT can be just as fallible as us mere mortals. A recent example came when a New York lawyer admitted using ChatGPT for legal research; it returned results which turned out to be false, referencing legal cases which did not exist. That error in judgement resulted in a $5,000 fine.

 

Being able to distinguish AI-produced information which is useful from that which is useless will increasingly become an issue as the technology is more widely adopted.

 

It is an issue already on the radar of Professor Artur Garcez, Director of the Research Centre for Adaptive Computer Systems and Machine Learning at City. He said:

 

“While humans have been learning how to trust information provided on websites, social media and emails from different cues such as source, visual appearance and grammatical errors, ChatGPT creates an entirely new level of difficulty when it comes to that judgement.”

 

Professor Artur Garcez

“The most immediate concern of a practical nature, therefore, is the risk of large-scale disinformation dissemination and its impact, particularly on young democracies.”

 

Ethics and regulation

 

Understandably, the rapid evolution of this technology has raised urgent ethical questions – with many experts suggesting that industries and governments have been too slow to see the issues fast approaching on the horizon. In June, tech industry leaders met with UK government ministers and regulators in London to discuss the UK’s approach to AI and how it should be governed.

 

Dr Mai Elshehaly is a Lecturer in Visualisation at City’s giCentre. She believes a wide-angle lens is needed to capture different views and expertise. She said:

 

“We need to unite the perspectives of people working in AI – computer scientists, lawyers, ethicists, and create a medium for inter-disciplinary dialogue and research, where each can share their part to make sense of the bigger picture.”

 

Dr Mai Elshehaly

Preparing graduates for the AI-driven workplace

 

Many universities have recently issued guidelines to students about the permitted use of AI in assessments, and are working to make students aware of the limitations of the technology. It is an approach which Dr Elshehaly supports:

 

“Simply telling students not to use it won’t work. Already, we have GitHub research telling us that 92 per cent of programmers are using AI to support their practice. Therefore, when teaching Computer Science, we need to view AI as an assistive technology, and we need to help students see the value in leveraging this technology while investing in their own skills. We should be looking at it from the perspective of improving human and machine collaboration – we’ll still need human expertise to engineer what AI generates.”

 

“Skills like data literacy in graduates become incredibly important. There will be a huge boom in opportunities for those who understand this technology.”

 

– Dr Mai Elshehaly, School of Science & Technology

Professor Susan Blake, Associate Dean (Digital Learning) from the City Law School, agrees. She is a member of the Generative AI Task and Finish Group at City, and is also considering how AI could be used constructively in teaching and learning within the School. In a recent interview with The Times, she said:

 

“Increasingly, legal practitioners are using this kind of software to deal with clients, to provide online services to clients and so on. In legal practice, it is not only inevitable, it is already happening. Students looking for careers as lawyers are going to have to be ready for that.”

Professor Susan Blake

Where next for AI?

 

We have reached an important crossroads for AI. If governments and regulators can quickly get to grips with the pace of change, then perhaps an exciting new world lies ahead, where AI is a positive addition to our daily lives. If not, then maybe a future more akin to an episode of Black Mirror will be our reality.

 

One thing is clear: now the AI genie is out of the bottle, there is no going back.