AIhub.org
 

Use of AI to fight COVID-19 risks harming “disadvantaged groups”, experts warn

28 July 2021




COVID-19 world map. Credit: Martin Sanchez. Published under a CC-BY 3.0 licence.

Rapid deployment of artificial intelligence and machine learning to tackle coronavirus must still go through ethical checks and balances, or we risk harming already disadvantaged communities in the rush to defeat the disease.

This is according to researchers at the University of Cambridge’s Leverhulme Centre for the Future of Intelligence (CFI) in two articles published in the British Medical Journal, cautioning against blinkered use of AI for data-gathering and medical decision-making as we fight to regain normalcy in 2021.

“Relaxing ethical requirements in a crisis could have unintended harmful consequences that last well beyond the life of the pandemic,” said Dr Stephen Cave, Director of CFI and lead author of one of the articles.

“The sudden introduction of complex and opaque AI, automating judgments once made by humans and sucking in personal information, could undermine the health of disadvantaged groups as well as long-term public trust in technology.”

In a further paper, co-authored by CFI’s Dr Alexa Hagerty, researchers highlight the potential consequences if the AI now making clinical choices at scale – predicting, for example, which patients are likely to deteriorate and need ventilation – does so on the basis of biased data.

Datasets used to “train” and refine machine-learning algorithms are inevitably skewed against groups that access health services less frequently, such as minority ethnic communities and those of “lower socioeconomic status”.

“COVID-19 has already had a disproportionate impact on vulnerable communities. We know these systems can discriminate, and any algorithmic bias in treating the disease could land a further brutal punch,” Hagerty said.
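To make the mechanism concrete, here is a minimal sketch using entirely simulated data (nothing below comes from a real health dataset, and the feature weights are invented): a model trained mostly on records from one group can perform no better than chance on an under-represented group whose features relate to outcomes differently.

```python
# Minimal sketch with simulated data: under-representation in training
# data can leave a model performing near chance for the missing group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, w):
    """Simulate n patients whose outcome depends on 3 features via weights w."""
    X = rng.normal(size=(n, 3))
    y = (X @ w + 0.5 * rng.normal(size=n) > 0).astype(int)
    return X, y

w_a = np.array([1.0, 1.0, 0.0])   # group A: second feature raises risk
w_b = np.array([1.0, -1.0, 0.0])  # group B: second feature lowers risk

# Training data: plenty of group A records, very few from group B.
Xa, ya = make_group(5000, w_a)
Xb, yb = make_group(250, w_b)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on fresh, equal-sized samples from each group.
for name, w in (("A", w_a), ("B", w_b)):
    X, y = make_group(2000, w)
    print(f"group {name} accuracy: {model.score(X, y):.2f}")
# Typical result: high accuracy for group A, near coin-flip for group B.
```

The point is not the specific numbers but the shape of the failure: the model never sees enough of group B to learn that its risk profile differs.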

In December 2020, protests ensued when Stanford Medical Center’s algorithm prioritized staff working from home for vaccination over those on the Covid wards. “Algorithms are now used at a local, national and global scale to define vaccine allocation. In many cases, AI plays a central role in determining who is best placed to survive the pandemic,” said Hagerty.

“In a health crisis of this magnitude, the stakes for fairness and equity are extremely high.”
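As a hypothetical illustration of how this can happen (this is not Stanford’s actual algorithm, and every criterion and weight below is invented), a priority score built from seemingly neutral factors such as age and seniority, with no measure of workplace exposure, will rank remote senior staff above frontline junior staff:

```python
# Hypothetical allocation score (not Stanford's actual algorithm):
# "neutral" criteria that omit exposure deprioritise frontline staff.
from dataclasses import dataclass

@dataclass
class Staff:
    name: str
    age: int
    years_employed: int
    covid_ward_shifts_per_week: int  # recorded, but never used below

def priority_score(s: Staff) -> float:
    # Seemingly neutral criteria: age-related risk plus seniority.
    # Note what is missing: any measure of exposure to infected patients.
    return 0.5 * s.age + 2.0 * s.years_employed

staff = [
    Staff("senior administrator, working from home", 58, 25, 0),
    Staff("junior doctor, Covid ward", 29, 2, 5),
]
for s in sorted(staff, key=priority_score, reverse=True):
    print(f"{priority_score(s):6.1f}  {s.name}")
# The administrator working from home outranks the doctor on the ward.
```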

Along with colleagues, Hagerty highlights the well-established “discrimination creep” found in AI that uses “natural language processing” technology to pick up symptom profiles from medical records – reflecting and exacerbating biases against minorities already in the case notes.

They point out that some hospitals already use these technologies to extract diagnostic information from a range of records, and some are now using this AI to identify symptoms of COVID-19 infection.
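A toy sketch of that failure mode (the case notes and keyword extractor below are invented, not any hospital’s actual system): extraction can only surface what clinicians wrote down, so thinner documentation for some patients translates directly into missed symptoms.

```python
# Toy keyword-based symptom extractor over invented case notes: it can
# only find symptoms that were documented in the first place.
import re

SYMPTOM_PATTERNS = {
    "fever": r"\b(fever|pyrexia)\b",
    "cough": r"\bcough\b",
    "dyspnoea": r"\b(dyspnoea|shortness of breath)\b",
}

def extract_symptoms(note):
    """Return the set of symptom labels whose keywords appear in the note."""
    note = note.lower()
    return {s for s, pat in SYMPTOM_PATTERNS.items() if re.search(pat, note)}

# The same clinical presentation, documented with different levels of detail.
detailed = "Patient reports fever, persistent cough and shortness of breath."
terse = "Unwell. Obs stable."

print(extract_symptoms(detailed))  # all three symptoms found
print(extract_symptoms(terse))     # empty set: nothing for downstream AI to use
```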

Similarly, the use of track-and-trace apps creates the potential for biased datasets. The researchers write that, in the UK, over 20% of those aged over 15 lack essential digital skills, and up to 10% of some population “sub-groups” don’t own smartphones.

“Whether originating from medical records or everyday technologies, biased datasets applied in a one-size-fits-all manner to tackle COVID-19 could prove harmful for those already disadvantaged,” said Hagerty.

In the BMJ articles, the researchers point to examples such as the fact that a lack of data on skin colour makes it almost impossible for AI models to estimate blood-oxygen levels accurately at scale, or how an algorithmic tool used by the US prison system to predict reoffending – and shown to be racially biased – has been repurposed to manage its COVID-19 infection risk.
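One standard safeguard against such failures is to audit error disaggregated by group rather than in aggregate. A sketch with simulated readings (the bias figure is illustrative only) shows how an overall error statistic can mask a clinically significant bias against a minority group:

```python
# Simulated oxygen-saturation readings: an aggregate error statistic
# hides a systematic bias affecting a minority of patients.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
true_spo2 = rng.uniform(85, 99, size=n)            # true saturation (%)
group = rng.choice(["A", "B"], size=n, p=[0.8, 0.2])

# Hypothetical device/model bias: readings run high for group B,
# and overestimating saturation can delay treatment.
bias = np.where(group == "B", 3.0, 0.0)
measured = true_spo2 + bias + rng.normal(0.0, 1.0, size=n)

error = measured - true_spo2
print(f"overall mean error: {error.mean():+.2f}")  # looks small in aggregate
for g in ("A", "B"):
    print(f"group {g} mean error: {error[group == g].mean():+.2f}")
```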

The Leverhulme Centre for the Future of Intelligence recently launched a Master’s course in AI ethics. For Cave and colleagues, machine learning in the Covid era should be viewed through the prism of biomedical ethics – in particular the “four pillars”.

The first is beneficence. “Use of AI is intended to save lives, but that should not be used as a blanket justification to set otherwise unwelcome precedents, such as widespread use of facial recognition software,” said Cave.

In India, biometric identity programs can be linked to vaccination distribution, raising concerns for data privacy and security. Other vaccine allocation algorithms, including some used by the COVAX alliance, are driven by privately owned AI, says Hagerty. “Proprietary algorithms make it hard to look into the ‘black box’, and see how they determine vaccine priorities.”

The second is ‘non-maleficence’, or avoiding needless harm. A system programmed solely to preserve life will not consider rates of ‘long covid’, for example. Thirdly, human autonomy must be part of the calculation. Professionals need to trust technologies, and designers should consider how systems affect human behaviour – from personal precautions to treatment decisions.
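As a toy illustration of that first point (all figures below are invented), an objective that counts only deaths averted can favour a different intervention from one that also weighs long-term illness:

```python
# Invented figures: the choice of objective changes the "best" intervention.
interventions = {
    # name: (deaths averted, long-covid cases averted) per 100k people
    "protect oldest first": (120, 300),
    "protect highest-exposure first": (90, 2000),
}

def lives_only(deaths, long_covid):
    return deaths

def lives_and_morbidity(deaths, long_covid, weight=0.05):
    # weight is an assumed trade-off between a death and a long-covid case
    return deaths + weight * long_covid

for objective in (lives_only, lives_and_morbidity):
    best = max(interventions, key=lambda name: objective(*interventions[name]))
    print(f"{objective.__name__}: choose '{best}'")
```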

Finally, data-driven AI must be underpinned by ideals of social justice. “We need to involve diverse communities, and consult a range of experts, from engineers to frontline medical teams. We must be open about the values and trade-offs inherent in these systems,” said Cave.

“AI has the potential to help us solve global problems, and the pandemic is unquestionably a major one. But relying on powerful AI in this time of crisis brings ethical challenges that must be considered to secure public trust.”

AIhub focus issue on reduced inequalities



University of Cambridge



