AIhub.org
 

Fairness in artificial intelligence


20 February 2020




Training machines on unbiased data, with unbiased methodology, should be a core consideration when designing artificial intelligence (AI) systems. Machine decisions can affect our rights, and we need to ensure that AI does not absorb biases from the data it is trained on. Researchers at the University of Bristol have investigated potential biases and looked at ways in which they could be removed.

The creation of a new generation of AI systems that can be trusted to make fair and unbiased decisions is an urgent task for researchers. As AI rapidly conquers technical challenges related to predictive performance, a new dimension of system design demands attention: the fairness of, and trust in, a system's decisions.

The research team at Bristol addressed this critical issue of trust in AI by proposing both a new, high standard for models to meet (being agnostic to a protected concept) and a way to achieve it. A model is defined to be agnostic with respect to a set of concepts if it can be shown to make its decisions without ever using those concepts. This is a much stronger requirement than distributional matching or other definitions of fairness. The team focussed on the case where a small set of contextual concepts, each exemplified by samples of data, should not be used in decisions. They demonstrated how ideas developed in the context of domain adaptation can deliver the agnostic representations needed to ensure fairness, and therefore trust.

The team used the Domain Adversarial Neural Network (DANN) method. Their experiments demonstrated that this method can successfully remove unwanted contextual information, so that the network makes its decisions for the right reasons. A DANN, typically built on a convolutional neural network (CNN), achieves an agnostic representation using three components: 1) a feature extractor, 2) a label prediction output layer, and 3) an additional protected-concept prediction layer. During training, the gradient flowing from the protected-concept layer back into the feature extractor is reversed, so the extracted features are pushed to carry no usable information about the protected concept.
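The gradient-reversal idea can be sketched on a toy problem. The following numpy sketch is illustrative only, not the authors' code: the data, network sizes, learning rate and adversarial weight are all made up. A linear "feature extractor" feeds two logistic heads; the extractor descends the label loss but ascends the protected-concept loss, while the protected head keeps trying to recover the concept.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Toy data: input column 0 encodes the label, column 1 the protected concept.
y = np.array([0, 0, 1, 1, 0, 1, 0, 1])   # task label
s = np.array([0, 1, 0, 1, 0, 0, 1, 1])   # protected concept (balanced vs. y)
X = np.stack([2.0 * y - 1.0, 2.0 * s - 1.0], axis=1)   # shape (8, 2)

W = rng.normal(scale=0.1, size=(2, 2))   # feature extractor
v = rng.normal(scale=0.1, size=2)        # label prediction head
u = rng.normal(scale=0.1, size=2)        # protected-concept head (adversary)
lr, lam = 0.1, 1.0                       # learning rate, reversal strength

for _ in range(500):
    Z = X @ W                    # shared representation
    err_y = sigmoid(Z @ v) - y   # d(logistic loss)/d(logit), label head
    err_s = sigmoid(Z @ u) - s   # same for the protected head
    grad_W_y = X.T @ np.outer(err_y, v) / len(y)
    grad_W_s = X.T @ np.outer(err_s, u) / len(y)
    # Gradient reversal: the extractor minimises the label loss but
    # *maximises* the protected loss, draining s-information from Z.
    W -= lr * (grad_W_y - lam * grad_W_s)
    v -= lr * (Z.T @ err_y / len(y))
    u -= lr * (Z.T @ err_s / len(y))   # the adversary still minimises its loss

acc_y = np.mean((sigmoid(X @ W @ v) > 0.5) == y)
acc_s = np.mean((sigmoid(X @ W @ u) > 0.5) == s)
print("label accuracy:", acc_y, "| protected-concept accuracy:", acc_s)
```

At convergence the label head stays accurate while the representation gives the adversary little to work with; in a real DANN the reversal is implemented as a gradient-reversal layer inside backpropagation rather than by hand.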

A technically different but analogous process led the same team to explore unbiased representations in natural language processing, demonstrating that it is possible to remove gender bias from the way in which we represent words: an essential step if we want our algorithms to screen CVs and résumés.
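One common removal step for word embeddings is to project out a bias direction, in the style of hard debiasing. The sketch below is a hedged illustration, not the paper's method: the 3-d vectors and word choices are invented (real embeddings have hundreds of dimensions, and the bias direction is usually estimated from many gendered word pairs, not one).

```python
import numpy as np

def debias(vec, direction):
    """Remove the component of `vec` along the (unit-normalised) bias axis."""
    d = direction / np.linalg.norm(direction)
    return vec - np.dot(vec, d) * d

# Hypothetical tiny "embeddings"; in practice these come from a trained model.
he = np.array([1.0, 0.2, 0.1])
she = np.array([-1.0, 0.25, 0.05])
gender_direction = he - she              # axis separating a gendered word pair

engineer = np.array([0.4, 0.9, -0.3])    # leans toward "he" in this toy space
neutral = debias(engineer, gender_direction)

# After projection the embedding has no component along the gender axis,
# while its other semantic components are untouched.
print(np.dot(neutral, gender_direction))
```

The same projection can be applied to every word vector that should be gender-neutral, leaving explicitly gendered words (pronouns, kinship terms) unchanged.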

While demonstrated in this work by ignoring the physical background context of an object in an image, the same approach could be used to ensure that other contextual information does not make its way into black-box classifiers deployed to make decisions about people in other domains and classification tasks.

Read the published book chapters to find out more:
Machine Decisions and Human Consequences. Teresa Scantamburlo, Andrew Charlesworth and Nello Cristianini. Published as a chapter in: Algorithmic Regulation, Oxford University Press (2019).

Right for the Right Reason: Training Agnostic Networks. Sen Jia, Thomas Lansdall-Welfare and Nello Cristianini. Published in: Advances in Intelligent Data Analysis XVII, Lecture Notes in Computer Science, vol 11191, Springer (2018).

Biased Embeddings from Wild Data: Measuring, Understanding and Removing. Adam Sutton, Thomas Lansdall-Welfare and Nello Cristianini. Published in: Advances in Intelligent Data Analysis XVII, Lecture Notes in Computer Science, vol 11191, Springer (2018).

This work is part of the ERC ThinkBIG project, Principal Investigator Nello Cristianini, University of Bristol.




Nello Cristianini is a Professor of Artificial Intelligence at the University of Bristol.







©2024 - Association for the Understanding of Artificial Intelligence