AIhub.org
 

Ethics and AI: tackling biases hidden in big data


16 July 2021




How do artificial intelligence (AI) algorithms learn to predict and make decisions? Can we entrust them with decisions that affect our lives and societies? Are they neutral and as immune to societal imperfections as commonly thought?

Nello Cristianini (University of Bristol) investigates the challenges emerging from data-driven AI, addressing issues such as gender biases in AI algorithms, and shifts in people’s emotions reflected in social media content.

Watch an introduction to Nello’s research below:

Video from the European Research Council

You can find out more about specific projects below:

AI and human autonomy: an analysis of the interaction between intelligent software agents and human users

This work involved the development of a model of an autonomous agent that allows researchers to distinguish the various types of control that intelligent software agents can exert on users. The framework separates different types of interaction (namely trading, nudging, coercion and deception), and presents a unified narrative for discussing important ethical, psychological and social issues.

Fairness in artificial intelligence

The research team addressed the critical issue of trust in AI, proposing a new high standard for models to meet (being agnostic to a protected concept), and a way to achieve such models.

Can machines read our minds?

In this research, Nello and his team reviewed empirical studies concerning the deployment of algorithms to predict personal information using online data. They were interested in understanding what kind of psychological information can be inferred on the basis of our online activities, and whether an intelligent system could use this information to improve its ability to subsequently steer our behaviour towards its own goals.

Shortcuts to artificial intelligence

This research considers some of the shortcuts that were taken in the field and their connection to some of today’s challenges in AI, including those relating to bias, value alignment, privacy and explainability.

Many of these challenges arise from the use of training data generated by various social processes. It is therefore critical to consider the interface between the social and computational sciences. Analysing media content (both traditional and new media) is necessary to understand what we use to train our models. This was the motivation behind the final project highlighted here:

Finding patterns in historical newspapers

Developed by Nello and his team, History Playground enables users to search for small sequences of words and retrieve their relative frequencies over the course of history. The tool makes use of scalable algorithms to first extract trends from textual corpora, before making them available for real-time search and discovery, presenting users with an interface to explore the data.
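As a rough sketch of the kind of computation involved (this is not the History Playground implementation; the corpus format, tokenisation and function below are assumptions for illustration), the relative frequency of a word sequence per year could be computed like this:

```python
from collections import defaultdict

def relative_frequencies(corpus, phrase):
    """For each year, the fraction of n-gram slots in that year's
    texts occupied by `phrase` (an n-word sequence)."""
    n = len(phrase.split())
    target = phrase.lower()
    counts = defaultdict(int)   # phrase occurrences per year
    totals = defaultdict(int)   # total n-gram slots per year
    for year, text in corpus:
        tokens = text.lower().split()
        ngrams = [" ".join(tokens[i:i + n])
                  for i in range(len(tokens) - n + 1)]
        counts[year] += sum(1 for g in ngrams if g == target)
        totals[year] += len(ngrams)
    return {year: counts[year] / totals[year]
            for year in totals if totals[year]}

# Toy corpus of (year, text) pairs
corpus = [
    (1900, "the steam engine and the steam press"),
    (1900, "news of the steam engine"),
    (1950, "the jet engine replaced the steam engine"),
]
freqs = relative_frequencies(corpus, "steam engine")
```

A real system would additionally need scalable indexing over very large corpora so that such queries return in real time, which is the engineering contribution the tool's description emphasises.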


Nello Cristianini is a Professor of Artificial Intelligence at the University of Bristol. His research interests include data science, artificial intelligence, machine learning, and applications to computational social sciences, digital humanities and news content analysis.

AIhub focus issue on reduced inequalities



AIhub is dedicated to free high-quality information about AI.




            AIhub is supported by:



©2025 Association for the Understanding of Artificial Intelligence