AIhub.org
Ethics and AI: tackling biases hidden in big data


16 July 2021




How do artificial intelligence (AI) algorithms learn to predict and make decisions? Can we entrust them with decisions that affect our lives and societies? Are they neutral and as immune to societal imperfections as commonly thought?

Nello Cristianini (University of Bristol) investigates the challenges emerging from data-driven AI, addressing issues such as gender biases in AI algorithms, and shifts in people’s emotions reflected in social media content.

Watch an introduction to Nello’s research below:

Video from the European Research Council

You can find out more about specific projects below:

AI and human autonomy: an analysis of the interaction between intelligent software agents and human users

This work involved the development of a model of an autonomous agent that allows researchers to distinguish the various types of control that intelligent software agents can exert on users. The model's framework separates different types of interaction (trading, nudging, coercion and deception) and provides a unified narrative for discussing the associated ethical, psychological and social issues.

Fairness in artificial intelligence

The research team addressed the critical issue of trust in AI, proposing a new, demanding standard for models to meet (being agnostic to a protected concept) and a way to build models that achieve it.
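The paper's criterion (agnosticism to a protected concept) is stronger than simple outcome-rate parity, but a minimal sketch of the general idea can still be useful. The helper below, `demographic_parity_gap`, is a hypothetical illustration (not the authors' method): it measures how much a model's positive-prediction rate differs between two groups defined by a protected attribute.

```python
def demographic_parity_gap(predictions, protected):
    """Absolute difference in positive-prediction rates between two groups.

    predictions: list of 0/1 model outputs
    protected:   list of 0/1 group labels (the protected attribute)
    """
    rate = {}
    for g in (0, 1):
        group = [p for p, a in zip(predictions, protected) if a == g]
        rate[g] = sum(group) / len(group)  # positive rate within group g
    return abs(rate[0] - rate[1])


# Toy example: group 0 gets positive outcomes 75% of the time,
# group 1 only 25% of the time, so the gap is 0.5.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_gap(preds, groups))  # → 0.5
```

A gap of zero here is a necessary but not sufficient condition for the stricter standard described above, since a model can equalise rates while still internally encoding the protected concept.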

Can machines read our minds?

In this research, Nello and his team reviewed empirical studies on the deployment of algorithms that predict personal information from online data. They were interested in what kind of psychological information can be inferred from our online activities, and whether an intelligent system could use that information to steer our behaviour towards its own goals.

Shortcuts to artificial intelligence

This research considers some of the shortcuts that were taken in the field and their connection to some of today’s challenges in AI, including those relating to bias, value alignment, privacy and explainability.

Many of these challenges arise from the use of training data generated by various social processes. It is therefore critical to consider the interface between the social and computational sciences. Analysing media content (both traditional and new media) is necessary to understand what we use to train our models, and this was the motivation behind the final project highlighted here:

Finding patterns in historical newspapers

Developed by Nello and his team, History Playground enables users to search for small sequences of words and retrieve their relative frequencies over the course of history. The tool makes use of scalable algorithms to first extract trends from textual corpora, before making them available for real-time search and discovery, presenting users with an interface to explore the data.
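The core computation behind such a tool can be sketched simply. The function below, `relative_frequencies`, is a hypothetical minimal version (not the History Playground implementation, which uses scalable indexing): for a phrase of n words, it counts the phrase's occurrences per year and divides by the total number of n-grams in that year's texts.

```python
def relative_frequencies(corpus_by_year, phrase):
    """For each year, return count(phrase) / total n-grams of that length.

    corpus_by_year: dict mapping year -> list of document strings
    phrase:         the word sequence to track, e.g. "the computer"
    """
    n = len(phrase.split())
    target = phrase.lower()
    result = {}
    for year, texts in corpus_by_year.items():
        phrase_count = 0
        total = 0
        for text in texts:
            tokens = text.lower().split()
            # All contiguous n-word sequences in this document.
            ngrams = [" ".join(tokens[i:i + n])
                      for i in range(len(tokens) - n + 1)]
            total += len(ngrams)
            phrase_count += ngrams.count(target)
        result[year] = phrase_count / total if total else 0.0
    return result


# Toy corpus: "the computer" appears only in the 1950 texts.
corpus = {
    1900: ["the steam engine changed the world", "steam power everywhere"],
    1950: ["the computer age begins", "the computer is new"],
}
print(relative_frequencies(corpus, "the computer"))
```

Plotting these per-year values over a large historical corpus yields exactly the kind of trend line the tool exposes for real-time search and discovery.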


Nello Cristianini is a Professor of Artificial Intelligence at the University of Bristol. His research interests include data science, artificial intelligence, machine learning, and applications to computational social sciences, digital humanities and news content analysis.

AIhub focus issue on reduced inequalities



AIhub is dedicated to free high-quality information about AI.



 















© 2026 Association for the Understanding of Artificial Intelligence