Ethics and AI: tackling biases hidden in big data


16 July 2021



How do artificial intelligence (AI) algorithms learn to predict and make decisions? Can we entrust them with decisions that affect our lives and societies? Are they neutral and as immune to societal imperfections as commonly thought?

Nello Cristianini (University of Bristol) investigates the challenges emerging from data-driven AI, addressing issues such as gender biases in AI algorithms, and shifts in people’s emotions reflected in social media content.

Watch an introduction to Nello’s research in this video from the European Research Council.

You can find out more about specific projects below:

AI and human autonomy: an analysis of the interaction between intelligent software agents and human users

This work involved the development of a model of an autonomous agent that allows researchers to distinguish the various types of control that intelligent software agents can exert on users. The framework separates different types of interaction (trading, nudging, coercion and deception) and presents a unified narrative for discussing important ethical, psychological and social issues.

Fairness in artificial intelligence

The research team addressed the critical issue of trust in AI, proposing a new, high standard for models to meet (being agnostic to a protected concept), along with a way to build models that achieve it.
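
One simple, generic way to probe whether a classifier’s outputs depend on a protected attribute is to compare positive-prediction rates across groups (demographic parity). The sketch below is a minimal illustration of that idea only; it is not the standard or the method proposed by the research team, and the function, data and group labels are hypothetical.

```python
import numpy as np

def demographic_parity_difference(y_pred, protected):
    """Absolute difference in positive-prediction rates between the two
    groups defined by a binary protected attribute.

    y_pred    : array of 0/1 model predictions
    protected : array of 0/1 group membership (e.g. a protected attribute)

    A value close to 0 means the prediction rate is roughly the same for
    both groups (a group-level, output-only notion of fairness).
    """
    y_pred = np.asarray(y_pred)
    protected = np.asarray(protected)
    rate_a = y_pred[protected == 0].mean()
    rate_b = y_pred[protected == 1].mean()
    return abs(rate_a - rate_b)

# Toy example with made-up predictions and group labels.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
protected = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_difference(y_pred, protected))  # 0.5 for this toy data
```

A group-level rate check like this is a weaker notion than being agnostic to the protected concept itself, which, as the name suggests, concerns whether the model depends on the concept at all rather than only on its output rates.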

Can machines read our minds?

In this research, Nello and his team reviewed empirical studies concerning the deployment of algorithms to predict personal information using online data. They were interested in understanding what kind of psychological information can be inferred on the basis of our online activities, and whether an intelligent system could use this information to improve its ability to subsequently steer our behaviour towards its own goals.

Shortcuts to artificial intelligence

This research considers some of the shortcuts taken in the field and their connection to today’s challenges in AI, including those relating to bias, value alignment, privacy and explainability.

Many of these challenges arise from the use of training data generated by various social processes. It is therefore critical to consider the interface between the social and computational sciences. The analysis of media content (both traditional and new media) is necessary to understand what we use to train our models. This was the motivation behind the final project highlighted here:

Finding patterns in historical newspapers

Developed by Nello and his team, History Playground enables users to search for small sequences of words and retrieve their relative frequencies over the course of history. The tool makes use of scalable algorithms to first extract trends from textual corpora, before making them available for real-time search and discovery, presenting users with an interface to explore the data.
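
To illustrate the kind of statistic such a tool surfaces, the sketch below computes the relative yearly frequency of a phrase in a small corpus of dated texts. This is a toy illustration only, not History Playground’s implementation (which uses scalable algorithms over large newspaper corpora); the corpus, phrase and function are made up for the example.

```python
from collections import Counter

def relative_phrase_frequency(corpus, phrase):
    """Relative frequency of `phrase` per year: occurrences of the phrase
    divided by the total number of words in that year's texts.

    corpus : list of (year, text) pairs
    phrase : lower-case string to search for
    Returns a dict mapping year -> relative frequency.
    """
    counts, totals = Counter(), Counter()
    for year, text in corpus:
        text = text.lower()
        counts[year] += text.count(phrase)
        totals[year] += len(text.split())
    return {year: counts[year] / totals[year] for year in totals if totals[year]}

# Toy corpus of dated snippets (made up for illustration).
corpus = [
    (1900, "the steam engine changed the world"),
    (1900, "news of the steam engine spread"),
    (1950, "the computer is a new kind of engine"),
]
print(relative_phrase_frequency(corpus, "steam engine"))
# {1900: 0.1666..., 1950: 0.0}
```

Plotting these per-year values for a word sequence gives the kind of historical trend line that History Playground lets users search and explore interactively.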


Nello Cristianini is a Professor of Artificial Intelligence at the University of Bristol. His research interests include data science, artificial intelligence, machine learning, and applications to computational social sciences, digital humanities and news content analysis.

AIhub focus issue on reduced inequalities
