Ethics and AI: tackling biases hidden in big data

16 July 2021




How do artificial intelligence (AI) algorithms learn to predict and make decisions? Can we entrust them with decisions that affect our lives and societies? Are they neutral and as immune to societal imperfections as commonly thought?

Nello Cristianini (University of Bristol) investigates the challenges emerging from data-driven AI, addressing issues such as gender biases in AI algorithms, and shifts in people’s emotions reflected in social media content.

Watch an introduction to Nello’s research below:

Video from the European Research Council

You can find out more about specific projects below:

AI and human autonomy: an analysis of the interaction between intelligent software agents and human users

This work involved the development of a model of an autonomous agent that allows researchers to distinguish the various types of control that intelligent software agents can exert on users. The framework separates different types of interaction (trading, nudging, coercion and deception) and provides a unified narrative for discussing the important ethical, psychological and social issues involved.

Fairness in artificial intelligence

The research team addressed the critical issue of trust in AI, proposing a demanding new standard for models to meet (being agnostic to a protected concept, such as gender) and a method for building models that satisfy it.
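To make the idea of a model being "agnostic" to a protected concept concrete, here is an illustrative sketch (not the authors' method): one common way to test for dependence on a protected attribute is to measure the demographic parity gap, the difference in positive-prediction rates between groups. The function name and toy data below are hypothetical.

```python
def demographic_parity_gap(predictions, protected):
    """Absolute difference in positive-prediction rates between two groups.

    predictions: list of 0/1 model outputs
    protected:   list of 0/1 group labels, aligned with predictions
    """
    rates = {}
    for group in (0, 1):
        outputs = [p for p, g in zip(predictions, protected) if g == group]
        rates[group] = sum(outputs) / len(outputs) if outputs else 0.0
    return abs(rates[0] - rates[1])

# A model that favours group 1 shows a large gap:
preds = [1, 0, 0, 0, 1, 1, 1, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_gap(preds, groups))  # prints 0.5
```

A truly agnostic model, in the sense described above, would satisfy a stronger condition than a small gap on one metric, but this kind of check is a common starting point in practice.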

Can machines read our minds?

In this research, Nello and his team reviewed empirical studies on the deployment of algorithms that predict personal information from online data. They were interested in understanding what kind of psychological information can be inferred from our online activities, and whether an intelligent system could use this information to improve its ability to subsequently steer our behaviour towards its own goals.

Shortcuts to artificial intelligence

This research considers some of the shortcuts that were taken in the field and their connection to some of today’s challenges in AI, including those relating to bias, value alignment, privacy and explainability.

Many of these challenges arise from the use of training data generated by various social processes, so it is critical to consider the interface between the social and computational sciences. Analysis of media content (both traditional and new media) is necessary to understand what we use to train our models. This was the motivation behind the final project highlighted here:

Finding patterns in historical newspapers

Developed by Nello and his team, History Playground enables users to search for short sequences of words and retrieve their relative frequencies over the course of history. The tool uses scalable algorithms to first extract trends from textual corpora, and then makes them available for real-time search and discovery through an interface for exploring the data.
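The core computation behind a tool of this kind can be sketched simply: for each year, count how often the query term appears in that year's corpus and divide by the total number of tokens. This is a minimal, hypothetical illustration of the idea, not History Playground's actual implementation; the function name and toy corpus are invented.

```python
from collections import Counter


def relative_frequencies(corpus_by_year, term):
    """Relative frequency of `term` in each year's corpus.

    corpus_by_year: maps a year to a list of documents (strings).
    Returns a dict mapping year -> count(term) / total tokens that year.
    """
    trend = {}
    for year, documents in corpus_by_year.items():
        tokens = [t for doc in documents for t in doc.lower().split()]
        counts = Counter(tokens)
        trend[year] = counts[term] / len(tokens) if tokens else 0.0
    return trend


# Toy corpus: "railway" becomes more frequent over time.
corpus = {
    1830: ["the canal boat arrived", "a canal was dug"],
    1850: ["the railway opened", "railway travel by steam railway"],
}
print(relative_frequencies(corpus, "railway"))  # {1830: 0.0, 1850: 0.375}
```

Using relative rather than raw frequencies is what makes trends comparable across years, since the amount of digitised text varies enormously over time.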


Nello Cristianini is a Professor of Artificial Intelligence at the University of Bristol. His research interests include data science, artificial intelligence, machine learning, and applications to computational social sciences, digital humanities and news content analysis.

AIhub focus issue on reduced inequalities



AIhub is dedicated to free high-quality information about AI.




            AIhub is supported by:












©2024 - Association for the Understanding of Artificial Intelligence


 











