
AI and human autonomy: an analysis of the interaction between intelligent software agents and human users


24 January 2020




Is our autonomy affected by interacting with intelligent machines designed to persuade us? That’s what researchers at the University of Bristol attempted to find out through an analysis of the interaction between intelligent software agents and human users.

Interactions between an intelligent software agent (ISA) and a human user are ubiquitous in everyday situations such as access to information, entertainment, and purchases. In such interactions, the ISA mediates the user’s access to the content, or controls some other aspect of the user experience, and is not designed to be neutral about outcomes of user choices. Like human users, ISAs are driven by goals, make autonomous decisions, and can learn from experience.

A typical ISA, such as a recommender system, might have to select a set of videos for a user to watch (out of a vast catalogue), using any available information or signal it has about the given user (e.g. location, time, past usage, explicit ratings, and much more). In this case, the ISA’s goal is to select an action that, for the given user, maximises the expected click-through rate: the probability that the user will click on the items presented.
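
As an illustration, below is a minimal sketch in Python of such a greedy selection policy. The linear click model, the feature dimensions, and all names here are assumptions made for the example, not the internals of any particular system discussed in the research.

    import numpy as np

    # Toy recommender in the spirit of the ISA described above: score every
    # video in the catalogue by estimated click probability for this user,
    # then serve the k highest-scoring ones.
    rng = np.random.default_rng(0)

    n_videos, n_features, k = 1000, 8, 5
    video_features = rng.normal(size=(n_videos, n_features))  # the catalogue
    weights = rng.normal(size=n_features)  # assumed learned CTR model weights

    def estimated_ctr(user_context):
        # Sigmoid of a user-modulated linear score: estimated P(click) per video.
        logits = video_features @ (weights * user_context)
        return 1.0 / (1.0 + np.exp(-logits))

    def select_videos(user_context):
        # Greedy policy: the k videos with the highest expected click-through rate.
        scores = estimated_ctr(user_context)
        return np.argsort(scores)[-k:][::-1]

    user_context = rng.normal(size=n_features)  # e.g. location, time, past usage
    print(select_videos(user_context))

Everything the policy does is driven by its estimate of what the user will click, which is exactly the property the analysis below builds on.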

Using ideas from bounded rationality (and deploying concepts from artificial intelligence, behavioural economics, control theory, and game theory), the team frame these interactions as instances of an ISA whose reward depends on actions performed by the user.
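
In schematic terms (our notation, not the paper’s): the ISA chooses an action a, the user responds with behaviour u drawn from a distribution P(u | a) that the ISA models, and the ISA receives a reward r(a, u). The ISA therefore aims to choose

    a* = argmax_a Σ_u P(u | a) · r(a, u)

which makes explicit that its reward, and hence its behaviour, is defined entirely by what it predicts the user will do.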

The team present a model of an autonomous agent that allows them to distinguish various types of control that actual ISAs can exert on users. The framework of this model allows different types of interaction (i.e. trading, nudging, coercion and deception) to be separated, and presents a unified narrative for discussion of polarisation, addiction, value alignment, autonomy, misuse of proxies for relevance feedback, and moral accountability, as well as other important ethical, psychological and social issues that arise from second-order effects.

This framework is proposed as a resource to enable philosophers, scientists, policy-makers, and other interested parties to engage with these issues on a shared conceptual basis. The research highlights the importance of framing the interactions between human users and ISAs as potentially generating positive feedback loops. The feedback commonly used by learning agents to update their models, and hence their subsequent decisions, could steer the behaviour of human users away from what benefits them, in a direction that undermines autonomy and widens the gap between actions and goals, as exemplified by addictive and compulsive behaviour. ISAs could sometimes exploit and reinforce weaknesses in human beings. It may be possible to mitigate this by using negative feedback, but in any case the ethical concerns raised in this work must be faced first.
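
To make the feedback loop concrete, here is a toy simulation (our construction, not the paper’s model): an agent greedily recommends whichever of two content types earns the most clicks, while the user’s tastes drift slightly towards whatever they are shown, so a small initial asymmetry is amplified into a strongly skewed preference.

    import numpy as np

    # Toy positive feedback loop: the agent learns from clicks and keeps
    # recommending the best-scoring content type; exposure nudges the user's
    # preference towards that type, which produces more clicks, and so on.
    rng = np.random.default_rng(1)

    preference = np.array([0.55, 0.45])  # user's true appetite for types A, B
    estimates = np.array([0.5, 0.5])     # agent's estimated click rate per type
    drift, lr = 0.02, 0.1                # assumed drift and learning rates

    for step in range(200):
        shown = int(np.argmax(estimates))           # greedy recommendation
        clicked = rng.random() < preference[shown]  # user clicks or not
        # Agent update: move the shown type's estimate towards the observed outcome.
        estimates[shown] += lr * (clicked - estimates[shown])
        # User update: a click on the shown type nudges appetite towards it.
        preference[shown] += drift * clicked
        preference /= preference.sum()              # keep it a distribution

    print(preference)  # typically far more skewed than the initial 0.55 / 0.45

Replacing the greedy choice with one that discounts over-exposed content would be one simple form of the negative feedback mentioned above.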

Read the full research article:
Burr, C., Cristianini, N. & Ladyman, J. (2018). An Analysis of the Interaction Between Intelligent Software Agents and Human Users. Minds & Machines.

This work is part of the ERC ThinkBIG project, Principal Investigator Nello Cristianini, University of Bristol.




Nello Cristianini is a Professor of Artificial Intelligence at the University of Bristol.



