Benefits and control influence the acceptance of automated decision-making


17 April 2023




Algorithms will inevitably play an increasing role in our lives in the coming years. Yet it is often suggested that people are wary of computer systems taking over their everyday choices. For example, do we trust computers with our health and money? A new publication in AI & Society by Radboud University researchers shows that humans are quite open to algorithms’ decisions, as long as we believe that this is to our benefit.

As long as we are reasonably sure that it will get us something we want, we have no particular problem accepting decisions made by algorithms. That is the conclusion of Gabi Schaap, a communication scientist and one of the researchers involved in the study. The study’s 1,000-plus participants were presented with several scenarios involving online dating and stock-market investments. “The participants were asked whether they would rather leave decisions in this area to an algorithm or to a human. When told that the probability of success was higher if they chose the algorithm, people were on average much more likely to opt for the algorithm. This suggests that people are not necessarily afraid of algorithms, but that for many people, this is just another cost-benefit analysis.”

In addition, many of the study participants apparently found it important to still have some freedom of choice. Schaap: “You could see this, for example, when we asked people about investing in specific stock-market shares. Participants who were told that they would be able to see the algorithm’s choices before the money was invested were more likely to accept the algorithm than the group that had to blindly trust that the algorithm would give them higher odds of gains.”

Human and algorithm presented on equal footing

And yet we often hear that people are suspicious of algorithms, preferring a human to a machine when it comes to important decisions. So why is that? “Previous research often failed to present people with a fair choice. For example, participants were asked whether they would prefer a medical diagnosis to be made by a doctor with human empathy, or by a robot that is good at mathematical calculations. In such cases, many people tend towards empathy, while for a diagnosis it is mainly the mathematical skills that count,” Schaap explains.

“In our research, we presented humans and computer systems in the same way. As a result, the choice focused more on competence, and then you see that people prefer whichever option gives them the greatest yield or the most control. If a human being is better, we will opt for a human being. If an algorithm is better, we will go for the algorithm after all. Now that algorithms are becoming better, or even much better, than humans at making decisions, we can expect our preference to shift increasingly towards them.” That is not very surprising, according to Schaap. “It’s a question of very basic human mechanisms. Ultimately, we always want to know how something benefits us, and how much control we have over our choices. It is almost a primal instinct: we are hard-wired to choose the solution that works out best for us.”

Autonomy

According to the researchers, this study offers important insights into how we as a society are likely to interact with algorithms in the years to come. Artificial intelligence is developing rapidly, and it is already being used by governments, in education, in medicine, and in many other aspects of our daily lives. Schaap: “The question people often ask these days is: Is this what we want? We often say that we should avoid algorithms: public debate suggests that we find them scary. But in concrete situations, it turns out that we make completely different choices, as this study shows. The objections suddenly fall away, and we appear to be focused mainly on optimal opportunities and gains. This also creates new starting points for discussion: how do we present algorithms, and what should we watch out for? If the suggestion is made that it is much easier to leave decisions to computer systems, how much autonomy are we willing to give up?”

Read the paper in full

Schaap, G., Bosse, T. & Hendriks Vettehen, P. The ABC of algorithmic aversion: not agent, but benefits and control determine the acceptance of automated decision-making. AI & Society (2023).




Radboud University
