AIhub.org
 

Benefits and control influence the acceptance of automated decision-making


17 April 2023




Algorithms will inevitably play an increasing role in our lives in the coming years. Yet it is often suggested that people are wary of computer systems taking over their everyday choices. For example, do we trust computers with our health and money? A new publication in AI & Society by Radboud University researchers shows that humans are quite open to algorithms’ decisions, as long as we believe that this is to our benefit.

As long as we are reasonably sure that it will get us something we want, we are not at all bothered about accepting decisions made by algorithms. That is the conclusion of Gabi Schaap, communication scientist and one of the researchers involved in the study. The study’s 1,000-plus participants were presented with several scenarios around online dating and stock-market shares. “The participants were asked whether they would rather leave decisions in this area to an algorithm or to a human. When told that the probability of success was higher if they chose the algorithm, people were on average much more likely to opt for the algorithm. This suggests that people are not necessarily afraid of algorithms, but that for many people, this is yet another cost-benefit analysis.”

In addition, many of the study participants apparently found it important to still have some freedom of choice. Schaap: “You could see this, for example, when we asked people about investing in specific stock-market shares. Participants who were told that they would be able to see the algorithm’s choices before the money was invested were more likely to accept the algorithm than the group that had to blindly trust that the algorithm would give them higher odds of gains.”

Human and algorithm presented on equal footing

And yet we often hear that people are suspicious of algorithms, preferring a human to a machine when it comes to important decisions. So why is that? “Previous research often failed to present people with a fair choice. For example, participants were asked whether they would prefer a medical diagnosis to be made by a doctor with human empathy, or a robot good at mathematical calculations. In such cases many people tend towards empathy, while for diagnosis, it is mainly the mathematical background that counts,” Schaap explains.

“In our research, we presented humans and computer systems in the same way. As a result, the choice focused more on competence, and then you see that people prefer the method that gives them the greatest yields or the most control. If a human being is better, we will opt for a human being. If an algorithm is better, we will go for an algorithm after all. Now that algorithms are increasingly becoming better or even much better at making decisions than humans, we can predict that our preference will also increasingly go to them.” That is not very surprising, according to Schaap. “It’s a question of very basic, human mechanisms. Ultimately, we always want to know how something benefits us, and how much control we have in our choices. It is almost a primal instinct: we are hard-wired to choose the solution that works out best for us.”

Autonomy

According to the researchers, this study offers important insights into how we as a society are likely to interact with algorithms in the years to come. Artificial intelligence is developing rapidly, and it is already being used by governments, in education, in medicine, and in many other aspects of our daily lives. Schaap: “The question people often ask these days is: Is this what we want? We often say that we should avoid algorithms: social debate suggests that we find them scary. But in concrete situations, it turns out that we make completely different choices, as this study shows. The objections suddenly fall away, and we appear to be mainly focused on optimal opportunities and gains. This also creates new starting points for discussion: How do we present algorithms, but also: What should we watch out for? If the suggestion is made that it is much easier to leave decisions to computer systems, how much autonomy are we willing to give up?”

Read the paper in full

Schaap, G., Bosse, T. & Hendriks Vettehen, P. The ABC of algorithmic aversion: not agent, but benefits and control determine the acceptance of automated decision-making. AI & Society (2023).















 















©2026.02 - Association for the Understanding of Artificial Intelligence