AIhub.org
 

Shortcuts to artificial intelligence – a tale


by Nello Cristianini
19 May 2020




The current paradigm of artificial intelligence emerged as the result of a series of cultural innovations, some technical and some social. Among them are seemingly small design decisions that led to a subtle reframing of some of the field’s original goals and are now accepted as standard. They correspond to technical shortcuts, aimed at bypassing problems that were otherwise too complicated or too expensive to solve, while still delivering a viable version of AI.

Far from being a series of separate problems, recent cases of unexpected effects of AI are consequences of the very choices that enabled the field to succeed, which is why they will be difficult to solve. Research at the University of Bristol has considered three of these choices, investigating their connection to some of today’s challenges in AI, including those relating to bias, value alignment, privacy and explainability.

1) Correlation vs causation
One important consequence of training statistical algorithms to emulate the decisions or behaviours of humans (e.g. recommending a book) is that we place less value on the reason why a decision is made, so long as the action it generates is appropriate. Predictions count more than explanations, knowing ‘what’ counts more than knowing ‘why’, and ‘correlation trumps causation’.
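The distinction can be made concrete with a toy simulation (the data and variable names below are invented for illustration, not taken from the research): a hidden confounder drives both a feature and an outcome, so the feature predicts the outcome very well, yet intervening on the feature changes nothing.

```python
import random

random.seed(0)

# Toy setup: a hidden confounder Z drives both X and Y.
# X is highly predictive of Y, but it is not a cause of Y.
n = 10_000
Z = [random.gauss(0, 1) for _ in range(n)]
X = [z + random.gauss(0, 0.1) for z in Z]   # observed feature
Y = [z + random.gauss(0, 0.1) for z in Z]   # outcome

def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)
    va = sum((x - ma) ** 2 for x in a) / len(a)
    vb = sum((y - mb) ** 2 for y in b) / len(b)
    return cov / (va * vb) ** 0.5

print(f"observational corr(X, Y): {corr(X, Y):.2f}")   # close to 1

# "Intervention": set X ourselves, ignoring Z. Y does not respond,
# because X never caused Y in the first place.
X_do = [random.gauss(0, 1) for _ in range(n)]
print(f"interventional corr(X, Y): {corr(X_do, Y):.2f}")  # close to 0
```

A system trained only on the observational data would predict Y from X perfectly well, and the ‘what’ would look solved, while the ‘why’ remains entirely wrong.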

2) Data from the wild
The second shortcut was summarised in a paper by Halevy, Norvig and Pereira, which draws general lessons from the success stories of speech recognition and machine translation. It identifies the reason for those successes as the availability of large amounts of data that had already been created for different purposes. Data gathered from the wild has been crucial in the design of object recognition, face recognition and machine translation systems. The ubiquitous word embeddings that allow us to represent the meaning of words before we process them are also learned from data gathered from the wild.
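As an illustrative sketch (the corpus and code below are invented for this post, not drawn from the paper), even a crude count-based embedding built from text written for other purposes will place words used in similar contexts close together:

```python
from collections import defaultdict
import math

# A tiny "found" corpus: sentences written for other purposes,
# reused here as training data.
corpus = [
    "the cat chased the mouse",
    "the dog chased the ball",
    "the cat ate fish",
    "the dog ate meat",
    "the car needs fuel",
    "the car has wheels",
]

# Count-based embedding: represent each word by the words it
# co-occurs with, a crude ancestor of learned word embeddings.
vectors = defaultdict(lambda: defaultdict(int))
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for j, c in enumerate(words):
            if i != j:
                vectors[w][c] += 1

def cosine(u, v):
    """Cosine similarity of two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv)

# Words used in similar contexts get similar vectors:
print(cosine(vectors["cat"], vectors["dog"]))  # higher
print(cosine(vectors["cat"], vectors["car"]))  # lower
```

No one wrote these sentences to teach a machine what a cat is; the statistical regularities are simply a by-product of data created for other ends, which is exactly what makes data from the wild so cheap, and so uncontrolled.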

3) Proxies and implicit feedback
Rather than asking users explicitly what they wanted the AI system to do, designers started making use of implicit feedback, which is another way of saying that they replaced unobservable quantities with cheaper proxies. Understanding the misalignment between a proxy and the intended target has become an important question for AI.
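A minimal, invented example of such misalignment (the items and numbers are hypothetical): a recommender that can only observe clicks, the cheap proxy, will rank items differently from one that could observe the satisfaction it actually cares about.

```python
# Hypothetical catalogue: per-item click-through rate (the observable
# proxy) and long-term satisfaction (the unobservable intended target).
items = {
    "clickbait_headline": {"click_rate": 0.40, "satisfaction": 0.10},
    "in_depth_article":   {"click_rate": 0.05, "satisfaction": 0.90},
    "average_post":       {"click_rate": 0.15, "satisfaction": 0.50},
}

def recommend(catalogue, objective):
    """Return the item that maximises the given objective."""
    return max(catalogue, key=lambda name: catalogue[name][objective])

# Optimising the proxy and optimising the target disagree:
print(recommend(items, "click_rate"))    # the clickbait wins
print(recommend(items, "satisfaction"))  # the in-depth article wins
```

The gap between the two rankings is the misalignment: a system tuned on implicit feedback can be working exactly as designed while still failing at what its designers intended.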

What has been accomplished by the AI research community over the past 20 years is remarkable, and much of it could not have been achieved at the time without taking “shortcuts”, including the three summarised above. With the benefit of hindsight we can, however, reflect on how we introduced assumptions into our systems that are now generating problems, so that we can work on repairing and regulating the current version of AI. The same methods and principles that are perfectly innocuous in one domain can become problematic once deployed in a different one. This is the space where we will need better informed regulation.

Read the research to find out more:
Cristianini, N. Shortcuts to Artificial Intelligence. In Machines We Trust. MIT Press (forthcoming).

Further papers that may be of interest:
Burr, C. & Cristianini, N. Can Machines Read our Minds? Minds & Machines (2019).

Burr, C., Cristianini, N. & Ladyman, J. An Analysis of the Interaction Between Intelligent Software Agents and Human Users. Minds & Machines (2018) 28: 735.

This work is part of the ERC ThinkBIG project, Principal Investigator Nello Cristianini, University of Bristol.




Nello Cristianini is a Professor of Artificial Intelligence at the University of Bristol.
