AIhub.org
 

Shortcuts to artificial intelligence – a tale


by Nello Cristianini
19 May 2020




The current paradigm of artificial intelligence emerged as the result of a series of cultural innovations, some technical and some social. Among them are seemingly small design decisions that led to a subtle reframing of some of the field’s original goals and are now accepted as standard. They correspond to technical shortcuts, aimed at bypassing problems that were otherwise too complicated or too expensive to solve, while still delivering a viable version of AI.

Far from being a series of separate problems, the recent cases of unexpected effects of AI are consequences of the very choices that enabled the field to succeed, which is why they will be difficult to solve. Research at the University of Bristol has examined three of these choices, investigating their connection to some of today’s challenges in AI, including those relating to bias, value alignment, privacy and explainability.

1) Correlation vs causation
One important consequence of training statistical algorithms to emulate the decisions or behaviours of humans (e.g. recommending a book) is that we place less value on the reason why a decision is made, so long as the action it generates is appropriate. Predictions count more than explanations, knowing ‘what’ counts more than knowing ‘why’, and ‘correlation trumps causation’.
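The classic confounding pattern makes this concrete. The following is a minimal sketch, not from the original research, and the variables and coefficients are invented for illustration: ice-cream sales predict drownings extremely well because both are driven by temperature, yet a system trained only to predict never needs to notice that intervening on sales would change nothing.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden common cause: daily temperature (illustrative numbers).
temp = rng.uniform(10, 35, 1000)
# Two effects of temperature; neither causes the other.
ice_cream_sales = 2.0 * temp + rng.normal(0, 2, 1000)
drownings = 0.5 * temp + rng.normal(0, 1, 1000)

# Sales are an excellent *predictor* of drownings...
r = np.corrcoef(ice_cream_sales, drownings)[0, 1]

# ...but the association flows entirely through temperature: banning
# ice cream would not reduce drownings. Prediction ("what") succeeds
# without any causal understanding ("why").
print(f"correlation between sales and drownings: {r:.2f}")
```

As long as the deployment distribution matches the training distribution, the purely correlational predictor works; the gap only shows up when someone acts on its outputs.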

2) Data from the wild
The second shortcut was summarised in a paper by Halevy, Norvig and Pereira, which draws general lessons from the success stories of speech recognition and machine translation. It identifies the reason for those successes as the availability of large amounts of data that had already been created for different purposes. Data gathered from the wild has been crucial to the design of object recognition, face recognition and machine translation systems. The ubiquitous word embeddings, which represent the meaning of words before we process them, are likewise all learned from data gathered from the wild.
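The distributional idea behind such embeddings can be sketched in a few lines. This is a toy illustration with an invented three-sentence "corpus", not the method of any particular embedding system: words are represented by counts of the neighbours they co-occur with, so whatever text happens to be available determines which words end up similar.

```python
import numpy as np

# Tiny stand-in for "data from the wild": text created for other purposes.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]
tokens = sorted({w for s in corpus for w in s.split()})
idx = {w: i for i, w in enumerate(tokens)}

# Count how often each word co-occurs with each neighbour (+/- 1 window).
C = np.zeros((len(tokens), len(tokens)))
for s in corpus:
    ws = s.split()
    for i, w in enumerate(ws):
        for j in (i - 1, i + 1):
            if 0 <= j < len(ws):
                C[idx[w], idx[ws[j]]] += 1

def sim(a, b):
    """Cosine similarity between the context-count vectors of two words."""
    va, vb = C[idx[a]], C[idx[b]]
    return va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb))

# "cat" and "dog" occur in similar contexts, so their vectors align;
# meaning is inferred from found text, not from curated definitions.
print(sim("cat", "dog") > sim("cat", "the"))  # True
```

Real embedding methods compress such co-occurrence statistics into dense vectors, but the principle, and the dependence on whatever data was lying around, is the same; this is one route by which biases in found text enter downstream systems.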

3) Proxies and implicit feedback
Rather than asking users explicitly what they wanted an AI system to do, designers started making use of implicit feedback; in other words, they replaced unobservable quantities with cheaper, observable proxies. Understanding the misalignment between a proxy and the intended target has become an important question for AI.
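Proxy misalignment is easy to simulate. In this minimal sketch (the items and probabilities are invented for illustration) a system that can only observe clicks, a cheap proxy, ends up selecting the item the user values least, because the unobservable target, satisfaction, diverges from the proxy.

```python
import numpy as np

rng = np.random.default_rng(1)

# Candidate items: (true satisfaction, click probability).
# The listicle is "clickbait": a tempting headline (high click proxy)
# but low actual value to the user. Numbers are illustrative.
items = {
    "in-depth article":   (0.9, 0.2),
    "decent post":        (0.6, 0.3),
    "clickbait listicle": (0.1, 0.8),
}

# The system never observes satisfaction; it only sees simulated clicks.
clicks = {name: rng.binomial(1, p_click, 10_000).mean()
          for name, (_, p_click) in items.items()}

chosen = max(clicks, key=clicks.get)            # what the proxy selects
best = max(items, key=lambda k: items[k][0])    # what the user values

print(f"proxy selects: {chosen}; user would prefer: {best}")
```

Optimising the proxy is perfectly rational given what the system can measure; the misalignment only becomes visible when the proxy and the intended target are compared directly, which is exactly the comparison implicit feedback was introduced to avoid paying for.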

What the AI research community has accomplished over the past 20 years is remarkable, and much of it could not have been achieved at the time without taking “shortcuts”, including the three summarised above. With the benefit of hindsight, however, we can reflect on how we introduced assumptions into our systems that are now generating problems, so that we can work on repairing and regulating the current version of AI. Methods and principles that are perfectly innocuous in one domain can become problematic once deployed in a different one. This is the space where we will need better-informed regulation.

Read the research to find out more:
Cristianini, N. Shortcuts to Artificial Intelligence. In: Machines We Trust. MIT Press (forthcoming).

Further papers that may be of interest:
Burr, C. & Cristianini, N. Can Machines Read our Minds? Minds & Machines (2019).

Burr, C., Cristianini, N. & Ladyman, J. An Analysis of the Interaction Between Intelligent Software Agents and Human Users. Minds & Machines (2018) 28: 735.

This work is part of the ERC ThinkBIG project, Principal Investigator Nello Cristianini, University of Bristol.




Nello Cristianini is a Professor of Artificial Intelligence at the University of Bristol.


©2026.02 - Association for the Understanding of Artificial Intelligence