AIhub.org
 

Shortcuts to artificial intelligence – a tale


by Nello Cristianini
19 May 2020




The current paradigm of artificial intelligence emerged as the result of a series of cultural innovations, some technical and some social. Among them are seemingly small design decisions that led to a subtle reframing of some of the field’s original goals, and that are now accepted as standard. They correspond to technical shortcuts, aimed at bypassing problems that were otherwise too complicated or too expensive to solve, while still delivering a viable version of AI.

Far from being a series of separate problems, recent cases of unexpected effects of AI are consequences of the very choices that enabled the field to succeed, which is why they will be difficult to solve. Research at the University of Bristol has considered three of these choices, investigating their connection to some of today’s challenges in AI, including those relating to bias, value alignment, privacy and explainability.

1) Correlation vs causation
One important consequence of training statistical algorithms to emulate the decisions or behaviours of humans (e.g. recommending a book) is that we no longer place such a high value on the reason why a decision is made, so long as the action it generates is appropriate. Predictions count more than explanations, knowing ‘what’ counts more than knowing ‘why’, and ‘correlation trumps causation’.
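As a toy illustration, the sketch below (synthetic data and hypothetical feature names, with scikit-learn assumed available) trains a classifier to emulate a human decision from a mere correlate of its true cause. The model predicts ‘what’ the human did without containing any representation of ‘why’:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# The unobserved cause of the behaviour: genuine interest in a topic.
interest = rng.random(n)

# Observable features: one merely correlates with interest, one is noise.
past_clicks = interest + 0.1 * rng.normal(size=n)
time_of_day = rng.random(n)

# The human decision being emulated: buy the book when genuinely interested.
bought = (interest > 0.5).astype(int)

# Train only on the observable correlates; 'interest' itself is never seen.
X = np.column_stack([past_clicks, time_of_day])
model = LogisticRegression().fit(X, bought)
print("accuracy:", model.score(X, bought))  # high, despite knowing no 'why'
```

Nothing in the fitted model corresponds to an explanation: its accuracy comes entirely from the correlation between past clicks and the hidden interest.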

2) Data from the wild
The second shortcut was summarised in a paper by Halevy, Norvig and Pereira (“The Unreasonable Effectiveness of Data”), which draws general lessons from the success stories of speech recognition and machine translation. It identifies the reason for those successes as the availability of large amounts of data that had already been created for different purposes. Data gathered from the wild has been crucial in the design of object recognition, face recognition and machine translation systems. The ubiquitous word embeddings that allow us to represent the meaning of words before we process them are also learned from data gathered from the wild.
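As an illustration, the sketch below (assuming the gensim package is installed; the vectors are downloaded on first use) loads GloVe embeddings pre-trained on Wikipedia and Gigaword, text that was written for entirely different purposes than any task we might apply it to:

```python
import gensim.downloader as api

# Downloads (once) GloVe vectors trained on Wikipedia + Gigaword text,
# i.e. data gathered from the wild rather than created for this task.
vectors = api.load("glove-wiki-gigaword-50")

# Meaning as co-occurrence statistics: nearest neighbours of "book".
print(vectors.most_similar("book", topn=3))
```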

3) Proxies and implicit feedback
Rather than asking users explicitly what they wanted the AI system to do, designers started making use of implicit feedback: in other words, they replaced unobservable quantities with cheaper, observable proxies. Understanding the misalignment between a proxy and the intended target has become an important question for AI.
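A minimal sketch of this misalignment, using entirely synthetic numbers: a click is a cheap, observable proxy for the unobservable quantity the designer actually cares about, say user satisfaction, so selecting items by clicks rewards whatever else drives clicking:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

satisfaction = rng.random(n)  # the unobservable target we actually care about
clickbait = rng.random(n)     # a trait that drives clicks but not satisfaction

# The observable proxy: a click, driven more by clickbait than by satisfaction.
clicked = 0.4 * satisfaction + 0.6 * clickbait + 0.1 * rng.normal(size=n) > 0.5

print("mean satisfaction overall:  ", round(satisfaction.mean(), 2))
print("mean satisfaction | clicked:", round(satisfaction[clicked].mean(), 2))
print("mean clickbait    | clicked:", round(clickbait[clicked].mean(), 2))
```

Selecting on the proxy raises the ‘clickbait’ trait far more than satisfaction; that gap between the two conditional means is precisely the proxy/target misalignment described above.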

What has been accomplished by the AI research community over the past 20 years is remarkable, and much of it could not have been achieved at the time without taking “shortcuts”, including the three summarised above. With the benefit of hindsight we can, however, reflect on how we introduced assumptions into our systems that are now generating problems, so that we can work on repairing and regulating the current version of AI. Methods and principles that are perfectly innocuous in one domain can become problematic once deployed in a different one. This is the space where we will need better informed regulation.

Read the research to find out more:
Cristianini, N. Shortcuts to Artificial Intelligence. In Machines We Trust. MIT Press (forthcoming).

Further papers that may be of interest:
Burr, C. & Cristianini, N. Can Machines Read our Minds? Minds & Machines (2019).

Burr, C., Cristianini, N. & Ladyman, J. An Analysis of the Interaction Between Intelligent Software Agents and Human Users. Minds & Machines (2018) 28: 735.

This work is part of the ERC ThinkBIG project, Principal Investigator Nello Cristianini, University of Bristol.




Nello Cristianini is a Professor of Artificial Intelligence at the University of Bristol.



