Interview with Nello Cristianini: “The Shortcut – Why Intelligent Machines Do Not Think Like Us”


06 March 2023



Book cover with maze and two arrows, one ignoring the maze walls and taking a shortcut

In a new book, to be published on 8 March, Nello Cristianini explains the fundamental concepts of artificial intelligence (AI) and how it is changing culture and society. The Shortcut: Why Intelligent Machines Do Not Think Like Us is aimed at the general reader, providing an introduction to the concepts that underpin the technology and the wider implications for society. In the book, Nello provides practical advice on how we should approach AI in the future, including how to avoid the hype and the fears that tend to surround the technology today.

We spoke to Nello about the “first draft of AI”, the “shortcut”, some of the questions he considered in the book, and important considerations we should bear in mind as the technology progresses.

How do you see our current position, with regards to AI and its presence and use in society?

Building the first useful form of machine intelligence was not easy, but, as it turns out, that was not the most difficult part. Now we have to understand what we have created, how it will affect us and – most urgently – how to live with it in a safe way.

An important complication is that, just as we completed the first draft of AI, we have already embedded it in our infrastructure and everyday lives, to the point that it is already unthinkable to revert to a world without it. And yet we feel the anxiety of devolving so many decisions to algorithms we do not fully understand or control. Knowing how we got here, and how they actually work, is an important objective for all types of scholars, not only computer scientists.

Could you talk about some of the questions you consider in the book?

We should not ignore the way we completed this first version of AI: by discovering statistical patterns in vast quantities of human-generated data, and exploiting them to inform machine behaviour in a variety of domains, from the simple spell checker to the more advanced recommender agent that proposes videos and news, all the way to the latest ChatGPT. In each case we did away with the idea of actually understanding and modelling the behaviour that we wanted to reproduce: knowing what to do, without knowing why, was deemed sufficient. Where does this leave us in our attempt to control and regulate the technology? Can this line of development produce “superhuman” agents? Is it realistic to imagine “general intelligence”, and what would that even mean? Can we expect explanations and accountability, and do we understand the effects of long-term exposure to these agents, for society and individuals alike? We are now facing a deluge of questions, some fascinating, some urgent, some dating back to the earliest days of philosophy.

In the book “The Shortcut: Why Intelligent Machines Do Not Think Like Us” I explore these questions, proposing a new narrative to connect and make sense of events that have happened in the recent tumultuous past, so as to enable us to think about the road ahead.

We also reconstruct the history of the field, identifying a series of shortcuts we took along the way; we explore the complex scientific literature about the effects of interacting with recommender systems; and we consider a new perspective according to which intelligent behaviour might be the emergent property of billions of people interacting with an online system, an idea that is called “the social machine”.

What are some of the things that need to be considered when developing AI technologies?

One important consideration is that we should spend more time understanding the effects of delegating certain decisions to our machines, even those that are not “high stakes”. Consider the humble recommender system that suggests content every time you connect to a social platform. Some are concerned about its effects on your emotional wellbeing, others about the risk of addiction, others still about amplification and polarisation of opinions. But how many scientific studies have been completed, and what do they tell us? Before legal regulation can be effective, we should really clarify all the facts.

Other questions involve the possibility of building superhuman automata, such as AlphaGo, whose behaviour is superior to ours and we cannot fully explain. Can this happen in other domains, such as science, and what is the right way to relate to those agents? Can we consider the vast social machines, emergent from the online interaction of billions of users, as superhuman intelligent agents?

The questions posed by the latest developments in AI are fascinating, and will require work from a variety of scholars: legal, political and social scientists will be needed, along with natural and computer scientists. This is the next big adventure for artificial intelligence.

About Nello Cristianini

Nello Cristianini has been working in machine learning and artificial intelligence for over 20 years. He is Professor of Artificial Intelligence at the University of Bath, and has previously worked at the University of Bristol, the University of California, Davis, and Royal Holloway, University of London.




Lucy Smith is Senior Managing Editor for AIhub.





 
