AIhub.org
 

Interview with Nello Cristianini: “The Shortcut – Why Intelligent Machines Do Not Think Like Us”


06 March 2023



[Book cover: a maze with two arrows, one ignoring the maze walls and taking a shortcut]

In a new book, to be published on 8 March, Nello Cristianini explains the fundamental concepts of artificial intelligence (AI) and how it is changing culture and society. The Shortcut: Why Intelligent Machines Do Not Think Like Us is aimed at the general reader, providing an introduction to the concepts that underpin the technology and the wider implications for society. In the book, Nello provides practical advice on how we should approach AI in the future, including how to avoid the hype and the fears that tend to surround the technology today.

We spoke to Nello about the “first draft of AI”, the “shortcut”, some of the questions he considered in the book, and important considerations we should bear in mind as the technology progresses.

How do you see our current position, with regards to AI and its presence and use in society?

Building the first useful form of machine intelligence was not easy, but, as it turns out, that was not the most difficult part. Now we have to understand what we have created, how it will affect us and – most urgently – how to live with it in a safe way.

An important complication is that, just as we completed the first draft of AI, we embedded it in our infrastructure and everyday lives, to the point that it is already unthinkable to revert to a world without it. And yet we feel the anxiety of devolving so many decisions to algorithms we do not fully understand or control. Knowing how we got here, and how they actually work, is an important objective for all types of scholars, not only computer scientists.

Could you talk about some of the questions you consider in the book?

We should not ignore the way we completed this first version of AI: by discovering statistical patterns in vast quantities of human-generated data, and exploiting them to inform machine behaviour in a variety of domains, from the simple spell checker to the more advanced recommender agent that proposes videos and news, all the way to the latest ChatGPT. In each case we did away with the idea of actually understanding and modelling the behaviour that we wanted to reproduce: knowing what to do, without knowing why, was deemed sufficient. Where does this leave us in our attempt to control and regulate the technology? Can this line of development produce “superhuman” agents? Is it realistic to imagine “general intelligence”, and what would that even mean? Can we expect explanations and accountability, and do we understand the effects of long-term exposure to these agents, for society and individuals alike? We are now facing a deluge of questions, some fascinating, some urgent, some dating back to the earliest days of philosophy.

In the book “The Shortcut: Why Intelligent Machines Do Not Think Like Us” I explore these questions, proposing a new narrative to connect and make sense of events that have happened in the recent tumultuous past, so as to enable us to think about the road ahead.

We also reconstruct the history of the field, identifying a series of shortcuts we took along the way; we explore the complex scientific literature about the effects of interacting with recommender systems; and we consider a new perspective according to which intelligent behaviour might be the emergent property of billions of people interacting with an online system, an idea that is called “the social machine”.

What are some of the things that need to be considered when developing AI technologies?

One important consideration is that we should spend more time understanding the effects of delegating certain decisions to our machines, even those that are not “high stakes”. Consider the humble recommender system that suggests content every time you connect to a social platform. Some are concerned about its effects on your emotional wellbeing, others about the risk of addiction, others still about amplification and polarisation of opinions. But how many scientific studies have been completed, and what do they tell us? Before legal regulation can be effective, we should really clarify all the facts.

Other questions involve the possibility of building superhuman automata, such as AlphaGo, whose behaviour is superior to ours and which we cannot fully explain. Can this happen in other domains, such as science, and what is the right way to relate to those agents? Can we consider the vast social machines, emergent from the online interaction of billions of users, as superhuman intelligent agents?

The questions posed by the latest developments in AI are fascinating, and will require work from a variety of scholars: legal, political and social scientists will be needed, along with natural and computer scientists. This is the next big adventure for artificial intelligence.

About Nello Cristianini

Nello Cristianini has been working in machine learning and artificial intelligence for over 20 years. He is Professor of Artificial Intelligence at the University of Bath, and has previously worked at the University of Bristol, the University of California, Davis, and Royal Holloway, University of London.




Lucy Smith is Senior Managing Editor for AIhub.








©2025 - Association for the Understanding of Artificial Intelligence