AIhub.org

Interview with Nello Cristianini: “The Shortcut – Why Intelligent Machines Do Not Think Like Us”


06 March 2023



Book cover with maze and two arrows, one ignoring the maze walls and taking a shortcut

In a new book, to be published on 8 March, Nello Cristianini explains the fundamental concepts of artificial intelligence (AI) and how it is changing culture and society. The Shortcut: Why Intelligent Machines Do Not Think Like Us is aimed at the general reader, providing an introduction to the concepts that underpin the technology and the wider implications for society. In the book, Nello provides practical advice on how we should approach AI in the future, including how to avoid the hype and the fears that tend to surround the technology today.

We spoke to Nello about the “first draft of AI”, the “shortcut”, some of the questions he considered in the book, and important considerations we should bear in mind as the technology progresses.

How do you see our current position, with regards to AI and its presence and use in society?

Building the first useful form of machine intelligence was not easy, but, as it turns out, that was not the most difficult part. Now we have to understand what we have created, how it will affect us and – most urgently – how to live with it in a safe way.

An important complication is that, just as we completed the first draft of AI, we have already embedded it in our infrastructure and everyday lives, to the point that it is already unthinkable to revert to a world without it. And yet we feel the anxiety of devolving so many decisions to algorithms we do not fully understand or control. Knowing how we got here, and how they actually work, is an important objective for all types of scholars, not only computer scientists.

Could you talk about some of the questions you consider in the book?

We should not ignore the way we completed this first version of AI: by discovering statistical patterns in vast quantities of human-generated data, and exploiting them to inform machine behaviour in a variety of domains, from the simple spell checker to the more advanced recommender agent that proposes videos and news, all the way to the latest ChatGPT. In each case we did away with the idea of actually understanding and modelling the behaviour that we wanted to reproduce: knowing what to do, without knowing why, was deemed sufficient. Where does this leave us in our attempt to control and regulate the technology? Can this line of development produce “superhuman” agents? Is it realistic to imagine “general intelligence”, and what would that even mean? Can we expect explanations and accountability? Do we understand the effects of long-term exposure to these agents, for society and individuals alike? We are now facing a deluge of questions, some fascinating, some urgent, some dating back to the earliest days of philosophy.

In the book “The Shortcut: Why Intelligent Machines Do Not Think Like Us” I explore these questions, proposing a new narrative to connect and make sense of events that have happened in the recent tumultuous past, so as to enable us to think about the road ahead.

We also reconstruct the history of the field, identifying a series of shortcuts we took along the way; we explore the complex scientific literature about the effects of interacting with recommender systems; and we consider a new perspective according to which intelligent behaviour might be the emergent property of billions of people interacting with an online system, an idea that is called “the social machine”.

What are some of the things that need to be considered when developing AI technologies?

One important consideration is that we should spend more time understanding the effects of delegating certain decisions to our machines, even those that are not “high stakes”. Consider the humble recommender system that suggests content every time you connect to a social platform. Some are concerned about its effects on your emotional wellbeing, others about the risk of addiction, others still about amplification and polarisation of opinions. But how many scientific studies have been completed, and what do they tell us? Before legal regulation can be effective, we should really clarify all the facts.

Other questions involve the possibility of building superhuman automata, such as AlphaGo, whose behaviour is superior to ours and we cannot fully explain. Can this happen in other domains, such as science, and what is the right way to relate to those agents? Can we consider the vast social machines, emergent from the online interaction of billions of users, as superhuman intelligent agents?

The questions posed by the latest developments in AI are fascinating, and will require work from a variety of scholars: legal, political and social scientists will be needed, along with natural and computer scientists. This is the next big adventure for artificial intelligence.

About Nello Cristianini

Nello Cristianini has been working in machine learning and artificial intelligence for over 20 years. He is Professor of Artificial Intelligence at the University of Bath, and has previously worked at the University of Bristol, the University of California, Davis, and Royal Holloway, University of London.




Lucy Smith is Senior Managing Editor for AIhub.




©2025 - Association for the Understanding of Artificial Intelligence