Interview with Nello Cristianini: “The Shortcut – Why Intelligent Machines Do Not Think Like Us”


by Lucy Smith
06 March 2023



[Image: book cover showing a maze and two arrows, one ignoring the maze walls and taking a shortcut]

In a new book, to be published on 8 March, Nello Cristianini explains the fundamental concepts of artificial intelligence (AI) and how it is changing culture and society. The Shortcut: Why Intelligent Machines Do Not Think Like Us is aimed at the general reader, providing an introduction to the concepts that underpin the technology and the wider implications for society. In the book, Nello provides practical advice on how we should approach AI in the future, including how to avoid the hype and the fears that tend to surround the technology today.

We spoke to Nello about the “first draft of AI”, the “shortcut”, some of the questions he considered in the book, and important considerations we should bear in mind as the technology progresses.

How do you see our current position, with regards to AI and its presence and use in society?

Building the first useful form of machine intelligence was not easy, but, as it turns out, that was not the most difficult part. Now we have to understand what we have created, how it will affect us and – most urgently – how to live with it in a safe way.

An important complication is that, just as we completed this first draft of AI, we embedded it in our infrastructure and everyday lives, to the point that it is already unthinkable to revert to a world without it. And yet we feel the anxiety of devolving so many decisions to algorithms we do not fully understand or control. Knowing how we got here, and how these systems actually work, is an important objective for all types of scholars, not only computer scientists.

Could you talk about some of the questions you consider in the book?

We should not ignore the way we completed this first version of AI: by discovering statistical patterns in vast quantities of human-generated data, and exploiting them to inform machine behaviour in a variety of domains, from the simple spell checker to the more advanced recommender agent that proposes videos and news, all the way to the latest ChatGPT. In each case we did away with the idea of actually understanding and modelling the behaviour that we wanted to reproduce: knowing what to do, without knowing why, was deemed sufficient. Where does this leave us in our attempt to control and regulate the technology? Can this line of development produce “superhuman” agents? Is it realistic to imagine “general intelligence”, and what would that even mean? Can we expect explanations and accountability, and do we understand the effects of long-term exposure to these agents, for society and individuals alike? We are now facing a deluge of questions, some fascinating, some urgent, some dating back to the earliest days of philosophy.
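To make the shortcut concrete, here is a deliberately toy sketch in the spirit of the spell checker mentioned above (the corpus and code are invented for illustration and are not from the book): it corrects a misspelling purely by picking the most frequent known word within one edit of the input, with no model of spelling or meaning anywhere in the program.

```python
from collections import Counter

# A toy corpus standing in for "vast quantities of human-generated data".
corpus = ("the cat sat on the mat the dog sat on the log "
          "the cat ate the fish the dog ate the bone").split()
word_counts = Counter(corpus)

def edits1(word):
    """All strings one edit (delete, swap, replace, insert) away from `word`."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [left + right[1:] for left, right in splits if right]
    swaps = [left + right[1] + right[0] + right[2:]
             for left, right in splits if len(right) > 1]
    replaces = [left + c + right[1:] for left, right in splits if right
                for c in letters]
    inserts = [left + c + right for left, right in splits for c in letters]
    return set(deletes + swaps + replaces + inserts)

def correct(word):
    """Return the statistically most frequent known word near `word`."""
    known = set(word_counts)
    candidates = ({word} & known) or (edits1(word) & known) or {word}
    return max(candidates, key=lambda w: word_counts[w])

print(correct("caat"))  # -> "cat": the right behaviour, with no notion of why
```

Nothing in this program understands language; it simply exploits a statistical pattern in the data, which is the shortcut in miniature.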

In the book “The Shortcut: Why Intelligent Machines Do Not Think Like Us”, I explore these questions, proposing a new narrative to connect and make sense of events that have happened in the recent tumultuous past, so as to enable us to think about the road ahead.

We also reconstruct the history of the field, identifying a series of shortcuts we took along the way; we explore the complex scientific literature about the effects of interacting with recommender systems; and we consider a new perspective according to which intelligent behaviour might be the emergent property of billions of people interacting with an online system, an idea that is called “the social machine”.

What are some of the things that need to be considered when developing AI technologies?

One important consideration is that we should spend more time understanding the effects of delegating certain decisions to our machines, even those that are not “high stakes”. Consider the humble recommender system that suggests content every time you connect to a social platform. Some are concerned about its effects on our emotional wellbeing, others about the risk of addiction, others still about the amplification and polarisation of opinions. But how many scientific studies have been completed, and what do they tell us? Before legal regulation can be effective, we really need to clarify the facts.
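As an invented illustration of how little such a system needs to know about its own effects, here is a minimal co-occurrence recommender sketch (not any platform's actual algorithm; the viewing histories and item names are made up): it suggests whatever items most often appear alongside the current one in past histories, and the viewer's wellbeing never enters the code.

```python
from collections import Counter
from itertools import combinations

# Hypothetical viewing histories; a real platform would have logs
# from millions of users.
histories = [
    ["cats", "dogs", "birds"],
    ["cats", "dogs"],
    ["news", "politics"],
    ["cats", "birds"],
]

# Count how often each pair of items appears in the same history.
co_views = Counter()
for history in histories:
    for a, b in combinations(set(history), 2):
        co_views[(a, b)] += 1
        co_views[(b, a)] += 1

def recommend(item, k=2):
    """Suggest the k items most often co-viewed with `item`."""
    scores = Counter({b: n for (a, b), n in co_views.items() if a == item})
    return [b for b, _ in scores.most_common(k)]

print(recommend("cats"))  # e.g. ['dogs', 'birds']
```

Real systems are vastly larger and more sophisticated, but the logic is similar: the patterns in the logs drive the behaviour, which is exactly why the side effects on users have to be studied empirically rather than read off the code.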

Other questions involve the possibility of building superhuman automata, such as AlphaGo, whose behaviour is superior to ours and which we cannot fully explain. Can this happen in other domains, such as science, and what is the right way to relate to such agents? Can we consider the vast social machines, emergent from the online interactions of billions of users, as superhuman intelligent agents?

The questions posed by the latest developments in AI are fascinating, and will require work from a variety of scholars: legal, political and social scientists will be needed, along with natural and computer scientists. This is the next big adventure for artificial intelligence.

About Nello Cristianini

Nello Cristianini has been working in machine learning and artificial intelligence for over 20 years. He is Professor of Artificial Intelligence at the University of Bath, and has previously worked at the University of Bristol, the University of California, Davis, and Royal Holloway, University of London.




Lucy Smith is Senior Managing Editor for AIhub.



