Michael Wooldridge: Talking to the public about AI – #EAAI2021 invited talk


by Lucy Smith
17 March 2021



Microphone in front of a crowd. Photo by Kane Reinholdtsen on Unsplash.

Michael Wooldridge is the winner of the 2021 Educational Advances in Artificial Intelligence (EAAI) Outstanding Educator Award. He gave a plenary talk at AAAI/EAAI in February this year, focussing on lessons he has learnt in communicating AI to the public.


Who speaks for AI?

Michael’s public science journey began in 2014, when the press and social media became awash with stories of AI. He wondered who was going to respond to these often exaggerated narratives and add some nuance to the discussion. It turned out that nobody did: expert opinion was noticeably absent from the coverage. Concerned about this, the following year Michael organised a panel at the International Joint Conference on Artificial Intelligence (IJCAI) entitled “Who speaks for AI?” His aim for the session was to facilitate a discussion around who should be the voice that responds when the press report grand proclamations about AI. Not expecting it to be the most lively or well-attended of sessions, he was surprised by both the turnout and the heated nature of the debate.

That panel event catalysed his serious involvement in science communication and he has since written two popular science books, sat on UK Government committees, given many radio and TV interviews, and spoken at popular science and literary festivals. Through all of these different media, his aim is to inform the public debate in as balanced and as transparent a way as possible.

Learning lessons from talking about AI

Michael condensed everything he has learnt about talking to non-specialists about AI into 14 lessons, and during the talk he illustrated these with examples from personal experience. The first experience he mentioned was turning down an interview with the BBC, assuming that they would find someone else who was more eloquent and informed. It turned out that they didn’t, and this taught him the first lesson: “If you don’t inform the public about AI, then someone who knows even less than you will”.

Michael went on to speak about his experiences advising the UK Government. The rise of AI has prompted governments across the world to investigate AI, and to consider national responses. Michael has given evidence at an all-party parliamentary group in the UK, and has been involved in Select Committees. He learnt a number of lessons from these parliamentary sessions. One of these was that you need to keep your message clear and simple. He also highlighted the need to take care with the language you use when talking to non-specialists. This is illustrated by lesson number six in the figure below:

Slide from Michael Wooldridge’s talk. Lesson 6: researchers need to be very careful with the language they use when communicating with non-specialists.

Michael talked about his two popular science books and provided some guidance on writing books for non-specialist audiences. His first book (The Ladybird Expert Guide to AI) was a short-format, illustrated, historical introduction to AI. The challenge was to distil AI into 54 pages, whilst giving a balanced view and making sure there was no hype. However, he still wanted to convey a sense of wonder as to what was possible with AI.

His second book (The Road to Conscious Machines) was more ambitious and tells the story of what AI is, the ideas that have formed AI (such as symbolic AI, behavioural AI, neural AI), and what AI can and can’t do.

One important lesson he learnt was to remember the key message and to keep focussed on it. It is vital that the opening chapters of the book are clear and communicate that message. To keep readers interested, there must also be a clear narrative running through the book.

Since Michael’s journey began he, and many of his colleagues, have found themselves in the spotlight, trying to communicate what AI is and what the future holds. In general, he has found it an incredibly rewarding experience. He believes that researchers have a responsibility to step up and communicate the reality of AI today.

The conclusion slide from Michael Wooldridge’s talk.

Find out more

Who speaks for AI? – following the IJCAI panel discussion (mentioned above), the participants of that debate, and others from the field, wrote about the topic in this article, published in AI Matters.

You can watch the talk in full here.

About Michael Wooldridge

Michael Wooldridge is a Professor of Computer Science and Head of Department of Computer Science at the University of Oxford, and a programme director for AI at the Alan Turing Institute. His research interests are in the use of formal techniques for reasoning about multiagent systems. He is particularly interested in the computational aspects of rational action in systems composed of multiple self-interested computational systems. As well as more than 400 technical articles on AI, he has published two popular science introductions to the field: The Ladybird Expert Guide to AI (2018), and The Road to Conscious Machines (2020).





Lucy Smith is Senior Managing Editor for AIhub.