Turing Institute panel discussion on interpretability, safety and security in AI


13 April 2022



Alexa Steinbrück / Better Images of AI / Explainable AI / Licensed under CC-BY 4.0

A few months ago, the Alan Turing Institute played host to a conference on interpretability, safety, and security in AI. This public event brought together leading academics, industrialists, policy makers, and journalists to foster conversations with the wider public about the merits and risks of artificial intelligence technology. The talks are now all available to watch on the Institute’s YouTube channel.

As part of the conference, participants were treated to a lively panel debate, chaired by Hannah Fry, which saw Isaac Kohane, Aleksander Madry, Cynthia Rudin, and Manuela Veloso discuss a variety of topics. Amongst other things, they talked about breakthroughs in their respective fields, holding AI systems (and their creators) accountable, complicated decision making, interpretability, and misinformation.

You can watch the conversation in full below:

You can catch up with the rest of the talks at these links:


The image used to accompany this post is courtesy of Better Images of AI. They have a growing repository of images that anyone can download and use for free, provided credit is given under the Creative Commons licence referenced on the image card.




Lucy Smith is Senior Managing Editor for AIhub.



