AIhub.org
 

Turing Institute panel discussion on interpretability, safety and security in AI


by Lucy Smith
13 April 2022




Alexa Steinbrück / Better Images of AI / Explainable AI / Licensed under CC BY 4.0

A few months ago, the Alan Turing Institute played host to a conference on interpretability, safety, and security in AI. This public event brought together leading academics, industrialists, policy makers, and journalists to foster conversations with the wider public about the merits and risks of artificial intelligence technology. The talks are now all available to watch on the Institute’s YouTube channel.

As part of the conference, participants were treated to a lively panel debate, chaired by Hannah Fry, which saw Isaac Kohane, Aleksander Madry, Cynthia Rudin, and Manuela Veloso discuss a variety of topics. Amongst other things, they talked about breakthroughs in their respective fields, holding AI systems (and their creators) accountable, complicated decision making, interpretability, and misinformation.

You can watch the conversation in full below:

You can catch up with the rest of the talks at these links:


The image used to accompany this post is courtesy of Better Images of AI. They have a growing repository of images that can be downloaded and used by anyone for free, provided they are credited according to the Creative Commons licence referenced on the image card.




Lucy Smith is Senior Managing Editor for AIhub.




            AIhub is supported by:


©2025 - Association for the Understanding of Artificial Intelligence


 











