 

Turing Institute panel discussion on interpretability, safety and security in AI

by Lucy Smith
13 April 2022




Alexa Steinbrück / Better Images of AI / Explainable AI / Licenced by CC-BY 4.0

A few months ago, the Alan Turing Institute played host to a conference on interpretability, safety, and security in AI. This public event brought together leading academics, industrialists, policy makers, and journalists to foster conversations with the wider public about the merits and risks of artificial intelligence technology. The talks are now all available to watch on the Institute’s YouTube channel.

As part of the conference, participants were treated to a lively panel debate, chaired by Hannah Fry, which saw Isaac Kohane, Aleksander Madry, Cynthia Rudin, and Manuela Veloso discuss a variety of topics. Amongst other things, they talked about breakthroughs in their respective fields, holding AI systems (and their creators) accountable, complicated decision making, interpretability, and misinformation.

You can watch the conversation in full below:

You can catch up with the rest of the talks on the Institute’s YouTube channel.


The image used to accompany this post is courtesy of Better Images of AI. They have a growing repository of images that can be downloaded and used by anyone for free, provided they are credited using the Creative Commons license referenced on the image card.




Lucy Smith, Managing Editor for AIhub.



