Turing Institute panel discussion on interpretability, safety and security in AI

13 April 2022, by Lucy Smith




Alexa Steinbrück / Better Images of AI / Explainable AI / Licenced by CC-BY 4.0

A few months ago, the Alan Turing Institute played host to a conference on interpretability, safety, and security in AI. This public event brought together leading academics, industrialists, policy makers, and journalists to foster conversations with the wider public about the merits and risks of artificial intelligence technology. The talks are now all available to watch on the Institute’s YouTube channel.

As part of the conference, participants were treated to a lively panel debate, chaired by Hannah Fry, which saw Isaac Kohane, Aleksander Madry, Cynthia Rudin, and Manuela Veloso discuss a variety of topics. Amongst other things, they talked about breakthroughs in their respective fields, holding AI systems (and their creators) accountable, complicated decision making, interpretability, and misinformation.

You can watch the conversation in full below:

You can catch up with the rest of the talks on the Institute’s YouTube channel.


The image used to accompany this post is courtesy of Better Images of AI. They have a growing repository of images that can be downloaded and used by anyone for free, provided they are credited using the Creative Commons licence referenced on the image card.




Lucy Smith, Managing Editor for AIhub.




