Alexa Steinbrück / Better Images of AI / Explainable AI / Licensed by CC-BY 4.0
A few months ago, the Alan Turing Institute played host to a conference on interpretability, safety, and security in AI. This public event brought together leading academics, industrialists, policy makers, and journalists to foster conversations with the wider public about the merits and risks of artificial intelligence technology. The talks are now all available to watch on the Institute’s YouTube channel.
As part of the conference, participants were treated to a lively panel debate, chaired by Hannah Fry, in which Isaac Kohane, Aleksander Madry, Cynthia Rudin, and Manuela Veloso discussed a variety of topics. Amongst other things, they talked about breakthroughs in their respective fields, holding AI systems (and their creators) accountable, complicated decision-making, interpretability, and misinformation.
You can watch the conversation in full below:
You can catch up with the rest of the talks at these links:
- Model based deep learning: Applications to imaging and communications, Yonina Eldar (Weizmann Institute of Science)
- Towards safe, reliable and trustworthy AI, Pushmeet Kohli (Google)
- Deep learning for scientific computing: Two stories on the gap between theory and practice, Ben Adcock (Simon Fraser University)
- Interpreting deep neural networks towards trustworthiness, Bin Yu (University of California, Berkeley)
- Machine learning in healthcare: From interpretability to a new human-machine partnership, Mihaela van der Schaar (University of Cambridge)
- Scoring systems: At the extreme of interpretable machine learning, Cynthia Rudin (Duke University)
- Active human-machine interactions necessary for interpretability, Isaac Kohane (Harvard University)
- Preserving patient safety as AI transforms clinical care, Curt Langlotz (Stanford University)
- Recent progress in predictive inference, Emmanuel Candès (Stanford University)
- Safety and robustness for deep learning with provable guarantees, Marta Kwiatkowska (University of Oxford)
- Algorithmic fairness and individual probabilities, Cynthia Dwork (Harvard University)
- Human-compatible artificial intelligence, Stuart Russell (University of California, Berkeley)
The image used to accompany this post is courtesy of Better Images of AI. They have a growing repository of images that can be downloaded and used by anyone for free, provided they are credited in accordance with the Creative Commons license referenced on the image card.
Lucy Smith is Senior Managing Editor for AIhub.