AIhub.org
 

#AAAI2022 invited talk – Cynthia Rudin on interpretable machine learning


by Lucy Smith
09 March 2022




In October 2021, Cynthia Rudin was announced as the winner of the AAAI Squirrel AI award. This award recognizes positive impacts of artificial intelligence to protect, enhance, and improve human life in meaningful ways. Cynthia was formally presented with the prize during an award ceremony at the AAAI Conference on Artificial Intelligence, following which she delivered an invited talk.

Cynthia began her talk, and set the scene for her research in interpretable AI, with the story of a project she carried out in New York City, where the goal was to maintain the power grid using machine learning.

Some parts of the city’s grid infrastructure are as much as 140 years old, and this inevitably leads to failures. Cynthia and her team were tasked with using historical records of past grid failures to predict future ones. Many of the failures occur underground at manhole locations, and may manifest as fires or explosions. Typically the failure is due to wire degradation, so the ability to predict this, and replace wires before a failure occurs, would be highly beneficial.

Predicting manhole events – the process. Screenshot from Cynthia Rudin’s talk.

One of the challenges with this project was the data, which took the form of written tickets. It was very difficult to extract the required information from these tickets. In addition, pockets of domain knowledge were spread across different databases. The slide above outlines the process that Cynthia and her team designed. They first had to clean the data and define what would constitute a “serious event” at a manhole location (e.g. a fire or explosion). Their model then ranked the manholes in order of vulnerability. The final part of the process concerned the design of two tools to show what was going on in the analysis. These tools helped with interpretability and provided the electricity company with report cards that they could act on. A test of their method on unseen data revealed that if the electricity company had acted on the top 10% of their ranked list, they could have reduced the number of manhole events by up to 44% for that time period.
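To make the shape of this pipeline concrete, here is a minimal sketch in Python. Everything in it – the features, the model choice, and the toy data – is an illustrative assumption, not the actual system built for the project:

```python
# Hypothetical sketch of a "rank manholes by vulnerability" pipeline.
# Features, labels and model choice are illustrative assumptions, not
# the system described in the talk.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000  # number of manholes

# Toy per-manhole features: past events, cable age, open tickets.
X = np.column_stack([
    rng.poisson(1.0, n),         # serious events in the past 3 years
    rng.integers(5, 140, n),     # age of oldest cable (years)
    rng.poisson(2.0, n),         # trouble tickets filed
])
y = rng.binomial(1, 0.1, n)      # 1 = serious event next year (toy labels)

model = LogisticRegression().fit(X, y)

# Rank manholes from most to least vulnerable.
scores = model.predict_proba(X)[:, 1]
ranking = np.argsort(-scores)

# How many of the actual events sit in the top 10% of the ranked list?
top_10_percent = ranking[: n // 10]
captured = y[top_10_percent].sum() / max(y.sum(), 1)
print(f"Events captured by acting on the top 10%: {captured:.0%}")
```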

This project shaped how Cynthia thinks about what is important in machine learning problems. During the project she learnt that more powerful machine learning methods were no more effective than an interpretable process. They tried a range of machine learning models, from the most basic up to the most powerful, and found no performance difference.

Cynthia spoke about some of the lessons she learnt about machine learning culture at that time. The stakes for the majority of problems tended to be low, and data (which was usually clean) came from repositories. This was quite a contrast to the real-world manhole problem, with high stakes and messy data. She noted that problems arise when a low stakes mentality is applied to high stakes fields, for example parole decisions. There are bad decisions being made because someone typed the wrong number into a black-box model.

Lessons learned. Screenshot from Cynthia Rudin’s talk.

Something else that Cynthia realised was that people’s experiences with machine learning are wildly different depending on what type of problem they are working on. Specifically, raw data are very different from tabular data, and these two data types are like two different worlds of machine learning. For raw data, neural networks are the only technique that works right now. In contrast, with tabular data, all methods have a similar performance. That includes very sparse models, such as decision trees or scoring systems.
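This rough parity on tabular data is easy to check for yourself. Here is a hedged sketch, using a standard scikit-learn dataset and three models chosen purely for illustration:

```python
# Quick check of "all methods perform similarly on tabular data".
# The dataset and the particular models are illustrative choices.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

models = {
    "logistic regression": LogisticRegression(max_iter=5000),
    "shallow decision tree": DecisionTreeClassifier(max_depth=3),
    "gradient boosting": GradientBoostingClassifier(),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name:>22}: AUC = {auc:.3f}")
```

On many tabular datasets the sparse, interpretable models land close to the boosted ensemble – which is exactly the point Cynthia makes about this world of machine learning.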

Therefore, working with tabular data gives us the opportunity to create simple models that are easy for the users to interpret. An example of such a model is one that Cynthia and her team developed to aid doctors in preventing brain damage in critically ill patients, where EEG measurements are used to detect seizures. Their model, 2HELPS2B, is now widely used. It is a score-based model, which is simple for doctors to use; they can memorise it just by knowing its name. Although the end product of the model is very simple, its design was not. It was necessary to work out which subset of features works together and leads to the most effective prediction of seizures. Designing these generalised additive models is a combinatorially hard problem.

2HELPS2B model. Screenshot from Cynthia Rudin’s talk.
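A scoring system of this kind boils down to a handful of integer point values that a clinician can tally from a checklist. Below is a minimal sketch of the idea; the feature names, point values and risk bands are illustrative placeholders, not the clinically validated 2HELPS2B weights:

```python
# Illustrative medical scoring system in the style of 2HELPS2B.
# Point values and risk bands are placeholders, NOT the validated model.
POINTS = {
    "pattern_frequency_over_2hz": 1,
    "sporadic_epileptiform_discharges": 1,
    "periodic_or_rhythmic_pattern": 1,
    "plus_features": 1,
    "prior_seizure": 1,
    "brief_rhythmic_discharges": 2,
}

RISK_BAND = {0: "low", 1: "low", 2: "moderate", 3: "high"}  # placeholders

def seizure_score(findings: set[str]) -> int:
    """Sum the points for the EEG findings that are present."""
    return sum(pts for name, pts in POINTS.items() if name in findings)

s = seizure_score({"prior_seizure", "pattern_frequency_over_2hz"})
print(f"score = {s}, risk band = {RISK_BAND[min(s, 3)]}")
```

The appeal is that the whole model fits in a doctor’s head: no software is needed at the bedside, and every point in the total can be traced back to a specific finding.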

Cynthia’s lab does a lot of work on sparse generalised additive models. They have designed different models for various medical applications. These include models for ADHD screening, sleep apnea screening, and for the clock drawing test to detect dementia. You can read their latest work on sparse generalised additive models, which was recently published on arXiv.
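The combinatorial difficulty mentioned above comes from the search over feature subsets: with d candidate features there are 2^d possible models. For a handful of features you can simply enumerate them, as in the hedged sketch below (Cynthia’s lab uses far cleverer optimisation than this brute force, and the dataset here is again just an illustration):

```python
# Brute-force subset search for a sparse model, to show why designing
# sparse generalised additive models is combinatorially hard.
from itertools import combinations

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
d = 8                    # restrict to the first 8 features for speed
X = X[:, :d]

best_auc, best_subset = 0.0, None
for k in range(1, 4):    # look for very sparse models: 1-3 features
    for subset in combinations(range(d), k):
        auc = cross_val_score(
            LogisticRegression(max_iter=5000),
            X[:, list(subset)], y, cv=5, scoring="roc_auc",
        ).mean()
        if auc > best_auc:
            best_auc, best_subset = auc, subset

print(f"Best subset {best_subset}: AUC = {best_auc:.3f}")
```

Even in this toy setting the number of candidate models grows explosively with d, which is why specialised optimisation methods are needed to build models like 2HELPS2B at realistic scale.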

To close, Cynthia issued a call to the AI community to accept applied AI research more fully into the fold. She suggested applied-focussed tracks at major conferences as a good starting point.

You can watch the talk in full here.





Lucy Smith is Senior Managing Editor for AIhub.



