AIhub.org
 

Cynthia Rudin wins AAAI Squirrel AI Award


by Lucy Smith
15 October 2021



Cynthia Rudin

Cynthia Rudin, professor of computer science at Duke University, USA, has become the second recipient of the AAAI Squirrel AI Award. She was awarded the 2022 prize for her pioneering scientific work on interpretable and transparent AI systems in real-world deployments, her advocacy for these features in highly sensitive areas such as social justice and medical diagnosis, and her service as a role model for researchers and practitioners.

Cynthia talks about the prize, and what inspires her work, in this short video from Duke University:

Cynthia has worked on a variety of research topics during her career. Her first applied project used machine learning to predict which manholes in New York City were at risk of exploding due to degrading and overloaded electrical circuitry.

An area of particular focus for Cynthia is interpretable machine learning, which she has applied in a range of settings. She and her collaborators designed a simple point-based system that predicts which patients are most at risk of having destructive seizures after a stroke or other brain injury. She also works on interpretable models in the field of criminal justice.
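To give a flavour of what such a point-based model looks like, here is a minimal, hypothetical sketch in Python. The risk factors, point values and probability table are invented for illustration only; they are not Cynthia's published seizure-risk model.

# Hypothetical point-based risk score, illustrating the style of interpretable
# scoring system described above (invented numbers, not the published model).
POINTS = {
    "prior_seizure": 1,             # patient has had a seizure before
    "frequent_discharges": 2,       # frequent epileptiform discharges on EEG
    "brief_rhythmic_discharges": 2, # brief rhythmic discharges on EEG
}

# Lookup from total score to estimated risk (made-up numbers; a real scoring
# system calibrates this table on data).
RISK_TABLE = {0: 0.05, 1: 0.10, 2: 0.25, 3: 0.50, 4: 0.70, 5: 0.85}

def risk_score(patient):
    """Add up the points for each risk factor the patient has."""
    return sum(pts for factor, pts in POINTS.items() if patient.get(factor))

def estimated_risk(patient):
    """Map the integer score to a risk estimate via the lookup table."""
    return RISK_TABLE[risk_score(patient)]

patient = {"prior_seizure": True, "frequent_discharges": True}
print(risk_score(patient), estimated_risk(patient))  # 3 0.5

The appeal of a model like this is that a clinician can compute, and sanity-check, the prediction by hand, which is exactly the kind of transparency the award recognises.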

About the AAAI Squirrel AI Award

The AAAI Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity recognizes positive impacts of artificial intelligence to protect, enhance, and improve human life in meaningful ways with long-lived effects. The award is given annually at the conference of the Association for the Advancement of Artificial Intelligence (AAAI), and is accompanied by a prize of $1,000,000 plus travel expenses to the conference. Financial support for the award is provided by Squirrel AI. The award was given for the first time in 2021.

Cynthia Rudin biography

Cynthia earned undergraduate degrees in mathematical physics and music theory from the University at Buffalo before completing her PhD in applied and computational mathematics at Princeton. She then worked as a National Science Foundation postdoctoral research fellow at New York University, and as an associate research scientist at Columbia University. She became an associate professor of statistics at the Massachusetts Institute of Technology before joining Duke’s faculty in 2017, where she holds appointments in computer science, electrical and computer engineering, biostatistics and bioinformatics, and statistical science.

You can read the AAAI press release here.





Lucy Smith is Senior Managing Editor for AIhub.






