AIhub.org
 

Radical AI podcast: featuring Michael Madaio


22 October 2020




Michael Madaio
Hosted by Dylan Doyle-Burke and Jessie J Smith, Radical AI is a podcast featuring the voices of the future in the field of artificial intelligence ethics. In this episode, Jess and Dylan chat to Michael Madaio about practices for co-designing ethical technologies.

Checklists and principles and values, oh my! Practices for co-designing ethical technologies with Michael Madaio

What are the limitations of using checklists for fairness? What are the alternatives? How do we effectively design ethical AI systems around our collective values?

To answer these questions we welcome Dr Michael Madaio to the show. Michael is a postdoctoral researcher at Microsoft Research working with the FATE (Fairness, Accountability, Transparency, and Ethics in AI) research group. He works at the intersection of human-computer interaction and AI/ML, where he uses human-centered methods to understand how we might co-design more equitable data-driven technologies with stakeholders. Michael received his PhD in Human-Computer Interaction from Carnegie Mellon University, where he was a PIER fellow funded by the Institute of Education Sciences and a Siebel Scholar. Michael, along with other collaborators at Microsoft FATE, authored the paper “Co-Designing Checklists to Understand Organizational Challenges and Opportunities around Fairness in AI”, which is one of the major focuses of this interview.

Full show notes for this episode can be found at Radical AI.


About Radical AI:

Hosted by Dylan Doyle-Burke, a PhD student at the University of Denver, and Jessie J Smith, a PhD student at the University of Colorado Boulder, Radical AI is a podcast featuring the voices of the future in the field of Artificial Intelligence Ethics.

Radical AI lifts up people, ideas, and stories that represent the cutting edge in AI, philosophy, and machine learning. In a world where platforms far too often feature the status quo and the usual suspects, Radical AI is a breath of fresh air whose mission is “To create an engaging, professional, educational and accessible platform centering marginalized or otherwise radical voices in industry and the academy for dialogue, collaboration, and debate to co-create the field of Artificial Intelligence Ethics.”

Through interviews with rising stars and experts in the field, we boldly engage with the topics that are transforming our world, like bias, discrimination, identity, accessibility, privacy, and issues of morality.

To find more information regarding the project, including podcast episode transcripts and show notes, please visit Radical AI.








AIhub is supported by:




 

©2025 - Association for the Understanding of Artificial Intelligence