AIhub.org
 

Radical AI podcast: featuring Shion Guha


08 August 2022



Shion Guha

Hosted by Dylan Doyle-Burke and Jessie J Smith, Radical AI is a podcast featuring the voices of the future in the field of artificial intelligence ethics. In this episode, Jess and Dylan chat with Shion Guha about government use of AI.

Should the government use AI?

How does the government use algorithms? How do algorithms impact social services, policing, and other public services? And where does Silicon Valley fit in?

In this episode we interview Shion Guha about how governments adopt algorithms to enforce public policy. Shion is an Assistant Professor in the Faculty of Information at the University of Toronto. His research sits within the field of Human-Centered Data Science, which he helped develop. Shion explores the intersection of AI and public policy by researching algorithmic decision-making in public services such as criminal justice, child welfare, and healthcare.

Follow Shion on Twitter @GuhaShion.

If you enjoyed this episode, please subscribe, submit a rating and review, and connect with us on Twitter at @radicalaipod.

Full show notes for this episode can be found at Radical AI.


About Radical AI:

Hosted by Dylan Doyle-Burke, a PhD student at the University of Denver, and Jessie J Smith, a PhD student at the University of Colorado Boulder, Radical AI is a podcast featuring the voices of the future in the field of Artificial Intelligence Ethics.

Radical AI lifts up people, ideas, and stories that represent the cutting edge in AI, philosophy, and machine learning. In a world where platforms far too often feature the status quo and the usual suspects, Radical AI is a breath of fresh air whose mission is “To create an engaging, professional, educational and accessible platform centering marginalized or otherwise radical voices in industry and the academy for dialogue, collaboration, and debate to co-create the field of Artificial Intelligence Ethics.”

Through interviews with rising stars and experts in the field, we boldly engage with the topics that are transforming our world, such as bias, discrimination, identity, accessibility, privacy, and issues of morality.

To find more information regarding the project, including podcast episode transcripts and show notes, please visit Radical AI.







