The Good Robot Hot Take: does AI know how you feel?


21 October 2024




Hosted by Eleanor Drage and Kerry McInerney, The Good Robot is a podcast which explores the many complex intersections between gender, feminism and technology.

In this episode, we chat about coming back from summer break and discuss a research paper recently published by Kerry and the AI ethicist and researcher Os Keyes, The Infopolitics of Feeling: How race and disability are configured in Emotion Recognition Technology. We discuss why AI tools that promise to read our emotions from our faces are scientifically and politically suspect. We then explore the ableist foundations of what was once the most famous Emotion AI firm in the world: Affectiva. Kerry also explains how the Stop Asian Hate and Black Lives Matter protests of 2020 inspired this research project, and why she thinks that emotion recognition technologies have no place in our societies.

Listen to the episode here:

For the reading list and transcript for this episode, visit The Good Robot website.

This episode is also available to watch on YouTube:

About The Good Robot Podcast

Dr Eleanor Drage and Dr Kerry McInerney are Research Associates at the Leverhulme Centre for the Future of Intelligence, where they work on the Stiftung Mercator-funded project on Desirable Digitalisation. Previously, they were Christina Gaw Postdoctoral Researchers in Gender and Technology at the University of Cambridge Centre for Gender Studies. During the COVID-19 pandemic, they co-founded The Good Robot Podcast to explore the many complex intersections between gender, feminism and technology.





 

©2025 - Association for the Understanding of Artificial Intelligence