
Radical AI podcast: featuring Su Lin Blodgett

08 April 2021




Su Lin Blodgett
Hosted by Dylan Doyle-Burke and Jessie J Smith, Radical AI is a podcast featuring the voices of the future in the field of artificial intelligence ethics. In this episode, Jess and Dylan chat with Su Lin Blodgett about defining bias.

Defining bias with Su Lin Blodgett

How do we define bias? Is all bias the same? Is it possible to eliminate bias completely in our AI systems? Should we even try? To answer these questions and more, we welcome Su Lin Blodgett to the show.

Su Lin is a postdoctoral researcher in the Fairness, Accountability, Transparency, and Ethics (FATE) group at Microsoft Research Montréal. She is broadly interested in examining the social implications of natural language processing (NLP) technologies, and in using NLP approaches to examine language variation and change. She previously completed her Ph.D. in computer science at the University of Massachusetts Amherst.

Follow Su Lin Blodgett on Twitter @sulin_blodgett.

Full show notes for this episode can be found at Radical AI.

Listen to the episode below:

About Radical AI:

Hosted by Dylan Doyle-Burke, a PhD student at the University of Denver, and Jessie J Smith, a PhD student at the University of Colorado Boulder, Radical AI is a podcast featuring the voices of the future in the field of Artificial Intelligence Ethics.

Radical AI lifts up people, ideas, and stories that represent the cutting edge in AI, philosophy, and machine learning. In a world where platforms far too often feature the status quo and the usual suspects, Radical AI is a breath of fresh air whose mission is “To create an engaging, professional, educational and accessible platform centering marginalized or otherwise radical voices in industry and the academy for dialogue, collaboration, and debate to co-create the field of Artificial Intelligence Ethics.”

Through interviews with rising stars and experts in the field, we boldly engage with the topics that are transforming our world, like bias, discrimination, identity, accessibility, privacy, and issues of morality.

To find more information regarding the project, including podcast episode transcripts and show notes, please visit Radical AI.



