Radical AI podcast: featuring Meredith Ringel Morris


25 January 2021


Hosted by Dylan Doyle-Burke and Jessie J Smith, Radical AI is a podcast featuring the voices of the future in the field of artificial intelligence ethics. In this episode Jess and Dylan chat to Meredith Ringel Morris about ability and accessibility in AI.

Ability and accessibility in AI with Meredith Ringel Morris

What should you know about ability and accessibility in AI and responsible technology development? In this episode, we interview Meredith Ringel Morris.

Meredith is a computer scientist conducting research in the areas of human-computer interaction (HCI), computer-supported cooperative work (CSCW), social computing, and accessibility. Her current research focus is on accessibility, particularly on the intersection of accessibility and social technologies.

Follow Meredith Morris on Twitter @merrierm.

Full show notes for this episode can be found at Radical AI.


About Radical AI:

Hosted by Dylan Doyle-Burke, a PhD student at the University of Denver, and Jessie J Smith, a PhD student at the University of Colorado Boulder, Radical AI is a podcast featuring the voices of the future in the field of Artificial Intelligence Ethics.

Radical AI lifts up people, ideas, and stories that represent the cutting edge in AI, philosophy, and machine learning. In a world where platforms far too often feature the status quo and the usual suspects, Radical AI is a breath of fresh air whose mission is “To create an engaging, professional, educational and accessible platform centering marginalized or otherwise radical voices in industry and the academy for dialogue, collaboration, and debate to co-create the field of Artificial Intelligence Ethics.”

Through interviews with rising stars and experts in the field, we boldly engage with the topics that are transforming our world, such as bias, discrimination, identity, accessibility, privacy, and issues of morality.

To find more information regarding the project, including podcast episode transcripts and show notes, please visit Radical AI.

























©2021 - Association for the Understanding of Artificial Intelligence