Hosted by Dylan Doyle-Burke and Jessie J Smith, Radical AI is a podcast featuring the voices of the future in the field of artificial intelligence ethics. In this episode Jess and Dylan chat with Jaime Snyder about visualizing our lives through data.
How do we see ourselves in data? What is self-tracking and how can we design for visualizing the data of our bodies and mental health? How do we make visualized data more accessible?
In this episode, we interview Jaime Snyder about the data visualization of COVID, mental health, and more. Jaime Snyder is an Associate Professor in the Information School at the University of Washington in Seattle.
She leads the Visualization Studies Research Studio and is also an Adjunct Associate Professor in the UW Department of Human-Centered Design and Engineering. Snyder’s research draws on her background as an artist and information science scholar to explore the creation and use of visual representations of information, data, and knowledge in collaborative and coordinated contexts.
Follow Jaime on Twitter @jay_ess.
Full show notes for this episode can be found at Radical AI.
Hosted by Dylan Doyle-Burke, a PhD student at the University of Denver, and Jessie J Smith, a PhD student at the University of Colorado Boulder, Radical AI is a podcast featuring the voices of the future in the field of Artificial Intelligence Ethics.
Radical AI lifts up people, ideas, and stories that represent the cutting edge in AI, philosophy, and machine learning. In a world where platforms far too often feature the status quo and the usual suspects, Radical AI is a breath of fresh air whose mission is “To create an engaging, professional, educational and accessible platform centering marginalized or otherwise radical voices in industry and the academy for dialogue, collaboration, and debate to co-create the field of Artificial Intelligence Ethics.”
Through interviews with rising stars and experts in the field, we boldly engage with the topics that are transforming our world, such as bias, discrimination, identity, accessibility, privacy, and issues of morality.
To find more information regarding the project, including podcast episode transcripts and show notes, please visit Radical AI.