Hosted by Dylan Doyle-Burke and Jessie J Smith, Radical AI is a podcast featuring the voices of the future in the field of artificial intelligence ethics. In this episode, Jess and Dylan chat with Eun Seo Jo about “The History that Defines our Technological Future”.
How does your data tell your story? Is historical data political? What do our archives have to do with defining the future of our technology? To answer these questions and more, The Radical AI Podcast welcomes Stanford PhD student and archivist Eun Seo Jo to the show. Eun Seo Jo is a PhD student in history at Stanford University. Her research broadly covers applications of machine learning to historical data and the ethical concerns of using socio-cultural data for AI research and systems.
You can follow Eun Seo Jo on Twitter @unsojo.
Lessons from Archives: Strategies for Collecting Sociocultural Data in Machine Learning by Eun Seo Jo and Timnit Gebru
Disseminating Research News in HCI: Perceived Hazards, How-To’s, and Opportunities for Innovation by C. Estelle Smith, Eduardo Nevarez, and Haiyi Zhu
Full show notes for this episode can be found at Radical AI.
Radical AI is hosted by Dylan Doyle-Burke, a PhD student at the University of Denver, and Jessie J Smith, a PhD student at the University of Colorado Boulder.
Radical AI lifts up people, ideas, and stories that represent the cutting edge in AI, philosophy, and machine learning. In a world where platforms far too often feature the status quo and the usual suspects, Radical AI is a breath of fresh air whose mission is “To create an engaging, professional, educational and accessible platform centering marginalized or otherwise radical voices in industry and the academy for dialogue, collaboration, and debate to co-create the field of Artificial Intelligence Ethics.”
Through interviews with rising stars and experts in the field, we boldly engage with the topics that are transforming our world, such as bias, discrimination, identity, accessibility, privacy, and issues of morality.
To find more information regarding the project, including podcast episode transcripts and show notes, please visit Radical AI.