
Radical AI podcast: featuring Raziye Buse Çetin


17 December 2021




Raziye Buse Çetin
Hosted by Dylan Doyle-Burke and Jessie J Smith, Radical AI is a podcast featuring the voices of the future in the field of artificial intelligence ethics. In this episode, Jess and Dylan chat to Raziye Buse Çetin about decolonial AI.

Decolonial AI 101

What is Decolonial AI? How can we apply a postcolonial lens to AI design?

In this episode we interview Raziye Buse Çetin about colonial, decolonial, and postcolonial AI – and the newly-released Decolonial AI Manyfesto.

Buse is an AI policy and ethics researcher and consultant. Her work revolves around the ethics, impact, and governance of AI systems. She combines her lived experience with her interest in postcolonial studies, intersectional feminism, and science and technology studies (STS) to develop critical thinking about AI technologies and the narratives around them.

Follow Buse on Twitter @BuseCett.

Full show notes for this episode can be found at Radical AI.

Listen to the episode below:

About Radical AI:

Hosted by Dylan Doyle-Burke, a PhD student at the University of Denver, and Jessie J Smith, a PhD student at the University of Colorado Boulder, Radical AI is a podcast featuring the voices of the future in the field of Artificial Intelligence Ethics.

Radical AI lifts up people, ideas, and stories that represent the cutting edge in AI, philosophy, and machine learning. In a world where platforms far too often feature the status quo and the usual suspects, Radical AI is a breath of fresh air whose mission is “To create an engaging, professional, educational and accessible platform centering marginalized or otherwise radical voices in industry and the academy for dialogue, collaboration, and debate to co-create the field of Artificial Intelligence Ethics.”

Through interviews with rising stars and experts in the field, we boldly engage with the topics that are transforming our world, such as bias, discrimination, identity, accessibility, privacy, and issues of morality.

To find more information regarding the project, including podcast episode transcripts and show notes, please visit Radical AI.




