AIhub.org
 

GRACE Podcast: Dr Harriett Jernigan interviews Dr Brandeis Marshall


04 July 2022




GRACE: Global Review of AI Community Ethics is a new student-run, peer-reviewed, open-access, international journal. To accompany the journal, there is a podcast hosted by Dr Harriett Jernigan.

In this first episode, Harriett interviews Dr Brandeis Marshall about her research, ranking algorithms, misinformation, combining the analytical and the creative, the lack of Black women in leadership roles in the data industry, the disproportionate effect of data on Black women, tech solutionism, her forthcoming book, and more.

Listen to the audio version below:

You can watch the video version here.

Dr Brandeis Marshall is founder and CEO of DataedX Group, a social impact business that provides learning and development activities on recognizing algorithmic harms and humanizing data practices for data educators, scholars and practitioners. She is also Full Professor of Computer Science at Spelman College. She holds a Ph.D. and Master of Science in Computer Science from Rensselaer Polytechnic Institute and a Bachelor of Science in Computer Science from the University of Rochester. Find out more about her forthcoming book here.

Dr Harriett Jernigan is a lecturer at Stanford University. She earned her BA in German and Creative Writing at the University of Alabama and her PhD in German Studies at Stanford University. She specializes in writing across the disciplines; second-language acquisition; project-based instruction; social geography; and German languages, literatures and cultures.





            AIhub is supported by:



Subscribe to the AIhub newsletter on Substack





©2026 - Association for the Understanding of Artificial Intelligence