The Machine Ethics podcast: Responsible AI strategy with Olivia Gambelin


07 January 2025




Hosted by Ben Byford, The Machine Ethics Podcast brings together interviews with academics, authors, business leaders, designers and engineers on the subject of autonomous algorithms, artificial intelligence, machine learning, and technology’s impact on society.

Responsible AI strategy with Olivia Gambelin

We chat about Olivia’s book on responsible AI, scalable AI strategy, AI ethics and responsible AI (RAI), bad innovation, values for RAI, risk and innovation mindsets, who owns the RAI strategy, why one would work with an external consultant, agentic AI, predictions for the next two years, and more…

Listen to the episode here:


One of the first movers in Responsible AI, Olivia Gambelin is a world-renowned expert in AI Ethics and product innovation whose experience in utilising ethics-by-design has empowered hundreds of business leaders to achieve their desired impact on the cutting edge of AI development. Olivia works directly with product teams to drive AI innovation through human value alignment, as well as executive teams on the operational and strategic development of responsible AI.

As the founder of Ethical Intelligence, the world’s largest network of Responsible AI practitioners, Olivia offers unparalleled insight into how leaders can embrace the strength of human values to drive holistic business success. She is the author of the book Responsible AI: Implement an Ethical Approach in Your Organization with Kogan Page Publishing, the creator of The Values Canvas, which can be found at www.thevaluescanvas.com, and co-founder of Women Shaping the Future of Responsible AI (WSFR.AI).


About The Machine Ethics podcast

This podcast was created and is run by Ben Byford and collaborators. The podcast and other content were first created to extend Ben’s growing interest in both the AI domain and its associated ethics. Over the last few years the podcast has grown into a place for the discussion and dissemination of important ideas, not only in AI but in tech ethics generally. As the interviews unfold, they often veer into current affairs, the future of work, environmental issues, and more. Though the core is still AI and AI Ethics, we release content that is broader and therefore hopefully more useful to the general public and practitioners.

The hope for the podcast is to promote debate concerning technology and society, and to foster the production of technology (and, in particular, decision-making algorithms) that promotes human ideals.

Join in the conversation by getting in touch via email here or following us on Twitter and Instagram.













 







 











