AIhub.org
 

The Machine Ethics podcast: Responsible AI strategy with Olivia Gambelin


07 January 2025




Hosted by Ben Byford, The Machine Ethics Podcast brings together interviews with academics, authors, business leaders, designers and engineers on the subject of autonomous algorithms, artificial intelligence, machine learning, and technology’s impact on society.

Responsible AI strategy with Olivia Gambelin

We chat about Olivia’s book on responsible AI, scalable AI strategy, AI ethics and responsible AI (RAI), bad innovation, values for RAI, risk and innovation mindsets, who owns the RAI strategy, why one would work with an external consultant, agentic AI, predictions for the next two years, and more…

Listen to the episode here:


One of the first movers in Responsible AI, Olivia Gambelin is a world-renowned expert in AI Ethics and product innovation whose experience in utilising ethics-by-design has empowered hundreds of business leaders to achieve their desired impact on the cutting edge of AI development. Olivia works directly with product teams to drive AI innovation through human value alignment, as well as executive teams on the operational and strategic development of responsible AI.

As the founder of Ethical Intelligence, the world’s largest network of Responsible AI practitioners, Olivia offers unparalleled insight into how leaders can embrace the strength of human values to drive holistic business success. She is the author of the book Responsible AI: Implement an Ethical Approach in Your Organization with Kogan Page Publishing, the creator of The Values Canvas, which can be found at www.thevaluescanvas.com, and co-founder of Women Shaping the Future of Responsible AI (WSFR.AI).


About The Machine Ethics podcast

This podcast was created and is run by Ben Byford and collaborators. The podcast and other content were first created to extend Ben’s growing interest in both the AI domain and its associated ethics. Over the last few years the podcast has grown into a place of discussion and dissemination of important ideas, not only in AI but in tech ethics more generally. As the interviews unfold, they often veer into current affairs, the future of work, environmental issues, and more. Though the core is still AI and AI ethics, we release content that is broader and therefore hopefully more useful to the general public and practitioners alike.

The hope for the podcast is for it to promote debate concerning technology and society, and to foster the production of technology (and in particular, decision making algorithms) that promote human ideals.

Join in the conversation by getting in touch via email here or following us on Twitter and Instagram.




The Machine Ethics Podcast

            AIhub is supported by:



Subscribe to AIhub newsletter on substack









©2026.02 - Association for the Understanding of Artificial Intelligence