AIhub.org

The Machine Ethics Podcast: AI ethics strategy with Reid Blackman


03 August 2022




Reid Blackman
Hosted by Ben Byford, The Machine Ethics Podcast brings together interviews with academics, authors, business leaders, designers and engineers on the subject of autonomous algorithms, artificial intelligence, machine learning, and technology’s impact on society.

AI ethics strategy

In this episode we talk with Reid Blackman about: learning, what it means to be worthy of trust, bullsh*t AI principles, company values, purpose and use in decision making, his AI ethics risk strategy book, machine ethics as a fool's errand, weighing metrics for measuring bias, ethics committees, police and the IRB, and much more…

Listen to the episode here:

Reid Blackman, PhD, is the author of Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent, and Respectful AI (Harvard Business Review Press), Founder and CEO of Virtue, an AI ethical risk consultancy, and he volunteers as the Chief Ethics Officer for the non-profit Government Blockchain Association. He has also been a Senior Advisor to the Deloitte AI Institute, a Founding Member of Ernst & Young’s AI Advisory Board, and sits on the advisory boards of several start-ups. His work has been profiled in The Wall Street Journal and Forbes and he has presented his work to dozens of organizations including Citibank, the FBI, the World Economic Forum, and AWS. Reid’s expertise is relied upon by Fortune 500 companies to educate and train their people and to guide them as they create and scale AI ethical risk programs.


About The Machine Ethics podcast

This podcast was created and is run by Ben Byford and collaborators. Over the last few years the podcast has grown into a place of discussion and dissemination of important ideas, not only in AI but in tech ethics generally.

The goal is to promote debate concerning technology and society, and to foster the production of technology (in particular, decision-making algorithms) that promotes human ideals.

Ben Byford is an AI ethics consultant; a code, design and data science teacher; and a freelance games designer with over 10 years of design and coding experience building websites, apps, and games. In 2015 he began talking on AI ethics and started the Machine Ethics podcast. Since then, Ben has talked with academics, developers, doctors, novelists and designers about AI, automation and society.

Join in the conversation with us by getting in touch via email or by following us on Twitter and Instagram.








            AIhub is supported by:





 

©2025 - Association for the Understanding of Artificial Intelligence