The Machine Ethics Podcast: AI ethics strategy with Reid Blackman


03 August 2022




Reid Blackman
Hosted by Ben Byford, The Machine Ethics Podcast brings together interviews with academics, authors, business leaders, designers and engineers on the subject of autonomous algorithms, artificial intelligence, machine learning, and technology’s impact on society.

AI ethics strategy

In this episode we talk with Reid Blackman about: learning, what it means to be worthy of trust, bullsh*t AI principles, company values, purpose and use in decision making, his AI ethics risk strategy book, machine ethics as a fool's errand, weighing metrics for measuring bias, ethics committees, police and the IRB, and much more…

Listen to the episode here:

Reid Blackman, PhD, is the author of Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent, and Respectful AI (Harvard Business Review Press) and Founder and CEO of Virtue, an AI ethical risk consultancy. He also volunteers as Chief Ethics Officer for the non-profit Government Blockchain Association. He has been a Senior Advisor to the Deloitte AI Institute and a Founding Member of Ernst & Young's AI Advisory Board, and he sits on the advisory boards of several start-ups. His work has been profiled in The Wall Street Journal and Forbes, and he has presented it to dozens of organizations including Citibank, the FBI, the World Economic Forum, and AWS. Fortune 500 companies rely on Reid's expertise to educate and train their people and to guide them as they create and scale AI ethical risk programs.


About The Machine Ethics podcast

This podcast was created and is run by Ben Byford and collaborators. Over the last few years the podcast has grown into a place of discussion and dissemination of important ideas, not only in AI but in tech ethics generally.

The goal is to promote debate concerning technology and society, and to foster the production of technology (and in particular, decision-making algorithms) that promotes human ideals.

Ben Byford is an AI ethics consultant; a code, design and data science teacher; and a freelance games designer with over 10 years of design and coding experience building websites, apps, and games. In 2015 he began speaking on AI ethics and started the Machine Ethics podcast. Since then, Ben has talked with academics, developers, doctors, novelists and designers about AI, automation and society.

Join in the conversation with us by getting in touch via email here or following us on Twitter and Instagram.





 
