The Machine Ethics podcast: AI regulation with Lofred Madzou


20 August 2021




Lofred Madzou
Hosted by Ben Byford, The Machine Ethics Podcast brings together interviews with academics, authors, business leaders, designers and engineers on the subject of autonomous algorithms, artificial intelligence, machine learning, and technology’s impact on society.

AI regulation

We chat with Lofred Madzou about AI as a journey to understand ourselves through smart machines, scepticism about wholesale job loss, understanding that “you are not your data”, dissecting the European proposal for AI regulation, examples of the types of AI activities covered by the regulation, the spirit of the regulation (human rights-centric, risk-based approaches), exposure to infringement, and compliance…

Listen to the episode here:

Lofred Madzou is a Project Lead for AI at the World Economic Forum, where he oversees global and multistakeholder AI policy projects. He is also a research associate at the Oxford Internet Institute where he investigates various methods to audit AI systems.

Before joining the Forum, he was a policy officer at the French Digital Council, where he advised the French Government on technology policy. Most notably, he co-wrote chapter five of the French AI National Strategy, entitled “What Ethics for AI?”. He has an MSc in Data Science and Philosophy from the University of Oxford.


About The Machine Ethics podcast

This podcast was created, and is run, by Ben Byford and collaborators. Over the last few years, the podcast has grown into a place for discussion and dissemination of important ideas, not only in AI but in tech ethics more generally.

The goal is to promote debate concerning technology and society, and to foster the production of technology (in particular, decision-making algorithms) that promotes human ideals.

Ben Byford is an AI ethics consultant; a code, design and data science teacher; and a freelance games designer with over 10 years of design and coding experience building websites, apps and games. In 2015 he began speaking on AI ethics and started the Machine Ethics podcast. Since then, Ben has talked with academics, developers, doctors, novelists and designers about AI, automation and society.

Join in the conversation by getting in touch via email or by following us on Twitter and Instagram.








 
