The Machine Ethics podcast: Running faster with Enrico Panai


13 February 2025




Hosted by Ben Byford, The Machine Ethics Podcast brings together interviews with academics, authors, business leaders, designers and engineers on the subject of autonomous algorithms, artificial intelligence, machine learning, and technology’s impact on society.

Running faster with Enrico Panai

This episode we’re chatting with Enrico Panai about the elements of the digital revolution, AI transforming data into information, human-computer interaction, the importance of knowing the technology as a tech philosopher, why ethicists should diagnose rather than judge, quality and making pasta, whether ethics is really a burden for companies or whether you can run faster with ethics, and finding a Marx for the digital world.

Listen to the episode here:


Enrico Panai is an AI ethicist with a background in philosophy and extensive consulting experience in Italy. He spent seven years as an adjunct professor of Digital Humanities at the University of Sassari. Since moving to France in 2007, he has continued his work as a consultant. In 2017, he studied Strategies for Cyber Security Awareness at the Institut National de Hautes Études de la Sécurité et de la Justice in Paris. Holding a PhD in Cybergeography and AI Ethics, he is the founder of the consultancy BeEthical.be. He serves as a professor of Responsible AI at EMlyon Business School, ISEP in Paris, and La Cattolica in Milan. Additionally, he is the president of the Association of AI Ethicists.

Currently, his main role is as an officer of the French Standardization Committee for AI and convenor of the working group on fundamental and societal aspects of AI at the European CEN-CENELEC JTC21, the European standardization body focused on producing deliverables that address European market and societal needs. Among the core standards managed by the group are Trustworthiness of AI, Competences of professional AI ethicists, and Sustainable AI. His main research interests concern cybergeography, human-information interaction, the philosophy and ethics of information, and semantic capital.

About The Machine Ethics podcast

This podcast was created and is run by Ben Byford and collaborators. The podcast and other content were first created to extend Ben’s growing interest in both the AI domain and its associated ethics. Over the last few years, the podcast has grown into a place for discussion and dissemination of important ideas, not only in AI but in tech ethics generally. As the interviews unfold, they often veer into current affairs, the future of work, environmental issues, and more. Though the core is still AI and AI ethics, we release content that is broader and therefore hopefully more useful to the general public and practitioners.

The hope for the podcast is that it will promote debate concerning technology and society, and foster the production of technology (and, in particular, decision-making algorithms) that promotes human ideals.

Join in the conversation by getting in touch via email, or by following us on Twitter and Instagram.



