AIhub.org
 

The Machine Ethics Podcast: Ethics of digital worlds with Richard Bartle


12 April 2022




Richard Bartle
Hosted by Ben Byford, The Machine Ethics Podcast brings together interviews with academics, authors, business leaders, designers and engineers on the subject of autonomous algorithms, artificial intelligence, machine learning, and technology’s impact on society.

Ethics of digital worlds

Richard Bartle joins us again, after his appearance on episode 65, to chat about the metaverse, different ways to design AI-controlled NPCs, the lack of progress of AI in games, the ethical considerations of game designers, the ethics of AI life, virtualism, ‘smart’ AI, robot rights and more…

Listen to the episode here:

Dr Richard A. Bartle is Honorary Professor of Computer Game Design at the University of Essex, UK. He is best known for having co-written in 1978 the first virtual world, MUD, the progenitor of the £30bn Massively-Multiplayer Online Role-Playing Game industry. His 1996 Player Types model has seen widespread adoption by MMO developers and the games industry in general. His 2003 book, Designing Virtual Worlds, is the standard text on the subject, and he is an influential writer on all aspects of MMO design and development. In 2010, he was the first recipient of the prestigious Game Developers’ Conference Online Game Legend award.


About The Machine Ethics podcast

This podcast was created and is run by Ben Byford and collaborators. Over the last few years the podcast has grown into a place of discussion and dissemination of important ideas, not only in AI but in tech ethics generally.

The goal is to promote debate concerning technology and society, and to foster the production of technology, and in particular decision-making algorithms, that promotes human ideals.

Ben Byford is an AI ethics consultant; a code, design and data science teacher; and a freelance games designer with over 10 years of design and coding experience building websites, apps, and games. In 2015 he began talking on AI ethics and started the Machine Ethics podcast. Since then, Ben has talked with academics, developers, doctors, novelists and designers about AI, automation and society.

Join in the conversation with us by getting in touch via email, or by following us on Twitter and Instagram.





AUAI is supported by:



Subscribe to AIhub newsletter on substack

























©2026.02 - Association for the Understanding of Artificial Intelligence