AIhub.org
 

The Machine Ethics podcast: AI fictions with Alex Shvartsman


19 June 2024





Hosted by Ben Byford, The Machine Ethics Podcast brings together interviews with academics, authors, business leaders, designers and engineers on the subject of autonomous algorithms, artificial intelligence, machine learning, and technology’s impact on society.

AI fictions with Alex Shvartsman

In this episode we’re chatting with Alex Shvartsman about our AI future, human-crafted storytelling, the backlash against generative AI use, disclaimers for generated text, human vs AI authorship, the practical and functional goals of LLMs, changing themes in science fiction, a diversity of international perspectives, and more…

Listen to the episode here:


Alex Shvartsman resides in Brooklyn, New York, and is the author of the fantasy novels Kakistocracy (2023), The Middling Affliction (2022), and Eridani’s Crown (2019). Over 120 of his stories have appeared in Analog, Nature, Strange Horizons, and other venues. He won the WSFA Small Press Award for Short Fiction and was a three-time finalist for the Canopus Award for Excellence in Interstellar Fiction.

His translations from Russian have appeared in F&SF, Clarkesworld, Tor.com, Analog, Asimov’s, and elsewhere. Alex has edited over a dozen anthologies, including the long-running Unidentified Funny Objects series.


About The Machine Ethics podcast

This podcast was created, and is run, by Ben Byford and collaborators. The podcast and other content were first created to extend Ben’s growing interest in both the AI domain and its associated ethics. Over the last few years the podcast has grown into a place for discussion and dissemination of important ideas, not only in AI but in tech ethics generally. As the interviews unfold, they often veer into current affairs, the future of work, environmental issues, and more. Though the core is still AI and AI ethics, we release content that is broader and therefore hopefully more useful to the general public and practitioners.

The hope for the podcast is that it will promote debate concerning technology and society, and foster the production of technology (and, in particular, decision-making algorithms) that promotes human ideals.

Join in the conversation by getting in touch via email, or by following us on Twitter and Instagram.




The Machine Ethics Podcast

            AIhub is supported by:



Subscribe to AIhub newsletter on substack





©2026 - Association for the Understanding of Artificial Intelligence