The Machine Ethics podcast: Co-design with Pinar Guvenc


23 April 2025





Hosted by Ben Byford, The Machine Ethics Podcast brings together interviews with academics, authors, business leaders, designers and engineers on the subject of autonomous algorithms, artificial intelligence, machine learning, and technology’s impact on society.

Co-design with Pinar Guvenc

In this episode we chat with Pinar Guvenc about her “What’s Wrong With” podcast, co-design, whether AI is ready for society and whether society is ready for AI, what design is, co-creation with AI as a stakeholder, bias in design, small language models, whether AI is making us lazy, human experience, digital life and our attention, and talking to diverse people…

Listen to the episode here:


Pinar Guvenc is Partner at SOUR – an award-winning global design studio with a mission to address social and urban problems – where she leads business and design strategy. She is an educator, teaching ethical leadership and co-design in the MS Strategic Design and Management program at Parsons School of Design and the MFA Interaction Design program at the School of Visual Arts. Pinar serves on the Board of Directors of Open Style Lab and advises local businesses in NYC through the Pratt Center for Community Development. She is a frequent public speaker and lecturer, and is the host of SOUR’s “What’s Wrong With: The Podcast”, a discussion series with progress makers in diverse fields across the world.

About The Machine Ethics podcast

This podcast was created and is run by Ben Byford and collaborators. The podcast, and other content, was first created to extend Ben’s growing interest in both AI and its associated ethics. Over the last few years the podcast has grown into a place for discussion and dissemination of important ideas, not only in AI but in tech ethics more generally. As the interviews unfold, they often veer into current affairs, the future of work, environmental issues, and more. Though the core is still AI and AI ethics, we release content that is broader and therefore, hopefully, more useful to the general public and practitioners.

The hope for the podcast is for it to promote debate concerning technology and society, and to foster the production of technology (and, in particular, decision-making algorithms) that promotes human ideals.

Join in the conversation by getting in touch via email here or following us on Twitter and Instagram.







