Hosted by Ben Byford, The Machine Ethics Podcast brings together interviews with academics, authors, business leaders, designers and engineers on the subject of autonomous algorithms, artificial intelligence, machine learning, and technology’s impact on society.
This time we’re talking AI research with Madhulika Srikumar of Partnership on AI. We chat about managing the risks of AI research, how the AI community should think about the consequences of its research, documenting best practices for AI, OpenAI’s GPT-2 research disclosure example, considering unintended consequences and negative downstream outcomes, considering what your research may actually contribute, promoting scientific openness, proportional ethical reflection, research social impact assessments and more…
Listen to the episode here:
Madhulika Srikumar is a program lead at the Safety-Critical AI initiative at Partnership on AI, a multistakeholder non-profit shaping the future of responsible AI. Core areas of her current focus include community engagement on responsible publication norms in AI research and diversity and inclusion in AI teams. Madhu is a lawyer by training and completed her graduate studies (LL.M) at Harvard Law School.
This podcast was created by, and is run by, Ben Byford and collaborators. Over the last few years the podcast has grown into a place of discussion and dissemination of important ideas, not only in AI but in tech ethics generally.
The goal is to promote debate concerning technology and society, and to foster the production of technology (and in particular, decision-making algorithms) that promotes human ideals.
Ben Byford is an AI ethics consultant; a coding, design and data science teacher; and a freelance games designer with over 10 years of design and coding experience building websites, apps, and games. In 2015 he began speaking on AI ethics and started the Machine Ethics Podcast. Since then, Ben has talked with academics, developers, doctors, novelists and designers about AI, automation and society.