The Machine Ethics Podcast: featuring Marie Oldfield


15 May 2023




Hosted by Ben Byford, The Machine Ethics Podcast brings together interviews with academics, authors, business leaders, designers and engineers on the subject of autonomous algorithms, artificial intelligence, machine learning, and technology’s impact on society.

The professionalisation of data science with Dr Marie Oldfield

This episode we’re talking with Dr Marie Oldfield about definitions of AI, the education and communication gaps around AI, explainable models, ethics in education, problems with audits and legislation, AI accreditation, the importance of interdisciplinary teams, when to use AI (or not), and harms from algorithms.

Listen to the episode here:

Marie Oldfield (CStat, CSci, FIScT) is the CEO of Oldfield Consultancy and Kuinua Coaching. She is also a Senior Lecturer in Practice at the London School of Economics. With a background in mathematics and philosophy, she is a trusted advisor to government, defence, and the legal sector, amongst others. She is the founder of the Institute of Science and Technology (IST) Artificial Intelligence Group, the founder of the IST Women in Tech group, and a Professional Chartership Assessor for the Science Council. Marie is frequently invited to speak on popular podcasts, on panels, and at conferences about her experience and research in AI and ethics.

Marie founded Oldfield Consultancy to solve complex problems ethically with the latest technology. Oldfield Consultancy provides analytical training for technical and non-technical teams. Marie is passionate about giving back to the global community through extensive pro bono work, with a focus on education, poverty, children, and mental health.


About The Machine Ethics podcast

This podcast was created and is run by Ben Byford and collaborators. The podcast, and its other content, was first created to extend Ben’s growing interest in both the AI domain and its associated ethics. Over the last few years the podcast has grown into a place for discussion and dissemination of important ideas, not only in AI but in tech ethics generally. As the interviews unfold, they often veer into current affairs, the future of work, environmental issues, and more. Though the core is still AI and AI ethics, we release content that is broader and therefore, hopefully, more useful to the general public and practitioners.

The hope for the podcast is to promote debate concerning technology and society, and to foster the production of technology (and, in particular, decision-making algorithms) that promotes human ideals.

Join in the conversation by getting in touch via email here or following us on Twitter and Instagram.





 
