
Reflections from #AIES2025


by Ella Scallan and Lucy Smith
14 May 2026



Views across Madrid. Image credits: Lucy Smith.

In this piece, we reflect on AIES 2025 and outline the conversations and presentations from a discussion session on LLMs in the context of clinical usage and human rights. This is a crosspost from the latest issue of AI Matters, published by ACM SIGAI.

This year’s Conference on Artificial Intelligence, Ethics, and Society (AIES) took place in the north of Madrid, within the 180m-high tower block that forms the vertical campus of IE University. The event kicked off with a welcome from the chairs and organising committee members, with this opening session also featuring the conference best paper awards.

IE University campus tower block on the right. Image credits: Lucy Smith


Topics covered during the three-day event included mitigating bias, integrating AI into the workplace, evaluating LLMs in clinical settings, power dynamics in AI ecosystems, and dataset creation. There were two panel discussions included in the programme, with the first of these diving into AI policy and the competing visions of governance, and the second focussing on AI ethics and how, and to whom, this is (and, perhaps, should be) taught.

The organisers experimented with a new format for the contributed talks: all speakers in a session gave their talks before taking part in a joint discussion on common themes and then taking questions from the audience. Two keynotes, given by Miriam Fernandez and Emma Ruttkamp-Bloem, covered “responsible AI and the urgent challenge of technology-facilitated gender-based violence” and “the future of AI ethics” respectively.

During the session “Evaluating LLMs in the Context of Patient Autonomy and Human Rights”, there were four interesting presentations, followed by a stimulating panel discussion.

Vyoma Raman presented a human rights risk framework to evaluate whether an AI model poses a risk to human rights, drawing on the UN Guiding Principles on Business and Human Rights. For organisations committed to effectively implementing these principles, she proposed identifying use cases, building benchmarks, and monitoring model performance on those benchmarks. When asked about the ethical considerations of autonomy, Vyoma brought up linguistic conformity as a result of LLMs. For example, the word “delve” is now read as an indicator of ChatGPT usage, marginalising the lexicon of Nigerian English speakers, who make up many of the workers who trained the models.

Surprisingly, AI clinical notetakers do not speed up workflows: they are perceived to add more work, infringe on clinicians’ autonomy, and miss the real issue in play – physician burnout. If LLMs do not work well in this narrow domain, they are unlikely to work well in more complex, higher-risk diagnostic settings. Joshua Skorburg outlined these limitations and emphasised the importance of considering efficacy before analysing the ethics of privacy, bias, and transparency. He later questioned whether AI companies’ lack of return on investment stems from their demanding that we design the world around AI, rather than the other way around.

Ria Vinod offered policy recommendations for the safe and effective governance of genetic data, in light of widespread genetic data collection, gaps in current regulations, and the advancement of AI systems. She argued that genetic data deserves a special legal status due to the high risks to privacy, the fact that you can be identified through other people’s genetic data, and the severity of the potential harms. For example, some companies are proposing the use of genetic data to predict educational attainment in children and allocate resources to schools, which would be both scientifically unsound and reminiscent of eugenics.

Rawisara Lohanimit further highlighted the dangers posed by generative models to individuals’ privacy and dignity. In this work, Rawisara and her colleagues systematically examined the popular, publicly available LAION-400M dataset and found images of pregnancy ultrasounds, along with names and locations. In the panel discussion, Rawisara emphasised that when people share data, they do not consider how it could be misused.

Co-organised by AAAI/ACM, AIES provides a global meeting place for ethicists, AI researchers and AI practitioners to exchange ideas. AIES 2026 will be held in Malmö, Sweden, from October 12-14.

Views across Madrid. Image credits: Lucy Smith.





Ella Scallan is Assistant Editor for AIhub.

Lucy Smith is Senior Managing Editor for AIhub.
