Congratulations to the #AIES2025 best paper award winners!


By Lucy Smith
21 October 2025




The eighth AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AIES) is currently taking place in Madrid, Spain, running from 20 to 22 October. During the opening ceremony, this year's best paper award winners were announced. The four winning papers are:


AI, Normality, and Oppressive Things
Ting-an Lin and Linus Ta-Lun Huang

Abstract: While it is well known that AI systems might bring about unfair social impacts by influencing social schemas, much attention has been paid to instances where the content presented by AI systems explicitly demeans marginalized groups or reinforces problematic stereotypes. This paper urges critical scrutiny of instances that shape social schemas in subtler ways. Drawing from recent philosophical discussions on the politics of artifacts, we argue that many existing AI systems should be identified as what Liao and Huebner called oppressive things when they function to manifest oppressive normality. We first categorize three different ways that AI systems could function to manifest oppressive normality and argue that even systems that seem innocuous, or beneficial for the oppressed group, might still be oppressive. Even though oppressiveness is a matter of degree, we further identify three features of AI systems that make their oppressive impacts more concerning. We end by discussing potential responses to oppressive AI systems and urge remedies that not only fix unjust outcomes but also challenge the unjust power hierarchies of oppression.

Read the extended abstract here.


When in Doubt, Cascade: Towards Building Efficient and Capable Guardrails
Manish Nagireddy, Inkit Padhi, Soumya Ghosh, Prasanna Sattigeri

Abstract: Large language models (LLMs) demonstrate convincing performance on a variety of downstream tasks. However, these systems are prone to generating undesirable outputs such as harmful and biased text. In order to remedy such generations, the development of guardrail (or detector) models has gained traction. Motivated by findings from developing a detector for social bias, we adopt the notion of a use-mention distinction – which we identified as the primary source of under-performance in the preliminary versions of our social bias detector. Armed with this information, we describe a fully extensible and reproducible synthetic data generation pipeline which leverages taxonomy-driven instructions to create targeted and labeled data. Using this pipeline, we generate over 300K unique contrastive samples and provide extensive experiments to systematically evaluate performance on a suite of open source datasets. We show that our method achieves competitive performance with a fraction of the cost in compute and offers insight into iteratively developing efficient and capable guardrail models.
Warning: This paper contains examples of text which are toxic, biased, and potentially harmful.

Read the paper in full here.


Measuring What Matters: Connecting AI Ethics Evaluations to System Attributes, Hazards, and Harms
Shalaleh Rismani, Renee Shelby, Leah Davis, Negar Rostamzadeh, AJung Moon

Abstract: Over the past decade, an ecosystem of measures has emerged to evaluate the social and ethical implications of AI systems, largely shaped by high-level ethics principles. These measures are developed and used in fragmented ways, without adequate attention to how they are situated in AI systems. In this paper, we examine how existing measures used in the computing literature map to AI system components, attributes, hazards, and harms. Our analysis draws on a scoping review resulting in nearly 800 measures corresponding to 11 AI ethics principles. We find that most measures focus on four principles – fairness, transparency, privacy, and trust – and primarily assess model or output system components. Few measures account for interactions across system elements, and only a narrow set of hazards is typically considered for each harm type. Many measures are disconnected from where harm is experienced and lack guidance for setting meaningful thresholds. These patterns reveal how current evaluation practices remain fragmented, measuring in pieces rather than capturing how harms emerge across systems. Framing measures with respect to system attributes, hazards, and harms can strengthen regulatory oversight, support actionable practices in industry, and ground future research in systems-level understanding.

Read the paper in full here.


Govern with, Not For: Understanding the Stuttering Community’s Preferences and Goals for Speech AI Data Governance in the US and China
Jingjin Li, Peiyao Liu, Rebecca Lietz, Ningjing Tang, Norman Makoto Su, Shaomei Wu

Abstract: Current AI datasets are often created without sufficient governance structures to respect the rights and interests of data contributors, raising significant ethical and safety concerns that disengage marginalized communities from contributing their data. Contesting the historical exclusion of marginalized data contributors and the unique vulnerabilities of speech data, this paper presents a disability-centered, community-led approach to AI data governance. More specifically, we examine the stuttering community’s preferences and needs around effective stuttered speech data governance for AI purposes. We present empirical insights from interviews with stuttering advocates and surveys with people who stutter in both the U.S. and China. Our findings highlight shared demands for transparency, proactive and continuous communication, and robust privacy and security measures, despite distinct social contexts around stuttering. Our work offers actionable insights for disability-centered AI data governance.

Read the paper in full here.






Lucy Smith is Senior Managing Editor for AIhub.



