Congratulations to the #AIES2025 best paper award winners!


by Lucy Smith
21 October 2025




The eighth AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AIES) is currently taking place in Madrid, Spain, running from 20 to 22 October. During the opening ceremony, this year's best paper award winners were announced. The four winning papers are:


AI, Normality, and Oppressive Things
Ting-an Lin and Linus Ta-Lun Huang

Abstract: While it is well known that AI systems can bring about unfair social impacts by influencing social schemas, most attention has been paid to instances where the content presented by AI systems explicitly demeans marginalized groups or reinforces problematic stereotypes. This paper urges critical scrutiny of instances that shape social schemas in subtler ways. Drawing on recent philosophical discussions of the politics of artifacts, we argue that many existing AI systems should be identified as what Liao and Huebner called oppressive things when they function to manifest oppressive normality. We first categorize three different ways in which AI systems can function to manifest oppressive normality and argue that systems which seem innocuous, or even beneficial to the oppressed group, might still be oppressive. Although oppressiveness is a matter of degree, we further identify three features of AI systems that make their oppressive impacts more concerning. We end by discussing potential responses to oppressive AI systems, urging remedies that go beyond fixing unjust outcomes to also challenge the unjust power hierarchies of oppression.

Read the extended abstract here.


When in Doubt, Cascade: Towards Building Efficient and Capable Guardrails
Manish Nagireddy, Inkit Padhi, Soumya Ghosh, Prasanna Sattigeri

Abstract: Large language models (LLMs) demonstrate convincing performance in a variety of downstream tasks. However, these systems are prone to generating undesirable outputs such as harmful and biased text. In order to remedy such generations, the development of guardrail (or detector) models has gained traction. Motivated by findings from developing a detector for social bias, we adopt the notion of a use-mention distinction – which we identified as the primary source of under-performance in the preliminary versions of our social bias detector. Armed with this information, we describe a fully extensible and reproducible synthetic data generation pipeline which leverages taxonomy-driven instructions to create targeted and labeled data. Using this pipeline, we generate over 300K unique contrastive samples and provide extensive experiments to systematically evaluate performance on a suite of open source datasets. We show that our method achieves competitive performance with a fraction of the cost in compute and offers insight into iteratively developing efficient and capable guardrail models.
Warning: This paper contains examples of text which are toxic, biased, and potentially harmful.

Read the paper in full here.


Measuring What Matters: Connecting AI Ethics Evaluations to System Attributes, Hazards, and Harms
Shalaleh Rismani, Renee Shelby, Leah Davis, Negar Rostamzadeh, AJung Moon

Abstract: Over the past decade, an ecosystem of measures has emerged to evaluate the social and ethical implications of AI systems, largely shaped by high-level ethics principles. These measures are developed and used in fragmented ways, without adequate attention to how they are situated in AI systems. In this paper, we examine how existing measures used in the computing literature map to AI system components, attributes, hazards, and harms. Our analysis draws on a scoping review resulting in nearly 800 measures corresponding to 11 AI ethics principles. We find that most measures focus on four principles – fairness, transparency, privacy, and trust – and primarily assess model or output system components. Few measures account for interactions across system elements, and only a narrow set of hazards is typically considered for each harm type. Many measures are disconnected from where harm is experienced and lack guidance for setting meaningful thresholds. These patterns reveal how current evaluation practices remain fragmented, measuring in pieces rather than capturing how harms emerge across systems. Framing measures with respect to system attributes, hazards, and harms can strengthen regulatory oversight, support actionable practices in industry, and ground future research in systems-level understanding.

Read the paper in full here.


Govern with, Not For: Understanding the Stuttering Community’s Preferences and Goals for Speech AI Data Governance in the US and China
Jingjin Li, Peiyao Liu, Rebecca Lietz, Ningjing Tang, Norman Makoto Su, Shaomei Wu

Abstract: Current AI datasets are often created without sufficient governance structures to respect the rights and interests of data contributors, raising significant ethical and safety concerns that discourage marginalized communities from contributing their data. Contesting the historical exclusion of marginalized data contributors and the unique vulnerabilities of speech data, this paper presents a disability-centered, community-led approach to AI data governance. More specifically, we examine the stuttering community’s preferences and needs around effective stuttered speech data governance for AI purposes. We present empirical insights from interviews with stuttering advocates and surveys with people who stutter in both the U.S. and China. Our findings highlight shared demands for transparency, proactive and continuous communication, and robust privacy and security measures, despite distinct social contexts around stuttering. Our work offers actionable insights for disability-centered AI data governance.

Read the paper in full here.






Lucy Smith is Senior Managing Editor for AIhub.






 
