AIhub.org

Congratulations to the #AAAI2025 outstanding paper award winners


by Lucy Smith
01 March 2025




The AAAI 2025 outstanding paper awards were announced during the opening ceremony of the 39th Annual AAAI Conference on Artificial Intelligence on Thursday 27 February. These awards honour papers that “exemplify the highest standards in technical contribution and exposition”. Papers are recommended for consideration during the review process by members of the Program Committee. This year, three papers have been selected as outstanding papers, with a further paper being recognised in the special track on AI for social impact.

AAAI-25 outstanding papers

Every Bit Helps: Achieving the Optimal Distortion with a Few Queries
Soroush Ebadian and Nisarg Shah

Abstract: A fundamental task in multi-agent systems is to match n agents to n alternatives (e.g., resources or tasks). Often, this is accomplished by eliciting agents’ ordinal rankings over the alternatives instead of their exact numerical utilities. While this simplifies elicitation, the incomplete information leads to inefficiency, captured by a worst-case measure called distortion. A recent line of work shows that making just a few queries to each agent regarding their cardinal utility for an alternative can significantly improve the distortion, with [1] achieving O(√n) distortion with two queries per agent. We generalize their result by achieving O(n^(1/λ)) distortion with λ queries per agent, for any constant λ, which is optimal given a previous lower bound by [2]. We also extend our finding to the general social choice problem, where one of m alternatives must be chosen based on the preferences of n agents, and show that O((min{n, m})^(1/λ)) distortion can be achieved with λ queries per agent, for any constant λ, which is also optimal given prior results. Thus, for both problems, our work settles open questions regarding the optimal distortion achievable using a fixed number of cardinal value queries.

Read the paper in full here.
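For readers unfamiliar with the distortion measure, the following minimal Python sketch illustrates it for the matching setting: a mechanism that sees only ordinal rankings is compared against the utility-optimal matching, over a handful of cardinal utility profiles consistent with those rankings. The profiles, the mechanism's output, and the brute-force search are illustrative stand-ins, not the paper's algorithm.

```python
import itertools

def welfare(matching, utils):
    """Social welfare: total utility agents get from their assigned alternatives."""
    return sum(utils[agent][alt] for agent, alt in enumerate(matching))

def optimal_welfare(utils):
    """Welfare of the utility-optimal matching, by brute force over all assignments."""
    n = len(utils)
    return max(welfare(m, utils) for m in itertools.permutations(range(n)))

def distortion(mechanism_matching, consistent_profiles):
    """Worst-case ratio of optimal welfare to the mechanism's welfare, taken
    over cardinal utility profiles consistent with the observed rankings."""
    return max(optimal_welfare(u) / welfare(mechanism_matching, u)
               for u in consistent_profiles)

# Both agents rank alternative 0 above alternative 1; two cardinal profiles
# (hypothetical numbers) are consistent with those rankings.
profiles = [
    [[1.0, 0.0], [1.0, 0.9]],  # agent 1 is nearly indifferent
    [[1.0, 0.0], [1.0, 0.1]],  # agent 1 also strongly prefers alternative 0
]
# An ordinal-only mechanism that assigns alternative 0 to agent 1:
print(distortion((1, 0), profiles))  # 1.9: welfare lost to missing cardinal info
```

Cardinal queries shrink exactly this gap: each answered query rules out utility profiles, so the worst case in the maximisation above gets less adversarial.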


Efficient Rectification of Neuro-Symbolic Reasoning Inconsistencies by Abductive Reflection
Wen-Chao Hu, Yuan Jiang, Zhi-Hua Zhou, Wang-Zhou Dai

Abstract: Neuro-Symbolic (NeSy) AI could be regarded as an analogy to human dual-process cognition, modeling the intuitive System 1 with neural networks and the algorithmic System 2 with symbolic reasoning. However, for complex learning targets, NeSy systems often generate outputs inconsistent with domain knowledge and it is challenging to rectify them. Inspired by the human Cognitive Reflection, which promptly detects errors in our intuitive response and revises them by invoking the System 2 reasoning, we propose to improve NeSy systems by introducing Abductive Reflection (ABL-Refl) based on the Abductive Learning (ABL) framework. ABL-Refl leverages domain knowledge to abduce a reflection vector during training, which can then flag potential errors in the neural network outputs and invoke abduction to rectify them and generate consistent outputs during inference. ABL-Refl is highly efficient in contrast to previous ABL implementations. Experiments show that ABL-Refl outperforms state-of-the-art NeSy methods, achieving excellent accuracy with fewer training resources and enhanced efficiency.

Read the paper in full here.
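As a rough illustration of the inference-time flow the abstract describes (not the paper's implementation), the sketch below assumes a network that returns both an output vector and a boolean reflection vector; flagged positions are revised by a brute-force abductive search until a domain-knowledge consistency check passes. `net`, `consistent`, and the label space are hypothetical stand-ins.

```python
from itertools import product

LABELS = range(10)  # hypothetical symbol space, e.g. digit labels

def abduce(output, flags, consistent):
    """Revise only the flagged positions, searching for an assignment that
    satisfies the domain-knowledge check `consistent`."""
    flagged = [i for i, f in enumerate(flags) if f]
    for values in product(LABELS, repeat=len(flagged)):
        revised = list(output)
        for i, v in zip(flagged, values):
            revised[i] = v
        if consistent(revised):
            return revised
    return list(output)  # no consistent revision found; keep the raw output

def infer(net, x, consistent):
    """ABL-Refl-style inference sketch: accept the neural output if it already
    satisfies the knowledge base; otherwise abduce over the flagged entries."""
    output, reflection = net(x)  # reflection[i] == True marks a suspect entry
    if consistent(output):
        return list(output)
    return abduce(output, reflection, consistent)

# Toy run: outputs should sum to 10; the net flags its second entry as suspect.
net = lambda x: ([3, 3, 3], [False, True, False])
print(infer(net, None, lambda out: sum(out) == 10))  # [3, 4, 3]
```

The efficiency gain claimed in the abstract comes from this restriction: abduction only searches over the entries the reflection vector flags, rather than over the whole output.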


Revelations: A Decidable Class of POMDPs with Omega-Regular Objectives
Marius Belly, Nathanaël Fijalkow, Hugo Gimbert, Florian Horn, Guillermo Perez, Pierre Vandenhove

Abstract: Partially observable Markov decision processes (POMDPs) form a prominent model for uncertainty in sequential decision making. We are interested in constructing algorithms with theoretical guarantees to determine whether the agent has a strategy ensuring a given specification with probability 1. This well-studied problem is known to be undecidable already for very simple omega-regular objectives, because of the difficulty of reasoning about uncertain events. We introduce a revelation mechanism which restricts information loss by requiring that almost surely the agent eventually has full information of the current state. Our main technical results are to construct exact algorithms for two classes of POMDPs called weakly and strongly revealing. Importantly, the decidable cases reduce to the analysis of a finite belief-support Markov decision process. This yields a conceptually simple and exact algorithm for a large class of POMDPs.

Read the paper in full here.
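The reduction to a finite belief-support MDP can be pictured with a small sketch (an illustration under simplifying assumptions, not the paper's construction): beliefs are tracked only as supports, i.e. sets of states the agent considers possible, and each observation splits the reachable states into a successor support. Deterministic, state-based observations are an assumption made here for brevity.

```python
def successor_supports(support, action, transitions, observation):
    """Belief-support update: apply an action to every state the agent
    considers possible, then split the reachable states by observation."""
    reachable = set()
    for s in support:
        reachable |= transitions[(s, action)]  # possible successors of s
    succ = {}
    for s in sorted(reachable):
        succ.setdefault(observation[s], set()).add(s)
    return succ  # one successor support per possible observation

# Tiny example: from state 0, action 'a' may stay in 0 or move to 1, and the
# observation then reveals which of the two occurred.
transitions = {(0, 'a'): {0, 1}, (1, 'a'): {1}}
observation = {0: 'left', 1: 'right'}
print(successor_supports({0}, 'a', transitions, observation))
# {'left': {0}, 'right': {1}} -- the observation reveals the current state
```

Since there are only finitely many supports (subsets of the state space), analysing this abstraction is decidable, which is what the revealing conditions exploit.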


AAAI-25 outstanding paper – special track on AI for social impact (AISI)

DivShift: Exploring Domain-Specific Distribution Shifts in Large-Scale, Volunteer-Collected Biodiversity Datasets
Elena Sierra, Teja Katterborn, Salim Soltani, Lauren Gillespie, Moisés Expósito-Alonso

Abstract: Large-scale, volunteer-collected datasets of community-identified natural world imagery like iNaturalist have enabled marked performance gains for fine-grained visual classification of plant species using deep learning models. However, such datasets are opportunistic and lack a structured sampling strategy. The resulting geographic, temporal, observation quality, and socioeconomic biases inherent to this volunteer-based participatory data collection process are stymieing the wide uptake of these models for downstream biodiversity monitoring tasks, especially in the Global South. While widely documented in the biodiversity modeling literature, the impact of these biases’ downstream distribution shift on deep learning models has not been rigorously quantified. Here we introduce Diversity Shift (DivShift), a framework for quantifying the effects of biodiversity domain-specific distribution shifts on deep learning model performance. We also introduce DivShift – West Coast Plant (DivShift-WCP), a new curated dataset of almost 8 million iNaturalist plant observations across the western coast of North America, for diagnosing the effects of these biases in a controlled case study. Using this new dataset, we contrast computer vision model performance across a variety of these shifts and observe that these biases indeed confound model performance across observation quality, spatial location, and political boundaries. Interestingly, we find for all partitions that accuracy is lower than expected by chance from estimates of dataset shift from the data themselves, implying that the structure within natural world images provides significant generalization improvements. From these observations, we suggest recommendations for training computer vision models on natural world imagery biodiversity collections.

Read the paper in full here.
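The core evaluation idea — train once, then contrast accuracy across bias-defined partitions of the held-out data — can be sketched in a few lines. The model, labels, and partition names below are hypothetical stand-ins, not the DivShift-WCP data or the paper's models.

```python
def accuracy(model, examples):
    """Fraction of (input, label) pairs the model classifies correctly."""
    return sum(model(x) == y for x, y in examples) / max(len(examples), 1)

def contrast_partitions(model, partitions):
    """Evaluate one trained model on each bias-defined partition (e.g.
    observation quality, spatial location, political boundary) side by side."""
    return {name: accuracy(model, examples) for name, examples in partitions.items()}

# Hypothetical held-out observations, split by iNaturalist quality grade:
model = lambda image: "poppy"  # stand-in classifier
partitions = {
    "research_grade": [("img1", "poppy"), ("img2", "lupine")],
    "casual_grade":   [("img3", "poppy"), ("img4", "poppy")],
}
print(contrast_partitions(model, partitions))
# {'research_grade': 0.5, 'casual_grade': 1.0} -- a gap flags a bias-driven shift
```

A large accuracy gap between partitions is the signal DivShift formalises: the model generalises unevenly across the biases baked into the volunteer-collected data.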






Lucy Smith is Senior Managing Editor for AIhub.
