AIhub.org

Congratulations to the #AAAI2025 outstanding paper award winners


by Lucy Smith
01 March 2025




The AAAI 2025 outstanding paper awards were announced during the opening ceremony of the 39th Annual AAAI Conference on Artificial Intelligence on Thursday 27 February. These awards honour papers that “exemplify the highest standards in technical contribution and exposition”. Papers are recommended for consideration during the review process by members of the Program Committee. This year, three papers were selected as outstanding papers, and a fourth was recognised in the special track on AI for social impact.

AAAI-25 outstanding papers

Every Bit Helps: Achieving the Optimal Distortion with a Few Queries
Soroush Ebadian and Nisarg Shah

Abstract: A fundamental task in multi-agent systems is to match n agents to n alternatives (e.g., resources or tasks). Often, this is accomplished by eliciting agents’ ordinal rankings over the alternatives instead of their exact numerical utilities. While this simplifies elicitation, the incomplete information leads to inefficiency, captured by a worst-case measure called distortion. A recent line of work shows that making just a few queries to each agent regarding their cardinal utility for an alternative can significantly improve the distortion, with [1] achieving O(√n) distortion with two queries per agent. We generalize their result by achieving O(n^(1/λ)) distortion with λ queries per agent, for any constant λ, which is optimal given a previous lower bound by [2]. We also extend our finding to the general social choice problem, where one of m alternatives must be chosen based on the preferences of n agents, and show that O((min{n, m})^(1/λ)) distortion can be achieved with λ queries per agent, for any constant λ, which is also optimal given prior results. Thus, for both problems, our work settles open questions regarding the optimal distortion achievable using a fixed number of cardinal value queries.

Read the paper in full here.
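As a rough illustration of the distortion measure (the utilities below are invented; this is not the paper’s query mechanism): when agents report identical ordinal rankings, a rankings-only rule cannot tell the consistent matchings apart, so comparing the best and worst matchings shows the welfare that incomplete information can cost.

```python
from itertools import permutations

# Invented cardinal utilities: utilities[i][j] is agent i's value for
# alternative j. All three agents rank the alternatives 0 > 1 > 2,
# so an ordinal mechanism sees three identical rankings.
utilities = [
    [0.9, 0.1, 0.0],
    [0.8, 0.2, 0.0],
    [0.5, 0.4, 0.1],
]

def welfare(matching):
    """Total utility when agent i receives alternative matching[i]."""
    return sum(utilities[i][alt] for i, alt in enumerate(matching))

matchings = list(permutations(range(3)))
best = max(welfare(m) for m in matchings)
worst = min(welfare(m) for m in matchings)

# Every matching is consistent with the (identical) rankings, so a
# rankings-only rule could end up with the worst one; this ratio is a
# toy stand-in for the worst-case welfare loss that distortion measures.
print(round(best / worst, 3))  # -> 2.167
```

Cardinal queries shrink this gap: each answered utility query rules out some of the bad matchings, which is the effect the paper quantifies as O(n^(1/λ)) for λ queries.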


Efficient Rectification of Neuro-Symbolic Reasoning Inconsistencies by Abductive Reflection
Wen-Chao Hu, Yuan Jiang, Zhi-Hua Zhou, Wang-Zhou Dai

Abstract: Neuro-Symbolic (NeSy) AI could be regarded as an analogy to human dual-process cognition, modeling the intuitive System 1 with neural networks and the algorithmic System 2 with symbolic reasoning. However, for complex learning targets, NeSy systems often generate outputs inconsistent with domain knowledge and it is challenging to rectify them. Inspired by the human Cognitive Reflection, which promptly detects errors in our intuitive response and revises them by invoking the System 2 reasoning, we propose to improve NeSy systems by introducing Abductive Reflection (ABL-Refl) based on the Abductive Learning (ABL) framework. ABL-Refl leverages domain knowledge to abduce a reflection vector during training, which can then flag potential errors in the neural network outputs and invoke abduction to rectify them and generate consistent outputs during inference. ABL-Refl is highly efficient in contrast to previous ABL implementations. Experiments show that ABL-Refl outperforms state-of-the-art NeSy methods, achieving excellent accuracy with fewer training resources and enhanced efficiency.

Read the paper in full here.
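A minimal sketch of the detect-then-abduce loop the abstract describes (the rule, the flagged position, and the search below are all invented for illustration; in the paper the reflection vector is learned, not hand-set):

```python
RULE_SUM = 10  # stand-in domain knowledge: the three symbols must sum to 10

def consistent(symbols):
    """Check the output against the symbolic domain knowledge."""
    return sum(symbols) == RULE_SUM

def abduce(symbols, flagged):
    """Try revisions of the flagged position until the output is consistent."""
    for candidate in range(10):
        revised = list(symbols)
        revised[flagged] = candidate
        if consistent(revised):
            return revised
    return symbols  # no single-position revision restores consistency

prediction = [3, 4, 5]   # "neural" output; inconsistent (sums to 12)
flagged = 2              # position a reflection vector might flag
print(abduce(prediction, flagged))  # -> [3, 4, 3]
```

Because only the flagged position is searched, the abduction step stays cheap, which echoes the efficiency gain the paper reports over revising the whole output.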


Revelations: A Decidable Class of POMDPs with Omega-Regular Objectives
Marius Belly, Nathanaël Fijalkow, Hugo Gimbert, Florian Horn, Guillermo Perez, Pierre Vandenhove

Abstract: Partially observable Markov decision processes (POMDPs) form a prominent model for uncertainty in sequential decision making. We are interested in constructing algorithms with theoretical guarantees to determine whether the agent has a strategy ensuring a given specification with probability 1. This well-studied problem is known to be undecidable already for very simple omega-regular objectives, because of the difficulty of reasoning on uncertain events. We introduce a revelation mechanism which restricts information loss by requiring that almost surely the agent has eventually full information of the current state. Our main technical results are to construct exact algorithms for two classes of POMDPs called weakly and strongly revealing. Importantly, the decidable cases reduce to the analysis of a finite belief-support Markov decision process. This yields a conceptually simple and exact algorithm for a large class of POMDPs.

Read the paper in full here.
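The finite belief-support construction mentioned in the abstract can be sketched on a toy POMDP: instead of tracking a probability distribution over states, track only the *set* of states the agent might occupy. The transition and observation tables below are invented for illustration; the paper’s result is that for weakly and strongly revealing POMDPs, analysing this finite object suffices.

```python
from collections import deque

# T[state][action] -> set of possible successor states (toy POMDP)
T = {0: {'a': {0, 1}, 'b': {2}},
     1: {'a': {1},    'b': {2}},
     2: {'a': {2},    'b': {2}}}
# obs[state] -> observation emitted on entering the state
obs = {0: 'x', 1: 'x', 2: 'y'}

def step(support, action):
    """Successor belief supports after `action`, split by observation."""
    succ = set()
    for s in support:
        succ |= T[s][action]
    by_obs = {}
    for s in succ:                       # group successors by observation
        by_obs.setdefault(obs[s], set()).add(s)
    return [frozenset(v) for v in by_obs.values()]

# Breadth-first exploration of the (finite) belief-support MDP.
start = frozenset({0})
seen = {start}
queue = deque([start])
while queue:
    sup = queue.popleft()
    for action in 'ab':
        for nxt in step(sup, action):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)

print(sorted(sorted(s) for s in seen))  # -> [[0], [0, 1], [2]]
```

Here the revelation idea is visible in miniature: action 'b' collapses every support to the singleton {2}, so the agent almost surely regains full knowledge of the current state.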


AAAI-25 outstanding paper – special track on AI for social impact (AISI)

DivShift: Exploring Domain-Specific Distribution Shifts in Large-Scale, Volunteer-Collected Biodiversity Datasets
Elena Sierra, Teja Katterborn, Salim Soltani, Lauren Gillespie, Moisés Expósito-Alonso

Abstract: Large-scale, volunteer-collected datasets of community-identified natural world imagery like iNaturalist have enabled marked performance gains for fine-grained visual classification of plant species using deep learning models. However, such datasets are opportunistic and lack a structured sampling strategy. Resulting geographic, temporal, observation quality, and socioeconomic biases inherent to this volunteer-based participatory data collection process are stymieing the wide uptake of these models for downstream biodiversity monitoring tasks, especially in the Global South. While widely documented in biodiversity modeling literature, the impact of these biases’ downstream distribution shift on deep learning models has not been rigorously quantified. Here we introduce Diversity Shift (DivShift), a framework for quantifying the effects of biodiversity domain-specific distribution shifts on deep learning model performance. We also introduce DivShift – West Coast Plant (DivShift-WCP), a new curated dataset of almost 8 million iNaturalist plant observations across the western coast of North America, for diagnosing the effects of these biases in a controlled case study. Using this new dataset, we contrast computer vision model performance across a variety of these shifts and observe that these biases indeed confound model performance across observation quality, spatial location, and political boundaries. Interestingly, we find for all partitions that accuracy is lower than expected by chance from estimates of dataset shift from the data themselves, implying the structure within natural world images provides significant generalization improvements. From these observations, we suggest recommendations for training computer vision models on natural world imagery biodiversity collections.

Read the paper in full here.
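A synthetic sketch of the kind of diagnosis DivShift performs (entirely made-up one-dimensional data and a trivial classifier; the real framework partitions iNaturalist imagery and trains deep vision models): fit on one spatial partition, then compare in-partition and out-of-partition accuracy.

```python
import random

random.seed(0)

def sample(region, label, n=200):
    """Synthetic observations: the region shifts the feature distribution."""
    shift = 2.0 if region == "north" else 0.0
    return [(random.gauss(label + shift, 0.5), label) for _ in range(n)]

train = sample("north", 0) + sample("north", 1)
test_in = sample("north", 0) + sample("north", 1)    # in-distribution
test_out = sample("south", 0) + sample("south", 1)   # shifted region

# Nearest-centroid classifier fit on the training partition.
centroids = {}
for label in (0, 1):
    xs = [x for x, y in train if y == label]
    centroids[label] = sum(xs) / len(xs)

def accuracy(data):
    correct = sum(min(centroids, key=lambda c: abs(centroids[c] - x)) == y
                  for x, y in data)
    return correct / len(data)

# The spatially shifted partition scores worse, mirroring the kind of
# bias-driven performance gap the paper quantifies on real imagery.
print(accuracy(test_in) > accuracy(test_out))
```

Swapping "region" for observation quality or a socioeconomic covariate gives the other partition types the abstract lists; the diagnostic pattern is the same.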






Lucy Smith is Senior Managing Editor for AIhub.

AIhub is supported by:



Subscribe to AIhub newsletter on substack






©2026 - Association for the Understanding of Artificial Intelligence