AIhub.org
 

Congratulations to the #AAAI2025 outstanding paper award winners


by Lucy Smith
01 March 2025




The AAAI 2025 outstanding paper awards were announced during the opening ceremony of the 39th Annual AAAI Conference on Artificial Intelligence on Thursday 27 February. These awards honour papers that “exemplify the highest standards in technical contribution and exposition”. Papers are recommended for consideration during the review process by members of the Program Committee. This year, three papers have been selected as outstanding papers, with a further paper being recognised in the special track on AI for social impact.

AAAI-25 outstanding papers

Every Bit Helps: Achieving the Optimal Distortion with a Few Queries
Soroush Ebadian and Nisarg Shah

Abstract: A fundamental task in multi-agent systems is to match n agents to n alternatives (e.g., resources or tasks). Often, this is accomplished by eliciting agents’ ordinal rankings over the alternatives instead of their exact numerical utilities. While this simplifies elicitation, the incomplete information leads to inefficiency, captured by a worst-case measure called distortion. A recent line of work shows that making just a few queries to each agent regarding their cardinal utility for an alternative can significantly improve the distortion, with [1] achieving O(√n) distortion with two queries per agent. We generalize their result by achieving O(n^{1/λ}) distortion with λ queries per agent, for any constant λ, which is optimal given a previous lower bound by [2]. We also extend our finding to the general social choice problem, where one of m alternatives must be chosen based on the preferences of n agents, and show that O((min{n, m})^{1/λ}) distortion can be achieved with λ queries per agent, for any constant λ, which is also optimal given prior results. Thus, for both problems, our work settles open questions regarding the optimal distortion achievable using a fixed number of cardinal value queries.
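To make the distortion measure concrete, here is a toy sketch with made-up utility values (this is an illustration of the concept only, not the paper's query-based algorithm): a purely ordinal matching rule is compared against the welfare-optimal matching computed from the hidden cardinal utilities, and their ratio on an instance is that instance's distortion.

```python
import itertools

# Hypothetical cardinal utilities (hidden from the ordinal rule):
# utilities[a][alt] = agent a's utility for alternative alt.
utilities = [
    [0.9, 0.1, 0.0],
    [0.8, 0.2, 0.0],
    [0.7, 0.3, 0.0],
]

def welfare(matching):
    """Total utility when agent a receives alternative matching[a]."""
    return sum(utilities[a][alt] for a, alt in enumerate(matching))

def best_matching():
    """Omniscient optimum: search all n! matchings (fine for n = 3)."""
    return max(itertools.permutations(range(3)), key=welfare)

def serial_dictatorship():
    """A purely ordinal rule: each agent in turn takes its highest-ranked
    remaining alternative (rankings derived from the hidden utilities)."""
    taken, matching = set(), []
    for a in range(3):
        ranking = sorted(range(3), key=lambda alt: -utilities[a][alt])
        choice = next(alt for alt in ranking if alt not in taken)
        taken.add(choice)
        matching.append(choice)
    return tuple(matching)

opt = welfare(best_matching())
ord_welfare = welfare(serial_dictatorship())
print(opt, ord_welfare, opt / ord_welfare)  # ratio = distortion on this instance
```

On this instance all agents rank alternative 0 first, so the ordinal rule wastes value that the cardinal optimum recovers; distortion bounds like O(√n) quantify how bad this gap can get in the worst case.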

Read the paper in full here.


Efficient Rectification of Neuro-Symbolic Reasoning Inconsistencies by Abductive Reflection
Wen-Chao Hu, Yuan Jiang, Zhi-Hua Zhou, Wang-Zhou Dai

Abstract: Neuro-Symbolic (NeSy) AI could be regarded as an analogy to human dual-process cognition, modeling the intuitive System 1 with neural networks and the algorithmic System 2 with symbolic reasoning. However, for complex learning targets, NeSy systems often generate outputs inconsistent with domain knowledge and it is challenging to rectify them. Inspired by the human Cognitive Reflection, which promptly detects errors in our intuitive response and revises them by invoking the System 2 reasoning, we propose to improve NeSy systems by introducing Abductive Reflection (ABL-Refl) based on the Abductive Learning (ABL) framework. ABL-Refl leverages domain knowledge to abduce a reflection vector during training, which can then flag potential errors in the neural network outputs and invoke abduction to rectify them and generate consistent outputs during inference. ABL-Refl is highly efficient in contrast to previous ABL implementations. Experiments show that ABL-Refl outperforms state-of-the-art NeSy methods, achieving excellent accuracy with fewer training resources and enhanced efficiency.
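The rectification loop the abstract describes can be sketched in miniature (a hypothetical example, not the authors' implementation): a "neural" prediction is checked against domain knowledge, a reflection vector flags suspect entries, and abduction searches for values of the flagged entries that restore consistency.

```python
from itertools import product

# Toy domain knowledge: the three predicted digits must sum to 10.
def consistent(digits):
    return sum(digits) == 10

def abduce(digits, flagged):
    """Revise only the flagged positions until the output is consistent
    with the domain knowledge; leave unflagged positions untouched."""
    free = [i for i, f in enumerate(flagged) if f]
    for candidate in product(range(10), repeat=len(free)):
        trial = list(digits)
        for i, v in zip(free, candidate):
            trial[i] = v
        if consistent(trial):
            return trial
    return list(digits)  # no consistent revision found

prediction = [3, 4, 5]             # inconsistent: sums to 12
reflection = [False, False, True]  # "reflection vector" flags the last digit
print(abduce(prediction, reflection))  # prints [3, 4, 3], which sums to 10
```

The efficiency point in the abstract is visible even here: abduction only searches over the flagged positions rather than the whole output, which is what makes the reflection vector worth learning.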

Read the paper in full here.


Revelations: A Decidable Class of POMDPs with Omega-Regular Objectives
Marius Belly, Nathanaël Fijalkow, Hugo Gimbert, Florian Horn, Guillermo Perez, Pierre Vandenhove

Abstract: Partially observable Markov decision processes (POMDPs) form a prominent model for uncertainty in sequential decision making. We are interested in constructing algorithms with theoretical guarantees to determine whether the agent has a strategy ensuring a given specification with probability 1. This well-studied problem is known to be undecidable already for very simple omega-regular objectives, because of the difficulty of reasoning on uncertain events. We introduce a revelation mechanism which restricts information loss by requiring that almost surely the agent has eventually full information of the current state. Our main technical results are to construct exact algorithms for two classes of POMDPs called weakly and strongly revealing. Importantly, the decidable cases reduce to the analysis of a finite belief-support Markov decision process. This yields a conceptually simple and exact algorithm for a large class of POMDPs.
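The belief-support reduction mentioned in the abstract can be illustrated with a made-up two-state POMDP (a sketch of the general idea, not the paper's algorithm): instead of tracking exact probability distributions over states, the agent tracks only the *set* of states it might be in, and there are finitely many such sets.

```python
# Hypothetical POMDP: states, one action, nondeterministic transition
# supports, and a deterministic observation per state.
states = {"s0", "s1"}
actions = {"a"}
transition = {"s0": {"a": {"s0", "s1"}}, "s1": {"a": {"s1"}}}
obs = {"s0": "o0", "s1": "o1"}

def update_support(support, action):
    """Successor belief supports: apply the action to every state in the
    current support, then split the successors by the observation emitted."""
    successors = set()
    for s in support:
        successors |= transition[s][action]
    return {frozenset(t for t in successors if obs[t] == o)
            for o in {obs[t] for t in successors}}

def reachable_supports(initial):
    """Enumerate all belief supports reachable from the initial support —
    the state space of the finite belief-support MDP."""
    frontier = [frozenset(initial)]
    seen = {frozenset(initial)}
    while frontier:
        b = frontier.pop()
        for a in actions:
            for nb in update_support(b, a):
                if nb not in seen:
                    seen.add(nb)
                    frontier.append(nb)
    return seen

print(reachable_supports({"s0"}))
```

Because the set of supports is finite (at most 2^|S|), analyses over this structure terminate; the paper's contribution is identifying the revealing classes of POMDPs for which this finite abstraction suffices for almost-sure omega-regular objectives.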

Read the paper in full here.


AAAI-25 outstanding paper – special track on AI for social impact (AISI)

DivShift: Exploring Domain-Specific Distribution Shifts in Large-Scale, Volunteer-Collected Biodiversity Datasets
Elena Sierra, Teja Katterborn, Salim Soltani, Lauren Gillespie, Moisés Expósito-Alonso

Abstract: Large-scale, volunteer-collected datasets of community-identified natural world imagery like iNaturalist have enabled marked performance gains for fine-grained visual classification of plant species using deep learning models. However, such datasets are opportunistic and lack a structured sampling strategy. Resulting geographic, temporal, observation quality, and socioeconomic biases inherent to this volunteer-based participatory data collection process are stymieing the wide uptake of these models for downstream biodiversity monitoring tasks, especially in the Global South. While widely documented in the biodiversity modeling literature, the impact of these biases’ downstream distribution shift on deep learning models has not been rigorously quantified. Here we introduce Diversity Shift (DivShift), a framework for quantifying the effects of biodiversity domain-specific distribution shifts on deep learning model performance. We also introduce DivShift – West Coast Plant (DivShift-WCP), a new curated dataset of almost 8 million iNaturalist plant observations across the western coast of North America, for diagnosing the effects of these biases in a controlled case study. Using this new dataset, we contrast computer vision model performance across a variety of these shifts and observe that these biases indeed confound model performance across observation quality, spatial location, and political boundaries. Interestingly, we find for all partitions that accuracy is lower than expected by chance from estimates of dataset shift from the data themselves, implying the structure within natural world images provides significant generalization improvements. From these observations, we suggest recommendations for training computer vision models on natural world imagery biodiversity collections.

Read the paper in full here.






Lucy Smith is Senior Managing Editor for AIhub.

            AUAI is supported by:



Subscribe to AIhub newsletter on substack

















 















©2026.02 - Association for the Understanding of Artificial Intelligence