AIhub.org
 

Congratulations to the #ECAI2023 outstanding paper award winners


By Lucy Smith
06 October 2023




The 26th European Conference on Artificial Intelligence (ECAI 2023) took place from 30 September to 4 October in Krakow, Poland. On the final day of the conference, the outstanding paper awards were announced. There were two winners in the ECAI 2023 Outstanding Paper category, and one winner in the Outstanding Paper for AI in Social Good category.

ECAI 2023 Outstanding Papers

Selective Learning for Sample-Efficient Training in Multi-Agent Sparse Reward Tasks
Xinning Chen, Xuan Liu, Yanwen Ba, Shigeng Zhang, Bo Ding, Kenli Li

Abstract: Learning effective strategies in sparse reward tasks is one of the fundamental challenges in reinforcement learning. This becomes extremely difficult in multi-agent environments, as the concurrent learning of multiple agents induces the non-stationarity problem and a sharply increased joint state space. Existing works have attempted to promote multi-agent cooperation through experience sharing. However, learning from a large collection of shared experiences is inefficient, as there are only a few high-value states in sparse reward tasks, which may instead lead to the curse of dimensionality in large-scale multi-agent systems. This paper focuses on sparse-reward multi-agent cooperative tasks and proposes an effective experience-sharing method, MASL (Multi-Agent Selective Learning), to boost sample-efficient training by reusing valuable experiences from other agents. MASL adopts a retrogression-based selection method to identify high-value traces of agents from the team rewards, based on which recall traces are generated and shared among agents to motivate effective exploration. Moreover, MASL selectively considers information from other agents to cope with the non-stationarity issue while enabling efficient training for large-scale agents. Experimental results show that MASL significantly improves sample efficiency compared with state-of-the-art MARL algorithms in cooperative tasks with sparse rewards.

Read the article in full here.
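The selective-sharing idea at the heart of MASL can be illustrated with a minimal sketch (hypothetical names, not the authors' code): each agent keeps its own replay buffer, and only trajectories whose team return passes a selection threshold are broadcast to the whole team. The fixed threshold here is a stand-in for the retrogression-based selection step described in the abstract.

```python
from collections import deque

class Agent:
    """A bare-bones agent holding its own bounded replay buffer."""
    def __init__(self, buffer_size=1000):
        self.buffer = deque(maxlen=buffer_size)

    def store(self, trajectory):
        self.buffer.append(trajectory)

def share_high_value_traces(agents, episodes, reward_threshold):
    """Share only high-return trajectories among all agents.

    `episodes` is a list of (trajectory, team_return) pairs; trajectories
    below the threshold are discarded rather than shared, so buffers stay
    small even when rewards are sparse.
    """
    for trajectory, team_return in episodes:
        if team_return >= reward_threshold:
            for agent in agents:  # valuable trace: broadcast to the team
                agent.store(trajectory)
    return agents
```

In this toy version, filtering before sharing is what keeps the shared experience pool from growing with the (mostly low-value) joint state space.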


Theoretical remarks on feudal hierarchies and reinforcement learning
Diogo S. Carvalho, Francisco S. Melo, Pedro A. Santos

Abstract: Hierarchical reinforcement learning is an increasingly demanded resource for learning to make sequential decisions towards long-term goals, with successful credit assignment and temporal abstraction. Feudal hierarchies are among the most deployed frameworks. However, there is a lack of formalism over the hierarchical structure, and a lack of theoretical guarantees. We formalize the common two-level feudal hierarchy as two Markov decision processes, with the high-level one dependent on the policy executed at the low level. Despite the non-stationarity raised by this dependency, we show that each of the processes presents stable behavior. We then build on the first result to show that, regardless of the convergent learning algorithm used for the low level, convergence of both prediction and control algorithms at the high level is guaranteed with probability 1. Our results provide theoretical support for the use of feudal hierarchies in combination with standard reinforcement learning methods at each level.

Read the article in full here.
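The two-level structure the paper formalizes can be sketched in a few lines. This is an illustrative loop only, under assumed names and an assumed fixed re-planning interval `k` (neither is taken from the paper): a high-level policy picks a subgoal every `k` steps, and a goal-conditioned low-level policy acts in between.

```python
def run_feudal_episode(env, high_policy, low_policy, k, max_steps):
    """Roll out one episode of a two-level feudal hierarchy.

    The high-level policy maps a state to a subgoal; the low-level
    policy maps (state, subgoal) to an action. The high level only
    re-plans every k steps, which is the temporal abstraction that
    makes its own decision process depend on the low-level policy.
    """
    state = env.reset()
    total_reward = 0.0
    goal = None
    for t in range(max_steps):
        if t % k == 0:            # high level re-plans every k steps
            goal = high_policy(state)
        action = low_policy(state, goal)
        state, reward, done = env.step(action)
        total_reward += reward
        if done:
            break
    return total_reward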


Outstanding Paper for AI in Social Good

Attention Based Models for Cell Type Classification on Single-Cell RNA-Seq Data
Tianxu Wang, Yue Fan, Xiuli Ma

Abstract: Cell type classification serves as one of the most fundamental analyses in bioinformatics. It helps in recognizing various cells in the cancer microenvironment, discovering new cell types, and facilitating other downstream tasks. Single-cell RNA-sequencing (scRNA-seq) technology can profile the whole transcriptome of each cell, thus enabling cell type classification. However, high-dimensional scRNA-seq data pose serious challenges for cell type classification. Existing methods either classify cells with reliance on prior knowledge or use neural networks whose massive parameters are hard to interpret. In this paper, we propose two novel attention-based models for cell type classification on single-cell RNA-seq data. The first model, Cell Feature Attention Network (CFAN), captures the features of a cell and applies an attention mechanism to them. To further improve interpretability, the second model, Cell-Gene Representation Attention Network (CGRAN), directly represents tokens as cells and genes, and uses the cell representation, renewed by self-attention over the cell and the genes, to predict the cell type. Both models show excellent performance in cell type classification; additionally, the key genes with high attention weights in CGRAN indicate and identify the marker genes of the cell types, thus demonstrating the model's biological interpretability.

Read the article in full here.
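The self-attention step that the CGRAN description relies on can be sketched in plain Python. This is a generic single-head attention with identity projections, not the paper's model: the learned query, key, and value projections of a real network are omitted so the mechanism itself stays visible. Each token, for instance a cell token followed by gene tokens, is renewed as an attention-weighted mixture of all tokens, and the weights are what a reader would inspect for marker genes.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(tokens):
    """Single-head self-attention with identity projections.

    `tokens` is a list of equal-length vectors (lists of floats).
    Returns the renewed representations and the attention weight
    matrix; each row of weights sums to 1.
    """
    d = len(tokens[0])
    scale = math.sqrt(d)
    weights, outputs = [], []
    for q in tokens:
        # scaled dot-product scores of this query against every token
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / scale
                  for k in tokens]
        w = softmax(scores)
        weights.append(w)
        # renewed representation: attention-weighted sum of all tokens
        out = [sum(w[j] * tokens[j][i] for j in range(len(tokens)))
               for i in range(d)]
        outputs.append(out)
    return outputs, weights
```

In a CGRAN-style setup, the renewed first (cell) token would feed a classifier, and high-weight gene tokens would point at candidate marker genes.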


You can read the conference contributions in the proceedings, which are open access.





Lucy Smith is Senior Managing Editor for AIhub.







©2026 - Association for the Understanding of Artificial Intelligence