AIhub.org
 

#AAAI2024 workshops round-up 1: Cooperative multi-agent systems decision-making and learning


by
06 March 2024




A report on the Cooperative Multi-Agent Systems Decision-Making and Learning: From Individual Needs to Swarm Intelligence workshop, which took place on 26 February at AAAI 2024.

Organisers: Qin Yang, Matthew E. Taylor, Tianpei Yang, Rui Liu

With the tremendous growth of AI technology, robotics, the IoT, and high-speed wireless sensor networks (such as 5G) in recent years, an artificial ecosystem, termed an artificial social system, has formed that involves AI agents ranging from software entities to hardware devices. How to integrate artificial social systems into human society so that the two coexist harmoniously is a critical issue. Here, rational decision-making and efficient learning from multi-agent system (MAS) interactions are preconditions for guaranteeing that agents work safely, balance group utilities against system costs in the long term, and satisfy group members' needs in their cooperation. From a cognitive modeling perspective, embodying the realistic constraints, capabilities, and tendencies of individual agents in their interactions with their physical and social environments may provide a more realistic basis for understanding cooperative multi-agent interaction.

A number of research trends and challenges are shaping this field. One important issue is how to model the behaviors of cooperative MAS from the perspective of individual cognitive models, such as agent needs and innate values (utilities), in decision-making and learning. Another crucial problem is how to build a robust, stable, and reliable trust network among AI agents, such as trust among robots and between humans and robots, evaluating their performance and status on common ground when they make collective decisions and learn from interactions in complex and uncertain environments. Furthermore, exploring practical and efficient reinforcement learning (RL) methods, such as deep RL and multi-agent RL, for global and partial cooperation in centralized, decentralized, and distributed settings remains challenging. The complexity of cooperative multi-agent problems rises rapidly with the number of agents and their behavioral sophistication, especially when determining action sequences and strategies, and when learning from interactions to adapt to complex and dynamically changing environments. In the invited talks, Professor Maria Gini discussed how to coordinate a large number of robots to fulfil a task, Professor Giovanni Beltrame introduced the role of hierarchy in multi-agent decision-making, and Professor Christopher Amato clarified fundamental challenges and misunderstandings of multi-agent RL.
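The centralized/decentralized distinction can be illustrated with a minimal value-decomposition sketch in the spirit of VDN (a generic illustration, not an algorithm presented at the workshop): each agent keeps its own Q-table and acts on it alone, but training updates flow through the summed joint value using the shared team reward. The payoffs, learning rate, and exploration rate below are all made up.

```python
import random

# Toy cooperative matrix game (made-up payoffs): action 1 is better for
# each agent regardless of what its partner does, and (1, 1) is best jointly.
REWARD = {(0, 0): 1.0, (0, 1): 3.0, (1, 0): 3.0, (1, 1): 8.0}

# Each agent keeps its own Q-values (decentralized execution),
# but updates use the shared team reward (centralized training signal).
q = [[0.0, 0.0], [0.0, 0.0]]  # q[agent][action]
alpha, epsilon = 0.1, 0.2
random.seed(0)

for step in range(5000):
    # Epsilon-greedy action for each agent from its own Q-table.
    acts = [random.randrange(2) if random.random() < epsilon
            else max(range(2), key=lambda a: q[i][a])
            for i in range(2)]
    r = REWARD[tuple(acts)]
    # Value decomposition: the joint value is the sum of per-agent Q-values.
    q_tot = q[0][acts[0]] + q[1][acts[1]]
    td = r - q_tot  # stateless (bandit) TD error on the joint value
    for i in range(2):
        q[i][acts[i]] += alpha * td  # each agent absorbs part of the error

# Decentralized execution: each agent argmaxes its own Q-table, no communication.
greedy = [max(range(2), key=lambda a: q[i][a]) for i in range(2)]
print(greedy)
```

Training is centralized only in the sense that the scalar team reward is shared; at execution time each agent consults nothing but its own table, which is the appeal of this family of methods.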

Moreover, cooperative MAS involves multiple agents interacting in complex or uncertain environments to jointly solve tasks and maximize the group's utility, viewing the system's utility through the lens of individual needs. Balancing rewards between agents and the group through interaction and adaptation in cooperation optimizes the global system's utility and guarantees sustainable development for each group member, much as human society does. Professor Aaron Courville introduced a Q-value shaping method for optimizing an agent's individual utility while fostering cooperation among adversaries in partially competitive environments. Professor Michael L. Littman discussed the implementation of interacting agents and safe(r) AI in MAS-human interaction. Professor Kevin Leyton-Brown talked about modeling nonstrategic human play in games, describing how the economic rationality of such models can be assessed, and presented initial experimental findings showing the extent to which these models replicate human-like cognitive biases.
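The idea of balancing individual and group rewards can be sketched as a simple mixed objective (a generic reward-shaping toy, not the specific Q-value shaping method from the talk): each agent's shaped reward interpolates between its own reward and the team average, controlled by a made-up weight `w`.

```python
def shaped_rewards(individual, w=0.5):
    """Blend each agent's own reward with the team average.

    w = 0 -> fully selfish, w = 1 -> fully cooperative.
    (Illustrative formula only; the weight w is a made-up knob.)
    """
    team_avg = sum(individual) / len(individual)
    return [(1 - w) * r + w * team_avg for r in individual]

# One agent scores 10, the other 0; with w = 0.5 both move toward the mean.
print(shaped_rewards([10.0, 0.0], w=0.5))  # [7.5, 2.5]
```

Tuning `w` trades off individual incentive against group welfare, which is the balance the paragraph above describes.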

The application domains include self-driving cars, delivery drones, multi-robot rescue, swarm-robot space exploration, automated warehouse systems, IoT devices and smart homes, unmanned medical care systems, automatic planting and harvesting systems, military scouting and patrolling, and real-time strategy (RTS) video games. Future factories, in particular, are likely to use robots for a much broader range of tasks, in much closer collaboration with humans, which intrinsically requires operation in proximity to people and raises safety and efficiency issues. On this topic, Professor Sven Koenig introduced multi-agent path finding and its applications, including warehousing, manufacturing, and train scheduling. Professor Marco Pavone discussed artificial-currency-based government welfare programs, e.g. transit benefits programs that provide eligible users with subsidized public transit.
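To give a flavor of the multi-agent path finding problem, here is a minimal prioritized-planning sketch on a toy grid (a common MAPF baseline, not the specific algorithms from the talk): agents plan one at a time with breadth-first search over (cell, time) states, treating earlier agents' reserved cells as moving obstacles. It handles only vertex conflicts, ignores edge (swap) conflicts, and the grid and start/goal pairs are made up.

```python
from collections import deque

SIZE = 4
MOVES = [(0, 0), (0, 1), (1, 0), (0, -1), (-1, 0)]  # wait, right, down, left, up

def plan(start, goal, reserved, horizon=20):
    """BFS over (cell, time); `reserved` holds higher-priority agents' (cell, t)."""
    frontier = deque([(start, 0, [start])])
    seen = {(start, 0)}
    while frontier:
        cell, t, path = frontier.popleft()
        if cell == goal:
            return path
        for dr, dc in MOVES:
            nxt, nt = (cell[0] + dr, cell[1] + dc), t + 1
            if not (0 <= nxt[0] < SIZE and 0 <= nxt[1] < SIZE) or nt > horizon:
                continue
            if (nxt, nt) in reserved or (nxt, nt) in seen:
                continue  # vertex conflict with an earlier agent, or already queued
            seen.add((nxt, nt))
            frontier.append((nxt, nt, path + [nxt]))
    return None  # no conflict-free path within the horizon

# Two agents crossing a 4x4 grid from opposite corners (made-up instance).
agents = [((0, 0), (3, 3)), ((3, 0), (0, 3))]
reserved, paths = set(), []
for start, goal in agents:
    path = plan(start, goal, reserved)
    paths.append(path)
    # Reserve this agent's cells over time for lower-priority agents.
    reserved |= {(cell, t) for t, cell in enumerate(path)}
print([len(p) - 1 for p in paths])  # path length in time steps per agent
```

Prioritized planning is fast but incomplete; the algorithms Koenig discussed for warehousing-scale instances address exactly the conflicts this sketch glosses over.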

Fourteen peer-reviewed papers were presented at the workshop: five oral and nine poster presentations. They covered topics such as MAS RL in communication, traffic routing, Bayesian soft actor-critic, multi-agent imperfect-information games, cognitive MAS RL, innate-values-driven RL, edge-computing-based human-robot cognitive fusion, and relational planning in MAS RL. Some had also been accepted at the most recent AAMAS conference, at ACM SIGMOD, and in top AI journals such as ACM TIST. The recordings, photos, and papers are available on the workshop website.

You can watch the recordings of the workshop below:





Qin Yang is an assistant professor at Bradley University






