AIhub.org
 

Agent Teaming in Mixed-Motive Situations – an AAAI Fall symposium


by Suresh Kumaar Jayaraman
08 January 2024




Image by Jamillah Knowles & Reset.Tech Australia / © https://au.reset.tech/ / Better Images of AI / Detail from Connected People / Licensed by CC-BY 4.0

The AAAI Symposium on Agent Teaming in Mixed-Motive Situations, held October 25-27, 2023, showcased the challenges and innovations in interactions between agents with differing goals and decision-making processes. The event featured experts from diverse backgrounds, including multi-agent systems, AI, and organizational behavior. Key highlights include:

  • Professor Subbarao Kambhampati’s (Arizona State University) keynote discussed the dual nature of mental modeling in cooperation and competition. He emphasized the importance of obfuscatory behavior, controlled observability planning, and the use of explanations for model reconciliation, particularly for trust-building in human-robot interactions.
  • Professor Gita Sukthankar’s (University of Central Florida) talk focused on challenges in teamwork, using a case study on software engineering teams. Innovative techniques for distinguishing effective teams from ineffective ones were explored, setting the stage for discussions on the complexities of mixed-motive scenarios.
  • Dr Marc Steinberg (Office of Naval Research) moderated an interactive discussion exploring research challenges in mixed-motive teams, including modeling humans, experimental setups, and measuring and assessing mixed-motive situations. This discussion provided diverse perspectives on the evolving landscape of agent teaming.
  • Accepted papers covered a wide range of topics, including maximum entropy reinforcement learning, multi-agent path finding, Bayesian inverse planning for communication scenarios, hybrid navigation acceptability, and safety. Talks also delved into challenges in human-robot teams and the importance of aligning robot values with human preferences.
  • Panel sessions explored themes such as team structure, collaboration within diverse teams, the role of game theory, and explicit and implicit communication within teams. Meta-level parameters for multi-agent collaboration and the importance of context in human-agent communication in mixed-motive settings were discussed.
  • Breakout group discussions focused on consensus and negotiation in mixed-motive groups, considering intragroup and intergroup dynamics. The impact of consensus on trust and future work in mixed-motive teaming, including interdisciplinary collaborations and resource identification, were explored.

The symposium successfully brought together a community actively addressing challenges in agent teaming within mixed-motive situations. The discussions highlighted the complexities of collaboration, trust-building, and decision-making in diverse multi-agent scenarios, and emphasized ongoing research and continued collaboration to advance understanding in this field.


Suresh Kumaar Jayaraman is a postdoctoral researcher at the Robotics Institute at Carnegie Mellon University.

            AIhub is supported by:



Subscribe to AIhub newsletter on substack




