Interview with Kate Larson: Talking multi-agent systems and collective decision-making


by Liliane-Caroline Demers
27 January 2026




What if AI were designed not only to optimize choices for individuals, but to help groups reach decisions together? At IJCAI 2025 in Montreal, I had the pleasure of speaking with Professor Kate Larson of the University of Waterloo, a leading expert in multi-agent systems whose research explores how AI can support collective decision-making. In this interview, she reflects on what drew her into the field, why she sees AI playing a role in consensus and democratic processes, and why she believes multi-agent systems deserve more attention in a world she views as fundamentally multi-agent.

Hello Professor Larson, thank you very much for joining me for this interview. You’ve spent much of your career at the intersection of AI and decision-making. Could you start by talking about what initially drew you to these questions, and what continues to excite you about them today?

My background wasn’t in computer science or AI: as an undergraduate, I studied pure math. What first drew me in were the formal models, the idea that we could formally think about groups. From there I became interested in preferences and decision-making, especially when you don’t have full information or aren’t even sure of your own preferences. Those problems were mathematically interesting to model and to think about algorithmically, but they also echoed the indecisiveness and the issues I was finding in my own life.

More broadly, I’m an introvert who likes people, so I’ve always found interactions fascinating: what it means to interact, what outcomes emerge, and how groups can collectively make good decisions. When you bring in learning over time and how to structure the rules to support group decision-making, it opens up both deep mathematical questions and exciting applications. I like seeing how people work together and get things done, and how we can make that better.

Do you have insights into what the crucial avenues of research will be in the near future?

Like many people, I’ve been surprised by how powerful large language models have become. That just blows my mind. If you had asked me five years ago whether we’d have such great models, I would have said, ‘No, it’s not happening.’

I do find it exciting that, just using text, we can build complex approximations of the world and query them through natural language. That’s a really interesting paradigm.

It also raises new questions: what does it mean to have agents that can negotiate, with text and tokens as their action space? How does that change the algorithms and models we’ve relied on in the past?

It’s a shift in what’s possible, a different way of thinking about the world.

In recent years, you’ve also turned towards consensus building and AI for democracy. What sparked your interest in this direction?

It goes back to group dynamics and the fundamental question of what counts as a good outcome for a group. There are different ways to answer that, different trade-offs to consider, and I’m interested in how we can model or even learn what the appropriate trade-offs might be. From there, you begin to see really exciting applications.

I find some examples very cool, though I’ve not worked on them directly. For instance, Polis is a deliberative democracy approach that carefully surfaces opinions in order to avoid polarization and identify points of consensus. There are also participatory democracy applications—participatory budgeting is one case—where you can bring these ideas into communities.

The challenge is designing the engagement between the process and the individuals, and then trying to better understand how these processes work and what makes them effective.

If we get this right, it could have a real, positive impact on people’s lives.

Have you designed tools for consensus?

We’ve explored it. Do we have deployed tools? No. But we’ve done some preliminary work on alignment—figuring out what good consensus statements look like. For example, if two agents are trying to reach an agreement, what’s the right statement that best represents both of their preferences? We’ve used ideas from social choice to study that and shape the algorithmic processes for consensus.
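As a toy illustration of the kind of trade-off involved (an editorial example, not code from Professor Larson’s work): even with just two agents and three hypothetical candidate statements, two classical social-choice criteria, a utilitarian one and an egalitarian one, can disagree about which statement best represents the group.

```python
# Toy illustration only: hypothetical scores (0-1) that two agents give to three
# candidate consensus statements. Names and numbers are invented for this sketch.
utilities = {
    "statement_1": {"agent_a": 0.9, "agent_b": 0.4},  # great for A, mediocre for B
    "statement_2": {"agent_a": 0.6, "agent_b": 0.6},  # decent for both
    "statement_3": {"agent_a": 0.2, "agent_b": 0.8},  # great for B, poor for A
}

def utilitarian_choice(utilities):
    """Pick the statement maximising the sum of the agents' utilities."""
    return max(utilities, key=lambda s: sum(utilities[s].values()))

def egalitarian_choice(utilities):
    """Pick the statement maximising the utility of the worst-off agent (maximin)."""
    return max(utilities, key=lambda s: min(utilities[s].values()))

print(utilitarian_choice(utilities))  # statement_1 (largest total utility, 1.3)
print(egalitarian_choice(utilities))  # statement_2 (best outcome for the worst-off agent)
```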

We’ve also explored deliberation and how carefully structured processes might help reach good agreements if we’re using certain voting rules. The basic idea is that there are some quite complex voting rules out there, which have really nice consensus and representation properties. But they’re really complicated: algorithmically difficult and hard to explain, even to an expert in the field.

So we explored another idea: instead of relying only on these complex, very mathematically elegant voting rules, what if agents just talk among themselves in clever ways? If we put our effort into that deliberation process, maybe we can use something really simple and still get a good outcome.

That has been our point: sometimes, putting your effort into getting groups together to discuss can actually be as powerful as pouring all your effort into what the underlying aggregation rule should be. If you’re smart about the deliberation process, you can get away with something really simple and easy to explain.

So that’s the tension we’re trying to explore: overcoming explainability and complexity concerns by shifting the focus to a slightly different aspect of the problem.
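To make the notion of an aggregation rule concrete, here is a minimal, purely illustrative sketch (not code from this line of work): two standard rules, plurality and Borda count, applied to a small hypothetical preference profile. Even on a toy example they can crown different winners, which is part of why the choice of rule, and the properties it satisfies, matters.

```python
from collections import Counter

# Toy preference profile: each ranking lists candidates from most to least preferred,
# and the integer says how many voters hold that ranking. Numbers are invented.
profile = [
    (3, ["A", "B", "C"]),
    (2, ["B", "C", "A"]),
    (2, ["C", "B", "A"]),
]

def plurality(profile):
    """A very simple rule: count only each voter's top choice."""
    scores = Counter()
    for count, ranking in profile:
        scores[ranking[0]] += count
    return scores.most_common(1)[0][0], dict(scores)

def borda(profile):
    """A positional scoring rule: with m candidates, a candidate in position p gets m-1-p points."""
    scores = Counter()
    for count, ranking in profile:
        m = len(ranking)
        for position, candidate in enumerate(ranking):
            scores[candidate] += count * (m - 1 - position)
    return scores.most_common(1)[0][0], dict(scores)

print("Plurality:", plurality(profile))  # A wins on top choices alone
print("Borda:    ", borda(profile))      # B wins once full rankings are taken into account
```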

You recently co-authored a paper called “Multi-agent risks from advanced AI.” In your opinion, which risks are most underappreciated right now?

I think there are quite a lot of risks, and I’m not sure how I would prioritize them. A big one is that many interactions—between agents or people—rely on trust and commitment. So, I trust that you’ll do something so I can commit to my part. But in multi-agent systems, we don’t always have good ways of reasoning about that. So, developing better ways to handle commitment is a challenge.

I think there are also misalignment questions. In a multi-agent setting, my best action depends on what you’re doing. We might not be adversarial, we might even have similar interests, but if we misunderstand each other’s intentions and get it a little bit wrong, we can have complete misalignment and end up with a very bad outcome.

This is quite well understood in game theory. The classic example is the prisoner’s dilemma: it’s in both our interests to cooperate, but we just can’t achieve it in a one-shot interaction. Then you need mechanisms like learning over time or developing norms, and sometimes that can be hard to do. If we get that wrong, the misalignment of the incentives or goals of others can have cascading effects.

This could be even more serious at scale, with AIs that react very quickly. So, we need safeguards in place.
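For readers who want the numbers behind the prisoner’s dilemma example above, here is a minimal sketch with standard textbook payoffs: defecting is each player’s best response whatever the other player does, yet mutual defection leaves both players worse off than mutual cooperation.

```python
# One-shot prisoner's dilemma with the usual textbook payoffs.
# Entries are (row player's payoff, column player's payoff); higher is better.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(opponent_action):
    """Return the row player's payoff-maximising action against a fixed opponent action."""
    return max(["cooperate", "defect"],
               key=lambda action: PAYOFFS[(action, opponent_action)][0])

# Defection is the best response no matter what the other player does...
print(best_response("cooperate"))  # defect
print(best_response("defect"))     # defect

# ...yet mutual defection (1, 1) is worse for both than mutual cooperation (3, 3).
print(PAYOFFS[("defect", "defect")], "<", PAYOFFS[("cooperate", "cooperate")])
```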

Is there something you’re working on now regarding multi-agents that you’d like to talk about?

I have a whole bunch of things going on. One problem we’re just beginning to think about—I don’t have concrete results yet, but I’d love to work on it and see others explore it too—is how to use machine learning tools to learn new rules for shaping the behavior of agents—I call them institutions. This is related to work by my former student, Ben.

For example, there are lots of ways to vote. Not in the political sense, but as a way of aggregating the preferences of different individuals. There’s a vast literature on voting rules and their properties, most of it axiomatic, where we say, “here are the properties we care about, and this rule satisfies some but not others.” I think it would be really interesting to use machine learning to further explore this space: new, data-driven ways of dividing up resources or choosing outcomes, and ultimately designing institutions that guide agents’ behavior in multi-agent systems.

So that’s what interests me: how do we learn the right rules so that systems of learning agents can behave well, even as the rules themselves evolve?
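To give a rough sense of what a parameterised space of rules, as described above, can look like, here is an editorial sketch covering only one narrow, simple slice of that design space: positional scoring rules are fully determined by a weight vector over ranking positions, so “learning a rule” could, in the simplest case, mean fitting that vector to data.

```python
# A positional scoring rule is fully determined by a vector of per-position weights.
# Plurality and Borda are just two points in this space; a learned rule would be another.

def positional_scoring_rule(weights):
    """Build a voting rule from per-position weights (best rank first)."""
    def rule(profile):
        scores = {}
        for count, ranking in profile:
            for position, candidate in enumerate(ranking):
                scores[candidate] = scores.get(candidate, 0) + count * weights[position]
        return max(scores, key=scores.get)
    return rule

# Hypothetical profile: (number of voters, ranking from most to least preferred).
profile = [(3, ["A", "B", "C"]), (2, ["B", "C", "A"]), (2, ["C", "B", "A"])]

plurality = positional_scoring_rule([1, 0, 0])
borda     = positional_scoring_rule([2, 1, 0])
print(plurality(profile), borda(profile))  # A B

# Treating the weight vector as a learnable parameter and optimising it against a
# data-driven objective (e.g. agreement with human judgements of good outcomes) is one
# minimal instance of learning a rule rather than fixing it axiomatically.
```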

Would you say that it’s a problem that is maybe underexplored in the research community?

I think so, yes. The world is inherently multi-agent, yet many problems and applications aren’t approached that way. There is interesting multi-agent research being conducted: it has its own conference, and we have a vibrant sub-community at IJCAI. But I do think it’s still underexplored. There should be much more multi-agent research, because the world itself is multi-agent.

To conclude, since you were program chair for IJCAI 2024 in Jeju, do you have any advice for the organizers of IJCAI 2026, and to the next program chair in particular?

Being program chair is an awful lot of work, and I don’t think you realize how much until it’s over. My advice is to lean on the fantastic IJCAI staff members and take advantage of their expertise. Then, make sure you build a really great team around you. There’ll be chairs for workshops, tutorials, and demos. If you choose the right people for these positions, you can delegate, and they’ll just run with it.

The most challenging part is the reviewing. You have to be proactive, start recruiting reviewers early, and maybe be creative with the reviewing process. That’s the challenge: making sure you have the reviewers lined up and ready to go. I think at one point last year, I woke up in the middle of the night and realized that I was overseeing something like 20,000 people and all of their reviewing deadlines! So, yeah, getting all those community members invested in the reviewing process is definitely very difficult, but the success of the conference and the broader research community depends critically on it!

About Kate Larson

Kate Larson is a professor and holds a University Research Chair in the Cheriton School of Computer Science at the University of Waterloo, and is a research scientist with Google DeepMind. She is interested in algorithmic questions arising in artificial intelligence and multiagent systems, with a particular focus on algorithmic game theory and computational social choice, group decision making, preference modelling, the insights that reinforcement learning can bring to these problems, and ways of promoting and supporting cooperative AI. She is an AAAI Fellow, and her work has received several best paper awards, including at the International Conference on Autonomous Agents and Multiagent Systems (AAMAS) and the ACM Conference on Computer-Supported Cooperative Work and Social Computing (ACM CSCW). Because she likes seeing cooperation in action, she has been involved in organizing and supporting many conferences and workshops in different roles, including AAMAS (general chair) and IJCAI (program chair).


