AIhub.org
 

Interview with Mario Mirabile: trust in multi-agent systems


by Lucy Smith
18 November 2025




In a new series of interviews, we’re meeting some of the PhD students who were selected to take part in the Doctoral Consortium at the European Conference on Artificial Intelligence (ECAI 2025). During the conference in Bologna, we caught up with Mario Mirabile, who is studying for his PhD in trustworthy AI and multi-agent systems at the University of Santiago de Compostela and is a Research Fellow in human-AI interaction at the University of Bologna. Mario, along with co-authors Frida Hartman and Michele Dusi, was also the winner of the ECAI 2025 Diversity & Inclusion Competition for work entitled “The Last 25 Years of Gender Distribution of Authorship in ECAI Proceedings”. The award was presented at the closing ceremony of the conference.

Could you start by giving us an introduction to the topic you are working on?

I study how to build and measure trust in multi-agent systems, with a practical focus on financial literacy use cases. “Trust” means different things across fields, so my first step was to synthesize views from engineering, cognitive psychology, political science, and ethics into a shared, workable definition.

I’m now working on operationalizing that concept, designing metrics, behaviors, and protocols that let human and AI agents cooperate in non-naïve ways. As the world fills with connected, sensing devices (think social robots and embodied AI), agents don’t just exchange messages; they also interact verbally and physically. That means trust depends not only on what each system does, but on how networks of humans and AIs coordinate, explain decisions, and stay accountable within complex ecosystems. My goal is to make those interactions reliable, understandable, and appropriate for real people learning and making choices about money.

You said you were focusing on finance. Are there particular aspects of finance that you are looking into?

The first phase of my PhD was a bibliometric systematic literature review, which I carried out with my supervisor in Santiago de Compostela, Jose María Alonso-Moral, and my principal investigator in Bologna, Giovanni Emanuele Corazza. That gave us a map of the intellectual landscape. We’re concentrating on financial literacy scenarios, where conversational and non-conversational agents interact directly with people. The focus is on how these systems deliver information, how they frame context, and why they generate one recommendation rather than another.

Was there anything that stood out to you during your literature review?

Quite a lot stood out. Mapping the field so early in the PhD was incredibly helpful because it revealed several significant gaps. For instance, blockchain may no longer be at peak hype, yet it remains important in finance. What surprised me was how few studies actually explore the intersection of blockchain, AI, and trust, even though that combination is full of open questions and practical relevance. There’s a huge research opportunity there.

Another clear trend was the substantial amount of work on conversational AI for financial advice. Chatbots and advisory systems occupy a large share of the literature, which signals strong interest but also highlights areas where deeper evaluation and more rigorous trust frameworks are still missing.

Do you feel like a lot of companies have released things before they are really ready for that deployment?

In many cases, yes. After speaking with several researchers at ECAI, a common theme emerged: a large share of multi-agent systems are developed entirely in laboratory settings. In those environments, agents are usually designed to be cooperative and predictable, which makes sense for controlled experiments. But it means that the complexity of real-world contexts isn’t factored into the design.

I understand why this happens, but it’s still concerning. A system might appear smooth and reliable when evaluated at the individual level, yet its broader societal impact could be disruptive if it isn’t designed and governed with real-world conditions in mind.

Following this literature review, what is the next step in your PhD?

The next major phase of my PhD centers on the question: “How can we effectively design trustworthiness in a multi-agent system for financial literacy, while accounting for diverse user needs and contexts?”

This moves me into the design and implementation stage. I’ll be writing code, testing existing frameworks, and assessing whether they support the kinds of trust mechanisms I want to study. After that, we plan to build a set of AI agents based on those frameworks and run experiments with human participants, starting with a focus on the quality of explanations these agents provide.

What do you think is going to be one of the biggest challenges in your research?

One of the biggest challenges will be alignment. Even in simple one-to-one interactions, such as a user talking with a chatbot, we already see how difficult it is to ensure that the system’s behavior truly reflects human goals and values. Current generative models operate through statistical prediction rather than any grounded understanding of causes or intentions, which limits how well they can align with users in sensitive domains like finance.

Addressing this may require rethinking the underlying architectures, not just adding layers on top of existing models. Several leading researchers have pointed out that our current trajectory may not be sufficient for more advanced forms of intelligence or reliable cooperative behavior. Yet economic pressures push the field in a different direction. Balancing these realities while building trustworthy multi-agent systems is going to be a major challenge.

I was interested to see how you found the Doctoral Consortium experience here at ECAI.

The Doctoral Consortium ran across Saturday and Sunday, and I found it genuinely rewarding. It brought together people from many different backgrounds, which made the discussions especially rich. I met a lot of interesting colleagues, and the environment made it easy to form new connections. In fact, I’ve already started outlining a few potential papers with people I met just a couple of days earlier. It was a very positive experience overall.

Could you tell us an interesting (non AI-related) fact about you?

I’m very involved in activism, and it has shaped a lot of my perspective as a researcher. I co-founded an NGO (“South Working – Lavorare dal Sud”) focused on creating social value in local communities by encouraging remote work opportunities. In Italy, and especially in regions like Sicily where I’m from, many skilled people leave for jobs elsewhere, which creates a real loss of talent. Our aim is to counter that trend by helping people build careers from their home regions. We’ve run projects ranging from digital skills training for young people to supporting remote workers in Southern Italy, and we’ve already seen encouraging results: more than 100,000 remote/south workers identified, and more than 450 NEETs [Not in Education, Employment, or Training] trained through the Digichamps project.

Finally, congratulations on winning the Diversity and Inclusion competition award at ECAI! Could you tell us about the work that won the award?

Thank you! Our project, “The Last 25 Years of Gender Distribution of Authorship in ECAI Proceedings”, examined how gender representation among ECAI authors has evolved from 2000 to 2025 using DBLP metadata and name-based gender inference. We found persistent underrepresentation of women, and our simple regression model projected gender parity around 2089. That projection is meant as a provocation rather than a forecast, a way of highlighting how slow progress would be if nothing changes.
We also created an interactive webpage so the community can explore the data directly.
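The parity projection Mario describes can be illustrated with a simple linear extrapolation: fit a trend line to the yearly share of women among authors and solve for the year at which the line reaches 50%. The sketch below uses invented illustrative shares, not the study’s DBLP-derived data, so the year it produces will not match the paper’s 2089 figure:

```python
# Hypothetical sketch of a parity projection via linear extrapolation.
# The yearly shares below are made-up illustrative values, NOT the study's data.
import numpy as np

years = np.array([2000, 2005, 2010, 2015, 2020, 2025])
female_share = np.array([0.12, 0.13, 0.15, 0.17, 0.19, 0.21])  # illustrative

# Least-squares linear fit: share ≈ slope * year + intercept
slope, intercept = np.polyfit(years, female_share, 1)

# Year at which the fitted line reaches parity (share = 0.5)
parity_year = (0.5 - intercept) / slope
print(f"Projected parity year (illustrative data): {parity_year:.0f}")
```

With real data the same one-line calculation yields the headline projection; its fragility (a straight line extrapolated decades ahead) is exactly why the authors frame the number as a provocation rather than a forecast.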

About Mario

Mario Mirabile is a PhD researcher at the University of Santiago de Compostela and a Research Fellow at the University of Bologna. His work focuses on trustworthy, multi-agent AI and human-AI interaction, with a socio-technical lens on governance. He has collaborated with public and private organizations, including the European Commission’s DG CONNECT.





Lucy Smith is Senior Managing Editor for AIhub.




            AIhub is supported by:




©2025 Association for the Understanding of Artificial Intelligence