AIhub.org
 

What happens when we mix multi-agent systems, robotics, software engineering, and verification and validation?

AREA workshop photo - presentation in progress

Artificial Intelligence (AI) is all around us and influences many aspects of our daily lives. We already have access to smart homes, drone delivery, and semi-autonomous driving. Depending on the scenario and the chosen technologies, the AI component can come from entirely distinct research and industrial areas, which may rely on different assumptions and have radically different implications. For instance, the rational part of the AI component could be instantiated as a multi-agent system, and/or its embodiment could be achieved via robotics.

There is no generally accepted definition of what an agent is [1]. However, we can think of an agent as an intelligent component that resembles how a human being behaves. This loose definition allows us to consider an agent as something that exhibits features typical of human beings, such as being autonomous, social, rational, reactive, and proactive, and incorporating learning behaviour.

Robotics, which in some sense can be thought of as the embodiment of the agent, has found its place in modern society, even though it is far from being a vital part of our daily lives. We see robotics as a field encompassing all technologies that aim to build, assemble, and program applications composed of software-controlled hardware.

However, if AI is not properly applied, it risks causing more problems than it solves. This is even more relevant when AI is deployed in safety-critical scenarios where an error can cost lives, such as autonomous cars [2]. Naturally, the problem of trustworthiness in AI systems is not new [3], and the challenge is not restricted to AI solutions: any software (or hardware) system can suffer from this kind of drawback. Because of this, techniques that can improve the trust we place in AI systems are extremely valuable [4]. We refer to these techniques as verification and validation.

Even though solutions to increase the confidence we place in AI systems exist, such solutions are commonly focused on only one specific AI area. We can find approaches to verify and validate software agents [5], and approaches to verify and validate robotic applications [6]; nonetheless, we hardly find solutions tackling their combination. This concerns not only the verification of the AI component, but also its engineering and development. Agents and robots do not always go hand in hand; or at least, they are mostly studied and experimented with in isolation from one another. Furthermore, there is a lack of software engineering methodologies that could bring all of these aspects together during the development of the system.

The AREA Workshop

“Agents and Robots for reliable Engineered Autonomy” (AREA) is a series of workshops that brings together researchers from the areas of multi-agent systems, robotics, software engineering, and verification and validation, to foster collaboration and to stimulate research that combines (or applies) some (or all) of these areas. The workshops provide a public forum for researchers to present their work and to discuss with researchers from other areas how their approach could, where appropriate, be extended to include elements from those areas.

AREA logo

To date, there have been two editions of the AREA workshop. AREA 2020 was co-located with ECAI 2020 and was hosted virtually due to the COVID-19 pandemic. As a free online event, the first edition attracted high attendance, maintaining a constant 40 participants throughout the six hours of the workshop, with 80 unique participants overall. AREA 2022 was co-located with IJCAI-ECAI 2022 and took place in person on 24 July in Vienna, Austria. This second edition was fully in person and had a registration cost (paid to the conference), resulting in an average of 20 participants during the six hours of the workshop. We published formal proceedings with Electronic Proceedings in Theoretical Computer Science (EPTCS) for both editions: 10 papers (6 full, 4 short) in AREA 2020 and 8 papers (7 full, 1 short) in AREA 2022. All recordings, proceedings, and special issues of past editions are available on their respective websites.

AREA was originally intended as a biennial event, but the main organisers have recently decided to move to annual editions. This means that the next edition of AREA will take place in 2023, and we are happy to announce that AREA 2023 will be co-located with ECAI 2023 in Kraków, Poland (pending acceptance of the workshop).

With the AREA workshop series we hope to raise awareness of the necessity (and advantages) of combining these different areas. In our next edition, AREA 2023, we aim to expand the scope of the workshop to include machine learning topics, as long as they relate to one of the four main areas of interest. We are looking for an expert in machine learning to join our 2023 organising committee, and we are inviting several new programme committee members with a machine learning background. Please contact us if you are interested and/or if you have any questions or comments about the workshop.

References

[1] Wooldridge, M. (2009). An Introduction to MultiAgent Systems (2nd ed.). John Wiley and Sons. ISBN 047149691X.
Wooldridge, M., & Jennings, N. R. (1995). Intelligent agents: Theory and practice. The Knowledge Engineering Review, 10(2), 115-152.
[2] Taeihagh, A., & Lim, H. S. M. (2019). Governing autonomous vehicles: emerging responses for safety, liability, privacy, cybersecurity, and industry risks. Transport reviews, 39(1), 103-128.
[3] Castelfranchi, C., & Falcone, R. (2000). Trust and control: A dialectic link. Applied Artificial Intelligence, 14(8), 799-823.
[4] Fisher, M., Cardoso, R. C., Collins, E. C., Dadswell, C., Dennis, L. A., Dixon, C., Farrell, M., Ferrando, A., Huang, X., Jump, M., Kourtis, G., Lisitsa, A., Luckcuck, M., Luo, S., Page, V., Papacchini, F., & Webster, M. (2021). An Overview of Verification and Validation Challenges for Inspection Robots. Robotics, 10(2).
[5] Dennis, L. A., Fisher, M., Webster, M. P., & Bordini, R. H. (2012). Model checking agent programming languages. Automated software engineering, 19(1), 5-63.
[6] Luckcuck, M., Farrell, M., Dennis, L. A., Dixon, C., & Fisher, M. (2019). Formal specification and verification of autonomous robotic systems: A survey. ACM Computing Surveys (CSUR), 52(5), 1-41.



Rafael C Cardoso is a Lecturer at the University of Aberdeen

Angelo Ferrando is a Research Fellow at the University of Genova

Autonomy and Verification Network






