What happens when we mix multi-agent systems, robotics, software engineering, and verification and validation?

AREA workshop photo - presentation in progress

Artificial Intelligence (AI) is all around us and influences many aspects of our daily lives. We already have access to smart homes, drone delivery, and semi-autonomous driving. Depending on the scenario and the chosen technologies, the AI component can come from entirely distinct research/industrial areas, which may rest on different assumptions and have radically different implications. For instance, the rational part of the AI component could be instantiated as a multi-agent system, and/or its embodiment could be realised through robotics.

There is no generally accepted definition of what an agent is [1]. However, we can think of an agent as an intelligent component that behaves in a way that resembles how a human being behaves. This loose definition lets us consider as an agent anything that exhibits features typical of human behaviour: being autonomous, social, rational, reactive, and proactive, and being able to learn.

Robotics, which in some sense can be thought of as the embodiment of the agent, has found its place in modern society, even though it is far from being a vital part of our daily lives. We see robotics as the group of technologies that aim to build, assemble, and program applications composed of software-controlled hardware.

However, if AI is not applied properly, it risks causing more problems than it solves. This is even more relevant when AI is deployed in safety-critical scenarios where an error can cost lives, such as autonomous cars [2]. Naturally, the problem of trustworthiness in AI systems is not new [3], and the challenge is not restricted to AI solutions: any software (or hardware) system can suffer from this kind of drawback. Because of this, solutions that help improve the trust we can place in AI systems are extremely valuable [4]. We refer to these techniques as verification and validation.

Even though solutions for increasing the confidence we place in AI systems exist, such solutions typically focus on one specific AI area. We can find approaches to verify and validate software agents [5], as well as approaches to verify and validate robotic applications [6]; nonetheless, we rarely find solutions that tackle their combination. This concerns not only the verification of the AI component, but also its engineering and development. Agents and robots do not always go hand in hand; or at least, they are mainly studied and experimented with in isolation from one another. Furthermore, there is a lack of software engineering methodologies that could bring all of these aspects together during the development of the system.

The AREA Workshop

The “Agents and Robots for reliable Engineered Autonomy” (AREA) workshop series brings together researchers from the areas of multi-agent systems, robotics, software engineering, and verification and validation, to foster collaboration and to stimulate research that combines (or applies) some (or all) of these areas. The workshops provide a public forum for researchers to present their work and to discuss with researchers from other areas how their approaches could be extended to include elements from those areas where appropriate.

AREA logo

To date, there have been two editions of the AREA workshop. AREA 2020 was co-located with ECAI 2020 and was hosted virtually due to the COVID-19 pandemic. As a free online event, this first edition drew a high attendance, maintaining a constant 40 participants throughout the 6 hours of the workshop, with 80 unique participants overall. AREA 2022 was co-located with IJCAI-ECAI 2022 and took place in person on the 24th of July in Vienna, Austria. This second edition was fully in person and had a registration cost (paid to the conference), resulting in an average of 20 participants over the 6 hours of the workshop. We have published formal proceedings with Electronic Proceedings in Theoretical Computer Science (EPTCS) for both editions, with a total of 10 papers (6 full, 4 short) in AREA 2020 and 8 papers (7 full, 1 short) in AREA 2022. All recordings, proceedings, and special issues from past editions are available on their respective websites.

Originally intended as a biennial event, AREA has recently moved to annual editions, following a decision by the main organisers. This means that the next edition will take place in 2023, and we are happy to announce that AREA 2023 will be co-located with ECAI 2023 in Kraków, Poland (pending acceptance of the workshop).

With the AREA workshop series we hope to raise awareness of the necessity (and advantages) of combining these different areas. In our next edition, AREA 2023, we aim to expand the scope of the workshop to include machine learning topics, as long as they relate to one of the four main areas of interest. We are looking for an expert in machine learning to join our 2023 organising committee, and we will also be inviting several new programme committee members with a machine learning background. Please contact us if you are interested and/or if you have any questions or comments about the workshop.

References

[1] Wooldridge, M. (2009). An Introduction to MultiAgent Systems (2nd ed.). John Wiley and Sons, Hoboken, NJ, USA. ISBN 047149691X.
Wooldridge, M., & Jennings, N. R. (1995). Intelligent agents: Theory and practice. The Knowledge Engineering Review, 10(2), 115-152.
[2] Taeihagh, A., & Lim, H. S. M. (2019). Governing autonomous vehicles: emerging responses for safety, liability, privacy, cybersecurity, and industry risks. Transport Reviews, 39(1), 103-128.
[3] Castelfranchi, C., & Falcone, R. (2000). Trust and control: A dialectic link. Applied Artificial Intelligence, 14(8), 799-823.
[4] Fisher, M., Cardoso, R. C., Collins, E. C., Dadswell, C., Dennis, L. A., Dixon, C., Farrell, M., Ferrando, A., Huang, X., Jump, M., Kourtis, G., Lisitsa, A., Luckcuck, M., Luo, S., Page, V., Papacchini, F., & Webster, M. (2021). An Overview of Verification and Validation Challenges for Inspection Robots. Robotics, 10(2).
[5] Dennis, L. A., Fisher, M., Webster, M. P., & Bordini, R. H. (2012). Model checking agent programming languages. Automated Software Engineering, 19(1), 5-63.
[6] Luckcuck, M., Farrell, M., Dennis, L. A., Dixon, C., & Fisher, M. (2019). Formal specification and verification of autonomous robotic systems: A survey. ACM Computing Surveys, 52(5), 1-41.



Rafael C Cardoso is a Lecturer at the University of Aberdeen

Angelo Ferrando is a Research Fellow at the University of Genova




