CLAIRE COVID-19 Initiative Video Series: Meet the Team Leaders – Emanuela Girardi

Emanuela Girardi

By Anna Tahovská

CLAIRE, the Confederation of Laboratories for AI Research in Europe, launched its COVID-19 Initiative in March 2020 as the first wave of the pandemic hit the continent. Its objective was to coordinate volunteer efforts from its members to help tackle the effects of the disease. The taskforce quickly gathered a group of about 150 researchers, scientists and experts in AI, organized into seven topic groups: epidemiological data analysis, mobility data analysis, bioinformatics, medical imaging, social dynamics monitoring, robotics, and scheduling and resource management.
We brought you a comprehensive article about the activities of this initiative in one of last month’s AI for Good series posts. You can read more about the outcomes and experience of this bottom-up approach in the article: The CLAIRE COVID-19 Initiative: a bottom-up effort from the European AI community.

Now, the CLAIRE COVID-19 Initiative would like to share with you a series of interviews called Meet the Team Leaders, in which the team leaders describe the work they have contributed, the lessons learned from the process, and the outlook for the challenges ahead.

In her interview, taskforce coordinator Emanuela Girardi outlined the potential future development of the initiative: “We are thinking about the second phase of the taskforce focusing on one side on how to use AI technologies to analyze and improve the European vaccination plan and on the other side leveraging what we learned from this experience, we would like to develop a European initiative focused on AI in health.”

Watch the interview here:

You can also look forward to interviews with Davide Bacciu, Ann Nowé, Jose Sousa and other Topic Leaders from this Initiative.

You will be able to watch the whole series on the CLAIRE YouTube channel. New videos will be posted over the coming days.

Find out more about the CLAIRE COVID-19 taskforce here.
You can contact the taskforce by email here.

About CLAIRE

The Confederation of Laboratories for Artificial Intelligence Research in Europe (CLAIRE) is an organisation created by the European AI community that seeks to strengthen European excellence in AI research and innovation, with a strong focus on human-centred AI. CLAIRE aims to ensure that societies and citizens across all of Europe, and beyond, benefit from AI as a major driver of innovation, future growth and competitiveness, and to achieve world-wide brand recognition for “AI made in Europe”.

CLAIRE, founded in 2018, has garnered the support of more than 3500 AI experts and stakeholders, who jointly represent the vast majority of Europe’s AI community, spanning academia and industry, research and innovation. Among them are more than 140 fellows from various key scientific associations. CLAIRE has opened administrative offices in The Hague (HQ), Brussels, Oslo, Paris, Prague, Rome, Saarbrücken and Zürich.

CLAIRE’s membership network consists of over 380 research groups and institutions, covering jointly more than 21,000 employees in 35 countries. Furthermore, CLAIRE is currently in the process of setting up an Innovation Network that, together with the established Research Network, will foster a strong link between research and industry.

The CLAIRE vision is officially supported by the governments of nine European countries, 28 scientific associations across all of Europe, the European Association for Artificial Intelligence (EurAI), the Association for the Advancement of Artificial Intelligence (AAAI), and the European Space Agency (ESA).

CLAIRE is also actively liaising with other important AI organisations, including ELLIS, the HumanE AI consortium, the Big Data Value Association, euRobotics and AI4EU.

Follow CLAIRE on: Twitter | LinkedIn | Facebook | YouTube











