
CLAIRE COVID-19 Initiative Video Series: Meet the Team Leaders – Marco Aldinucci

Marco Aldinucci

In this penultimate interview in the Meet the Team Leaders series from the CLAIRE COVID-19 Initiative, we hear from Marco Aldinucci of the Computer Science Department, University of Torino.


Marco Aldinucci is the leader of the Image analysis (CT scans, X-ray) topic group in the CLAIRE COVID-19 Initiative. In this interview you can find out about the group he leads, how AI methods can be used to help analyse medical images, and the challenges the team faced.

To find out more about this series, read our recent post and watch the first video with Emanuela Girardi. You can also watch interviews with Davide Bacciu, Ann Nowé, Jose Sousa, Marco Maratea and Manlio De Domenico.

You may also be interested in the article Marco wrote for AIhub which details the work of the topic group and describes how high-performance computing and AI can combine to good effect.

You can watch the series on the CLAIRE YouTube channel.

Find out more about the CLAIRE COVID-19 task force here.
You can contact the task force by email here.











 
