CLAIRE COVID-19 Initiative Video Series: Meet the Team Leaders – Marco Aldinucci

Marco Aldinucci

In this penultimate interview in the Meet the Team Leaders series from the CLAIRE COVID-19 Initiative, we hear from Marco Aldinucci of the Computer Science Department, University of Torino.


Marco Aldinucci is the leader of the Image analysis (CT scans, X-ray) topic group in the CLAIRE COVID-19 Initiative. In this interview you can find out about the group he leads, how AI methods can be used to help analyse medical images, and the challenges the team faced.

To find out more about this series, read our recent post and watch the first video with Emanuela Girardi. You can also watch interviews with Davide Bacciu, Ann Nowé, Jose Sousa, Marco Maratea and Manlio De Domenico.

You may also be interested in the article Marco wrote for AIhub which details the work of the topic group and describes how high-performance computing and AI can combine to good effect.

You can watch the series on the CLAIRE YouTube channel.

Find out more about the CLAIRE COVID-19 task force here.
You can contact the task force by email here.


CLAIRE (Confederation of Laboratories for AI Research in Europe)



