#IJCAI2022 invited talk: Insights in medicine with Mihaela van der Schaar


by Lucy Smith
26 August 2022




The 31st International Joint Conference on Artificial Intelligence and the 25th European Conference on Artificial Intelligence (IJCAI-ECAI 2022) took place from 23-29 July in Vienna. As part of the conference there were eight fascinating invited talks. In this post, we summarise the presentation by Mihaela van der Schaar, University of Cambridge. The title of her talk was “Panning for insights in medicine and beyond: New frontiers in machine learning interpretability”.

Mihaela began by explaining why the field of medicine is so complex. Differences between individuals, due to factors such as genetic background, environmental exposure, and lifestyle, lead to variations in symptoms, disease trajectories, and responses to treatments. Clinicians have to make judgments about each patient based on this very complex web of information. The goal of Mihaela’s research lab is to develop machine learning methods that both address complex problems in medicine and empower clinicians and patients.

Mihaela believes that there are many opportunities for using machine learning in medicine. For example, it could be used to enable precision medicine, to help understand disease trajectories, to help inform and improve clinical pathways, and to aid new discoveries, to name a few. Mihaela and her team work closely with clinicians to try to understand and model complex problems in medicine. They are developing human-machine partnerships where machine learning augments human skills.

Mihaela delivering her talk at IJCAI-ECAI 2022. The opportunities for machine learning in medicine.

To highlight this close collaboration with clinicians, Mihaela pointed to engagement sessions that she initiated two years ago and which have so far involved around 500 clinicians. In these sessions, the clinicians discuss what type of tools they’d like to have and provide input on the design. If you’d like to watch these sessions, they have been recorded and are available on YouTube (see an example here).

The ability to provide explanations is very important in the field of medicine. Mihaela and her team asked clinicians what they wanted from an explanation. They identified three different areas:
1) explanatory patient features and examples, and the ability to explain time-series data and treatment trajectories;
2) personalised explanations – rather than a “one size fits all” explanation, this would provide clinicians with explanations based on specific patients that they select;
3) discovery of governing equations of medicine.

With regards to the third area – the discovery of governing equations using machine learning – Mihaela explained that what she has in mind here is learning equations from data. These could be explicit functions, implicit functions, or ordinary differential equations (ODEs). She believes that we will only have concise, generalizable, transparent and interpretable methods once we’ve been able to distil the laws of medicine into governing equations.

Formulating the problem of using data to discover underlying ODEs. Screenshot from Mihaela’s talk.

The most exciting and challenging frontier of these is learning ODEs. After all, the human body is a dynamical system and we’d like to understand how it changes over time. The problem formulation is as follows: there is a dataset of trajectories, and we’d like to learn the underlying differential equations.
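In symbols, a minimal version of this setup (the notation here is assumed for illustration rather than taken from the talk) is:

```latex
\text{given noisy samples } y_{i,j} = x_i(t_{i,j}) + \varepsilon_{i,j},
\quad i = 1,\dots,N,
\qquad
\text{find a closed-form } f \text{ such that } \dot{x}_i(t) = f\bigl(x_i(t)\bigr).
```

The difficulty is that the derivative ẋ(t) is never observed directly, and estimating it numerically from noisy, infrequently sampled measurements is unreliable – which is exactly the gap the following work addresses.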

In recent work, Mihaela and co-authors proposed the Discovery of Closed-form ODE framework (D-CODE). The key insight behind D-CODE is a variational formulation of the ODE, which establishes a direct link between the trajectory x(t) and the ODE f while bypassing the unobservable time derivative ẋ(t). They developed a novel objective function based on this insight, and proved that it is a valid proxy for the estimation error of the true (but unknown) ODE.

D-CODE motivation slide. Screenshot from Mihaela’s talk.
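To make the variational (weak-form) idea concrete, here is a minimal sketch in Python of how a candidate closed-form ODE could be scored against an observed trajectory without ever estimating ẋ(t): integration by parts moves the derivative onto a known test function. The function name, the sine test functions, the trapezoidal integration and the toy data below are illustrative assumptions of mine, not the D-CODE objective or implementation.

```python
import numpy as np

def weak_form_score(f, t, x, n_test=8):
    """Score a candidate ODE  x'(t) = f(x)  on one observed trajectory.

    Uses the weak form: for test functions g with g(t0) = g(T) = 0,
    integration by parts gives
        ∫ f(x(t)) g(t) dt + ∫ x(t) g'(t) dt ≈ 0,
    so the unobserved derivative x'(t) never has to be estimated.
    (Illustrative sketch only -- not the D-CODE objective.)
    """
    T = t[-1] - t[0]
    s = (t - t[0]) / T                                  # rescale time to [0, 1]
    total = 0.0
    for k in range(1, n_test + 1):
        g = np.sin(np.pi * k * s)                       # vanishes at both endpoints
        dg = (np.pi * k / T) * np.cos(np.pi * k * s)    # dg/dt in original time
        integrand = f(x) * g + x * dg
        # trapezoidal rule for the integral of `integrand` over t
        r = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))
        total += r ** 2
    return total

# Toy check: data generated by x' = -2x; the true right-hand side scores ~0.
t = np.linspace(0.0, 3.0, 300)
x = np.exp(-2.0 * t)
print(weak_form_score(lambda x: -2.0 * x, t, x))    # close to zero
print(weak_form_score(lambda x: -0.5 * x, t, x))    # clearly larger
```

In a symbolic regression setting, a search over candidate expressions for f could use a derivative-free score of this kind to rank them; this is the role the D-CODE objective plays for noisy, infrequently sampled trajectories.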

In the experiments, D-CODE successfully discovered the governing equations of a diverse range of dynamical systems under challenging measurement settings with high noise and infrequent sampling. One experiment in particular concerned learning an equation that governs the growth of tumours and the impact of chemotherapy. They used data from eight clinical trials that followed patients over time, and found that their method provided an equation that gave a much better representation of the data than previous methodologies.

If you are interested in finding out more about Mihaela’s work, here are some links that she highlighted during her talk:
Engagement sessions with clinicians
Inspiration exchange engagement sessions
D-CODE: Discovering Closed-form ODEs from observed trajectories



Lucy Smith is Senior Managing Editor for AIhub.



