Machine learning for climate science and Earth observation – a webinar from Climate Change AI


by
16 November 2021

The most recent webinar in the Climate Change AI series covered machine learning for climate science and Earth observation. We heard from two experts in the field, and you can watch the recording below. Maike Sonnewald spoke about trustworthy AI for climate analysis, and Gustau Camps-Valls discussed physics-aware machine learning for the Earth sciences.

A robust blueprint for trustworthy AI for climate analysis

Maike Sonnewald, Princeton University.

In her presentation, Maike put forward a blueprint for a transparent machine learning application that reveals 3D ocean current structures from surface fields in climate models. She explained how she applies this to predict changes in ocean currents. Climate change drives great variability in global heat transport, and this application can aid in understanding that variability. The application is designed to be interpretable and explainable, so that it can deliver actionable insights in support of climate decision making.

Physics-aware machine learning for Earth sciences

Gustau Camps-Valls, Universitat de València.

When it comes to Earth science problems, it is desirable to build models that are physically interpretable. Machine learning models are excellent approximators, but very often do not have the laws of physics built in, which means that consistency and trustworthiness can be compromised. In this talk, Gustau reviewed the main challenges in the field of physics-aware machine learning, and introduced several ways to carry out research at the interface of physics and machine learning.

Useful links

Climate Change AI webpage
Events from Climate Change AI
Webinars from Climate Change AI

AIhub focus issue on climate action

Lucy Smith is Senior Managing Editor for AIhub.