
Interview with Paula Harder: super-resolution climate data with physics-based constraints


31 August 2022



Paula Harder

Paula Harder and co-authors Qidong Yang, Venkatesh Ramesh, Alex Hernandez-Garcia, Prasanna Sattigeri, Campbell D. Watson, Daniela Szwarcman and David Rolnick recently wrote a paper on Generating physically-consistent high-resolution climate data with hard-constrained neural networks. In this interview, Paula tells us more about how they developed a method for super-resolution of climate data in which conservation laws are enforced.

What is the topic of the research in your paper?

Our paper looks at super-resolution for climate data, which in climate science is called downscaling. Deep learning has been applied a lot in this area recently, but the neural networks employed tend to violate physical laws, such as mass conservation. In this work, we look at how to modify neural super-resolution architectures so that given constraints, such as conservation laws, are enforced.

Could you tell us about the implications of your research and why it is an interesting area for study?

With our new methodology, super-resolution can be made feasible for scientific applications where a guarantee of conservation of certain quantities is required. For example, with climate model data, even small violations of mass conservation can lead to huge instabilities when the data is fed back into a model. Our method can help in many other application domains as well, and could potentially improve super-resolution in general.

An example of a spatial super-resolution prediction for different methods. Shown here are the low-resolution input, different constrained and unconstrained predictions, and the high-resolution image as a reference.

Could you explain your methodology?

Our methodology introduces a new layer at the end of the neural network: the constraint, or renormalization, layer. It is an adaptation of a softmax layer, such that quantities are conserved between the low-resolution input and the predicted high-resolution output, and the values are forced to be positive. This layer can also be applied successively if we increase the resolution by a large factor.
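To make the conservation idea concrete, here is a minimal sketch in PyTorch of how such a softmax-based renormalization layer could look. The class name, tensor shapes, and the choice of conserving the mean of each upsampling-factor-sized block are illustrative assumptions, not the exact implementation from the paper.

import torch
import torch.nn as nn


class SoftmaxConstraintLayer(nn.Module):
    """Sketch of a softmax renormalization layer: each s x s block of the
    super-resolved output is rescaled so that its mean matches the
    corresponding low-resolution pixel, and all values stay positive."""

    def __init__(self, upsampling_factor: int):
        super().__init__()
        self.s = upsampling_factor

    def forward(self, hr_logits: torch.Tensor, lr: torch.Tensor) -> torch.Tensor:
        # hr_logits: (B, C, H*s, W*s) unconstrained network output
        # lr:        (B, C, H, W)     low-resolution input to be conserved
        b, c, H, W = lr.shape
        s = self.s
        # Split the high-resolution grid into s x s blocks, one per LR pixel.
        blocks = hr_logits.reshape(b, c, H, s, W, s).permute(0, 1, 2, 4, 3, 5)
        blocks = blocks.reshape(b, c, H, W, s * s)
        # Softmax over each block gives positive weights that sum to 1 ...
        weights = torch.softmax(blocks, dim=-1)
        # ... so multiplying by s*s * lr makes each block mean equal to lr.
        constrained = weights * (s * s) * lr.unsqueeze(-1)
        # Reassemble the blocks into the full high-resolution grid.
        constrained = constrained.reshape(b, c, H, W, s, s).permute(0, 1, 2, 4, 3, 5)
        return constrained.reshape(b, c, H * s, W * s)


# Quick sanity check: block means of the constrained output match the LR input.
lr = torch.rand(2, 1, 8, 8)
hr_logits = torch.randn(2, 1, 32, 32)
out = SoftmaxConstraintLayer(upsampling_factor=4)(hr_logits, lr)
assert torch.allclose(out.reshape(2, 1, 8, 4, 8, 4).mean(dim=(3, 5)), lr, atol=1e-5)

In a setup like this, the layer would sit after the final upsampling stage of, for example, a CNN or GAN generator; for large upscaling factors it could be applied once per stage, corresponding to the successive application mentioned above.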

What were your main findings?

Interestingly, we found that the constraining methodology not only gives us a prediction that obeys the physical laws, but also increases predictive accuracy compared to the same architectures without that layer. This effect showed across all the architectures we tested, ranging from CNNs and GANs to RNNs that also perform super-resolution in the time dimension.

What further work are you planning in this area?

So far we have only used one dataset to develop and test our methodology. We would like to extend our work to new datasets in climate science and other areas, as well as to new architectures. We also plan to apply the constraining methodology to other climate model tasks besides downscaling.

About Paula

Paula Harder is an intern at Mila and a Ph.D. student in computer science at the Fraunhofer Institute. Her research focuses on physics-constrained deep learning for climate science; as a visiting researcher at the University of Oxford, she worked on emulating an aerosol model. Besides her work on climate machine learning (ML), she has worked on adversarial attack detection and was involved with NASA's and ESA's Frontier Development Lab on projects on ML for space and earth science. Paula holds a master's degree in mathematics from the University of Tübingen and has worked in the automotive industry as a development engineer.

Read the research in full

Generating physically-consistent high-resolution climate data with hard-constrained neural networks
Paula Harder, Qidong Yang, Venkatesh Ramesh, Alex Hernandez-Garcia, Prasanna Sattigeri, Campbell D. Watson, Daniela Szwarcman and David Rolnick.



