
Interview with AAAI Fellow Anima Anandkumar: Neural Operators for science and engineering problems


by Lucy Smith
20 August 2024




Each year the Association for the Advancement of Artificial Intelligence (AAAI) recognizes a group of individuals who have made significant, sustained contributions to the field of artificial intelligence by appointing them as Fellows. We’ve been talking to some of the 2024 AAAI Fellows to find out more about their research. In this interview, we meet Anima Anandkumar and find out about her work on Neural Operators, of which she is the inventor.

Why are Neural Operators so powerful, and how do they enable such an advance (over previous neural network methods) for science and engineering problems?

Neural Operators are able to learn complex physical phenomena that occur at multiple resolutions, while standard neural networks are unable to do so. Standard neural networks use a fixed number of pixels, or resolution, to learn a phenomenon, whereas neural operators represent data as continuous functions. Hence, neural operators can capture fine details that neural networks cannot. You can draw an analogy with vector and raster graphics. With raster graphics, the image gets blurry as you zoom in, since it uses a fixed number of pixels. In contrast, a vector graphic remains sharp even when you zoom in, since it represents shapes via continuous functions.
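To make the analogy concrete, here is a minimal sketch, in PyTorch, of the kind of spectral convolution used inside Fourier Neural Operators: the learned weights act on a fixed set of low-frequency Fourier modes, so the same weights can be applied to a coarse or a fine grid. This is an illustrative stand-in, not Anandkumar's implementation, and all names in it are placeholders.

```python
# Minimal sketch of a Fourier-space convolution layer (the building block of
# Fourier Neural Operators). Illustrative only; assumes PyTorch.
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    def __init__(self, in_channels, out_channels, n_modes):
        super().__init__()
        self.n_modes = n_modes  # number of low-frequency Fourier modes to keep
        scale = 1.0 / (in_channels * out_channels)
        self.weights = nn.Parameter(
            scale * torch.randn(in_channels, out_channels, n_modes, dtype=torch.cfloat)
        )

    def forward(self, x):                       # x: (batch, channels, grid points)
        x_ft = torch.fft.rfft(x)                # transform to Fourier space
        out_ft = torch.zeros(
            x.size(0), self.weights.size(1), x_ft.size(-1),
            dtype=torch.cfloat, device=x.device,
        )
        k = min(self.n_modes, x_ft.size(-1))    # truncate to the retained modes
        out_ft[:, :, :k] = torch.einsum(
            "bik,iok->bok", x_ft[:, :, :k], self.weights[:, :, :k]
        )
        return torch.fft.irfft(out_ft, n=x.size(-1))  # back to physical space

# The same weights can be queried on grids of different resolution:
layer = SpectralConv1d(in_channels=1, out_channels=1, n_modes=16)
coarse = layer(torch.randn(4, 1, 64))    # 64-point grid
fine = layer(torch.randn(4, 1, 256))     # 256-point grid, same layer
```

Because the parameters live in Fourier space rather than on a pixel grid, nothing in the layer is tied to one resolution, which is the property the vector-graphics analogy points at.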

Neural Operators are relevant for science and engineering problems that happen at multiple scales. For instance, in order to predict how a hurricane develops, we need to capture the fine details: just a coarse view of the hurricane is not enough. Neural Operators can capture these details accurately while being much faster (4-6 orders of magnitude) than traditional numerical models.

Are there particular types of problems for which neural operators are especially well suited? Likewise, are there problems for which they are not so well suited?

Neural Operators extend standard neural networks to continuous functions and are hence a more general class of architectures. They can be applied in any learning scenario, but are particularly effective at learning phenomena that are not limited to just one resolution.

Looking at one application in particular: weather forecasting. Could you talk about your model FourCastNet and the contributions this work has made to the world of forecasting?

FourCastNet was the first fully AI-based weather model and was built using Neural Operators. It is tens of thousands of times faster than traditional weather models, while also being accurate. While traditional weather models require a big supercomputer to run, FourCastNet runs on a gaming GPU and gives a two-week forecast in under a minute. It is now running at ECMWF, the European weather agency, and its prediction charts are available for everyone to check. Our model is also open-sourced, so anyone can download and run it easily. During the recent Hurricane Beryl, FourCastNet had lower uncertainty than traditional models and hence accurately predicted its path.

Besides weather forecasting, you’ve applied the neural operators framework to many domains. Are there others that you’d like to highlight?

Indeed, we have applied them widely. Neural Operators are a general AI technique for solving Partial Differential Equations (PDEs), which are the “workhorse” of scientific modeling. We have applied them to modeling plasma evolution in nuclear fusion, where they are more than a million times faster than numerical simulations. As such, they can be used to predict and prevent disruptions in the fusion reactor, bringing us closer to the dream of sustainable fusion. We have also applied them to modeling carbon dioxide storage in underground reservoirs, where they are again about a million times faster than traditional simulations. This helps us plan for climate change mitigation.
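To give a sense of what training such a PDE surrogate involves, the sketch below fits a stand-in model to paired input/output fields, the way a neural operator would be fitted to data generated by a numerical solver. The model, data, and hyperparameters are placeholders, not the fusion or carbon-storage models described above.

```python
# Generic sketch of surrogate training on (input field, solver output) pairs.
# Illustrative placeholders throughout; assumes PyTorch.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in surrogate; in practice this would be a neural operator, e.g. a stack
# of spectral convolution layers like the one sketched earlier.
surrogate = nn.Sequential(
    nn.Conv1d(1, 32, kernel_size=1), nn.GELU(),
    nn.Conv1d(32, 32, kernel_size=1), nn.GELU(),
    nn.Conv1d(32, 1, kernel_size=1),
)

optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Synthetic stand-in for simulation data on a 128-point grid.
inputs = torch.randn(256, 1, 128)    # e.g. initial or coefficient fields
targets = torch.randn(256, 1, 128)   # corresponding numerical-solver solutions
loader = DataLoader(TensorDataset(inputs, targets), batch_size=32, shuffle=True)

for epoch in range(10):
    for a, u in loader:
        optimizer.zero_grad()
        loss = loss_fn(surrogate(a), u)   # match the solver's output
        loss.backward()
        optimizer.step()
```

Once trained, a single forward pass stands in for a full numerical simulation, which is where the reported orders-of-magnitude speedups come from.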

We also recently designed a novel medical catheter using Neural Operators. A catheter is a tube used to draw fluids out of the human body, but bacteria tend to swim upstream into the body, causing infections. We trained a Neural Operator model to understand fluid flow and bacterial activity, and it was able to generate an optimal design of ridges on the inside of the tube that create vortices and help prevent bacteria from swimming upstream into the body. We 3D-printed the design, tested it in the lab, and recorded a hundred-fold reduction in bacterial contamination. Hence, Neural Operators are effective not only for simulation but also for inverse design, since they are differentiable.
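The inverse-design point is worth spelling out: because the trained operator is differentiable, design variables can be optimised by gradient descent through the surrogate itself. The sketch below shows that general loop under purely hypothetical assumptions; the surrogate, the design parameterisation, and the objective are placeholders, not the catheter model.

```python
# Gradient-based inverse design through a frozen, differentiable surrogate.
# All components are hypothetical placeholders; assumes PyTorch.
import torch
import torch.nn as nn

# Pretend this is a trained operator surrogate mapping a design parameterisation
# (e.g. ridge geometry sampled on a grid) to a predicted objective field
# (e.g. upstream bacterial flux).
surrogate = nn.Sequential(nn.Linear(64, 128), nn.GELU(), nn.Linear(128, 64))
for p in surrogate.parameters():
    p.requires_grad_(False)                      # freeze the trained surrogate

design = torch.zeros(64, requires_grad=True)     # design variables to optimise
optimizer = torch.optim.Adam([design], lr=1e-2)

for step in range(500):
    optimizer.zero_grad()
    predicted = surrogate(design)                # fast, differentiable "simulation"
    objective = predicted.square().mean()        # e.g. minimise predicted contamination
    regulariser = 1e-3 * design.square().mean()  # keep the design well behaved
    loss = objective + regulariser
    loss.backward()                              # gradients flow through the surrogate
    optimizer.step()
```

The optimised design would then be validated against a full simulation or a physical experiment, as was done with the 3D-printed catheter.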

In your recent TED Talk you outlined a vision for a general AI model for scientific discovery. Are you able to talk a bit about your current progress towards this?

I believe that the future of science is AI+Science: AI that deeply understands the physical world and is able to simulate and design without hallucinations. That will be a big leap from the current state of science, which is mostly a trial-and-error process where the bottleneck is the time and cost of physical experiments. Having AI replace many of those experiments will be game-changing. Language models are unable to do so since they have no physical grounding. This requires a generalist AI model that understands a wide range of physical phenomena. Neural Operators serve as the backbone for such an AI model for universal physical understanding.

About Anima

Anima Anandkumar has done pioneering work on AI algorithms that have revolutionized scientific domains, including weather and climate modeling, drug discovery, scientific simulations, and engineering design. She invented Neural Operators, which extend deep learning to modeling multi-scale processes in these scientific domains. She developed the first AI-based high-resolution weather model, which showed competitive performance with current practice while being tens of thousands of times faster, and is deployed at the European Centre for Medium-Range Weather Forecasts (ECMWF), a premier global weather agency.

Anima is a Bren Professor at Caltech and a fellow of the AAAI, IEEE, and ACM. She has received several awards, including the Guggenheim and Alfred P. Sloan fellowships, the IEEE Kiyo Tomiyasu Award, a Schmidt Sciences AI2050 Senior Fellowship, the Distinguished Alumnus Award from the Indian Institute of Technology Madras, the NSF CAREER Award, best-paper awards at venues such as Neural Information Processing Systems (NeurIPS), and the ACM Gordon Bell Special Prize for HPC-Based COVID-19 Research. She recently presented her work on AI+Science to the White House Science Council (PCAST), the National AI Advisory Committee, and TED 2024.

She received her B.Tech from the Indian Institute of Technology Madras and her Ph.D. from Cornell University, and did her postdoctoral research at MIT. She was previously principal scientist at Amazon Web Services and senior director of AI research at NVIDIA.





Lucy Smith is Senior Managing Editor for AIhub.


















 











