AIhub.org
 

Interview with AAAI Fellow Anima Anandkumar: Neural Operators for science and engineering problems


by
20 August 2024




Each year the Association for the Advancement of Artificial Intelligence (AAAI) recognizes a group of individuals who have made significant, sustained contributions to the field of artificial intelligence by appointing them as Fellows. We’ve been talking to some of the 2024 AAAI Fellows to find out more about their research. In this interview, we meet Anima Anandkumar and find out about her work on Neural Operators, of which she is the inventor.

Why are Neural Operators so powerful, and how do they enable such an advance (over previous neural network methods) for science and engineering problems?

Neural Operators are able to learn complex physical phenomena that occur at multiple resolutions while standard neural networks are unable to do so. Standard neural networks use a fixed number of pixels or resolution to learn a phenomenon, while neural operators represent data as continuous functions. Hence, neural operators can capture fine details that neural networks cannot. You can draw an analogy between vector and raster graphics. With raster graphics, as you zoom in, it gets blurry since it uses a fixed number of pixels. In contrast, with vector graphics, it remains sharp even upon zooming, since it represents shapes via continuous functions.
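The resolution-invariance described above can be made concrete with a toy sketch. Below is a minimal, hypothetical single Fourier-style operator layer in NumPy: the learnable weights live on a fixed number of low-frequency Fourier modes rather than on pixels, so the same layer applies to inputs sampled at any resolution. This is an illustration of the idea only, not the actual Neural Operator implementation; `N_MODES` and the random weights are placeholders, and a real model would stack many such layers with nonlinearities and train the weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single (untrained) Fourier layer. Its parameters are
# per-mode complex weights, independent of the input's grid size.
N_MODES = 8  # number of low-frequency modes the layer parameterizes
weights = rng.standard_normal(N_MODES) + 1j * rng.standard_normal(N_MODES)

def fourier_layer(u):
    """Multiply the lowest Fourier modes of u by learned weights, then invert."""
    n = len(u)
    u_hat = np.fft.rfft(u)                 # to frequency space
    out_hat = np.zeros_like(u_hat)
    k = min(N_MODES, len(u_hat))
    out_hat[:k] = weights[:k] * u_hat[:k]  # act on the kept modes only
    return np.fft.irfft(out_hat, n=n)      # back to physical space

# The same layer runs on a coarse and a fine discretization of the
# same underlying continuous function, sin(2*pi*x):
coarse = np.sin(2 * np.pi * np.linspace(0, 1, 64, endpoint=False))
fine = np.sin(2 * np.pi * np.linspace(0, 1, 256, endpoint=False))

y_coarse = fourier_layer(coarse)  # shape (64,)
y_fine = fourier_layer(fine)      # shape (256,)
```

Because the weights parameterize a function of frequency rather than a stencil of pixels, `y_fine` agrees with `y_coarse` at the shared grid points: the layer represents one continuous operator that can be queried at any resolution, which is the vector-graphics behaviour described in the analogy.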

Neural Operators are relevant for science and engineering problems that happen at multiple scales. For instance, in order to predict how a hurricane develops, we need to capture the fine details: just a coarse view of the hurricane is not enough. Neural Operators can capture these details accurately while being much faster (4-6 orders of magnitude) than traditional numerical models.

Are there particular types of problems for which neural operators are particularly well suited? Likewise, problems where they are not so well suited?

Neural Operators extend standard neural networks to continuous functions, and hence, are a more general class of architectures. They can be applied in any learning scenario, but are particularly effective at learning phenomena not limited to just one resolution.

Looking at one application in particular: weather forecasting. Could you talk about your model FourCastNet and the contributions this work has made to the world of forecasting?

FourCastNet was the first fully AI-based weather model and was built using Neural Operators. It is tens of thousands of times faster than traditional weather models, while also being accurate. While a traditional weather model requires a big supercomputer to run, FourCastNet runs on a gaming GPU and produces a two-week forecast in under a minute. It is now running at ECMWF, the European weather agency, and its prediction charts are available for everyone to check. Our model is also open-sourced and can be downloaded and run easily by anyone. During the recent Hurricane Beryl, FourCastNet had lower uncertainty than traditional models and hence accurately predicted the storm's path.

Besides weather forecasting, you’ve applied the neural operators framework to many domains. Are there others that you’d like to highlight?

Indeed, we have applied them widely. Neural Operators are a general AI technique for solving Partial Differential Equations (PDEs), which are the “workhorse” of scientific modeling. We have applied them to modeling plasma evolution in nuclear fusion, where they are more than a million times faster than numerical simulations. As such, they can be used to predict and prevent disruptions in the fusion reactor, bringing us closer to the dream of sustainable fusion. We have also applied them to modeling carbon dioxide storage in underground reservoirs, again about a million times faster than traditional simulations. This allows us to plan for climate change mitigation.

We also recently designed a novel medical catheter using Neural Operators. A catheter is a tube used to draw fluids out of the human body, but bacteria tend to swim upstream into the human body causing infections. We trained a Neural Operator model to understand fluid flow and bacterial activity, and it was able to generate an optimal design of ridges on the inside of the tube that creates vortices and helps prevent bacteria from swimming into the human body. We 3D printed and tested it in the lab, and recorded a hundred-fold reduction in bacterial contamination. Hence, Neural Operators are not only effective for simulation, but also for inverse design since they are differentiable.
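The inverse-design loop this relies on can be sketched in a few lines. The toy below stands in for the idea only: a differentiable surrogate maps a design parameter (here a single hypothetical ridge-geometry parameter `h`) to a predicted objective, and gradient descent on the surrogate finds a good design. The quadratic `surrogate` and its hand-coded gradient are entirely made up for illustration; in the actual work, the surrogate is a trained Neural Operator and the gradient comes from backpropagation through it.

```python
# Toy illustration of inverse design through a differentiable surrogate.
# Everything here is hypothetical: a real Neural Operator surrogate would
# replace `surrogate`, and autodiff would replace `surrogate_grad`.

def surrogate(h):
    """Stand-in predicted contamination as a function of design parameter h."""
    return (h - 1.3) ** 2 + 0.05   # minimal contamination near h = 1.3

def surrogate_grad(h):
    """Hand-coded gradient of the stand-in surrogate."""
    return 2.0 * (h - 1.3)

h = 0.0                            # initial design guess
for _ in range(200):
    h -= 0.1 * surrogate_grad(h)   # gradient descent on the surrogate

# h converges to the surrogate's optimal design (h = 1.3 in this toy)
```

The key point is the one made in the interview: because the surrogate is differentiable end to end, design optimization can follow gradients directly instead of exhaustively simulating candidate designs.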

In your recent TED Talk you outlined a vision for a general AI model for scientific discovery. Are you able to talk a bit about your current progress towards this?

I believe that the future of science is AI+Science: AI that deeply understands the physical world and is able to simulate and design without hallucinations. That will be a big leap from the current state of science, which is mostly a trial-and-error process in which the bottleneck is the time and cost of physical experiments. Having AI replace many of those experiments will be game changing. Language models are unable to do so since they have no physical grounding. This requires a generalist AI model that understands a wide range of physical phenomena. Neural Operators serve as the backbone for such an AI model for universal physical understanding.

About Anima

Anima Anandkumar has done pioneering work on AI algorithms that have revolutionized scientific domains, including weather and climate modeling, drug discovery, scientific simulations, and engineering design. She invented Neural Operators, which extend deep learning to modeling multi-scale processes in these scientific domains. She developed the first AI-based high-resolution weather model, which showed competitive performance with current practice while being tens of thousands of times faster, and which is deployed at the European Centre for Medium-Range Weather Forecasts (ECMWF), a premier global weather agency.

Anima is a Bren Professor at Caltech and a Fellow of the AAAI, IEEE, and ACM. She has received several awards, including the Guggenheim and Alfred P. Sloan fellowships, the IEEE Kiyo Tomiyasu Award, a Schmidt Sciences AI2050 Senior Fellowship, the Distinguished Alumnus Award from the Indian Institute of Technology Madras, the NSF CAREER Award, best paper awards at venues such as Neural Information Processing Systems (NeurIPS), and the ACM Gordon Bell Special Prize for HPC-Based COVID-19 Research. She recently presented her work on AI+Science to the White House Science Council (PCAST), the National AI Advisory Committee, and at TED 2024.

She received her B. Tech from the Indian Institute of Technology Madras and her Ph.D. from Cornell University and did her postdoctoral research at MIT. She was previously principal scientist at Amazon Web Services and senior director of AI research at NVIDIA.





Lucy Smith is Senior Managing Editor for AIhub.




