AIhub.org
 

Researching more data efficient machine learning models


12 October 2023




By Sarah Collins

Researchers have developed a machine learning algorithm that can model complex equations in real-world situations while using far less training data than is normally expected.

The researchers, from the University of Cambridge and Cornell University, found that for partial differential equations – a class of physics equations that describe how things in the natural world evolve in space and time – machine learning models can produce reliable results even when they are provided with limited data.

Their results, reported in the Proceedings of the National Academy of Sciences, could be useful for constructing more time- and cost-efficient machine learning models for applications such as engineering and climate modelling.

Most machine learning models require large amounts of training data before they can begin returning accurate results. Traditionally, a human will annotate a large volume of data – such as a set of images, for example – to train the model.

“Using humans to train machine learning models is effective, but it’s also time-consuming and expensive,” said first author Dr Nicolas Boullé. “We’re interested to know exactly how little data we actually need to train these models and still get reliable results.”

Other researchers have been able to train machine learning models with a small amount of data and get excellent results, but how this was achieved has not been well-explained. For their study, Boullé and his co-authors, Diana Halikias and Alex Townsend from Cornell University, focused on partial differential equations (PDEs).

“PDEs are like the building blocks of physics: they can help explain the physical laws of nature, such as how the steady state is held in a melting block of ice,” said Boullé. “Since they are relatively simple models, we might be able to use them to make some generalisations about why these AI techniques have been so successful in physics.”

The researchers found that PDEs that model diffusion have a structure that is useful for designing AI models. “Using a simple model, you might be able to enforce some of the physics that you already know into the training data set to get better accuracy and performance,” said Boullé.

The researchers constructed an efficient algorithm for predicting the solutions of PDEs under different conditions by exploiting the short- and long-range interactions in the system. This allowed them to build mathematical guarantees into the model and determine exactly how much training data was required to end up with a robust model.
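As a rough illustration of the idea (a toy sketch, not the authors' algorithm), consider learning the solution operator of a discretised 1D Poisson equation from only a handful of forcing/solution pairs. The assumption here — that the forcings live in a low-dimensional space of smooth functions — stands in for the structure the paper exploits rigorously; under it, a plain least-squares fit recovers the operator from just 12 examples:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64                      # interior grid points on (0, 1)
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)

# Discrete 1D Laplacian with Dirichlet boundaries: -u'' = f  ->  A u = f
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

# Smooth forcings: random combinations of the first 8 sine modes
modes = np.sin(np.pi * np.outer(np.arange(1, 9), x))   # shape (8, n)

def random_forcing():
    return rng.standard_normal(8) @ modes

# "Training data": only 12 forcing/solution pairs
F = np.column_stack([random_forcing() for _ in range(12)])
U = np.linalg.solve(A, F)

# Fit the solution operator by least squares: G_hat acts like A^{-1}
# on the span of the training forcings
G_hat = U @ np.linalg.pinv(F)

# Predict the solution for an unseen smooth forcing
f_new = random_forcing()
u_true = np.linalg.solve(A, f_new)
u_pred = G_hat @ f_new
rel_err = np.linalg.norm(u_pred - u_true) / np.linalg.norm(u_true)
print(f"relative error: {rel_err:.2e}")   # tiny: 12 pairs pin down the operator here
```

The point of the sketch is that the number of samples needed tracks the effective dimension of the problem, not the grid size: 12 pairs suffice for a 64-point grid because the forcings share structure, loosely mirroring the data-efficiency result in the paper.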

“It depends on the field, but for physics, we found that you can actually do a lot with a very limited amount of data,” said Boullé. “It’s surprising how little data you need to end up with a reliable model. Thanks to the mathematics of these equations, we can exploit their structure to make the models more efficient.”

The researchers say that their techniques will allow data scientists to open the ‘black box’ of many machine learning models and design new ones that can be interpreted by humans, although future research is still needed.

“We need to make sure that models are learning the right things, but machine learning for physics is an exciting field – there are lots of interesting maths and physics questions that AI can help us answer,” said Boullé.

Read the research in full

Elliptic PDE learning is provably data-efficient, Nicolas Boullé, Diana Halikias, and Alex Townsend, PNAS (2023).




University of Cambridge

            AIhub is supported by:



Subscribe to AIhub newsletter on substack



©2026 - Association for the Understanding of Artificial Intelligence