Can machine learning learn new physics?

18 June 2020

Can machine learning learn new physics – or do we have to put it in by hand? A workshop organised by Ilya Nemenman (Emory University), and featuring a number of experts in the field, aimed to find out more.

There has been a rapid increase in research using machine learning to elucidate experimental data from a range of physical systems, from quantum to biological, from statistical to social. However, can these methods discover fundamentally new physics? Is it unrealistic to expect machine learning systems to be able to infer new physics without specifically adapting them to find what we are looking for? What minimal knowledge do these systems need in order to make discoveries and how would we go about doing this?

These questions, and more, were explored by the eight speakers below in the context of diverse systems, ranging from general theoretical advances to specific applications. Each speaker delivered a 10-15 minute talk, followed by questions and discussion. The speakers presented some of their current research in the field and opined on where the field is heading and what is needed to get us there.

The speakers

Aleksandra Walczak (CNRS/ENS Paris) – Generative models of immune repertoires
David Schwab (CUNY) – Renormalizing data
Sam Greydanus (Google Brain) – Nature’s cost function
Max Tegmark (MIT) – Symbolic regression & pregression
Bryan Daniels (Arizona State University) – Inferring logic, not just dynamical models
Andrea Liu (University of Pennsylvania) – Doing “statistical mechanics” with big data
Roger Melko (University of Waterloo) – Machine learning and the complexity of quantum simulation
Lucy Colwell (Cambridge University) – Using simple models to explore the sequence plasticity of viral capsids

You can watch the original live version of the workshop, complete with the chat as it happened in real time, on the Emory TMLS YouTube channel.

Lucy Smith, Managing Editor for AIhub.
