Can machine learning learn new physics?


by Lucy Smith
18 June 2020




Can machine learning learn new physics – or do we have to put it in by hand? A workshop organised by Ilya Nemenman (Emory University), featuring a number of experts in the field, aimed to find out.

There has been a rapid increase in research using machine learning to elucidate experimental data from a range of physical systems, from quantum to biological, from statistical to social. However, can these methods discover fundamentally new physics? Is it unrealistic to expect machine learning systems to be able to infer new physics without specifically adapting them to find what we are looking for? What minimal knowledge do these systems need in order to make discoveries, and how would we go about providing it?

These questions, and more, were explored by the eight speakers below in the context of diverse systems, ranging from general theoretical advances to specific applications. Each speaker delivered a 10–15 minute talk, followed by questions and discussion. The speakers discussed some of their current research in the field and opined on where the field is heading, and what is needed to get us there.

The speakers

Aleksandra Walczak (CNRS/ENS Paris) – Generative models of immune repertoires
David Schwab (CUNY) – Renormalizing data
Sam Greydanus (Google Brain) – Nature’s cost function
Max Tegmark (MIT) – Symbolic regression & pregression
Bryan Daniels (Arizona State University) – Inferring logic, not just dynamical models
Andrea Liu (University of Pennsylvania) – Doing “statistical mechanics” with big data
Roger Melko (University of Waterloo) – Machine learning and the complexity of quantum simulation
Lucy Colwell (Cambridge University) – Using simple models to explore the sequence plasticity of viral capsids

You can watch the original live version of the workshop, complete with the chat as it happened in real time, on the Emory TMLS YouTube channel.




Lucy Smith is Senior Managing Editor for AIhub.






