AI for Science – from cosmology to chemistry


by Ella Scallan
01 May 2026





On 31st March, our editorial team headed to the Royal Society for AI for Science. This day-long conference explored how AI is changing the nature of scientific discovery, and was hosted by the Fundamental Research team from the Alan Turing Institute. Nestled in a terrace of 19th-century townhouses along the banks of the Thames, the Royal Society looks as grand as the names that have passed through its doors over the years.

The Royal Society at 6-9 Carlton House Terrace. Image credits: Lucy Smith

Prof Jason McEwen, Chief Scientist for the Turing Institute, opened the event with an insightful talk on the nature of scientific revolution, and how the bidirectional relationship between AI and science could spark the next one.

Then, Prof Anna Scaife from the University of Manchester spoke on the use of foundation models for astronomical discovery. Foundation models are generative AI models that can be applied to a wide variety of tasks and inputs. This makes them a natural fit for astronomy, where data typically span many modalities.

Galaxy Zoo is a citizen science project in which volunteers label images of galaxies, helping astronomers make sense of the masses of data collected by telescopes. Using a Galaxy Zoo dataset of 300k galaxies, astronomers built Zoobot – a neural network that categorises galaxies by their morphology. This has led to the discovery of 40k ring galaxies, which were previously thought to be very rare, as well as the identification of intermediate-age, “green valley” galaxies.

A ring galaxy - only 1 in 10,000 galaxies are ring galaxies. Image credits: NASA and The Hubble Heritage Team (STScI/AURA); Acknowledgment: Ray A. Lucas (STScI/AURA)
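The kind of supervised morphology classification Zoobot performs can be sketched in miniature. The toy below is not Zoobot – it is a numpy-only logistic regression on synthetic "images" with two hypothetical class labels – but it shows the same basic pipeline: labelled examples in, a trained classifier out.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for labelled Galaxy Zoo cutouts: two synthetic "morphology"
# classes (hypothetical labels: 0 = smooth, 1 = ring-like), drawn from
# different Gaussian blobs in a flattened 64-pixel space.
n_per_class, n_pixels = 200, 64
smooth = rng.normal(loc=0.0, scale=1.0, size=(n_per_class, n_pixels))
ringed = rng.normal(loc=1.5, scale=1.0, size=(n_per_class, n_pixels))
X = np.vstack([smooth, ringed])
y = np.array([0] * n_per_class + [1] * n_per_class)

def sigmoid(z):
    # clipping keeps exp() from overflowing once the classes separate
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

# The simplest possible "morphology model": logistic regression trained
# by gradient descent, standing in for a deep network like Zoobot.
w, b = np.zeros(n_pixels), 0.0
for _ in range(300):
    p = sigmoid(X @ w + b)                 # predicted P(ring-like)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

accuracy = float(np.mean((sigmoid(X @ w + b) > 0.5) == y))
print(f"training accuracy: {accuracy:.2f}")
```

In the real system the classifier is a deep convolutional network and the labels come from hundreds of thousands of volunteer votes, but the training loop follows the same shape.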

Next, Prof Aron Walsh from Imperial College discussed how his team is using AI to discover novel materials. Diffusion models can be used to design crystal compounds, in a process similar to image generation – random noise is gradually removed until a crystal structure with the desired properties emerges. While traditional computational chemistry methods take days, this AI-enabled approach takes just milliseconds.
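The denoising idea can be illustrated with a deliberately tiny sketch. Here a "crystal" is just a hypothetical vector of lattice parameters, and the denoiser is hard-coded to pull samples toward one known structure – in a real diffusion model it would be a trained neural network – but the loop from pure noise to a clean structure is the same.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "crystal descriptor": cell lengths (a, b, c) and angles.
# A real generative model works on a full crystal representation; this
# fixed cubic cell exists only to give the denoiser a target.
target = np.array([5.43, 5.43, 5.43, 90.0, 90.0, 90.0])

def denoise_step(x, strength=0.1):
    """One reverse-diffusion step: remove a little noise by moving toward
    the (here, known) clean structure. A trained model would predict this."""
    return x + strength * (target - x)

# Start from pure noise and iteratively denoise, as in image generation.
x = rng.normal(size=target.shape) * 50.0
for step in range(100):
    x = denoise_step(x)

error = float(np.max(np.abs(x - target)))
print(f"max deviation from target cell after denoising: {error:.4f}")
```

Each step shrinks the remaining noise by a constant factor, so after a hundred steps the sample has converged to a clean structure – the toy counterpart of noise gradually resolving into a crystal with the desired properties.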

Another promising application of AI is climate science and prediction. Dr Scott Hosking, Mission Director of Environmental Forecasting at the Alan Turing Institute, outlined the IceNet programme – the first AI-based model to forecast sea-ice levels weeks to months ahead – which has already made an impact in Arctic conservation. He then discussed the FastNet model, which aims to become the UK’s first operational AI weather model. Developed in collaboration with the Met Office and trained on a 40-year weather record, it has already been shown to outperform physics-based models in data-sparse regions such as West Africa.

The lecture theatre where AI for Science took place. Photo credits: Lucy Smith

We then heard about the power of AI for cracking nuclear fusion from Dr Lorenzo Zanisi, a Lead Data Scientist at the UK Atomic Energy Authority. Keeping plasma – charged gas particles – hot and dense enough for fusion to happen is a big challenge. There is a reason why it only happens in stars! So far, scientists have had to rely on expensive simulations to model the physics, which can take up to 350 hours to run. AI simulators of nuclear fusion take only milliseconds, promising to accelerate research and to bring us closer to a fusion-powered future.
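The surrogate-modelling pattern behind such AI simulators can be sketched as follows: run the expensive simulator a modest number of times, fit a cheap statistical emulator to the results, then query the emulator instead. Everything below is illustrative – the "simulator" is a toy function of two made-up control parameters, and the emulator is a simple polynomial fit rather than a neural network – but the workflow mirrors the real one.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for an expensive plasma simulation: a smooth scalar response to
# two hypothetical control parameters (say, heating power and field strength).
def expensive_simulator(power, field):
    return np.sin(power) * np.cos(field) + 0.1 * power * field

# Run the "simulator" on a modest training grid. In reality each run could
# take hundreds of CPU-hours; here it is instant.
P, F = np.meshgrid(np.linspace(0, 2, 20), np.linspace(0, 2, 20))
X = np.column_stack([P.ravel(), F.ravel()])
y = expensive_simulator(X[:, 0], X[:, 1])

# Cheap emulator: tensor-product polynomial features up to cubic in each
# parameter, fitted by ordinary least squares.
def features(X, deg=3):
    p, f = X[:, 0], X[:, 1]
    return np.column_stack([p**i * f**j
                            for i in range(deg + 1) for j in range(deg + 1)])

coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)

# Querying the surrogate is a single matrix product -- milliseconds, not hours.
X_test = rng.uniform(0, 2, size=(100, 2))
pred = features(X_test) @ coef
truth = expensive_simulator(X_test[:, 0], X_test[:, 1])
rmse = float(np.sqrt(np.mean((pred - truth) ** 2)))
print(f"surrogate RMSE on held-out points: {rmse:.4f}")
```

The fusion surrogates described in the talk replace the polynomial with a neural network trained on real simulation outputs, but the speed-up comes from the same trade: pay the simulation cost once, up front, then make predictions almost for free.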

Dr Miles Cranmer from the University of Cambridge posed the question: why is physics orders of magnitude better at generalising than machine learning? Tasks that humans regard as simple tend not to be simple when we try to program them into machines. Dr Cranmer argued that what we consider ‘simple’ is really a substitute for ‘useful’ – simplicity is an aesthetic concept, not a reflection of how simple a task really is. On this basis, he argued that AI does not generalise as well because it must learn the world from scratch, whereas humans can reuse useful concepts they have already learned. Perhaps the most powerful AI models will be those that can learn reusable concepts from data. This principle drives his work at Polymathic AI, a joint effort between NYU and Cambridge to pool computing resources and train physics-based foundation models at industrial scale.

The day concluded with a panel discussion with all the speakers, who highlighted that interpretability is key for AI models, yet there is a trade-off between the interpretability and performance of LLMs. The panellists also emphasised the importance of pretraining: there is no such thing as a non-pretrained model – random weights are simply a worse initialisation.

AI for Science proved to be an insightful and inspiring day, showcasing some of the most promising applications of AI in scientific research across a variety of domains. Machine learning is unlocking insights from the huge weather and cosmology datasets collected over decades, and speeding up simulations in nuclear fusion and crystal design – scientists are already making huge strides. Who knows what the future holds?

If you’d like to learn more, you can watch a video of the speakers’ reflections here:




Ella Scallan is Assistant Editor for AIhub



