On 31st March, our editorial team headed to the Royal Society for AI for Science, a day-long conference exploring how AI is changing the nature of scientific discovery, hosted by the Fundamental Research team at the Alan Turing Institute. Nestled in a terrace of 19th-century townhouses overlooking St James’s Park, the Royal Society looks as grand as the names that have passed through its doors over the years.
The Royal Society at 6-9 Carlton House Terrace. Image credits: Lucy Smith
Prof Jason McEwen, Chief Scientist at the Alan Turing Institute, opened the event with an insightful talk on the nature of scientific revolutions, and how the bidirectional relationship between AI and science could spark the next one.
Then, Prof Anna Scaife from the University of Manchester spoke on the use of foundation models for astronomical discovery. Foundation models are large AI models, pretrained on broad data, which can be adapted to a wide variety of tasks and inputs. This suits astronomy well, where data typically spans many modalities.
Galaxy Zoo is a citizen-science project in which volunteers label images of galaxies, helping astronomers make sense of the masses of data collected by telescopes. Using a Galaxy Zoo dataset of 300k galaxies, astronomers built Zoobot – a neural network which classifies galaxies by their morphology. This has led to the discovery of 40k ring galaxies, which were previously thought to be very rare, as well as the identification of median-age, “green valley” galaxies.
Image credits: NASA and The Hubble Heritage Team (STScI/AURA); Acknowledgment: Ray A. Lucas (STScI/AURA)
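For a flavour of what a morphology classifier looks like in practice, here is a minimal PyTorch sketch of our own – the architecture and label set below are illustrative, not Zoobot’s actual design or API:

```python
import torch
import torch.nn as nn

# Illustrative sketch only - not Zoobot's actual architecture, API, or labels.
MORPHOLOGY_CLASSES = ["smooth", "featured", "ring"]  # hypothetical label set

class GalaxyMorphologyCNN(nn.Module):
    def __init__(self, n_classes: int = len(MORPHOLOGY_CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 64x64 -> 32x32
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 32x32 -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, n_classes),  # one logit per morphology class
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Four fake 64x64 RGB cutouts stand in for real telescope images.
images = torch.randn(4, 3, 64, 64)
predictions = GalaxyMorphologyCNN()(images).argmax(dim=1)
print([MORPHOLOGY_CLASSES[i] for i in predictions])
```

Trained on hundreds of thousands of volunteer labels, a network like this can then sweep through millions of unlabelled images far faster than any human team.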
Next, Prof Aron Walsh from Imperial College discussed how his team is using AI to discover novel materials. Diffusion models can be used to design crystal compounds, in a process similar to image generation – random noise is gradually removed until a crystal structure with the desired properties emerges. While traditional computational chemistry methods take days, this AI-enabled approach takes just milliseconds.
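Here is a toy sketch of that denoising loop – a one-dimensional analogue of our own devising, since real crystal-generation models operate on atom positions, lattice vectors and element types:

```python
import numpy as np

rng = np.random.default_rng(0)

# In a real crystal-generation model, `predict_noise` would be a trained
# neural network. Here it is a placeholder that treats "all zeros" as the
# clean structure, purely for illustration.
def predict_noise(x, t):
    return x  # hypothetical stand-in, not a trained model

n_steps = 50
betas = np.linspace(1e-4, 0.05, n_steps)  # noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

x = rng.standard_normal(8)  # start from pure noise (8 toy "coordinates")
for t in reversed(range(n_steps)):
    eps = predict_noise(x, t)
    # DDPM-style update: strip out the predicted noise at this step...
    x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
    if t > 0:
        # ...then re-inject a little noise, less and less as t -> 0.
        x += np.sqrt(betas[t]) * rng.standard_normal(8)

print(x)  # values have shrunk toward the placeholder's "clean" structure
```

Swap the placeholder for a network trained on known stable crystals, and each run of the loop proposes a brand-new candidate material.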
Another promising application of AI is in climate science and prediction. Dr Scott Hosking, Mission Director of Environmental Forecasting at the Alan Turing Institute, outlined the IceNet programme – the first AI-based model to forecast sea-ice levels weeks to months ahead – which has already made an impact in Arctic conservation. He then discussed the FastNet model, which aims to become the UK’s first operational AI-based weather model. Developed in collaboration with the Met Office, it learns from a 40-year weather record, and has already been shown to outperform physics-based models in data-sparse regions, such as West Africa.
Photo credits: Lucy Smith
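At its heart, this kind of data-driven forecasting means learning a mapping from past atmospheric fields to future ones. A toy sketch of the idea (ours, not FastNet’s architecture) might look like this:

```python
import torch
import torch.nn as nn

# Toy illustration: learn to map today's gridded field (say, temperature on
# a lat/lon grid) to tomorrow's. Everything here is synthetic - not FastNet.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),  # predicted next-day field
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)

# Synthetic "historical record": fields that simply drift eastward one cell.
today = torch.randn(32, 1, 16, 16)
tomorrow = today.roll(1, dims=-1)

for step in range(200):  # fit the emulator to the fake record
    loss = nn.functional.mse_loss(model(today), tomorrow)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()

print(loss.item())  # small loss: the network has learned the toy dynamics
```

The real systems work with decades of reanalysis data on global grids, but the principle is the same: let the network absorb the dynamics from the record itself rather than solving the physics equations at forecast time.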
We then heard about the power of AI for cracking nuclear fusion from Dr Lorenzo Zanisi, a Lead Data Scientist at the UK Atomic Energy Authority. Keeping plasma – a gas of charged particles – hot and dense enough for fusion to happen is a big challenge. There is a reason why it only happens in stars! So far, scientists have had to rely on expensive simulations to model the physics, which can take up to 350 hours to run. AI simulators of nuclear fusion take only milliseconds, promising to accelerate research and bring us closer to a fusion-powered future.
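The underlying pattern is surrogate modelling: run the expensive simulator offline to generate training data, then fit a fast neural network to stand in for it. A minimal sketch, with an entirely made-up “simulation”:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Stand-in for an expensive physics simulation, mapping two made-up plasma
# parameters to an output quantity. Entirely hypothetical, for illustration.
def slow_simulation(params):
    return np.sin(params[:, 0]) * np.exp(-params[:, 1] ** 2)

# Run the costly simulator offline to build a training set...
X_train = rng.uniform(-2, 2, size=(2000, 2))
y_train = slow_simulation(X_train)

# ...then fit a neural-network surrogate that evaluates near-instantly.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
surrogate.fit(X_train, y_train)

X_new = rng.uniform(-2, 2, size=(5, 2))
print(surrogate.predict(X_new))   # milliseconds per query
print(slow_simulation(X_new))     # what the full simulation would say
```

Once trained, the surrogate can be queried millions of times – for optimisation or real-time control – at a cost the original simulation could never support.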
Dr Miles Cranmer from the University of Cambridge posed the question: why is physics orders of magnitude better at generalising than machine learning? Tasks that humans regard as simple tend not to be simple when we try to program them into machines. Dr Cranmer argued that what we consider ‘simple’ is really a stand-in for ‘useful’ – simplicity is an aesthetic judgement, not a measure of how easy a task actually is. On this basis, he argued that AI does not generalise well because it must learn the world from scratch, whereas humans can reuse useful concepts they have already learned. Perhaps the most powerful AI models will be those which can learn reusable concepts from data. This principle drives his work at Polymathic AI, a joint effort by NYU and Cambridge to pool their computing resources and train physics-based foundation models at industrial scale.
The day concluded with a panel discussion between all the speakers, who highlighted that interpretability is key for AI models, yet there is a trade-off between the interpretability and performance of LLMs. The panellists also emphasised the importance of pretraining: there is no such thing as a non-pretrained model – a model with random weights is simply a worse initialisation.
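In code, that point about pretraining is the familiar transfer-learning recipe: start from pretrained weights rather than random ones. A quick sketch using torchvision (the three-class downstream task here is hypothetical):

```python
import torch.nn as nn
from torchvision import models

# Two ways to initialise the same architecture: pretrained weights
# (transfer learning) versus random weights (training from scratch).
pretrained = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
from_scratch = models.resnet18(weights=None)

# Swap the final layer for a hypothetical 3-class downstream task. Starting
# from the pretrained backbone typically needs far less data and compute to
# reach the same accuracy as the randomly initialised copy.
pretrained.fc = nn.Linear(pretrained.fc.in_features, 3)
from_scratch.fc = nn.Linear(from_scratch.fc.in_features, 3)
```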
AI for Science proved to be an insightful and inspiring day, showcasing some of the most promising applications of AI in scientific research across a variety of domains. Machine learning is unlocking the huge datasets collected over decades in weather and cosmology, and speeding up simulations in nuclear fusion and crystal design – scientists are already making huge strides. Who knows what the future holds?
If you’d like to learn more, you can watch a video of the speakers’ reflections here: