Using deep learning to help distinguish dark matter from cosmic noise

18 September 2024




Still image from simulation of the formation of dark matter structures from the early universe to today. Gravity makes dark matter clump into dense halos, indicated by bright patches, where galaxies form. In this simulation, a halo like the one that hosts the Milky Way forms and a smaller halo resembling the Large Magellanic Cloud falls toward it. SLAC and Stanford researchers, working with collaborators from the Dark Energy Survey, have used simulations like these to better understand the connection between dark matter and galaxy formation. Credit: Ralf Kaehler/Ethan Nadler/SLAC National Accelerator Laboratory.

By Nik Papageorgiou

Dark matter is the invisible force holding the universe together – or so we think. It makes up around 85% of all matter and around 27% of the universe’s contents, but since we can’t see it directly, we have to study its gravitational effects on galaxies and other cosmic structures. Despite decades of research, the true nature of dark matter remains one of science’s most elusive questions.

According to a leading theory, dark matter might be a type of particle that barely interacts with anything else, except through gravity. But some scientists believe these particles could occasionally interact with each other, a phenomenon known as self-interaction. Detecting such interactions would offer crucial clues about dark matter’s properties.

However, distinguishing the subtle signs of dark matter self-interactions from other cosmic effects, like those caused by active galactic nuclei (AGN) – the supermassive black holes at the centers of galaxies – has been a major challenge. AGN feedback can push matter around in ways that closely resemble the effects of dark matter self-interactions, making it difficult to tell the two apart.

Astronomer David Harvey at EPFL’s Laboratory of Astrophysics has developed a deep-learning algorithm that can help untangle these complex signals. His machine-learning method is designed to differentiate between the effects of dark matter self-interactions and those of AGN feedback by analyzing images of galaxy clusters – vast collections of galaxies bound together by gravity. The work promises to enhance the precision of dark matter studies.

Harvey trained a convolutional neural network (CNN) with images from the BAHAMAS-SIDM project, which models galaxy clusters under different dark matter and AGN feedback scenarios. Fed thousands of simulated galaxy cluster images, the CNN learned to distinguish between the signals caused by dark matter self-interactions and those caused by AGN feedback.
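The paper itself defines the exact networks and training data; purely as an illustration of the general setup described here – a CNN classifying simulated cluster images into two classes – a minimal sketch in Python with Keras might look like the following. The image size, layer sizes and placeholder training arrays are assumptions for demonstration only, not values taken from the study.

```python
# Minimal illustrative sketch, NOT the architecture used in the paper:
# a small CNN that classifies single-channel cluster images into two classes
# (self-interacting dark matter vs AGN-feedback-only). All sizes are assumed.
import numpy as np
from tensorflow.keras import layers, models

IMG_SIZE = 128  # assumed pixel size of each simulated cluster image

model = models.Sequential([
    layers.Input(shape=(IMG_SIZE, IMG_SIZE, 1)),   # one-channel map of the cluster
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),          # probability of the SIDM class
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Placeholder arrays standing in for the simulated images and their labels.
x_train = np.random.rand(256, IMG_SIZE, IMG_SIZE, 1).astype("float32")
y_train = np.random.randint(0, 2, size=(256, 1))
model.fit(x_train, y_train, epochs=5, batch_size=32)
```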

Among the various CNN architectures tested, the most complex – dubbed “Inception” – also proved to be the most accurate. The model was trained on two primary dark matter scenarios, featuring different levels of self-interaction, and validated on additional models, including a more complex, velocity-dependent dark matter model.

Inception achieved an impressive accuracy of 80% under ideal conditions, effectively identifying whether galaxy clusters were influenced by self-interacting dark matter or AGN feedback. It maintained its high performance even when the researchers introduced realistic observational noise that mimics the kind of data we expect from future telescopes like Euclid.
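The article does not spell out how the observational noise was modelled; as a rough illustration of the general idea – degrading clean simulated images before testing the classifier on them – a sketch might look like this. The point-spread-function width and noise level below are arbitrary placeholder assumptions, not values from the study or the Euclid specification.

```python
# Illustrative sketch only: mimic observational degradation of a simulated image.
# A Gaussian blur stands in for a telescope point-spread function, and additive
# Gaussian noise stands in for pixel noise. Parameter values are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(image, psf_sigma=2.0, noise_std=0.05, seed=None):
    rng = np.random.default_rng(seed)
    blurred = gaussian_filter(image, sigma=psf_sigma)          # PSF-like smoothing
    return blurred + rng.normal(0.0, noise_std, image.shape)   # additive pixel noise

# Example: degrade a batch of placeholder "cluster images" before evaluation.
clean = np.random.rand(32, 128, 128)
noisy = np.stack([degrade(img, seed=i) for i, img in enumerate(clean)])
```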

What this means is that Inception – and this approach more generally – could be extremely useful for analyzing the massive amounts of data we collect from space, making it a promising tool for future dark matter research.

Read the research in full

A deep-learning algorithm to disentangle self-interacting dark matter and AGN feedback models, David Harvey, 2024.




