AIhub.org
 

Using AI to speed up landslide detection


11 August 2025




Rescue teams at one of the landslides following the Taiwan earthquake. Credit: Taitung County Government via Wikimedia Commons

On 3 April 2024, a magnitude 7.4 quake—Taiwan’s strongest in 25 years—shook the country’s eastern coast. Stringent building codes spared most structures, but mountainous and remote villages were devastated by landslides.

When disasters affect large and inaccessible areas, responders often turn to satellite images to pinpoint affected areas and prioritise relief efforts.

But mapping landslides from satellite imagery by eye can be time-intensive, said Lorenzo Nava, who is jointly based at Cambridge’s Departments of Earth Sciences and Geography. “In the aftermath of a disaster, time really matters,” he said. Using AI, he identified 7,000 landslides after the Taiwan earthquake within three hours of the satellite imagery being acquired.

Since the Taiwan earthquake, Nava has been developing his AI method alongside an international team. By employing a suite of satellite technologies—including satellites that can see through clouds and at night—the researchers hope to enhance AI’s landslide detection capabilities.

Multiplying hazards

Triggered by major earthquakes or intense rainfall, landslides are often worsened by human activities such as deforestation and construction on unstable slopes. In certain environments, they can trigger additional hazards such as fast-moving debris flows or severe flooding, compounding their destructive impact.

Nava’s work fits into a larger effort at Cambridge to understand how landslides and other hazards can set off cascading ‘multihazard’ chains. The CoMHaz group, led by Maximillian Van Wyk de Vries, Professor of Natural Hazards in the Departments of Geography and Earth Sciences, draws on information from satellite imagery, computer modelling and fieldwork to locate landslides, understand why they happen and ultimately predict their occurrence.

They’re also working with communities to raise landslide awareness. In Nepal, Nava and Van Wyk de Vries teamed up with local scientists and the Climate and Disaster Resilience in Nepal (CDRIN) consortium to pilot an early warning system for Butwal, which sits beneath a massive unstable slope.

Improved AI detection

Nava is training AI to identify landslides in two types of satellite images—optical images of the ground surface and radar data, the latter of which can penetrate cloud cover and even acquire images at night.

Radar images can, however, be difficult to interpret: they use greyscale to depict contrasting surface properties, and landscape features can appear distorted. These challenges make radar data well suited to AI-assisted analysis, which can extract features that might otherwise go unnoticed.

By combining the cloud-penetrating capabilities of radar with the fidelity of optical images, Nava hopes to build an AI-powered model that can accurately spot landslides even in poor weather conditions.
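The fusion idea can be sketched in code. The following is a toy illustration of "early fusion", stacking optical bands with radar backscatter so a single classifier sees both, and falling back on radar alone when clouds blank out the optical view. Everything here is hypothetical: the feature layout, the thresholds, and the scoring rule are invented for illustration, and none of it is taken from Nava's actual model.

```python
def fuse_pixel(optical, sar_db, cloud_mask):
    """Build a feature vector for one pixel.

    optical    -- (red, green, blue) reflectances in [0, 1], or None
    sar_db     -- radar backscatter in decibels (unaffected by cloud)
    cloud_mask -- True if the optical view is blocked by cloud
    """
    if cloud_mask or optical is None:
        # Optical channels unusable: fall back to radar alone,
        # flagging the gap so a model could weight it accordingly.
        return [0.0, 0.0, 0.0, sar_db, 1.0]
    r, g, b = optical
    return [r, g, b, sar_db, 0.0]

def landslide_score(features):
    """Hypothetical score: freshly exposed soil tends to look bright
    in the red band, and a failed slope can show a characteristic
    drop in radar backscatter."""
    r, g, b, sar_db, cloudy = features
    spectral = max(0.0, r - g)                  # bare soil vs. vegetation
    radar = max(0.0, (-12.0 - sar_db) / 10.0)   # backscatter drop below -12 dB
    # Lean entirely on radar when the optical view is cloud-blocked.
    return radar if cloudy else 0.5 * spectral + 0.5 * radar

clear = fuse_pixel((0.6, 0.3, 0.2), -15.0, cloud_mask=False)
cloudy = fuse_pixel(None, -15.0, cloud_mask=True)
print(round(landslide_score(clear), 3))   # uses both optical and radar
print(round(landslide_score(cloudy), 3))  # radar-only fallback
```

The point of the sketch is the fallback path: the radar term is computed identically in both cases, which is what lets detection continue under cloud, at the cost of the spectral evidence.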

His trial following the 2024 Taiwan earthquake showed promise, detecting thousands of landslides that would otherwise have gone unnoticed beneath cloud cover. But Nava acknowledges that more work is needed to improve both the model’s accuracy and its transparency.

He wants to build trust in the model and ensure its outputs are interpretable and actionable by decision-makers. “Very often, the decision-makers are not the ones who developed the algorithm,” said Nava. “AI can feel like a black box. Its internal logic is not always transparent, and that can make people hesitant to act on its outputs.

“It’s important to make it easier for end users to evaluate the quality of AI-generated information before incorporating it into important decisions.”

This is something he is now addressing as part of a broader partnership with the European Space Agency (ESA), the World Meteorological Organization (WMO), the International Telecommunication Union’s AI for Good Foundation, and the Global Initiative on Resilience to Natural Hazards through AI Solutions.

At a recent working group meeting at the ESA Centre for Earth Observation in Italy, the researchers launched a data-science challenge to crowdsource efforts to improve the model. “We’re opening this up and looking for help from the wider coding community,” said Nava.

Beyond improving the model’s functionality, Nava says the goal is to incorporate features that explain its reasoning—potentially using visualisations such as maps that show the likelihood of an image containing landslides to help end users understand the outputs.
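A likelihood map of the kind described could be as simple as per-tile probabilities rendered as a coarse grid, so a responder can see at a glance where the model is confident. The sketch below is illustrative only: the scores are made up, and a real system would take them from the classifier and overlay them on actual imagery.

```python
def likelihood_map(scores, threshold=0.5):
    """Render a 2D list of per-tile landslide probabilities (0-1).

    Tiles at or above the alert threshold are marked '##'; the rest
    show their probability as a single digit (tenths), so low-risk
    context stays visible around the alerts.
    """
    lines = []
    for row in scores:
        cells = []
        for p in row:
            if p >= threshold:
                cells.append("##")
            else:
                # Small epsilon guards against float rounding (0.3*10).
                cells.append(f"{int(p * 10 + 1e-9)} ")
        lines.append("".join(cells))
    return lines

# Hypothetical per-tile probabilities for a 3x3 patch of imagery.
scores = [
    [0.05, 0.10, 0.72],
    [0.08, 0.61, 0.90],
    [0.02, 0.07, 0.30],
]
for line in likelihood_map(scores):
    print(line)
```

Showing graded probabilities rather than a bare yes/no is one way to address the interpretability concern above: an end user can see how close a tile was to the threshold before deciding whether to act on it.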

“In high-stakes scenarios like disaster response, trust in AI-generated results is crucial. Through this challenge, we aim to bring transparency to the model’s decision-making process, empowering decision-makers on the ground to act with confidence and speed.”

Read the work in full

Brief Communication: AI-driven rapid landslides mapping following the 2024 Hualien City Earthquake in Taiwan, Lorenzo Nava, Alessandro Novellino et al.




University of Cambridge

AIhub is supported by:



Subscribe to AIhub newsletter on substack




©2026.02 - Association for the Understanding of Artificial Intelligence