A behaviour monitoring dataset of wild mammals in the Swiss Alps


17 July 2025




Two roe deer foraging, with manual annotations for each individual animal. Credit: A. Mathis (EPFL).

By Nik Papageorgiou

Have you ever wondered how wild animals behave when no one’s watching? Understanding these behaviors is vital for protecting ecosystems—especially as climate change and human expansion alter natural habitats. But collecting this kind of information without interfering has always been tricky.

Traditionally, researchers relied on direct observation or sensors strapped to animals—methods that are either disruptive or limited in scope. Camera traps offer a less invasive alternative, but they generate vast amounts of footage that’s hard to analyze.

AI could help, but there’s a catch: it needs annotated datasets to learn from. Most current video datasets are either scraped from the internet, missing the authenticity of real wild settings, or are small-scale field recordings lacking detail. And few include the kind of rich context—like multiple camera angles or audio—that’s needed to truly understand complex animal behavior.

Introducing MammAlps

To address this challenge, scientists at EPFL, in collaboration with the Swiss National Park, have collected and curated MammAlps, the first richly annotated, multi-view, multimodal wildlife behavior dataset. MammAlps is designed to train AI models for species and behavior recognition tasks, and ultimately to help researchers better understand animal behavior. This work could make conservation efforts faster, cheaper, and smarter.

MammAlps was developed by Valentin Gabeff, a PhD student at EPFL under the supervision of Professors Alexander Mathis and Devis Tuia, together with their respective research teams.

How MammAlps was developed

The researchers set up nine camera traps that recorded more than 43 hours of raw footage over several weeks. The team then meticulously processed this footage, using AI tools to detect and track individual animals, distilling it down to 8.5 hours of material showing wildlife interactions.
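To make this curation step concrete, here is a minimal sketch of how raw footage can be trimmed to animal-containing segments. It is an illustration only, not the MammAlps pipeline: `detect_animal` is a hypothetical stand-in for whatever off-the-shelf wildlife detector a team might use, and the gap threshold is arbitrary.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    start_s: float  # segment start time, in seconds
    end_s: float    # segment end time, in seconds

def detect_animal(frame) -> bool:
    """Hypothetical stand-in for an off-the-shelf wildlife detector."""
    raise NotImplementedError

def animal_segments(frames, fps: float, max_gap_s: float = 5.0) -> list[Segment]:
    """Keep only the stretches of footage where an animal is detected,
    bridging detection gaps shorter than `max_gap_s` seconds."""
    segments: list[Segment] = []
    current: Segment | None = None
    for i, frame in enumerate(frames):
        t = i / fps
        if detect_animal(frame):
            if current is None:
                current = Segment(t, t)  # open a new segment
            else:
                current.end_s = t        # extend the current segment
        elif current is not None and t - current.end_s > max_gap_s:
            segments.append(current)     # gap too long: close the segment
            current = None
    if current is not None:
        segments.append(current)
    return segments
```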

They labeled behaviors using a hierarchical approach, categorizing each moment at two levels: high-level activities like foraging or playing, and finer actions like walking, grooming, or sniffing. This structure allows AI models to interpret behaviors more accurately by linking detailed movements to broader behavioral patterns.
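As a rough illustration of such a two-level scheme, each fine-grained action can be tied to the high-level activity it belongs to. The label vocabulary below is invented for the example, not the dataset's actual annotation schema:

```python
# Invented example labels; not the actual MammAlps annotation schema.
ACTIVITY_TO_ACTIONS = {
    "foraging": {"walking", "sniffing", "eating"},
    "playing":  {"running", "chasing"},
    "resting":  {"lying down", "grooming"},
}

# Invert the hierarchy so a model's fine-grained prediction can be
# mapped back to its broader behavioral category.
ACTION_TO_ACTIVITY = {
    action: activity
    for activity, actions in ACTIVITY_TO_ACTIONS.items()
    for action in actions
}

assert ACTION_TO_ACTIVITY["sniffing"] == "foraging"
```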

To provide AI models with richer context, the team supplemented the video with audio recordings and captured “reference scene maps” documenting environmental features such as water sources, bushes, and rocks. This additional data enables better interpretation of habitat-specific behaviors. They also cross-referenced weather conditions and counts of individuals per event to create more complete scene descriptions.
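Conceptually, each annotated event then bundles several modalities. Here is a hedged sketch of what such a record might look like; the field names are assumptions chosen for illustration, not the dataset's actual release format:

```python
from dataclasses import dataclass

@dataclass
class EventRecord:
    """Illustrative multimodal event record; field names are assumptions,
    not the MammAlps release format."""
    video_paths: list[str]   # the same event seen from multiple camera traps
    audio_path: str          # synchronized audio recording
    scene_map_path: str      # reference map of water sources, bushes, rocks
    weather: str             # e.g. "clear", "rain", "snow"
    individual_count: int    # number of animals involved in the event
    species: str             # e.g. "roe deer"
```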

“By incorporating other modalities alongside video, we’ve shown that AI models can better identify animal behavior,” explains Alexander Mathis. “This multi-modal approach gives us a more complete picture of wildlife behavior.”

A new standard for wildlife monitoring

MammAlps brings a new standard to wildlife monitoring: a full sensory snapshot of animal behavior across multiple angles, sounds, and contexts. It also introduces a “long-term event understanding” benchmark, meaning scientists can now study not just isolated behaviors from short clips, but broader ecological scenes over time—like a wolf stalking a deer across several camera views.
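One simple way to picture event-level grouping: clips from any of the cameras are merged into a single event whenever they fall close together in time. The sketch below is a minimal illustration under that assumption, not the benchmark's actual construction; the clip format is also assumed.

```python
from datetime import timedelta

def group_into_events(clips, max_gap=timedelta(minutes=10)):
    """Merge time-sorted clips (from any camera) into events whenever
    consecutive clips start within `max_gap` of the previous clip's end.
    Each clip is a dict with datetime "start"/"end" keys (an assumed format)."""
    events, current = [], []
    for clip in sorted(clips, key=lambda c: c["start"]):
        if current and clip["start"] - current[-1]["end"] > max_gap:
            events.append(current)  # gap too long: start a new event
            current = []
        current.append(clip)
    if current:
        events.append(current)
    return events
```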

Research is still ongoing. The team is currently processing data collected in 2024 and will carry out further fieldwork in 2025. These additional surveys are needed to expand the set of recordings for rare species such as alpine hares and lynx, and will also support the development of methods for analyzing wildlife behavior across multiple seasons.

Building more datasets like MammAlps could radically scale up current wildlife monitoring efforts by enabling AI models to identify behaviors of interest from hundreds of hours of video. This would provide wildlife conservationists with timely, actionable insights. Over time, this could make it easier to track how climate change, human encroachment, or disease outbreaks impact wildlife behavior, and help protect vulnerable species.

For more information about MammAlps and access to the dataset, visit the project webpage.

Read the work in full

MammAlps: A multi-view video behavior monitoring dataset of wild mammals in the Swiss Alps, Valentin Gabeff, Haozhe Qi, Brendan Flaherty, Gencer Sumbül, Alexander Mathis, Devis Tuia.


