AIhub.org
 

Helping drone swarms avoid obstacles without hitting each other


14 June 2021




Enrica Soria, a PhD student at LIS © Alain Herzog / 2021 EPFL

By Clara Marc

Engineers at EPFL have developed a predictive control model that allows swarms of drones to fly in cluttered environments quickly and safely. It works by enabling individual drones to predict their own behaviour and that of their neighbours in the swarm.

There is strength in numbers. That’s true not only for humans, but for drones too. By flying in a swarm, they can cover larger areas and collect a wider range of data, since each drone can be equipped with different sensors.

Preventing drones from bumping into each other

One reason why drone swarms haven’t been used more widely is the risk of gridlock within the swarm. Studies on the collective movement of animals show that each agent tends to coordinate its movements with the others, adjusting its trajectory so as to keep a safe inter-agent distance or to travel in alignment, for example.

“In a drone swarm, when one drone changes its trajectory to avoid an obstacle, its neighbours automatically synchronize their movements accordingly,” says Dario Floreano, a professor at EPFL’s School of Engineering and head of the Laboratory of Intelligent Systems (LIS). “But that often causes the swarm to slow down, generates gridlock within the swarm or even leads to collisions.”
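The purely reactive coordination Floreano describes can be sketched with a simple flocking-style rule: each drone steers away from neighbours that come too close and aligns with their average velocity. This is a generic illustration of that reactive behaviour, not the LIS controller; the function name, gains and distances are all hypothetical.

```python
import numpy as np

def reactive_velocity(pos, vel, neighbor_pos, neighbor_vel,
                      d_min=2.0, v_des=None):
    """Reactive flocking rule: steer away from neighbours closer than
    d_min and align with the group's average velocity (hypothetical gains)."""
    if len(neighbor_pos) == 0:
        return vel.copy()
    sep = np.zeros_like(vel)
    for p in neighbor_pos:
        offset = pos - p
        dist = np.linalg.norm(offset)
        if 1e-9 < dist < d_min:
            # Push away, harder the closer the neighbour is.
            sep += offset / dist * (d_min - dist)
    align = np.mean(neighbor_vel, axis=0) - vel  # match the group's velocity
    new_vel = vel + 0.5 * sep + 0.3 * align
    if v_des is not None:
        new_vel += 0.2 * (v_des - new_vel)  # drift toward a preferred velocity
    return new_vel
```

Because each drone only reacts to where its neighbours are right now, a sudden avoidance manoeuvre by one drone ripples through the swarm one step late, which is exactly the slowdown-and-gridlock effect described above.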

Not just reacting, but also predicting

Enrica Soria, a PhD student at LIS, has come up with a new method for getting around that problem. She has developed a predictive control model that allows drones not just to react to others in a swarm, but also to anticipate their own movements and predict those of their neighbours. “Our model gives drones the ability to determine when a neighbour is about to slow down, meaning the slowdown has less of an effect on their own flight,” says Soria. The model works by programming in simple, locally enforced rules, such as a minimum inter-agent distance to maintain, a set velocity to keep, or a specific direction to follow. Soria’s work has just been published in Nature Machine Intelligence.
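The prediction idea can be made concrete with a toy sampling-based predictive step: each drone rolls its neighbours forward at their current velocities over a short horizon, then picks, from a handful of candidate velocities, the one whose predicted trajectory best keeps the minimum separation while tracking a desired velocity. This is an assumption-laden sketch, not the published controller; the function name, horizon, candidate set and cost weights are invented for illustration.

```python
import numpy as np

def predictive_step(pos, vel, neighbors, dt=0.1, horizon=10,
                    d_min=2.0, v_des=np.array([1.0, 0.0])):
    """Toy predictive step: roll every neighbour forward at its current
    velocity, then pick the candidate velocity whose rollout best keeps
    the d_min separation while tracking v_des (hypothetical weights)."""
    candidates = [vel + np.array([dx, dy])
                  for dx in (-0.5, 0.0, 0.5) for dy in (-0.5, 0.0, 0.5)]
    best, best_cost = vel, np.inf
    for v in candidates:
        cost = 0.0
        for k in range(1, horizon + 1):
            p_self = pos + v * dt * k            # own predicted position
            for n_pos, n_vel in neighbors:
                # Where this neighbour is predicted to be at step k.
                gap = np.linalg.norm(p_self - (n_pos + n_vel * dt * k))
                if gap < d_min:                  # separation penalty
                    cost += 100.0 * (d_min - gap) ** 2
        cost += horizon * np.linalg.norm(v - v_des) ** 2  # velocity tracking
        if cost < best_cost:
            best, best_cost = v, cost
    return best
```

The published method is a full predictive-control formulation solved over the swarm's dynamics; the discrete candidate sampling here is only meant to show how predicting neighbours' future positions lets a drone sidestep a conflict before it happens rather than braking when it does.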

With Soria’s model, drones are much less dependent on commands issued by a central computer. Drones in aerial light shows, for example, get their instructions from a computer that calculates each one’s trajectory to avoid a collision. “But with our model, drones are commanded using local information and can modify their trajectories autonomously,” says Soria.

A model inspired by nature

Tests run at LIS show that Soria’s system improves the speed, order and safety of drone swarms in areas with a lot of obstacles. “We don’t yet know if, or to what extent, animals are able to predict the movements of those around them,” says Floreano. “But biologists have recently suggested that the synchronized direction changes observed in some large animal groups would require more sophisticated cognitive abilities than previously believed.”

Read the article in full

Predictive control of aerial swarms in cluttered environments
Enrica Soria, Fabrizio Schiano, and Dario Floreano, Nature Machine Intelligence (2021)

            AIhub is supported by:



©2025.05 - Association for the Understanding of Artificial Intelligence