AIhub.org
 

Helping drone swarms avoid obstacles without hitting each other


14 June 2021




Enrica Soria, a PhD student at LIS © Alain Herzog / 2021 EPFL

By Clara Marc

Engineers at EPFL have developed a predictive control model that allows swarms of drones to fly in cluttered environments quickly and safely. It works by enabling individual drones to predict their own behaviour and that of their neighbours in the swarm.

There is strength in numbers. That’s true not only for humans, but for drones too. By flying in a swarm, they can cover larger areas and collect a wider range of data, since each drone can be equipped with different sensors.

Preventing drones from bumping into each other

One reason why drone swarms haven’t been used more widely is the risk of gridlock within the swarm. Studies on the collective movement of animals show that each agent tends to coordinate its movements with the others, adjusting its trajectory so as to keep a safe inter-agent distance or to travel in alignment, for example.
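These coordination rules (keep a safe separation, align with neighbours) are the basis of classic "Boids"-style flocking models. A minimal reactive sketch of such a rule set, where every agent responds only to the *current* state of its neighbours, might look like this (all weights, distances, and the update scheme here are illustrative assumptions, not the published model):

```python
import numpy as np

def reactive_update(pos, vel, d_min=2.0, dt=0.1, w_sep=1.0, w_align=0.5):
    """One reactive flocking step: each agent reacts only to the
    current positions and velocities of the others (separation + alignment).
    pos, vel: (n, 2) arrays of 2D positions and velocities."""
    n = len(pos)
    new_vel = vel.copy()
    for i in range(n):
        sep = np.zeros(2)
        align = np.zeros(2)
        for j in range(n):
            if i == j:
                continue
            offset = pos[i] - pos[j]
            dist = np.linalg.norm(offset)
            if dist < d_min:              # too close: push away from neighbour
                sep += offset / (dist + 1e-9)
            align += vel[j] - vel[i]      # drift toward neighbours' velocity
        new_vel[i] = vel[i] + dt * (w_sep * sep + w_align * align / (n - 1))
    return pos + dt * new_vel, new_vel
```

Because each agent only ever reacts to what its neighbours are doing *now*, a disturbance (one drone braking for an obstacle) propagates through the group one step at a time, which is exactly the sluggishness described below.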

“In a drone swarm, when one drone changes its trajectory to avoid an obstacle, its neighbours automatically synchronize their movements accordingly,” says Dario Floreano, a professor at EPFL’s School of Engineering and head of the Laboratory of Intelligent Systems (LIS). “But that often causes the swarm to slow down, generates gridlock within the swarm or even leads to collisions.”

Not just reacting, but also predicting

Enrica Soria, a PhD student at LIS, has come up with a new method for getting around that problem. She has developed a predictive control model that allows drones not just to react to others in a swarm, but also to anticipate their own movements and predict those of their neighbours. “Our model gives drones the ability to determine when a neighbour is about to slow down, meaning the slowdown has less of an effect on their own flight,” says Soria. The model works by programming in simple, locally enforced rules, such as a minimum inter-agent distance to maintain, a set velocity to keep, or a specific direction to follow. Soria’s work has just been published in Nature Machine Intelligence.
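The published work formulates this as a model predictive control problem; the toy sketch below only illustrates the core idea of acting on *predicted*, rather than current, neighbour states. Everything in it is an assumption for illustration: the discrete candidate set, the cost weights, and the constant-velocity prediction of neighbours are far simpler than the actual model.

```python
import itertools
import numpy as np

def predictive_choice(p_i, v_des, neigh_p, neigh_v,
                      d_min=2.0, horizon=5, dt=0.1):
    """Pick, from a small candidate set, the velocity that minimizes a cost
    over a short prediction horizon, assuming each neighbour keeps its
    current velocity (a simple constant-velocity prediction)."""
    candidates = [v_des + np.array([dx, dy])
                  for dx, dy in itertools.product((-1.0, 0.0, 1.0), repeat=2)]
    best_v, best_cost = v_des, float("inf")
    for v in candidates:
        cost = np.linalg.norm(v - v_des) ** 2    # stay close to the goal velocity
        for k in range(1, horizon + 1):
            p_pred = p_i + k * dt * v            # my predicted position
            for q, u in zip(neigh_p, neigh_v):
                gap = np.linalg.norm(p_pred - (q + k * dt * u))
                if gap < d_min:                  # penalize predicted conflicts
                    cost += 100.0 * (d_min - gap) ** 2
        if cost < best_cost:
            best_v, best_cost = v, cost
    return best_v
```

With no neighbours the cheapest choice is simply the desired velocity; with a neighbour converging head-on, the predicted-conflict penalty makes the drone sidestep *before* the separation constraint is actually violated, rather than after.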

With Soria’s model, drones are much less dependent on commands issued by a central computer. Drones in aerial light shows, for example, get their instructions from a computer that calculates each one’s trajectory to avoid a collision. “But with our model, drones are commanded using local information and can modify their trajectories autonomously,” says Soria.

A model inspired by nature

Tests run at LIS show that Soria’s system improves the speed, order and safety of drone swarms in areas with a lot of obstacles. “We don’t yet know if, or to what extent, animals are able to predict the movements of those around them,” says Floreano. “But biologists have recently suggested that the synchronized direction changes observed in some large animal groups would require a more sophisticated cognitive ability than previously believed.”

Read the article in full

Predictive control of aerial swarms in cluttered environments
Enrica Soria, Fabrizio Schiano and Dario Floreano, Nature Machine Intelligence (2021)




