AIhub.org
 

Modeling the minutia of motor manipulation with AI


11 November 2024




Image: ©2024 EPFL – CC-BY-SA 4.0.

By Michael David Mitchell

In neuroscience and biomedical engineering, accurately modeling the complex movements of the human hand has long been a significant challenge. Current models often struggle to capture the intricate interplay between the brain’s motor commands and the physical actions of muscles and tendons. This gap not only hinders scientific progress but also limits the development of effective neuroprosthetics aimed at restoring hand function for those with limb loss or paralysis.

EPFL professor Alexander Mathis and his team have developed an AI-driven approach that advances our understanding of these complex motor functions. The team used a creative machine learning strategy that combined curriculum-based reinforcement learning with detailed biomechanical simulations.

Mathis’s research presents a detailed, dynamic, and anatomically accurate model of hand movement that takes direct inspiration from the way humans learn intricate motor skills. This research not only won the MyoChallenge at the NeurIPS conference in 2022, but the results have also been published in the journal Neuron.

Virtually controlling Baoding balls

“What excites me most about this research is that we’re diving deep into the core principles of human motor control—something that’s been a mystery for so long. We’re not just building models; we’re uncovering the fundamental mechanics of how the brain and muscles work together,” says Mathis.

The NeurIPS challenge by Meta motivated the EPFL team to find a new approach to a technique in AI known as reinforcement learning. The task was to build an AI that could precisely manipulate two Baoding balls—each controlled by 39 muscles in a highly coordinated manner. This seemingly simple task is extraordinarily difficult to replicate virtually, given the complex dynamics of hand movements, including muscle synchronization and balance maintenance.

In this highly competitive environment, three graduate students—Alberto Chiappa from Alexander Mathis’ group, Pablo Tano and Nisheet Patel from Alexandre Pouget’s group at the University of Geneva—outperformed their rivals by a significant margin. Their AI model achieved a 100% success rate in the first phase of the competition, surpassing the closest competitor. Even in the more challenging second phase, their model showed its strength in ever more difficult situations and maintained a commanding lead to win the competition.

Breaking the task down into smaller parts – and repeating them

“To win, we took inspiration from how humans learn sophisticated skills in a process known as part-to-whole training in sports science,” says Mathis. This part-to-whole approach inspired the curriculum learning method used in the AI model, where the complex task of controlling hand movements was broken down into smaller, manageable parts.

“To overcome the limitations of current machine learning models, we applied a method called curriculum learning. After 32 stages and nearly 400 hours of training, we successfully trained a neural network to accurately control a realistic model of the human hand,” says Alberto Chiappa.
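The staged training Chiappa describes can be illustrated with a toy sketch. The code below is not the team’s actual setup—it swaps the 39-muscle hand simulation for a hypothetical one-dimensional tracking task, and deep reinforcement learning for simple hill-climbing—purely to show the curriculum idea: each stage is harder than the last, and the policy learned at one stage initializes the next.

```python
import random

random.seed(0)

def rollout(policy_gain, target, noise, steps=50):
    """Run one episode: a 1-D point must track `target` under noisy
    dynamics. Returns the negative mean tracking error as reward."""
    x, total_err = 0.0, 0.0
    for _ in range(steps):
        action = policy_gain * (target - x)        # proportional policy
        x += action + random.gauss(0.0, noise)     # noisy dynamics
        total_err += abs(target - x)
    return -total_err / steps

def train_stage(policy_gain, target, noise, iters=200, step=0.05):
    """Improve the scalar policy parameter on one curriculum stage
    by accepting random perturbations that raise the episode reward."""
    for _ in range(iters):
        candidate = policy_gain + random.gauss(0.0, step)
        if rollout(candidate, target, noise) > rollout(policy_gain, target, noise):
            policy_gain = candidate
    return policy_gain

# Curriculum: stages get harder (farther target, more noise), and the
# policy learned at each stage seeds training on the next one.
stages = [(1.0, 0.0), (2.0, 0.05), (4.0, 0.1)]
gain = 0.0
for target, noise in stages:
    gain = train_stage(gain, target, noise)
```

In the real challenge each “stage” was a progressively harder variant of the Baoding-ball task rather than a farther target, but the mechanism is the same: the network never faces the full difficulty cold; it carries forward what the easier stages taught it.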

A key reason for the model’s success is its ability to recognize and use basic, repeatable movement patterns, known as motor primitives. In an exciting scientific twist, this approach to learning behavior could inform neuroscience about the brain’s role in determining how motor primitives are learned to master new tasks. This intricate interplay between the brain and muscle manipulation points to how challenging it can be to build machines and prosthetics that truly mimic human movement.

“You need a large degree of movement and a model that resembles a human brain to accomplish a variety of everyday tasks. Even if each task can be broken down into smaller parts, each task needs a different set of these motor primitives to be done well,” says Mathis.

Harnessing AI to explore and understand biological systems

Silvestro Micera, a leading researcher in neuroprosthetics at EPFL’s Neuro X Institute and collaborator with Mathis, highlights the critical importance of this research for understanding the future potential and the current limits of even the most advanced prosthetics. “What we really miss right now is a deeper understanding of how finger movement and grasping motor control are achieved. This work goes exactly in this very important direction,” Micera notes. “We know how important it is to connect the prosthesis to the nervous system, and this research gives us a solid scientific foundation that reinforces our strategy.”

Abigail Ingster, a bachelor’s student at the time of the competition and recipient of EPFL’s Summer in the Lab fellowship, played a pivotal role in analyzing the policy. With her fellowship supporting hands-on research experience, Abigail worked closely with PhD student Alberto Chiappa and Professor Mathis to delve into the intricate workings of the AI’s learned policy.

Read the work in full

Acquiring musculoskeletal skills with curriculum-based reinforcement learning, Alberto Silvio Chiappa, Pablo Tano, Nisheet Patel, Abigaïl Ingster, Alexandre Pouget and Alexander Mathis, Neuron (2024).





            AIhub is supported by:



Subscribe to AIhub newsletter on substack






©2026 - Association for the Understanding of Artificial Intelligence