AIhub.org
 

Modeling the minutia of motor manipulation with AI


11 November 2024




Image: ©2024 EPFL – CC-BY-SA 4.0.

By Michael David Mitchell

In neuroscience and biomedical engineering, accurately modeling the complex movements of the human hand has long been a significant challenge. Current models often struggle to capture the intricate interplay between the brain’s motor commands and the physical actions of muscles and tendons. This gap not only hinders scientific progress but also limits the development of effective neuroprosthetics aimed at restoring hand function for those with limb loss or paralysis.

EPFL professor Alexander Mathis and his team have developed an AI-driven approach that advances our understanding of these complex motor functions. The team used a creative machine learning strategy that combined curriculum-based reinforcement learning with detailed biomechanical simulations.

Mathis’s research presents a detailed, dynamic, and anatomically accurate model of hand movement that takes direct inspiration from the way humans learn intricate motor skills. This research not only won the MyoChallenge at the NeurIPS conference in 2022, but the results have also been published in the journal Neuron.

Virtually controlling Baoding balls

“What excites me most about this research is that we’re diving deep into the core principles of human motor control—something that’s been a mystery for so long. We’re not just building models; we’re uncovering the fundamental mechanics of how the brain and muscles work together,” says Mathis.

The NeurIPS challenge, organized by Meta, motivated the EPFL team to find a new approach to a technique in AI known as reinforcement learning. The task was to build an AI that could precisely manipulate two Baoding balls with a simulated hand driven by 39 muscles in a highly coordinated manner. This seemingly simple task is extraordinarily difficult to replicate virtually, given the complex dynamics of hand movements, including muscle synchronization and balance maintenance.

In this highly competitive environment, three graduate students—Alberto Chiappa from Alexander Mathis’ group, Pablo Tano and Nisheet Patel from Alexandre Pouget’s group at the University of Geneva—outperformed their rivals by a significant margin. Their AI model achieved a 100% success rate in the first phase of the competition, surpassing the closest competitor. Even in the more challenging second phase, their model showed its strength in ever more difficult situations and maintained a commanding lead to win the competition.

Breaking the task down into smaller parts – and repeating them

“To win, we took inspiration from how humans learn sophisticated skills in a process known as part-to-whole training in sports science,” says Mathis. This part-to-whole approach inspired the curriculum learning method used in the AI model, where the complex task of controlling hand movements was broken down into smaller, manageable parts.

“To overcome the limitations of current machine learning models, we applied a method called curriculum learning. After 32 stages and nearly 400 hours of training, we successfully trained a neural network to accurately control a realistic model of the human hand,” says Alberto Chiappa.
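The staged training Chiappa describes can be sketched as a simple loop in which the policy trained on one stage warm-starts the next. Everything below is illustrative: the stage parameters, the `train_stage` placeholder, and the skill counter stand in for a real reinforcement learning inner loop (such as PPO) running on a biomechanical simulator, which is not reproduced here.

```python
def make_stages(n_stages=4):
    """Build a curriculum: each stage raises task difficulty, e.g. how fast
    and for how long the Baoding balls must be rotated. The parameter names
    and values are hypothetical."""
    return [{"rotation_speed": 0.1 * (i + 1), "hold_steps": 50 * (i + 1)}
            for i in range(n_stages)]

def train_stage(policy, stage, episodes=100):
    """Placeholder for an RL inner loop on one curriculum stage.
    Here the policy's 'skill' simply grows with practice; in the real
    setting this would be many hours of simulated episodes."""
    for _ in range(episodes):
        policy["skill"] += 0.01 * stage["rotation_speed"]
    return policy

def curriculum_train(n_stages=4):
    """Part-to-whole training: the policy from each stage seeds the next,
    so skills learned on easy variants transfer to harder ones."""
    policy = {"skill": 0.0}
    for stage in make_stages(n_stages):
        policy = train_stage(policy, stage)
    return policy

policy = curriculum_train()
```

The key design choice mirrored here is the warm start: rather than training from scratch on the hardest task, each stage begins from the parameters learned on the previous, easier one.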

A key reason for the model’s success is its ability to recognize and use basic, repeatable movement patterns, known as motor primitives. In an exciting scientific twist, this approach to learning behavior could inform neuroscience about the brain’s role in determining how motor primitives are learned to master new tasks. This intricate interplay between the brain and muscle manipulation points to how challenging it can be to build machines and prosthetics that truly mimic human movement.
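One common way to look for motor primitives is to test whether the high-dimensional muscle signal actually occupies a low-dimensional subspace. The sketch below is purely illustrative and uses synthetic data for a 39-muscle hand: four hidden primitives generate the activations, and a singular value decomposition recovers that the signal is effectively four-dimensional. It is not the analysis from the paper, just a minimal demonstration of the idea.

```python
import numpy as np

rng = np.random.default_rng(0)
n_muscles, n_timesteps, n_primitives = 39, 500, 4

# Ground-truth primitives: each is a fixed activation pattern
# across the 39 muscles (synthetic, for illustration only).
primitives = rng.normal(size=(n_primitives, n_muscles))

# Time-varying weights mix the primitives into full muscle commands.
weights = rng.normal(size=(n_timesteps, n_primitives))
activations = weights @ primitives  # shape: (timesteps, muscles)

# The SVD spectrum reveals the effective dimensionality: although there
# are 39 muscles, the variance is concentrated in 4 components.
singular_values = np.linalg.svd(activations, compute_uv=False)
explained = singular_values**2 / np.sum(singular_values**2)
```

On this synthetic data, `explained` shows essentially all variance in the first four components, which is the signature one would look for when asking whether a learned policy reuses a small set of primitives.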

“You need a large degree of movement and a model that resembles a human brain to accomplish a variety of everyday tasks. Even if each task can be broken down into smaller parts, each task needs a different set of these motor primitives to be done well,” says Mathis.

Harnessing AI to explore and understand biological systems

Silvestro Micera, a leading researcher in neuroprosthetics at EPFL’s Neuro X Institute and collaborator with Mathis, highlights the critical importance of this research for understanding the future potential and the current limits of even the most advanced prosthetics. “What we really miss right now is a deeper understanding of how finger movement and grasping motor control are achieved. This work goes exactly in this very important direction,” Micera notes. “We know how important it is to connect the prosthesis to the nervous system, and this research gives us a solid scientific foundation that reinforces our strategy.”

Abigail Ingster, a bachelor’s student at the time of the competition and recipient of EPFL’s Summer in the Lab fellowship, played a pivotal role in analyzing the policy. With her fellowship supporting hands-on research experience, Abigail worked closely with PhD student Alberto Chiappa and Professor Mathis to delve into the intricate workings of the AI’s learned policy.

Read the work in full

Acquiring musculoskeletal skills with curriculum-based reinforcement learning, Alberto Silvio Chiappa, Pablo Tano, Nisheet Patel, Abigaïl Ingster, Alexandre Pouget and Alexander Mathis, Neuron (2024).

