AIhub.org
 

#ICML2020 invited talk: Brenna Argall – “Human and machine learning for assistive autonomy”


by
31 July 2020




The second invited talk at ICML2020 was given by Brenna Argall. Her presentation covered the use of machine learning within the domain of assistive machines for rehabilitation. She described the efforts of her lab towards customising assistive autonomous machines so that users can decide the level of control they keep, and how much autonomy they hand over to the machine.

Within the field of rehabilitation, machines are used both to rehabilitate the human body and to bridge gaps in functionality left by injury or disease. Brenna noted that when replacing lost function, a persistent challenge in the field is how to capture control signals from the human body in order to operate the machine.

The powered wheelchair is the most ubiquitous assistive machine. The interfaces typically used to control the chair are joysticks, head arrays, and sip-and-puff systems (in which signals are sent via air pressure through a tube, controlled by the user inhaling or exhaling).

Robotic arms present a much more complex control problem, as they typically possess six degrees of freedom (sometimes more). With so many modes of control, such an assistive machine becomes difficult to use, especially for someone with limited motor function.

The difficulty of this problem led Brenna and her team to work on a possible solution: combining these assistive machines with sensors and machine learning to produce an “assistive robot”. What Brenna has seen is that introducing autonomy to a machine creates a need for customisation. Most users do not want a 100% autonomous machine – they still want to be in control. The challenge is to customise the autonomy to their personal needs. This is summarised nicely in the short video below.
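One common way to think about this kind of adjustable control – a standard formulation from the shared-control literature, not necessarily the exact scheme used by the argallab – is to linearly blend the user’s command with the autonomy’s command, with a single parameter setting how much control the user keeps. A minimal sketch, where the function name and the example velocity commands are illustrative assumptions:

```python
import numpy as np

def blend_commands(user_cmd, robot_cmd, autonomy_level):
    """Linearly blend user and autonomous commands.

    autonomy_level = 0.0 -> full user control
    autonomy_level = 1.0 -> full robot autonomy
    """
    alpha = float(np.clip(autonomy_level, 0.0, 1.0))
    return alpha * np.asarray(robot_cmd, dtype=float) \
        + (1.0 - alpha) * np.asarray(user_cmd, dtype=float)

# Example: a 2D wheelchair velocity command (forward speed, turn rate).
user = [0.8, 0.0]   # user pushes straight ahead
robot = [0.5, 0.3]  # autonomy suggests slowing and turning away from an obstacle
blended = blend_commands(user, robot, autonomy_level=0.4)
print(blended)
```

Customising the autonomy then amounts to choosing (or learning) the right `autonomy_level` for each user and task, rather than fixing it at 0 or 1.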

In preliminary studies, the team determined the level of autonomy by constantly asking the participants whether they wanted more or less autonomy for a particular task. They then adjusted the system accordingly. The next step for the research was to automate some of this and pass it off to adaptive algorithms.

There are a few salient characteristics of adaptation under assistive autonomy:

1) Feedback signal for learning

The questions that need to be answered here include: should this feedback signal be formulated as a reward, a supervised label, or a correction or demonstration? Is the feedback given explicitly (as in the preliminary study), or is it implicit – something to be extracted from the characteristics of the person using the machine?

To begin to answer some of these questions, Brenna and her team are carrying out a large-scale characterisation study with a robotic wheelchair to determine how people operate their machines. They have measured how people adapt to the chair controls over time. You can read more about their research here.

2) Masked and filtered information

The information that is obtained from the human can be masked or filtered, either because of their impairment or because of the (often limited) interface that they are required to use. There is also the question of whether the autonomy should be trusting the control signals from the human. You can find out more about this topic in this article.

3) Temporal considerations

The first consideration is the physiology of the human, whose condition may change over time. There is also the factor of familiarisation – people generally get better at using a particular piece of technology after practice. Importantly, the rate of adaptation also needs to be considered – should these adaptations be short-term or long-term?

4) Human-autonomy co-adaptation

If the autonomy is adapting to the human while the human is simultaneously adapting to the autonomy, each presents a moving target to the other. How to handle this co-adaptation is an open research question.

A piece of research in progress is being carried out in collaboration with Sandro Mussa-Ivaldi (Northwestern University), whose group has developed a wearable device that can control systems via shoulder movements. Brenna plans to use this to control a robotic arm, with machine learning gauging the level of autonomy needed. The goal is to start with a high level of robot autonomy; as the human becomes more proficient in controlling the arm, the robot steps back and allows the human to take over. The end point would be the human completely controlling the arm. In this case, robot machine learning is being used to elicit human motor learning.
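The “stepping back” described above could be sketched as the autonomy level decaying toward zero at a rate driven by the human’s measured proficiency. This is purely a hypothetical illustration of the idea, not the team’s actual algorithm; the function, the `rate` parameter, and the proficiency scores are all assumptions:

```python
def update_autonomy(autonomy_level, proficiency, rate=0.2):
    """Step the autonomy level toward full human control (0.0).

    proficiency in [0, 1], e.g. a normalised task-performance score;
    the robot steps back faster when the human performs well.
    """
    step = rate * proficiency * autonomy_level
    return max(0.0, autonomy_level - step)

# Start with a high level of robot autonomy, then lower it
# as the human's (hypothetical) performance scores improve.
level = 1.0
for score in [0.2, 0.5, 0.8, 0.9, 1.0]:
    level = update_autonomy(level, score)
print(round(level, 3))
```

Under this sketch the autonomy never snatches control back abruptly; it recedes gradually, which matches the stated goal of ending with the human completely controlling the arm.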

If you are interested in finding out more about the work outlined in this talk, this seminar given by Brenna in 2016 details some of the work from her lab: “Human Autonomy through Robotics Autonomy”.

About Brenna Argall

Brenna Argall is an associate professor of Mechanical Engineering, Computer Science, and Physical Medicine & Rehabilitation at Northwestern University. She is director of the assistive & rehabilitation robotics laboratory (argallab) at the Shirley Ryan AbilityLab (formerly the Rehabilitation Institute of Chicago). The mission of the argallab is to advance human ability by leveraging robotics autonomy. Argall is a 2016 recipient of the NSF CAREER award, and was named one of the 40 under 40 by Crain’s Chicago Business. Her PhD in Robotics (2009) was received from the Robotics Institute at Carnegie Mellon University. Prior to joining Northwestern, she was a postdoctoral fellow at the École Polytechnique Fédérale de Lausanne (EPFL). More recently, she was a visiting fellow at the Wyss Center for Bio and Neuroengineering in Geneva, Switzerland (2019).



Lucy Smith is Senior Managing Editor for AIhub.
