AIhub.org
 

# ICML2020 invited talk: Brenna Argall – “Human and machine learning for assistive autonomy”

by Lucy Smith
31 July 2020



The second invited talk at ICML2020 was given by Brenna Argall. Her presentation covered the use of machine learning within the domain of assistive machines for rehabilitation. She described the efforts of her lab towards customising assistive autonomous machines so that users can decide the level of control they keep, and how much autonomy they hand over to the machine.

Within the field of rehabilitation, machines are used both to rehabilitate the human body and to bridge gaps in functionality left by injury or disease. Brenna noted that when replacing lost function, a persistent challenge in the field is how to capture control signals from the human body in order to operate the machine.

The powered wheelchair is the most ubiquitous assistive machine. The interfaces typically used to control the chair are joysticks, head arrays, and sip-and-puff systems (in which signals are sent using air pressure via a tube that the user controls by inhaling or exhaling).

Robotic arms pose a much more complex control problem, as they typically possess six or more degrees of freedom. With so many modes of control, it becomes difficult for a human (especially one with limited motor function) to operate such an assistive machine.

The difficulty of this problem led Brenna and her team to work on a possible solution: combining these assistive machines with sensors and machine learning to produce an “assistive robot”. What Brenna has seen is that introducing autonomy to a machine creates a need for customisation. Most users do not want a 100% autonomous machine – they still want to be in control. The challenge is to customise the autonomy to their personal needs. This is summarised nicely in the short video below.

In preliminary studies, the team determined the level of autonomy by constantly asking the participants whether they wanted more or less autonomy for a particular task. They then adjusted the system accordingly. The next step for the research was to automate some of this and pass it off to adaptive algorithms.
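A tunable level of autonomy like the one the participants were asked about is often realised in the shared-control literature by blending the human's command with the autonomy's command under an adjustable weight. The sketch below is illustrative only – the function name, the blending weight `alpha`, and the example commands are assumptions for this post, not details of the argallab systems.

```python
import numpy as np

def blend_control(u_human, u_robot, alpha):
    """Linear shared-control blending (illustrative, not the argallab
    algorithm): alpha=1 gives the user full control, alpha=0 gives the
    robot full autonomy."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must be in [0, 1]")
    u_human = np.asarray(u_human, dtype=float)
    u_robot = np.asarray(u_robot, dtype=float)
    return alpha * u_human + (1.0 - alpha) * u_robot

# Example: a 2D wheelchair velocity command (linear, angular).
user_cmd = [0.8, 0.1]   # imprecise human input, e.g. from a sip-and-puff
robot_cmd = [0.5, 0.3]  # the autonomy's preferred command
blended = blend_control(user_cmd, robot_cmd, alpha=0.5)
print(blended)  # halfway between the two: [0.65 0.2]
```

Asking a participant "more or less autonomy?" then maps onto nudging `alpha` down or up for that task.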

There are a few salient characteristics of adaptation under assistive autonomy:

1) Feedback signal for learning

The questions to be answered here include: should the feedback signal be formulated as a reward, a supervised label, a correction, or a demonstration? Is the feedback given explicitly (as in the preliminary study), or is it implicit – something to be extracted from the characteristics of the person using the machine?

To begin to answer some of these questions, Brenna and her team are carrying out a large-scale characterisation study with a robotic wheelchair to determine how people operate their machines. They have measured how people adapt to the chair controls over time. You can read more about their research here.

2) Masked and filtered information

The information obtained from the human can be masked or filtered, either because of their impairment or because of the (often limited) interface they are required to use. There is also the question of whether the autonomy should trust the control signals from the human. You can find out more about this topic in this article.

3) Temporal considerations

The first consideration is the physiology of the human, whose condition may change over time. There is also the factor of familiarisation – people generally get better at using a particular piece of technology after practice. Importantly, the rate of adaptation also needs to be considered – should these adaptations be short-term or long-term?

4) Human-autonomy co-adaptation

If the autonomy is adapting to the human while the human is adapting to the autonomy, each is chasing a moving target. How to handle this co-adaptation is an open research question.

One piece of research in progress is a collaboration with Sandro Mussa-Ivaldi (Northwestern University), whose group has developed a wearable device that controls systems through shoulder movements. Brenna plans to use this to control a robotic arm, with machine learning gauging the level of autonomy needed. The goal is to start with a high level of robot autonomy; as the human becomes more proficient at controlling the arm, the robot steps back and allows the human to take over. The end point would be the human completely controlling the arm. In this case, robot machine learning is being used to elicit human motor learning.
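The autonomy-fading idea above – the robot stepping back as the human grows more proficient – can be caricatured as a simple schedule on the human's share of control. Everything in this sketch (the function name, the error threshold, the step size) is a hypothetical illustration, not the algorithm used in this collaboration.

```python
def update_user_share(alpha, tracking_error, target_error=0.1, step=0.05):
    """Illustrative autonomy-fading schedule: raise the user's share of
    control when they perform well, lower it when they struggle.
    alpha is the fraction of control given to the human, in [0, 1]."""
    if tracking_error < target_error:
        alpha += step   # user is proficient: the robot steps back
    else:
        alpha -= step   # user is struggling: the robot takes more control
    return min(1.0, max(0.0, alpha))

# Starting from high robot autonomy (alpha = 0.2), repeated good
# performance gradually hands control over to the human.
alpha = 0.2
for _ in range(5):
    alpha = update_user_share(alpha, tracking_error=0.05)
print(round(alpha, 2))  # 0.45
```

Reaching the end point of the study – the human completely controlling the arm – corresponds to `alpha` converging to 1.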

If you are interested in finding out more about the work outlined in this talk, this seminar given by Brenna in 2016 details some of the work from her lab: “Human Autonomy through Robotics Autonomy”.

About Brenna Argall

Brenna Argall is an associate professor of Mechanical Engineering, Computer Science, and Physical Medicine & Rehabilitation at Northwestern University. She is director of the assistive & rehabilitation robotics laboratory (argallab) at the Shirley Ryan AbilityLab (formerly the Rehabilitation Institute of Chicago). The mission of the argallab is to advance human ability by leveraging robotics autonomy. Argall is a 2016 recipient of the NSF CAREER award, and was named one of the 40 under 40 by Crain’s Chicago Business. She received her PhD in Robotics (2009) from the Robotics Institute at Carnegie Mellon University. Prior to joining Northwestern, she was a postdoctoral fellow at the École Polytechnique Fédérale de Lausanne (EPFL). More recently, she was a visiting fellow at the Wyss Center for Bio and Neuroengineering in Geneva, Switzerland (2019).





Lucy Smith , Managing Editor for AIhub.
















©2024 - Association for the Understanding of Artificial Intelligence


 











