#ICML2020 invited talk: Brenna Argall – “Human and machine learning for assistive autonomy”


by Lucy Smith
31 July 2020



The second invited talk at ICML2020 was given by Brenna Argall. Her presentation covered the use of machine learning in assistive machines for rehabilitation. She described her lab's efforts to customise assistive autonomous machines so that users can decide how much control they keep and how much autonomy they hand over to the machine.

Within the field of rehabilitation, machines are used both to rehabilitate the human body and to bridge gaps in functionality left by injury or disease. Brenna noted that when replacing lost function, a persistent challenge in the field is how to capture control signals from the human body in order to operate the machine.

The powered wheelchair is the most ubiquitous assistive machine. The interfaces typically used to control the chair are joysticks, head arrays, and sip-and-puff systems (in which signals are sent via air pressure through a tube, controlled by the user inhaling or exhaling).

Robotic arms present a much more complex control problem, as they typically possess six (or even more) degrees of freedom. With so many control modes to manage, it becomes difficult for a human, especially one with limited motor function, to operate such an assistive machine.
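To see why the control problem explodes, here is a minimal sketch (my own illustration, with a hypothetical mode assignment, not from the talk) of driving a 6-DoF arm from a 2-axis joystick: only two axes can be commanded at once, so every other motion requires switching modes.

```python
import numpy as np

# Hypothetical mode layout: a 2-axis joystick drives only 2 of the
# arm's 6 degrees of freedom at a time; reaching the other axes
# requires a mode switch, a major source of operational burden.
MODES = [
    (0, 1),  # mode 0: x / y translation
    (2, 3),  # mode 1: z translation / roll
    (4, 5),  # mode 2: pitch / yaw
]

def joystick_to_arm(joystick_xy, mode):
    """Map a 2-D joystick deflection onto a 6-DoF arm velocity command."""
    cmd = np.zeros(6)
    cmd[list(MODES[mode % len(MODES)])] = joystick_xy
    return cmd

print(joystick_to_arm([0.5, -0.2], mode=1))  # commands z and roll only
```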

The difficulty of this problem led Brenna and her team to work on a possible solution: combining these assistive machines with sensors and machine learning to produce an “assistive robot”. What Brenna's team has found is that introducing autonomy to a machine creates a need for customisation. Most users do not want a 100% autonomous machine – they still want to be in control. The challenge is to customise the autonomy to their personal needs.
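One common way to make the autonomy level an explicit, adjustable quantity is linear command blending, a standard shared-control scheme. Below is a minimal sketch under that assumption (not necessarily the lab's own formulation): the executed command is a weighted mix of the user's input and the autonomy's suggestion.

```python
import numpy as np

def blend_commands(user_cmd, autonomy_cmd, autonomy_level):
    """Linearly blend user and autonomy commands.

    autonomy_level: 0.0 leaves the user in full control;
    1.0 hands control entirely to the autonomy.
    """
    alpha = float(np.clip(autonomy_level, 0.0, 1.0))
    return ((1.0 - alpha) * np.asarray(user_cmd, float)
            + alpha * np.asarray(autonomy_cmd, float))

# Example: a wheelchair command as (linear, angular) velocity.
user = [0.8, 0.1]    # user steers almost straight ahead
robot = [0.5, -0.4]  # autonomy suggests veering to avoid an obstacle
print(blend_commands(user, robot, autonomy_level=0.3))
```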

In preliminary studies, the team set the level of autonomy by repeatedly asking participants whether they wanted more or less autonomy for a particular task, and adjusting the system accordingly. The next step for the research was to automate some of this and hand it off to adaptive algorithms.
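As a toy version of that manual loop (my own illustration, not the study protocol), the explicit "more/less autonomy" answers can be folded into a one-line update of the blending weight sketched above:

```python
def update_autonomy_level(level, feedback, step=0.1):
    """Nudge the autonomy level from explicit user feedback.

    feedback: +1 for "more autonomy", -1 for "less", 0 for no change.
    The result stays clipped to the valid range [0, 1].
    """
    return min(1.0, max(0.0, level + step * feedback))

print(update_autonomy_level(0.3, feedback=+1))  # user asks for more help
```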

There are a few salient characteristics of adaptation under assistive autonomy:

1) Feedback signal for learning

The questions that need to be answered here include: should this feedback signal be formulated as a reward, as a supervised label, or as a correction or demonstration? Is the feedback given explicitly (as in the preliminary study), or is it implicit, something to be extracted from how the person operates the machine?

To begin to answer some of these questions, Brenna and her team are carrying out a large-scale characterisation study with a robotic wheelchair to determine how people operate their machines. They have measured how people adapt to the chair controls over time. You can read more about their research here.

2) Masked and filtered information

The information obtained from the human can be masked or filtered, either because of their impairment or because of the (often limited) interface they are required to use. There is also the question of whether the autonomy should trust the control signals coming from the human. You can find out more about this topic in this article.
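One crude way to operationalise that trust question (a sketch of my own, not the approach in the article) is to score how consistent the user's recent commands are, and feed the score into the blending weight: jittery or contradictory signals earn the autonomy more influence.

```python
import numpy as np

def trust_in_signal(recent_cmds):
    """Crude trust estimate from the consistency of recent commands.

    recent_cmds: array of shape (T, D) holding the user's last T
    interface commands. Returns a score in (0, 1]; noisy, inconsistent
    signals score low, steady ones score high.
    """
    spread = np.mean(np.std(np.asarray(recent_cmds, float), axis=0))
    return 1.0 / (1.0 + spread)

# A low score can then raise the autonomy level used when blending,
# so the robot leans less on a signal that looks unreliable.
print(trust_in_signal([[0.8, 0.1], [0.7, 0.2], [0.8, 0.0]]))    # steady
print(trust_in_signal([[0.8, 0.1], [-0.6, 0.9], [0.2, -0.7]]))  # jittery
```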

3) Temporal considerations

The first consideration is the physiology of the human, whose condition may change over time. There is also the factor of familiarisation – people generally get better at using a particular piece of technology after practice. Importantly, the rate of adaptation also needs to be considered – should these adaptations be short-term or long-term?

4) Human-autonomy co-adaptation

If the autonomy is adapting to the human while the human is simultaneously adapting to the autonomy, each is chasing a moving target. How to handle this co-adaptation is an open research question.

A piece of research in progress, carried out in collaboration with Sandro Mussa-Ivaldi (Northwestern University), uses a wearable device developed by his group that controls systems via shoulder movements. Brenna plans to use this device to control a robotic arm, with machine learning gauging the level of autonomy needed. The goal is to start with a high level of robot autonomy; as the human becomes more proficient at controlling the arm, the robot steps back and allows the human to take over. The end point would be the human completely controlling the arm. In this case, robot machine learning is being used to elicit human motor learning.
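That hand-over can be caricatured as a schedule in which the autonomy level tracks a hypothetical measure of the user's remaining skill gap, starting near full autonomy and receding to zero as the human's control improves (again a sketch, not the project's actual method):

```python
def autonomy_schedule(current_error, initial_error, floor=0.0):
    """Scale autonomy with the human's remaining performance gap.

    current_error: hypothetical task-performance error of the human's
    raw commands (e.g. tracking error); initial_error: the same metric
    at the start of training. Autonomy recedes toward `floor` as the
    human improves, ending with full human control when floor == 0.
    """
    if initial_error <= 0:
        return floor
    return max(floor, min(1.0, current_error / initial_error))

# Early on the human is imprecise, so the robot does most of the work;
# by the end the human has taken over entirely.
for err in (2.0, 1.0, 0.4, 0.0):
    print(autonomy_schedule(err, initial_error=2.0))
```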

If you are interested in finding out more about the work outlined in this talk, this seminar given by Brenna in 2016 details some of the work from her lab: “Human Autonomy through Robotics Autonomy”.

About Brenna Argall

Brenna Argall is an associate professor of Mechanical Engineering, Computer Science, and Physical Medicine & Rehabilitation at Northwestern University. She is director of the assistive & rehabilitation robotics laboratory (argallab) at the Shirley Ryan AbilityLab (formerly the Rehabilitation Institute of Chicago). The mission of the argallab is to advance human ability by leveraging robotics autonomy. Argall is a 2016 recipient of the NSF CAREER award, and was named one of the 40 under 40 by Crain’s Chicago Business. She received her PhD in Robotics (2009) from the Robotics Institute at Carnegie Mellon University. Prior to joining Northwestern, she was a postdoctoral fellow at the École Polytechnique Fédérale de Lausanne (EPFL). More recently, she was a visiting fellow at the Wyss Center for Bio and Neuroengineering in Geneva, Switzerland (2019).





Lucy Smith is Senior Managing Editor for AIhub.



