#ICML2020 invited talk: Brenna Argall – “Human and machine learning for assistive autonomy”


31 July 2020



The second invited talk at ICML2020 was given by Brenna Argall. Her presentation covered the use of machine learning within the domain of assistive machines for rehabilitation. She described the efforts of her lab towards customising assistive autonomous machines so that users can decide the level of control they keep, and how much autonomy they hand over to the machine.

Within the field of rehabilitation, machines are used both to rehabilitate the human body and to bridge gaps in functionality left by injury or disease. Brenna noted that, when replacing lost function, a persistent challenge for the field is how to capture control signals from the human body in order to operate the machine.

The powered wheelchair is the most widespread assistive machine. The interfaces typically used to control the chair are joysticks, head arrays, and sip-and-puff systems (in which signals are sent as air pressure through a tube that the user controls by inhaling or exhaling).

Robotic arms pose a much more complex control problem, as they typically possess six (or even more) degrees of freedom. With so many control modes, it becomes difficult for a human – especially one with limited motor function – to operate such an assistive machine.

The difficulty of this problem led Brenna and her team to work on a possible solution: combining these assistive machines with sensors and machine learning to produce an “assistive robot”. What Brenna has seen is that there is a need for customisation when autonomy is introduced to a machine. Most users do not want a 100% autonomous machine – they still want to be in control. The challenge is to customise the autonomy to each person’s needs. This is summarised nicely in the short video below.

In preliminary studies, the team set the level of autonomy by repeatedly asking participants whether they wanted more or less autonomy for a particular task, and adjusting the system accordingly. The next step for the research was to automate some of this and hand it off to adaptive algorithms.
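To make the idea of an adjustable level of autonomy concrete, one common formulation in shared control – a minimal sketch of the general technique, not necessarily the argallab’s exact method – is to linearly blend the user’s command with the autonomy’s command, and to nudge the blending weight up or down in response to the user’s explicit feedback:

```python
import numpy as np

def blend_commands(user_cmd, autonomy_cmd, alpha):
    """Linearly blend the user's command with the autonomy's command.

    alpha = 0.0 -> the user is fully in control
    alpha = 1.0 -> the robot is fully autonomous
    Both commands are velocity vectors of the same dimension,
    e.g. [forward speed, turning rate] for a wheelchair.
    """
    user_cmd = np.asarray(user_cmd, dtype=float)
    autonomy_cmd = np.asarray(autonomy_cmd, dtype=float)
    return (1.0 - alpha) * user_cmd + alpha * autonomy_cmd

# Hypothetical per-task autonomy level, nudged after the user reports
# wanting "more" or "less" help (as in the preliminary studies).
alpha, step = 0.5, 0.1
feedback = "less"                      # user asks for less autonomy
alpha = min(1.0, alpha + step) if feedback == "more" else max(0.0, alpha - step)

blended = blend_commands(user_cmd=[0.4, 0.0], autonomy_cmd=[0.3, 0.2], alpha=alpha)
print(alpha, blended)                  # 0.4 [0.36 0.08]
```

Here `alpha` and the feedback-driven update rule are illustrative choices; in practice the arbitration can depend on the task, the interface, and how confident the autonomy is in its own prediction.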

There are a few salient characteristics of adaptation under assistive autonomy:

1) Feedback signal for learning

The questions to be answered here include: should the feedback signal be formulated as a reward, a supervised label, a correction, or a demonstration? And is the feedback given explicitly (as in the preliminary study), or is it implicit, something to be extracted from the behaviour of the person using the machine?

To begin to answer some of these questions, Brenna and her team are carrying out a large-scale characterisation study with a robotic wheelchair to determine how people operate their machines. They have measured how people adapt to the chair controls over time. You can read more about their research here.

2) Masked and filtered information

The information obtained from the human can be masked or filtered, either because of their impairment or because of the (often limited) interface they are required to use. There is also the question of whether the autonomy should trust the control signals coming from the human. You can find out more about this topic in this article.
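One standard way of coping with masked, low-dimensional input – a sketch of a generic intent-inference approach, not necessarily the method described in the linked article – is to maintain a belief over what the user is trying to do and to update it from their noisy commands:

```python
import numpy as np

def update_goal_belief(belief, user_cmd, robot_pos, goals, temperature=1.0):
    """One Bayesian update of a belief over candidate goals.

    The likelihood of each goal grows with the alignment between the user's
    (noisy, low-dimensional) command and the direction towards that goal;
    the posterior is the normalised product of prior belief and likelihood.
    """
    user_cmd = np.asarray(user_cmd, dtype=float)
    robot_pos = np.asarray(robot_pos, dtype=float)
    likelihoods = []
    for goal in goals:
        direction = np.asarray(goal, dtype=float) - robot_pos
        norm = np.linalg.norm(direction) * np.linalg.norm(user_cmd)
        alignment = float(direction @ user_cmd) / norm if norm > 1e-9 else 0.0
        likelihoods.append(np.exp(alignment / temperature))
    posterior = np.asarray(belief, dtype=float) * np.array(likelihoods)
    return posterior / posterior.sum()

# Two candidate goals; the user nudges the joystick roughly towards goal 0.
goals = [(1.0, 0.0), (0.0, 1.0)]
belief = np.array([0.5, 0.5])
belief = update_goal_belief(belief, user_cmd=(0.9, 0.1), robot_pos=(0.0, 0.0), goals=goals)
print(belief)   # belief shifts towards goal 0
```

The autonomy can then assist towards the most likely goal, and the sharpness of the belief gives a natural measure of how much to trust its own prediction relative to the raw control signal.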

3) Temporal considerations

The first consideration is the physiology of the human, whose condition may change over time. There is also the factor of familiarisation: people generally get better at using a particular piece of technology with practice. Importantly, the rate of adaptation also needs to be considered – should these adaptations be short-term or long-term?

4) Human-autonomy co-adaptation

If the autonomy is adapting to the human while the human is adapting to the autonomy, each side is chasing a moving target. How to handle this co-adaptation is an open research question.

A piece of research in progress is being carried out in collaboration with Sandro Mussa-Ivaldi (Northwestern University), whose group has developed a wearable device that allows systems to be controlled through shoulder movements. Brenna plans to use this to control a robotic arm, with machine learning gauging the level of autonomy needed. The goal is to start with a high level of robot autonomy; as the human becomes more proficient in controlling the arm, the robot steps back and allows the human to take over, with the end point being the human completely controlling the arm. In this case, robot machine learning is being used to elicit human motor learning.
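A minimal sketch of this kind of “stepping back” – assuming a hypothetical scalar assistance level and a per-trial error measure, rather than any specific argallab implementation – could schedule the autonomy down as the user’s performance improves:

```python
def update_assistance(assistance, task_error,
                      error_threshold=0.2, decay=0.1, boost=0.05):
    """Step the robot's assistance level down as the user becomes proficient.

    `task_error` is a scalar measure of how far the user-driven motion is
    from a successful execution (lower is better). When performance is good
    the assistance decays towards 0.0 (full human control); when it is poor
    the robot steps back in, up to 1.0 (full autonomy).
    """
    if task_error < error_threshold:
        return max(0.0, assistance - decay)
    return min(1.0, assistance + boost)

# Start with a high level of robot autonomy and hand over control as the
# (hypothetical) per-trial error shrinks with practice.
assistance = 0.9
for error in [0.5, 0.3, 0.18, 0.15, 0.10, 0.08, 0.05]:
    assistance = update_assistance(assistance, error)
    print(f"error={error:.2f} -> assistance={assistance:.2f}")
```

The threshold and step sizes here are arbitrary; the point is simply that the arbitration can be driven by a measure of human proficiency rather than by explicit requests for more or less help.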

If you are interested in finding out more about the work outlined in this talk, this seminar given by Brenna in 2016 details some of the work from her lab: “Human Autonomy through Robotics Autonomy”.

About Brenna Argall

Brenna Argall is an associate professor of Mechanical Engineering, Computer Science, and Physical Medicine & Rehabilitation at Northwestern University. She is director of the assistive & rehabilitation robotics laboratory (argallab) at the Shirley Ryan AbilityLab (formerly the Rehabilitation Institute of Chicago). The mission of the argallab is to advance human ability by leveraging robotics autonomy. Argall is a 2016 recipient of the NSF CAREER award, and was named one of the 40 under 40 by Crain’s Chicago Business. She received her PhD in Robotics (2009) from the Robotics Institute at Carnegie Mellon University. Prior to joining Northwestern, she was a postdoctoral fellow at the École Polytechnique Fédérale de Lausanne (EPFL). More recently, she was a visiting fellow at the Wyss Center for Bio and Neuroengineering in Geneva, Switzerland (2019).





Lucy Smith is Senior Managing Editor for AIhub.



