ECAI plenary talk: Carme Torras on assistive AI

30 September 2020

This month saw the European Conference on AI (ECAI 2020) go digital. Included in the programme were five plenary talks. In this article we summarise the talk by Professor Carme Torras, who gave an overview of her group’s work on assistive AI and discussed the ethics of the field.

Carme is based at the Institut de Robòtica i Informàtica Industrial (CSIC-UPC) in Barcelona. Her lab includes an assisted living facility where the team can test their robots in real-life situations. One of the team’s specialities is cloth manipulation, a very important capability for assistive robotics. In industry, most research is carried out on rigid objects; deformable objects, such as clothes and fabric, have received relatively little attention. Carme therefore believes it is very important to work with materials that are common in assistive environments, such as clothes and sheets.

Carme gave an overview of one large project that she leads: CLOTHILDE (CLOTH manIpulation Learning from DEmonstrations). The aim of the research is to combine techniques from topology and machine learning to develop a general theory of cloth manipulation. The three applications for this work are: 1) Housekeeping and hospital logistics, 2) Automation in the clothing industry, 3) Increasing autonomy of the elderly and disabled.

Carme’s personal favourite application is the third: increasing autonomy of the elderly and disabled, and this is what she focussed on during the presentation. In this space her team are aiming for robot-human collaboration as opposed to complete automation. There are a number of research goals in assistive AI technologies:

  1. They must be easy to use by non-experts
  2. They need to be intrinsically safe
  3. They must be tolerant to noisy perceptions and inaccurate actions
  4. They must be able to assess uncertainty with respect to a human’s goals and intentions
  5. They should be explainable
  6. They should be able to collaborate with people

The CLOTHILDE project team are working towards all of these goals and Carme showed some video footage of the impressive progress so far. This included a robot learning to fold a polo shirt through demonstration. After an initial human demonstration, reinforcement learning was used to improve the technique. A camera above the folded shirt could report to the robot how accurate the folding was, based on the shape and the wrinkles in the garment after folding. You can find out more about this research here and here.
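The pattern described above can be sketched in a few lines. This is a hedged illustration only, not the CLOTHILDE code: the fold is reduced to a single hypothetical parameter (a grasp offset), the "camera" reward is a stand-in for the real shape-and-wrinkle assessment, and the refinement is simple hill climbing rather than the group's actual learning method.

```python
import random

def camera_reward(offset, target=2.0):
    """Stand-in for the camera's assessment: higher reward the closer
    the executed fold is to an ideal offset (here, 2.0 cm)."""
    return -abs(offset - target)

def refine_from_demo(demo_offset, episodes=200, step=0.5, seed=0):
    """Start from the human demonstration, then iteratively refine:
    perturb the current policy, keep the change only if the reward improves."""
    rng = random.Random(seed)
    best = demo_offset
    best_r = camera_reward(best)
    for _ in range(episodes):
        candidate = best + rng.uniform(-step, step)  # try a nearby fold
        r = camera_reward(candidate)
        if r > best_r:                               # keep improvements only
            best, best_r = candidate, r
    return best

# A roughly-right demonstration (3.5 cm offset) is refined towards the ideal fold.
improved = refine_from_demo(3.5)
```

The demonstration supplies a good starting point, so the learner only has to search locally, which is the practical appeal of learning from demonstration over learning from scratch.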

Another demonstration showed a robot putting a shoe on a person’s foot. The robot adapts to user preferences through interaction. In addition, throughout the process the robot “speaks” to the person describing the action it is taking. This communication will be essential for user trust when the machine is used in the community. You can find out more from this research article.

Another big European project that Carme talked about was Socrates – a project to develop social robots in the field of care for the elderly. She showed one particular robot in action. This machine was designed to assist people with Alzheimer’s to complete a brain-training game, suggesting possible moves to the patient, and providing encouragement. The robot is also able to record the progress of the patient over long periods of time. You can read more about this research here.

The second part of Carme’s talk focussed on the ethical implications of work on assistive AI. The research community in this field are very keen to do good with their work and to pursue ethical AI routes. There are a number of ethical issues, such as impact on the job market, legal liability, privacy and the digital gap, that are shared with other fields. Due to the nature of assistive robots, there are additional areas of concern, including communication, decision making, feelings and relationships, and human enhancement.

One way to instil ethical principles is through regulation and standards. Another critically important way is through education and dissemination. The public need to be aware of the benefits and potential risks of AI technologies. There are a number of educational initiatives regarding ethics underway, and a few of these were explained in the presentation. Carme noted that the ACM/IEEE Computer Science Curriculum consists of 18 knowledge areas, one of which is “social issues and professional practice”, which includes courses on ethics in technology, professional ethics, and society and technology. She highlighted a quote from Barbara Grosz: “By making ethical reasoning a central element in the curriculum, students can learn to think not only about what technology they could create, but also whether they should create that technology”.

Much of the ethical education is carried out using philosophical textbooks and papers. However, Carme noted that these are often too dry and abstract for technology students. To engage students better, classic science fiction is increasingly being used in education. The use of fiction has a number of benefits, including allowing students to discuss and reason about difficult issues without the discussion becoming personal.

Carme concluded her presentation by saying that we have an amazing future ahead of us when it comes to assistive AI. Robots are here to stay, so it is up to us to determine the roles of both humans and robots in this future.

About Carme Torras

Carme Torras is Research Professor at the Spanish Scientific Research Council (CSIC). She received MSc degrees in Mathematics and Computer Science from the Universitat de Barcelona and the University of Massachusetts, respectively, and a PhD degree in Computer Science from the Technical University of Catalonia (UPC). She is an IEEE Fellow and a EurAI Fellow, a member of Academia Europaea and of the Reial Acadèmia de Ciències i Arts de Barcelona, and was Editor of the IEEE Transactions on Robotics.

Lucy Smith, Managing Editor for AIhub.
