
ECAI plenary talk: Carme Torras on assistive AI

by Lucy Smith
30 September 2020





This month saw the European Conference on AI (ECAI 2020) go digital. Included in the programme were five plenary talks. In this article we summarise the talk by Professor Carme Torras, who gave an overview of her group’s work on assistive AI and discussed the ethics of the field.

Carme is based at the Institut de Robòtica i Informàtica Industrial (CSIC-UPC) in Barcelona. Her lab includes an assisted living facility where the team can test their robots in real-life situations. One of the specialities of the team is cloth manipulation. This is a very important process when it comes to assistive robotics. In industry, most research is carried out on rigid objects, with more flexible objects, such as clothes and fabric, receiving relatively little attention. Therefore, Carme believes it’s very important to work with materials that are common in assistive environments, such as clothes and sheets.

Carme gave an overview of one large project that she leads: CLOTHILDE (CLOTH manIpulation Learning from DEmonstrations). The aim of the research is to combine techniques from topology and machine learning to develop a general theory of cloth manipulation. The three applications for this work are: 1) Housekeeping and hospital logistics, 2) Automation in the clothing industry, 3) Increasing autonomy of the elderly and disabled.

Carme’s personal favourite application is the third: increasing autonomy of the elderly and disabled, and this is what she focussed on during the presentation. In this space her team are aiming for robot-human collaboration as opposed to complete automation. There are a number of research goals in assistive AI technologies:

  1. They must be easy to use by non-experts
  2. They need to be intrinsically safe
  3. They must be tolerant to noisy perceptions and inaccurate actions
  4. They must be able to assess uncertainty with respect to a human’s goals and intentions
  5. They should be explainable
  6. They should be able to collaborate with people

The CLOTHILDE project team are working towards all of these goals and Carme showed some video footage of the impressive progress so far. This included a robot learning to fold a polo shirt from demonstration. After an initial human demonstration, reinforcement learning was used to refine the technique: a camera above the folded shirt assessed how accurate the fold was by looking at the shape and the wrinkles in the garment. You can find out more about this research here and here.
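To make the learning-from-demonstration and reinforcement learning loop described above a little more concrete, here is a minimal sketch of the general idea. It is not the CLOTHILDE implementation; all function and parameter names are hypothetical, and the simple trial-and-error search stands in for whatever reinforcement learning method the team actually used. The policy is initialised from a human demonstration and refined using a reward computed from a camera image of the folded garment.

```python
import numpy as np

def fold_quality(image: np.ndarray, template: np.ndarray) -> float:
    """Score a fold: penalise deviation from the target silhouette and
    local intensity variation (a crude stand-in for wrinkle detection)."""
    shape_error = np.mean(np.abs(image - template))
    wrinkle_level = np.mean(np.abs(np.diff(image, axis=0)))
    return -(shape_error + 0.5 * wrinkle_level)

def refine_policy(demo_params, execute_fold, observe_camera, template,
                  iterations=50, noise=0.05, seed=0):
    """Start from parameters extracted from a human demonstration and improve
    them with a simple trial-and-error (random policy search) loop."""
    rng = np.random.default_rng(seed)
    best = np.asarray(demo_params, dtype=float)
    best_reward = fold_quality(observe_camera(execute_fold(best)), template)
    for _ in range(iterations):
        candidate = best + rng.normal(0.0, noise, size=best.shape)
        reward = fold_quality(observe_camera(execute_fold(candidate)), template)
        if reward > best_reward:   # keep the candidate only if the overhead
            best, best_reward = candidate, reward  # camera says the fold improved
    return best
```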

Another demonstration showed a robot putting a shoe on a person’s foot. The robot adapts to user preferences through interaction. In addition, throughout the process the robot “speaks” to the person describing the action it is taking. This communication will be essential for user trust when the machine is used in the community. You can find out more from this research article.
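As a rough illustration of the two ideas in this demonstration, adapting to the user's preferences and announcing each action, here is a minimal sketch. It is not the actual system; the names, the feedback scheme and the single "firmness" parameter are all assumptions made for the example.

```python
def announce(action: str) -> None:
    # A real robot would use text-to-speech; here we simply print the message.
    print(f"Robot: I am going to {action}.")

def adapt(preference: float, feedback: float, rate: float = 0.3) -> float:
    """Nudge a preference parameter (e.g. how firmly the shoe is pushed on)
    towards the user's feedback: -1 = 'too firm', +1 = 'too loose', 0 = 'fine'."""
    return preference + rate * feedback

# Example interaction: the user says "too firm" twice, then is satisfied.
firmness = 0.5
for feedback in [-1.0, -1.0, 0.0]:
    announce(f"slide the shoe onto your foot (firmness {firmness:.2f})")
    firmness = adapt(firmness, feedback)
```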

Another big European project that Carme talked about was Socrates – a project to develop social robots in the field of care for the elderly. She showed one particular robot in action. This machine was designed to assist people with Alzheimer’s to complete a brain-training game, suggesting possible moves to the patient, and providing encouragement. The robot is also able to record the progress of the patient over long periods of time. You can read more about this research here.

The second part of Carme’s talk focussed on the ethical implications of work on assistive AI. The research community in this field are very keen to do good with their work and to pursue ethical AI routes. A number of ethical issues, such as the impact on the job market, legal liability, privacy and the digital gap, are shared with other fields. Due to the nature of assistive robots there are additional areas of concern, including communication, decision making, feelings and relationships, and human enhancement.

One way to instil ethical principles is through regulation and standards (for example). Another critically important way is through education and dissemination. The public need to be aware of the benefits and potential risks of AI technologies. There are a number of educational initiatives regarding ethics underway, and a few of these were explained in the presentation. Carme noted that the ACM/IEEE Computer Science Curriculum consists of 18 knowledge areas, one of which is “social issues and professional practice”, which includes courses on ethics in technology, professional ethics, and society and technology. She highlighted a quote from Barbara Grosz: “By making ethical reasoning a central element in the curriculum, students can learn to think not only about what technology they could create, but also whether they should create that technology”.

Much of this ethics education is carried out using philosophical textbooks and papers. However, Carme noted that these are often too dry and abstract for technology students. To better engage students, classic science fiction is increasingly being used in teaching. The use of fiction has a number of benefits, including allowing students to discuss and reason about difficult issues without making the discussion personal.

Carme concluded her presentation by saying that we have an amazing future ahead of us when it comes to assistive AI. Robots are here to stay, so it is up to us to determine the roles of both humans and robots in this future.

About Carme Torras

Carme Torras is a Research Professor at the Spanish Scientific Research Council (CSIC). She received MSc degrees in Mathematics and Computer Science from the Universitat de Barcelona and the University of Massachusetts, respectively, and a PhD degree in Computer Science from the Technical University of Catalonia (UPC). She is an IEEE Fellow, a EurAI Fellow, a member of Academia Europaea and of the Reial Acadèmia de Ciències i Arts de Barcelona, and was an Editor of the IEEE Transactions on Robotics.




Lucy Smith is Senior Managing Editor for AIhub.