AIhub.org
 

ECAI plenary talk: Carme Torras on assistive AI


by
30 September 2020





This month saw the European Conference on AI (ECAI 2020) go digital. Included in the programme were five plenary talks. In this article we summarise the talk by Professor Carme Torras who gave an overview of her group’s work on assistive AI, and talked about the ethics of this field.

Carme is based at the Institut de Robòtica i Informàtica Industrial (CSIC-UPC) in Barcelona. Her lab includes an assisted living facility where the team can test their robots in real-life situations. One of the team's specialities is cloth manipulation, a key capability for assistive robotics. Most robotics research, particularly in industry, is carried out on rigid objects, with deformable objects, such as clothes and fabric, receiving relatively little attention. Carme therefore believes it is very important to work with the materials common in assistive environments, such as clothes and sheets.

Carme gave an overview of one large project that she leads: CLOTHILDE (CLOTH manIpulation Learning from DEmonstrations). The aim of the research is to combine techniques from topology and machine learning to develop a general theory of cloth manipulation. The three applications for this work are: 1) Housekeeping and hospital logistics, 2) Automation in the clothing industry, 3) Increasing autonomy of the elderly and disabled.

Carme’s personal favourite application is the third: increasing autonomy of the elderly and disabled, and this is what she focussed on during the presentation. In this space her team are aiming for robot-human collaboration as opposed to complete automation. There are a number of research goals in assistive AI technologies:

  1. They must be easy to use by non-experts
  2. They need to be intrinsically safe
  3. They must be tolerant to noisy perceptions and inaccurate actions
  4. They must be able to assess uncertainty with respect to a human’s goals and intentions
  5. They should be explainable
  6. They should be able to collaborate with people

The CLOTHILDE project team are working towards all of these goals and Carme showed some video footage of the impressive progress so far. This included a robot learning, through demonstration, to fold a polo shirt. After an initial human demonstration, reinforcement learning was used to improve the technique. A camera positioned above the folded shirt assessed the accuracy of the fold, based on the shape and wrinkles of the garment, and reported this back to the robot. You can find out more about this research here and here.
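To give a flavour of the demonstration-then-refinement idea, here is a toy sketch in Python. This is not the CLOTHILDE code: the fold is reduced to a single hypothetical parameter (a fold-line offset), the overhead camera is modelled by a simple reward that penalises shape error and wrinkles, and random hill climbing stands in for the full reinforcement learning algorithm. All names and numbers are illustrative assumptions.

```python
import random

random.seed(0)

# Hypothetical toy model (not the actual CLOTHILDE implementation): the fold is
# reduced to a single parameter (a fold-line offset in cm), with 0.0 the ideal.
IDEAL_OFFSET = 0.0

def camera_reward(offset):
    """Stand-in for the overhead camera: penalise shape error and wrinkles."""
    shape_error = abs(offset - IDEAL_OFFSET)
    wrinkle_penalty = 0.1 * offset**2
    return -(shape_error + wrinkle_penalty)

def refine_from_demonstration(demo_offset, episodes=200, step=0.5):
    """Refine a demonstrated fold parameter by simple random hill climbing."""
    best = demo_offset
    best_reward = camera_reward(best)
    for _ in range(episodes):
        candidate = best + random.uniform(-step, step)  # try a nearby fold
        r = camera_reward(candidate)
        if r > best_reward:                             # keep only improvements
            best, best_reward = candidate, r
    return best

# Start from an imperfect human demonstration; camera feedback refines it.
refined = refine_from_demonstration(demo_offset=3.0)
```

The point of the sketch is the structure, not the numbers: the human demonstration supplies a good starting point, and the camera-derived reward then drives incremental improvement, which is far more sample-efficient than learning the fold from scratch.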

Another demonstration showed a robot putting a shoe on a person’s foot. The robot adapts to user preferences through interaction. In addition, throughout the process the robot “speaks” to the person describing the action it is taking. This communication will be essential for user trust when the machine is used in the community. You can find out more from this research article.

Another big European project that Carme talked about was Socrates – a project to develop social robots in the field of care for the elderly. She showed one particular robot in action. This machine was designed to assist people with Alzheimer’s to complete a brain-training game, suggesting possible moves to the patient, and providing encouragement. The robot is also able to record the progress of the patient over long periods of time. You can read more about this research here.

The second part of Carme’s talk focussed on the ethical implications of work on assistive AI. The research community in this field are very keen to do good with their work and to pursue ethical AI routes. There are a number of ethical issues, such as the impact on the job market, legal liability, privacy and the digital gap, that are shared with other fields. Due to the nature of assistive robots there are additional areas of concern, including communication, decision making, feelings and relationships, and human enhancement.

One way to instil ethical principles is through regulation and standards (for example). Another critically important way is through education and dissemination. The public need to be aware of the benefits and potential risks of AI technologies. There are a number of educational initiatives regarding ethics underway, and a few of these were explained in the presentation. Carme noted that the ACM/IEEE Computer Science Curriculum consists of 18 knowledge areas, one of which is “social issues and professional practice”, which includes courses on ethics in technology, professional ethics, and society and technology. She highlighted a quote from Barbara Grosz: “By making ethical reasoning a central element in the curriculum, students can learn to think not only about what technology they could create, but also whether they should create that technology”.

Much of this ethical education is carried out using philosophical textbooks and papers. However, Carme noted that these are often too dry and abstract for technology students. To better engage them, classic science fiction is increasingly being used in teaching. The use of fiction has a number of benefits, including allowing students to discuss and reason about difficult issues without the discussion becoming personal.

Carme concluded her presentation by saying that we have an amazing future ahead of us when it comes to assistive AI. Robots are here to stay, so it is up to us to determine the roles of both humans and robots in that future.

About Carme Torras

Carme Torras is a Research Professor at the Spanish Scientific Research Council (CSIC). She received MSc degrees in Mathematics and Computer Science from the Universitat de Barcelona and the University of Massachusetts, respectively, and a PhD degree in Computer Science from the Technical University of Catalonia (UPC). She is an IEEE Fellow, a EurAI Fellow, a member of Academia Europaea and of the Reial Acadèmia de Ciències i Arts de Barcelona, and a former Editor of the IEEE Transactions on Robotics.




Lucy Smith is Senior Managing Editor for AIhub.

            AIhub is supported by:



Subscribe to AIhub newsletter on substack
















 














