
ECAI plenary talk: Carme Torras on assistive AI


30 September 2020





This month saw the European Conference on AI (ECAI 2020) go digital. Included in the programme were five plenary talks. In this article we summarise the talk by Professor Carme Torras, who gave an overview of her group’s work on assistive AI and discussed the ethics of the field.

Carme is based at the Institut de Robòtica i Informàtica Industrial (CSIC-UPC) in Barcelona. Her lab includes an assisted living facility where the team can test their robots in real-life situations. One of the specialities of the team is cloth manipulation, a very important capability for assistive robotics. Most manipulation research, particularly in industry, is carried out on rigid objects, with deformable objects such as clothes and fabric receiving relatively little attention. Carme therefore believes it is very important to work with materials that are common in assistive environments, such as clothes and sheets.

Carme gave an overview of one large project that she leads: CLOTHILDE (CLOTH manIpulation Learning from DEmonstrations). The aim of the research is to combine techniques from topology and machine learning to develop a general theory of cloth manipulation. The three applications for this work are: 1) Housekeeping and hospital logistics, 2) Automation in the clothing industry, 3) Increasing autonomy of the elderly and disabled.

Carme’s personal favourite application is the third: increasing autonomy of the elderly and disabled, and this is what she focussed on during the presentation. In this space her team are aiming for robot-human collaboration as opposed to complete automation. She identified a number of research goals for assistive AI technologies:

  1. They must be easy to use by non-experts
  2. They need to be intrinsically safe
  3. They must be tolerant to noisy perceptions and inaccurate actions
  4. They must be able to assess uncertainty with respect to a human’s goals and intentions
  5. They should be explainable
  6. They should be able to collaborate with people

The CLOTHILDE project team are working towards all of these goals, and Carme showed video footage of the impressive progress so far. This included a robot learning to fold a polo shirt from demonstration. After an initial human demonstration, reinforcement learning was used to refine the technique. A camera positioned above the garment assessed how accurate each fold was by analysing the shape and wrinkles of the folded shirt, and reported this back to the robot. You can find out more about this research here and here.

Another demonstration showed a robot putting a shoe on a person’s foot. The robot adapts to user preferences through interaction. In addition, throughout the process the robot “speaks” to the person describing the action it is taking. This communication will be essential for user trust when the machine is used in the community. You can find out more from this research article.

Another big European project that Carme talked about was Socrates – a project to develop social robots in the field of care for the elderly. She showed one particular robot in action. This machine was designed to assist people with Alzheimer’s to complete a brain-training game, suggesting possible moves to the patient, and providing encouragement. The robot is also able to record the progress of the patient over long periods of time. You can read more about this research here.

The second part of Carme’s talk focussed on the ethical implications of work on assistive AI. The research community in this field is very keen to do good with its work and to pursue ethical AI routes. A number of ethical issues, such as the impact on the job market, legal liability, privacy and the digital divide, are shared with other fields. Due to the nature of assistive robots, there are additional areas of concern, including communication, decision making, feelings and relationships, and human enhancement.

One way to instil ethical principles is through regulation and standards. Another, critically important, way is through education and dissemination: the public need to be aware of the benefits and potential risks of AI technologies. A number of educational initiatives regarding ethics are underway, and Carme described a few of these in her presentation. She noted that the ACM/IEEE Computer Science Curriculum consists of 18 knowledge areas, one of which is “social issues and professional practice”; this includes courses on ethics in technology, professional ethics, and society and technology. She highlighted a quote from Barbara Grosz: “By making ethical reasoning a central element in the curriculum, students can learn to think not only about what technology they could create, but also whether they should create that technology”.

Much of this ethics education is carried out using philosophical textbooks and papers. However, Carme noted that these are often too dry and abstract for technology students. To engage students better, classic science fiction is increasingly being used in teaching. Fiction has a number of benefits, including allowing students to discuss and reason about difficult issues without making the discussion personal.

Carme concluded her presentation by saying that we have an amazing future ahead of us when it comes to assistive AI. Robots are here to stay, so it is up to us to determine the roles of both humans and robots in that future.

About Carme Torras

Carme Torras is a Research Professor at the Spanish Scientific Research Council (CSIC). She received MSc degrees in Mathematics and Computer Science from the Universitat de Barcelona and the University of Massachusetts, respectively, and a PhD degree in Computer Science from the Technical University of Catalonia (UPC). She is an IEEE Fellow and a EurAI Fellow, a member of Academia Europaea and of the Reial Acadèmia de Ciències i Arts de Barcelona, and a former Editor of the IEEE Transactions on Robotics.




Lucy Smith is Senior Managing Editor for AIhub.




            AIhub is supported by:












©2024 - Association for the Understanding of Artificial Intelligence