Interview with Guillem Alenyà – discussing assistive robotics, human-robot interaction, and more


14 January 2021




Guillem Alenyà
Guillem Alenyà is Director of the Institut de Robòtica i Informàtica Industrial, CSIC-UPC, in Barcelona. His research activities include assistive robotics, robot adaptation, human-robot interaction and the grasping of deformable objects. We spoke about some of the projects he is involved in and his plans for future work.

Your group is involved in a large number of projects. Maybe we could start with the SOCRATES project. Could you tell us a bit about that?

The SOCRATES project is about the quality of interaction between robots and users, and our role is focussed on adapting the robot's behaviour to user needs. We have concentrated on a very nice use case: cognitive training for patients with mild dementia. We are working with a day-care facility in Barcelona, and we asked whether we could provide some technological help for the caregivers. The patients need to do exercises, basically cognitive training, to decrease the rate of cognitive decline. These mind exercises help with the rest of their treatments. The problem is that they have a single caregiver for twenty patients, which means the attention each patient can receive is limited.


Our answer was to use an assistive robot which monitors the cognitive training game. In the game, the patient has a selection of numbered tokens which they must order in some way, for example: put them in ascending order, or select only the odd numbers. The robot provides help to the patient. The scientific novelty of our work is to try to monitor the current difficulty of the game and the current ability of the person, and then provide adequate help to that person. Essentially, we are aiming to personalise the interactions.

It is a very complex problem, because you first start with a lot of tokens, but once you have placed some there are fewer to choose from for the next move, so the game becomes easier as it progresses. People also get used to the task and learn: during the first interactions they find it tricky, but after, for example, ten sessions it becomes easier. The caregivers would also like to have an additional degree of freedom. Sometimes people get bored because it's too easy, or disheartened because it's too difficult. This mood is very difficult to sense using a robot. Importantly, though, we don't have to: our idea is not to replace the caregiver with a robot, but to give tools to the caregiver. The caregiver has the possibility to programme the robot and to personalise it for each patient.

PhD student Antonio Andriella won the 2019 Marie Curie Alumni Association (MCAA) and Vikki Academy contest for this work. The prize was the creation of this animated video.

So, how does the robot interact with the user?

There are various levels. The first just keeps the patient focussed or provides encouragement, for example by saying “it’s your turn” or “well done”. The next level can also give hints: “look at the right side of the board, not the left side”, or “your solution is among these three numbers” – and accompanying the hints with pointing.

The complexity is that we don’t want the patients to always complete things perfectly; we want them to make mistakes. The game has to be hard enough so they make mistakes, as it has to be a cognitive exercise, but not too difficult. Keeping this balance is the role of the robot; we do this by providing the level of help best suited to the user, and by changing the robot behaviour during the game.
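As a concrete illustration of this balancing act, here is a minimal sketch of level-based adaptive assistance. It assumes a simple success-rate ability estimate and a hand-tuned difficulty model; all names and numbers here are illustrative assumptions, not the actual SOCRATES implementation.

```python
from dataclasses import dataclass

# Ordered from least to most intrusive help.
LEVELS = ["no_help", "encouragement", "verbal_hint", "hint_and_point"]

@dataclass
class UserModel:
    """Running estimate of the user's ability from observed moves."""
    successes: int = 0
    attempts: int = 0

    def update(self, correct: bool) -> None:
        self.attempts += 1
        self.successes += int(correct)

    def ability(self) -> float:
        # Laplace-smoothed success rate in [0, 1].
        return (self.successes + 1) / (self.attempts + 2)

def board_difficulty(tokens_left: int, total_tokens: int) -> float:
    # The game gets easier as tokens are placed: fewer remaining choices.
    return tokens_left / total_tokens

def choose_assistance(user: UserModel, tokens_left: int,
                      total_tokens: int, target: float = 0.6) -> str:
    """Pick the lowest help level whose predicted success rate reaches the
    target. The target is deliberately below 1.0 so the user still makes
    mistakes and the game remains a cognitive exercise."""
    for boost, level in enumerate(LEVELS):
        predicted = (user.ability()
                     - 0.5 * board_difficulty(tokens_left, total_tokens)
                     + 0.15 * boost)
        if predicted >= target:
            return level
    return LEVELS[-1]

# Example: a struggling user early in the game gets the strongest help.
user = UserModel(successes=2, attempts=6)
print(choose_assistance(user, tokens_left=9, total_tokens=10))
```

The same structure also gives the caregiver their degree of freedom: the target success rate and the level boosts are exactly the kind of knobs that could be exposed for per-patient personalisation.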

The assistance robot in action.

What are the future plans for the project? Are you at the stage where you can test it with real patients?

We are now performing experiments with real patients, although it took a while to get to that stage because of the ethical issues we needed to consider and because of the COVID pandemic. Previously, we had run trials with regular people and performed a lot of evaluations of the system. One thing we realised was that humans rely heavily on non-verbal cues, and we were not able to reproduce that with the robot. Our non-verbal cues were motions of the arm, but they took a lot of time; people expect quicker non-verbal interactions. Observing patients with caregivers, we realised that facial expressions were really important, so we added a screen with eyes that can move to represent different expressions. These facial expressions are accompanied by small sounds to reinforce the expression.

I guess it’s difficult to put a timeframe on when this is something that might be available for caregivers.

Well, we are creating the basic science, but as for having a real product we would need a company to mass-produce it. We need to sit at the same table with all the various actors: us, as the creators of the technology, industry, as the creators of the products, and also the public sector, who are basically the consumers – they would be providing the tool to their personnel. We need agreement between these three sectors.

We also need to consider the ethical aspect. It is true that, as the robot will be interacting with people, many ethical considerations should come into play. We have to take into account the fact that we are shaping the way people see robots and assistive technologies.

Therefore, it is complicated to get to a real product, both because of the ethical protocols and because we need industrial and public-sector engagement and partnerships. We would like to set up a reference lab for assistive technologies here in Barcelona where all these actors from across the various sectors (research, industry, the public sector, care workers, policy makers) can come and find facilities to evaluate these kinds of technologies. We believe that this lab will provide a route to getting these technologies out of the lab and into the market more quickly.

Could you tell us about some of the other projects that are taking place in your lab?

Another area we focus on is assistive robots where there is physical contact, for example, helping with dressing or feeding.

Working with Carme Torras, we realised that there was a category of objects that is not really tackled in research: textiles, or deformable objects in general. This is a hard problem, because perceiving the state of the object is very difficult. If we consider a mug on a table, and we want to grasp it, it is reasonably easy to determine the mug's shape and the hand position needed to grasp it. However, imagine that you have a t-shirt that has been dropped on the table. First, recognising that this is a t-shirt is hard: a t-shirt can take multiple shapes. So, perception is harder. Then, you have to grasp it. What we realised at the very beginning was that the way you try to grasp is very important: as soon as you touch the t-shirt its shape changes, so everything you know about the state of the garment changes.

One of the projects we have in the group is called CLOTHILDE. This is focused on grasping clothing: we are trying to develop a new theory, based on topology, that defines the states of a garment in a way that robotic algorithms can handle. We are planning to study the whole pipeline: not only recognising the states of the garment, but also manipulating the garment so that it can be unfolded, folded or laid out on a table, for example.

One of the objectives of the project is to help with dressing. In order to help with dressing you have to grasp the clothing from a very particular point. This has a lot of applications. For example, in disaster situations it could be used to help dress doctors in very sanitised environments.

For us it is always very important to have people in the loop. We always try to personalise the robots depending on the needs of the person. Take the case of feeding: the robot should adapt, because different people have different preferences; they may prefer that food approaches from a certain side. The robot should also follow certain social rules. For example, if someone is talking then the robot should not put food in front of their mouth. These are common-sense rules that are very difficult to embed inside an assistive robot.
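To make this concrete, below is a minimal sketch of how such a social rule might be encoded as an explicit gate on the robot's feeding behaviour. The perception signal (whether the person is speaking) and the preference fields are illustrative assumptions; in practice the difficulty lies in producing such signals reliably, not in the rule itself.

```python
import time
from dataclasses import dataclass

@dataclass
class FeedingPreferences:
    approach_side: str = "right"      # per-user preference for approach direction
    pause_while_speaking: bool = True

def may_offer_food(is_speaking: bool, prefs: FeedingPreferences) -> bool:
    """Social-rule gate: never move food towards the mouth of a person
    who is currently talking."""
    return not (prefs.pause_while_speaking and is_speaking)

def feeding_step(perceive_speaking, prefs: FeedingPreferences) -> str:
    # Hold position and re-check until the social rules allow approaching,
    # then approach from the user's preferred side.
    while not may_offer_food(perceive_speaking(), prefs):
        time.sleep(0.5)
    return f"approach_from_{prefs.approach_side}"
```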

Another project (BURG) is concerned with developing benchmarks for understanding grasping. This video demonstrates cloth manipulation with respect to the particular benchmark of spreading a tablecloth over a table.

Do you use demonstrations to train the robots to learn the required motion?

Yes, we try to use demonstrations, but this is only the initial state; we also do a little bit more. The nice thing about demonstrations is that you get natural motions. But then the motion is fixed, so it is only really useful if you want to replicate the same motion. We apply some reinforcement learning techniques in order to adapt these learnt motions to slight variations which may occur in the environment.
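As a rough illustration of this demonstration-plus-refinement idea, here is a minimal sketch using episodic, reward-weighted policy search to adapt a demonstrated trajectory to a slightly changed goal. The trajectory parameterisation, reward function and update rule are illustrative assumptions, not the group's actual pipeline.

```python
import numpy as np

def adapt_demonstration(demo, reward_fn, n_iters=50, n_samples=20, sigma=0.02):
    """demo: (T, D) array of demonstrated waypoints.
    reward_fn: maps a (T, D) trajectory to a scalar reward.
    Returns the demonstration adapted to the current environment."""
    theta = np.zeros_like(demo)                    # learned offsets from the demo
    for _ in range(n_iters):
        noise = sigma * np.random.randn(n_samples, *demo.shape)
        rewards = np.array([reward_fn(demo + theta + eps) for eps in noise])
        weights = np.exp(rewards - rewards.max())  # softmax weighting by reward
        weights /= weights.sum()
        theta += np.tensordot(weights, noise, axes=1)
    return demo + theta

# Example: keep the natural demonstrated motion but pull its end point
# towards a slightly moved goal, penalising large deviations from the demo.
T = 50
demo = np.linspace([0.0, 0.0, 0.0], [0.4, 0.2, 0.1], T)
goal = np.array([0.45, 0.18, 0.12])
reward = lambda traj: (-np.linalg.norm(traj[-1] - goal)
                       - 0.1 * np.linalg.norm(traj - demo))
adapted = adapt_demonstration(demo, reward)
print(adapted[-1])   # end point has moved towards the new goal
```

The deviation penalty is what preserves the "natural" character of the demonstrated motion while the reward pulls the trajectory towards the new situation.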

Could you talk a bit about your research group?

One of the strengths of our group is that we have people who specialise in different areas of robotics. We have perception specialists, semantic knowledge representation and decision making experts, and we have people who specialise in learning by demonstration. This means that we can complete the whole robotics cycle within our group. Although it can sometimes be a challenge to get a lot of people in different areas to speak the same language, in general it is very positive. When we have a problem, we have many different views and sometimes our solutions are different from solutions reached by other groups due to our rich variety of views.

We now also have people devoted to ethics, as this is clearly a very important matter. We want to put robots in human environments, which means interacting with people. So, at some point we need to think about the ethics of this, and about the human relationships that we are influencing by inserting a robot into a home or a day-care centre, for example. In fact, Carme is supervising a PhD on robot ethics related to this.

That leads on to the question of ethical considerations when designing and programming a robot.

It is very difficult; as people, we take some behaviours for granted because of our common sense. But robots do not have common sense, so you have to consider the things that might happen that are not desirable. Carme is part of several ethics committees, and we are both worried and interested: worried because we don't want to do harm, and interested because we want to do the best that we can in this regard.

We are very happy with our group – we think that from the point of view of the robotics and the ethics we are fine. That is one of the reasons we are promoting this new lab. We need a place for gathering all the different knowledge that is required for assistive robotics and this includes law, ethics, psychology, medicine, industry. If all the actors are not at the table together then opportunities can be missed.

Thinking about the next 5 to 10 years, what do you think will happen in this field?

In terms of research, I think the field will continue producing very nice demonstrations. But if robots are going to reach maturity and become a real product, they should be focused on simple actions. The one unique robot success case is the robot vacuum cleaner, and that is because it is a very particular and closed application.

Our systems are harder because our robots will be interacting with humans. Here there are two visions. Either we produce the perfect robot that does everything; this is very costly and it's not going to happen in the next five years. Or we produce simple robots dedicated to simple tasks, which can be complex in some parts (for example, the interactions), and here we can make use of the nice technology that artificial intelligence has produced, such as conversational agents and voice recognition. We use Amazon Alexa in our lab, for example. Expectations of robots are very high, so we need to come up with simple, dedicated applications. We think that research should go for the hard problems: for example, manipulation and robot adaptation.

I don't envisage that any of this technology will be in our homes in the next five years. But some subsets of it may be, if we can find simpler applications. What we have learned during the COVID crisis is that robots were not there to help. There has been a lot of news about robots being able to do things, but I'm part of a CLAIRE taskforce devoted to evaluating the use of robots during COVID, and our conclusion was that they were not used that much, unfortunately. But this is not a flaw, it's an opportunity. Now we have time to think about why that was the case and how we could solve this situation: which steps we should take so that, the next time we have a crisis like this, robots can effectively help.

About Guillem Alenyà

Guillem Alenyà is a Researcher and the Director of the Institut de Robòtica i Informàtica Industrial (IRI), a joint centre of the Spanish Scientific Research Council (CSIC) and the Polytechnic University of Catalonia (UPC). He has participated in and coordinated numerous scientific projects involving robot adaptation for assistive robotics, decision making and knowledge representation in human-robot interaction (especially in tasks involving contact), and benchmarking of vision and grasping in garment manipulation. He has published more than 100 papers in international journals and conferences. He coordinates several technology transfer projects and is co-founder of the DATISION spin-off.

Further information

SOCRATES project page
CLOTHILDE project page
BURG project page
IMAGINE project page

In November, Carme Torras won a National Research Award from the Spanish government. This is the highest research distinction given in Spain, and it recognises her pioneering contributions in the areas of intelligent robotics and social robotics.

Guillem Alenyà was recently inaugurated as Director of the IRI.


