Five things to know about: making self-driving cars safe


12 October 2020




By Jonathan O’Callaghan

On 18 September, the European Commission published an independent expert report that looks at some of the outstanding safety and ethical issues around connected and automated vehicles (CAVs).

We spoke to three experts involved in the report about what steps they think still need to be taken to make CAVs safe, what challenges still need to be overcome, and how we can prepare for a future in which both computer-driven and human-driven cars are on our roads.

  1. Self-driving cars and humans must be able to monitor each other

CAVs need to be able to understand the limitations of their human driver, and vice versa, says Marieke Martens, a professor in automated vehicles and human interaction from Eindhoven University of Technology in the Netherlands. In other words, the human driver needs to be ready to take control of the car in certain situations, such as dealing with roadworks, while the car also needs to be able to monitor the capacity of the human in the car.

“We (need) systems that can predict and understand what people can do,” she said, adding that under certain conditions these systems could decide when it’s better to take control or alert the driver. For example, if the driver is fatigued or not paying attention, she says, then the car “should notice and take proper actions”, such as telling the driver to pay attention or explaining that action needs to be taken.

Professor Martens added that, rather than just a screen telling the driver that automated features have been activated, better human-machine interfaces (HMIs) will need to be developed to enable communication between the driver and the car, “so that the person really understands what the car can and cannot do, and the car really understands what the person can and cannot do.”
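As a very rough illustration of the kind of escalation logic Professor Martens describes, here is a minimal sketch in Python. The signal names, thresholds and actions are all assumptions made for illustration; they are not taken from the report or from any real driver-monitoring system.

```python
# Hypothetical sketch of a driver-monitoring escalation policy.
# Signal names and thresholds are illustrative, not from any real vehicle stack.

from dataclasses import dataclass

@dataclass
class DriverState:
    eyes_on_road: bool          # e.g. from an in-cabin camera
    hands_on_wheel: bool        # e.g. from steering-wheel sensors
    seconds_inattentive: float  # time since attention was last confirmed

def choose_action(state: DriverState) -> str:
    """Pick an escalating response to driver inattention."""
    if state.eyes_on_road and state.hands_on_wheel:
        return "no_action"
    if state.seconds_inattentive < 3:
        return "visual_prompt"       # dashboard message: "Please pay attention"
    if state.seconds_inattentive < 8:
        return "audible_warning"     # chime plus an explanation of what the car needs
    # Driver remains unresponsive: the vehicle takes control and moves to a safe state.
    return "minimal_risk_manoeuvre"  # e.g. slow down and pull over

print(choose_action(DriverState(False, False, 10.0)))  # -> minimal_risk_manoeuvre
```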

  2. We need to test them in more diverse environments

Much of the self-driving car testing that has happened so far has been in relatively easy environments, says Dr John Danaher from the National University of Ireland, Galway, a lecturer in law who focuses on the implications of new technologies. In order to prove they can be safer than human-driven cars, we will need to show they can handle more taxing situations.

“There are some questions about whether they are genuinely safe,” he said. “You need to do more testing to actually ascertain their true risk potential, and you also need to test them in more diverse environments, which is something that hasn’t really been done (to a sufficient degree).

“They tend to be tested in relatively controlled environments like motorways or highways, which are relatively more predictable and less accident-prone than driving on wet and windy country roads. The jury is still out on whether they are going to be less harmful, but that is certainly the marketing pitch.”

  3. The private data of people in the vehicle must be protected

When we talk about CAVs, we often discuss how they share information with other road users. But, notes Sandra Wachter, Associate Professor in the law and ethics of AI, data and robotics at the University of Oxford, UK, this raises the significant issue of data protection. “That’s not really the fault of anybody, it’s just the technology needs that type of data,” she said, adding that we need to take the privacy risks seriously.

That includes sharing location data and other information that could reveal a lot about a person when one car talks to another. “It could be things like sexual orientation, ethnicity, health status,” said Professor Wachter, with ethnicity, for example, possible to infer from a postcode. “Basically anything about your life can be inferred from those types of data.”

Solutions include making sure CAVs comply with existing legal frameworks in other areas, such as the General Data Protection Regulation (GDPR) in Europe, and deleting data when it is no longer needed. But further safeguards might be needed to deal with privacy concerns caused by CAVs. “Those things are very important,” said Professor Wachter.
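One of the safeguards mentioned above, deleting data when it is no longer needed, can be pictured as a simple retention rule. The sketch below is illustrative only; the 24-hour retention window and the record fields are assumptions, not requirements drawn from the GDPR or the expert report.

```python
# Minimal sketch of a data-retention rule: keep location records only as long
# as they are needed, then drop them. The 24-hour window is an assumption.

from datetime import datetime, timedelta, timezone

RETENTION = timedelta(hours=24)

def purge_expired(records, now=None):
    """Return only the records still within the retention window."""
    now = now or datetime.now(timezone.utc)
    return [r for r in records if now - r["collected_at"] <= RETENTION]

# Example: a stale location fix is dropped, a recent one is kept.
now = datetime.now(timezone.utc)
records = [
    {"vehicle_id": "CAV-1", "location": (52.1, 4.3), "collected_at": now - timedelta(days=2)},
    {"vehicle_id": "CAV-1", "location": (52.2, 4.4), "collected_at": now - timedelta(hours=1)},
]
print(len(purge_expired(records, now)))  # -> 1
```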

  4. CAVs can predict the future, but humans cannot – we need to account for that

One of the major benefits of CAVs is they can interact both with other cars and the roads themselves, such as with traffic lights, to provide a safer driving experience. But, says Professor Martens, in a world where both CAVs and human drivers are on the roads together, some limitations of human drivers will need to be taken into account.

“It’s really important to make sure that (CAVs) naturally blend into the current traffic we have,” she said, noting they will initially need to be designed to behave like human drivers. “A connected car has more information than a manually driven vehicle,” she said. “They can start to accelerate when a traffic light is still red, because they know it will be green very soon.”

Another scenario might be at a pedestrian crossing, where an automated car can stop more quickly than a human-driven one – but pedestrians and cyclists will need to know a car is automated in order to feel safe enough to cross. “I think in the transition period in the next couple of decades, cars need to behave within specific boundaries that are what people are used to,” said Professor Martens.
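To picture the transition-period constraint Professor Martens describes, here is a minimal sketch in which a connected car receives advance notice of a light change over a hypothetical V2I feed but still waits for the actual green phase whenever human road users are nearby. The function and signal names are invented for illustration.

```python
# Illustrative sketch: a connected vehicle knows (via a hypothetical V2I feed)
# when a light will turn green, but during the transition period it only moves
# on an actual green, matching what human road users expect.

def should_start_moving(light_state: str, seconds_to_green: float,
                        human_traffic_nearby: bool = True) -> bool:
    if light_state == "green":
        return True
    if human_traffic_nearby:
        # Behave within the boundaries people are used to: never creep forward
        # on red, even though the V2I feed says green is imminent.
        return False
    # In a hypothetical CAV-only setting, anticipating the change could be
    # allowed; here we still require the light to be about to switch.
    return seconds_to_green <= 0.0

print(should_start_moving("red", 1.5))                               # -> False
print(should_start_moving("red", 0.0, human_traffic_nearby=False))   # -> True
```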

  5. CAVs must enable a wider range of people to safely travel in vehicles

CAVs could be safely used by people who currently aren’t able to travel by car. For example, elderly or disabled people who would struggle to drive a car themselves could enjoy a form of mobility already available to many others, said Professor Wachter.

“We want to make sure that the technology is improving accessibility,” she said. “That means accessibility and mobility, for example, of people who are at the moment not able to enjoy the benefits of driving. Designers (need to) keep that in mind, that it’s actually the service of a wider community and not just for a few people.

“For elderly people, it could potentially be very helpful, and for people who are no longer able to drive themselves, or somebody that is injured at the moment, or children could be another group if those systems are safe. You could (also) think about carpooling opportunities that could have a positive impact on the environment.”

The research in this article was funded by the EU.

This post, “Five things to know about: making self-driving cars safe”, was originally published on Horizon: the EU Research & Innovation magazine | European Commission.




Horizon brings you the latest news and features about thought-provoking science and innovative research projects funded by the EU.



