AIhub.org

Ethics of connected and automated vehicles: a European Commission expert group report


by Lucy Smith
02 October 2020



On 18 September the European Commission published a report on the Ethics of Connected and Automated Vehicles (CAVs). Written by an independent group of experts, the report includes twenty recommendations on road safety, privacy, fairness, AI explainability and responsibility, for the development and deployment of connected and automated vehicles.

The recommendations have been made actionable for three stakeholder groups:
1. Manufacturers and deployers (e.g. car manufacturers, suppliers, software developers and mobility service providers);
2. Policymakers (persons working at national, European and international agencies and institutions, such as the European Commission and the EU national ministries);
3. Researchers (e.g. persons working at universities, research institutes and R&D departments).

The aim of the report is to “promote a safe and responsible transition to connected and automated vehicles (CAVs) by supporting stakeholders in the systematic inclusion of ethical considerations in the development and regulation of CAVs”.

The report recognises the potential of CAV technology to deliver benefits, such as reduced fatalities and emissions, but also recognises that technological progress alone is not sufficient to realise this potential. To deliver the desired results, the future vision for CAVs should incorporate a broader set of ethical, legal and societal considerations into their development, deployment and use.

The 20 ethical recommendations are as follows:

  1. Ensure that CAVs reduce physical harm to persons.
  2. Prevent unsafe use by inherently safe design.
  3. Define clear standards for responsible open road testing.
  4. Consider revision of traffic rules to promote safety of CAVs and investigate exceptions to non-compliance with existing rules by CAVs.
  5. Redress inequalities in vulnerability among road users.
  6. Manage dilemmas by principles of risk distribution and shared ethical principles.
  7. Safeguard informational privacy and informed consent.
  8. Enable user choice, seek informed consent options and develop related best practice industry standards.
  9. Develop measures to foster protection of individuals at group level.
  10. Develop transparency strategies to inform users and pedestrians about data collection and associated rights.
  11. Prevent discriminatory differential service provision.
  12. Audit CAV algorithms.
  13. Identify and protect CAV relevant high-value datasets as public and open infrastructural resources.
  14. Reduce opacity in algorithmic decisions.
  15. Promote data, algorithmic, AI literacy and public participation.
  16. Identify the obligations of different agents involved in CAVs.
  17. Promote a culture of responsibility with respect to the obligations associated with CAVs.
  18. Ensure accountability for the behaviour of CAVs (duty to explain).
  19. Promote a fair system for the attribution of moral and legal culpability for the behaviour of CAVs.
  20. Create fair and effective mechanisms for granting compensation to victims of crashes or other accidents involving CAVs.

All of these points are considered in detail in the report and are accompanied by suggested actions for each of the stakeholder groups.

Read the report in full to find out more:

Ethics of connected and automated vehicles – report
Ethics of connected and automated vehicles – factsheet
Ethics of connected and automated vehicles – infographic




Lucy Smith is Senior Managing Editor for AIhub.

 














