AIhub.org
 

European Union liability rules for artificial intelligence


by Lucy Smith
06 October 2022




Last week, the European Commission released a proposal for an Artificial Intelligence Liability Directive (AILD). It forms the next step in the development of a legal framework for AI.

In its white paper, published in 2020, the Commission undertook to promote the uptake of artificial intelligence and to assess the risks. Later, in 2021, the Commission proposed a legal framework which aimed to “address the risks generated by specific uses of AI through a set of rules focusing on the respect of fundamental rights and safety.”

One of the objectives of the 2020 white paper was to formulate a proposal for an AI liability directive. The purpose of this proposal is “to contribute to the proper functioning of the internal market by harmonising certain national non-contractual fault-based liability rules, so as to ensure that persons claiming compensation for damage caused to them by an AI system enjoy a level of protection equivalent to that enjoyed by persons claiming compensation for damage caused without the involvement of an AI system.”

You can read the proposed directive in full here.


Lucy Smith is Senior Managing Editor for AIhub.




            AIhub is supported by:




©2025 - Association for the Understanding of Artificial Intelligence