European parliament approves draft EU AI act


by Lucy Smith
16 June 2023




An important milestone in the EU AI legislation process was reached on 14 June, when the European parliament voted in favour of adopting the proposed AI act (with 499 votes in favour, 28 against and 93 abstentions). The next step will involve talks with EU member states on the final form of the law, with the aim of reaching an agreement by the end of this year.

At the core of the proposed act is a risk-based approach, which establishes obligations for providers and those deploying AI systems depending on the level of risk posed.

AI systems deemed to present an “unacceptable risk” would be completely prohibited. In the draft act, this includes “real-time” biometric identification systems (when deployed in publicly accessible spaces), systems that deploy harmful manipulative “subliminal techniques”, systems that exploit specific vulnerable groups, and systems used by public authorities, or on their behalf, for social scoring purposes.

Systems classified as “high risk” would be subject to new obligations, including registration of these systems by their providers in an EU-wide database before they are placed on the market, and compliance with a range of requirements covering risk management, testing, technical robustness, training data and data governance, transparency, human oversight, and cybersecurity. High-risk applications include AI systems that pose a significant risk of harm to people’s health, safety, fundamental rights or the environment.

AI systems presenting “limited risk” would be subject to a limited set of transparency obligations. All other AI systems presenting only low or minimal risk could be developed and used in the EU without conforming to any additional legal obligations.

On the subject of generative AI, systems based on such models, like ChatGPT, would have to comply with transparency requirements (disclosing that content is AI-generated, which would also help distinguish deep-fake images from real ones) and ensure safeguards against generating illegal content. Detailed summaries of the copyrighted data used for their training would also have to be made publicly available.

You can read more details about the proposed AI act in this document.




Lucy Smith is Senior Managing Editor for AIhub.



