AIhub.org

European Vision for AI 2021 – an event for all


by Lucy Smith
28 April 2021




The European Vision for AI event, held on 22 April 2021, provided an opportunity for the public to hear from members of the European artificial intelligence (AI) community and representatives from the European Commission and Parliament. The morning-long session was organised by the VISION project partners in cooperation with four networks of AI centres of excellence (AI4Media, ELISE, TAILOR, Humane-AI-Net). These networks were launched within the European Union’s Horizon 2020 Programme in September 2020 and are bringing together scientists across Europe.

This event followed hot on the heels of the announcement from the European Commission regarding proposed new rules and actions for artificial intelligence. During the morning, the speakers provided some context and details around this and there was plenty of interesting discussion on potential paths forward for AI in Europe. In particular, conversations focussed on the European ecosystem of trust and the proposed legal framework, and the development of a European ecosystem of excellence.

The event was chaired by Holger Hoos (Leiden University, Netherlands), coordinator of the VISION project and Chair of the Board of Directors of CLAIRE. He saw this event as a step towards closer communication between the European AI community and the general public: “We aim to get young people, researchers, innovators, companies, and policy makers, at least virtually, around one table and discuss ambitious European plans for AI as well as the emerging ecosystem of AI excellence.”

Throughout the sessions there was a strong emphasis on trustworthy AI that works for all citizens. The “European approach to AI” that the European networks want to see would bring together excellence and trust, with the goal of creating a world-leading ecosystem committed to the “AI for Good” and “AI for All” concepts.

During the event introduction there was an audience participation poll, asking us to share the first word that comes to mind when we hear “European AI”. “Trustworthy” came out on top, followed by “human-centric”. Another poll, later on in the proceedings, revealed that 70% of the audience felt that Europe was “probably” or “absolutely” going to be a leader in trustworthy human-centric AI.

As well as hearing more about the proposed ecosystems of AI excellence, and further details on the proposed regulation, there was also a parallel session with three topics to choose from. Participants could opt to learn more about the European focus on society, industry, or skills and training.

If you are interested in finding out more you can watch the livestream from the event in full here. The programme from the day is here.

About the event organisers

This event was organised by the consortium of partners of the project VISION, the coordination and support action (CSA) awarded under the H2020-ICT-48-2020 call. The aim of VISION is to reinforce, interconnect and mobilise Europe’s AI community. Europe has been investing in the European model of AI, with a new set of four European networks of AI excellence centres.

Launched in September 2020, these four networks of excellence centres – AI4Media, ELISE, TAILOR and Humane-AI-Net – are now working on various aspects of trustworthy, human-centric AI. In parallel to these efforts, the VISION project aims to create connections, synergy and joint initiatives between these networks as well as with key stakeholders across Europe. These projects are key components in the European Commission’s AI strategy.



Lucy Smith is Senior Managing Editor for AIhub.
