AIhub.org
CLAIRE endorses EU plan for AI and makes 10 key recommendations


19 June 2020




In February this year, the European Commission released a white paper entitled “On Artificial Intelligence – A European approach to excellence and trust”. With the public consultation phase on this document now closed, CLAIRE (Confederation of Laboratories for Artificial Intelligence Research in Europe) have published their response, which largely endorses the EC plans.

CLAIRE note that the plans and actions outlined in the EC white paper are closely aligned with their vision for European excellence in human-centred AI. One idea they believe has considerable potential is the concept of a CERN-inspired “lighthouse centre” that will bring together top researchers from across Europe and around the world.

Holger Hoos, Professor of Machine Learning at Leiden University, The Netherlands, and Chairman of the Board of CLAIRE, said: “The white paper offers a compelling blueprint. Now, important details need to be filled in, for example on how to balance supporting excellence within the European AI ecosystem along with a broader network, whose members are of key importance for reaching critical mass and ensuring global impact.”

At the heart of CLAIRE’s detailed response are 10 key recommendations:

  1. Make sure to complement the push for AI regulation with swift and substantial investment in AI research, including curiosity-driven, foundational research – Europe cannot be a leader in AI regulation without being a leader in AI, and it cannot be a leader in AI applications or innovations without being a leader in foundational AI research.
  2. Create streamlined allocation mechanisms of AI research support, focussing on those researchers and institutions with a track record of excellence in AI as well as on those with demonstrated potential for excellence; the latter is of key importance in order to make the best use of Europe’s vast pool of talent.
  3. Adopt a definition of AI that captures what distinguishes AI approaches from other kinds of advanced computation: they exhibit key aspects of behaviour considered intelligent in humans. With a non-standard definition of AI, there is a risk that support as well as regulation are misaligned with what is commonly understood to constitute AI technology.
  4. Focus “AI made in Europe” on “AI for Good” and “AI for All”; take global leadership, together with like-minded partners, in supporting publicly funded, large-scale AI research and innovation that can compete at the level of large US and Chinese companies, while focusing on areas specifically relevant to societies.
  5. Establish a clear strategy for coordinating and structuring an AI-based innovation ecosystem across Europe. Change existing policy instruments and strategies to take into account the significant role of entrepreneurs and private capital in the modern, AI-driven innovation economy.
  6. Establish policies to increase uptake of AI and investment in AI-driven product and market development among the engines of the European economy.
  7. Invest in promoting broader awareness of AI in society, and specifically of how AI technologies affect society and citizens; this is critical for the responsible use of AI and forms the basis for constructive engagement based on realistic expectations and adequate perception of risks.
  8. Build upon the investments and tangible results of the Horizon 2020 programme in Responsible Research and Innovation (RRI) to ensure that research and innovation in the field of AI achieve socio-economic benefits in Europe and strengthen democratic institutions, the rule of law and human rights.
  9. Expand on the lessons learned in Privacy and Safety by Design over the last two decades, and apply them to Ethics by Design for AI by developing standards, metrics, legislation and institutional mechanisms for auditing, monitoring, inspection and certification.
  10. Create the proposed lighthouse centre in a way that effectively achieves critical mass, synergy and cohesion across the European AI ecosystem, without permanently dislocating talent from where it is needed most. Make sure the centre is focussed on excellence, with a site selection process that is transparently managed and grounded in politically neutral, externally validated criteria. Ensure it provides much-needed, large-scale data and computing infrastructure.

You can read the full response, which includes further specific recommendations, here.

About CLAIRE

CLAIRE (Confederation of Laboratories for Artificial Intelligence Research in Europe) is an organisation created by the European AI community that seeks to strengthen European excellence in AI research and innovation, with a strong focus on human-centred AI. CLAIRE’s membership network consists of over 370 research groups and research institutions, jointly covering more than 21,000 employees in 35 countries. Find out more about CLAIRE here.



Lucy Smith is Senior Managing Editor for AIhub.



©2026 - Association for the Understanding of Artificial Intelligence