
CLAIRE endorses EU plan for AI and makes 10 key recommendations

by Lucy Smith
19 June 2020




In February this year, the European Commission released a white paper entitled “On Artificial Intelligence – A European approach to excellence and trust”. With the public consultation phase on this document now closed, CLAIRE (Confederation of Laboratories for Artificial Intelligence Research in Europe) have published their response, which largely endorses the EC plans.

CLAIRE note that the plans and actions outlined in the EC white paper are closely aligned with their vision for European excellence in human-centred AI. One idea they believe has considerable potential is the concept of a CERN-inspired “lighthouse centre” that will bring together top researchers from across Europe and around the world.

Holger Hoos, Professor of Machine Learning at Leiden University, The Netherlands, and Chairman of the Board of CLAIRE, said: “The white paper offers a compelling blueprint. Now, important details need to be filled in, for example on how to balance supporting excellence within the European AI ecosystem along with a broader network, whose members are of key importance for reaching critical mass and ensuring global impact.”

At the heart of CLAIRE’s detailed response are 10 key recommendations:

  1. Make sure to complement the push for AI regulation with swift and substantial investment into AI research, including curiosity-driven, foundational research – Europe cannot be a leader in AI regulation without being a leader in AI, and it cannot be a leader in AI applications or innovations without being a leader in foundational AI research.
  2. Create streamlined mechanisms for allocating AI research support, focussing on those researchers and institutions with a track record of excellence in AI as well as on those with demonstrated potential for excellence; the latter is of key importance in order to make the best use of Europe’s vast pool of talent.
  3. Adopt a definition of AI that captures what distinguishes AI approaches from other kinds of advanced computation: they exhibit key aspects of behaviour considered as intelligent in humans. With a non-standard definition of AI, there is a risk that support as well as regulation are misaligned with what is commonly understood to constitute AI technology.
  4. Focus “AI made in Europe” on “AI for Good” and “AI for All”; take global leadership, together with like-minded partners, in supporting publicly funded, large-scale AI research and innovation that can compete at the level of large US and Chinese companies, while focusing on areas specifically relevant for societies.
  5. Establish a clear strategy for coordinating and structuring an AI-based innovation ecosystem across Europe. Change existing policy instruments and strategies to take into account the significant role of entrepreneurs and private capital in the modern, AI-driven innovation economy.
  6. Establish policies to increase uptake of AI and investment in AI-driven product and market development among the engines of the European economy.
  7. Invest in promoting broader awareness of AI in society, and specifically of how AI technologies affect society and citizens; this is critical for the responsible use of AI and forms the basis for constructive engagement based on realistic expectations and adequate perception of risks.
  8. Build upon the investments and tangible results of the Horizon 2020 programme in Responsible Research and Innovation (RRI) to ensure that research and innovation in the field of AI achieve socio-economic benefits in Europe and strengthen democratic institutions, the rule of law and human rights.
  9. Expand on the lessons learned in the areas of Privacy and Safety by Design over the last two decades and apply them to Ethics by Design for AI, by developing standards, metrics, legislation and institutional mechanisms for auditing, monitoring, inspection and certification.
  10. Create the proposed lighthouse centre in a way that effectively achieves critical mass, synergy, and cohesion across the European AI ecosystem without permanently dislocating talent from where it is needed most. Make sure it is focussed on excellence, with a site selection process grounded in, and transparently managed on the basis of, politically neutral, externally validated criteria. Ensure it provides much-needed, large-scale data and computing infrastructure.

You can read the full response, which includes further specific recommendations, here.

About CLAIRE

CLAIRE (Confederation of Laboratories for Artificial Intelligence Research in Europe) is an organisation created by the European AI community that seeks to strengthen European excellence in AI research and innovation, with a strong focus on human-centred AI. CLAIRE’s membership network consists of over 370 research groups and research institutions, jointly covering more than 21,000 employees in 35 countries. Find out more about CLAIRE here.





Lucy Smith is Senior Managing Editor for AIhub.



