EU challenges for an AI human-centric approach: lessons learnt from ECAI 2020

25 September 2020


By Atia Cortés and Francesca Foffano, AI4EU Observatory

During this period of progressive development and deployment of artificial intelligence, discussions around the ethical, legal, socio-economic and cultural implications of its use are increasing. What are the challenges and the strategy, and what are the values that Europe can bring to this domain?

During the European Conference on AI (ECAI 2020), two special events in the format of panels discussed the challenges of AI made in the European Union, the shape of future research and industry, and the strategy to retain talent and compete with other world powers. This article collects some of the main messages from these two sessions, which included the participation of AI experts from leading European organisations and networks.

Since the publication of European directives and guidance, such as the EC White Paper on AI and the Trustworthy AI Guidelines, Europe has been laying the foundation for its future vision of AI. The European strategy for AI builds on the well-known and accepted principles found in the Charter of Fundamental Rights of the European Union and the Universal Declaration of Human Rights to define a human-centric approach, whose primary purpose is to enhance human capabilities and societal well-being. This means building AI technologies that empower society rather than eliminating jobs or replacing humans in the workplace. However, the European Union faces challenges in achieving its vision of becoming a world competitor.

Funding programmes such as Horizon 2020 are evidence that Europe is strong in research, but this has so far not been enough to boost the development of AI technologies with an impact on the global market. This problem is two-fold: on the one hand, socio-technical requirements such as robustness, data governance and transparency need to be put in place to build trust in society. This involves defining protocols that facilitate a joint effort to demonstrate trustworthiness and adherence to ethical norms. On the other hand, the European Union needs to improve collaboration among all its member states in order to curb fragmentation and competition between them and embrace a common strategy. The ability to coordinate and spread this strategic vision on a large scale is an opportunity for Europe to emerge from the shadows of its big competitors, China and the USA.

To improve Europe’s competitiveness in the global digital economy and achieve technological sovereignty, the Commission has proposed the Digital Europe Programme, which will focus on building the strategic digital capabilities of the EU and facilitating the deployment of digital technologies, including artificial intelligence. The European strategies on AI and digital transformation include an educational perspective, providing citizens with the skills to understand the capabilities of AI, and the implementation of methodologies and tools to redistribute jobs and the workforce. This is a first step towards creating a strong and appealing environment that can support fundamental and purpose-driven research. Moreover, ensuring long-term research funding will make it possible to build and grow talent in Europe, as well as to attract and retain it, creating a world-class European research capacity.

The objective of the new funding programme ICT-48 is to contribute to this challenge by enhancing collaboration between academia and industry, which will lead to new lines of industrial research, increasing innovation and ensuring trustworthy AI. It is expected that this process will result in public-private partnerships that foster the creation of an ecosystem of excellence, boosting the uptake of AI technology and services across sectors in Europe. It will also help grow an ecosystem of trust, encouraging the use of AI solutions and promoting technology transfer within European organisations.

The next few years will be decisive for Europe to show that it can be competitive through the development of trustworthy AI technology, and that it can create a positive impact for society as well as for the public and private sectors. There is agreement on using ethics as a technology driver to build AI systems made in Europe for European organisations and citizens. However, there is still a concern around how to define European ethical values and promote their adoption without regulation. Thus, AI research and development should be open to an interdisciplinary perspective that brings the scientific community together with other experts such as lawyers, ethicists, philosophers of science, policy-makers and designers, and that takes into account the needs of the different sectoral industries. The role of AI research networks such as CLAIRE, TAILOR, HumanE-AI Net, AI4Media and ELISE will be fundamental in boosting the human-centric approach of AI made in Europe. The European Commission has launched different initiatives to reinforce long-term research in Europe, such as the European Research Council, and to promote and monitor the development of trustworthiness requirements for AI solutions through entities such as AI Watch and the High-Level Expert Group on AI.

The EU Challenges Panels were organised by Alberto Bugarín (Universidade de Santiago de Compostela) and Ulises Cortés (Universitat Politècnica de Catalunya and AI4EU).

The ECAI panel discussions that this article summarises are as follows:

Panel 1: H2020 came to an end: What is next? The European Strategy for AI
Suso Baleato (Harvard University and Univ. Santiago de Compostela, member of the OECD Expert Group on AI (AIGO) and OECD AI Policy Observatory)

Lucilla Sioli (Director for Artificial Intelligence and Digital Industry, DG CONNECT, EU Commission)
Barry O’Sullivan (EurAI President and vice-president of the EU HLEG on AI)
Mikaela Poulymenopoulou (Scientific Officer of the ERC Executive Agency)
Paul Desruelle (AI Watch, European Commission – Joint Research Centre, Seville)

Panel 2: Challenges for European Research in AI
Michela Milano (Univ. Bologna, EurAI and AI4EU)

Holger Hoos (Leiden University, CLAIRE)
Paul Lukowicz (DFKI, HumanE-AI Net)
Samuel Kaski (Aalto University and University of Manchester, ELISE)
Fredrik Heintz (Linköping University, TAILOR)
Yiannis Kompatsiaris (CERTH, Information Technologies Institute, AI4MEDIA)


