First analysis of the EU whitepaper on AI

28 February 2020



Image courtesy of ALLAI.

By Virginia Dignum, Catelijne Muller and Andreas Theodorou

This week, Europe took a clear stance on AI: foster the uptake of AI technologies, underpinned by what it calls ‘an ecosystem of excellence’, while also ensuring their compliance with European ethical norms, legal requirements and social values, ‘an ecosystem of trust’. While the European Commission’s Whitepaper on AI does not yet propose legislation, it announces some bold legislative measures that will likely materialize by the end of 2020.

As happy as we are to see this strong focus on governance of AI systems, we need to point out that the first step towards governance is to have a clear understanding of what it is that needs governing and why. The working definition of AI provided in the Whitepaper is: “AI is a collection of technologies that combine data, algorithms and computing power.” This is later refined by claiming that AI is the combination of the first two, i.e. data and algorithms. This definition, however, applies to any piece of software ever written, not just AI. The governance of software in general, while being an important issue in itself, is beyond the scope of this paper.

So, what is AI? What makes AI different from other technologies, such that specific governance and regulation are needed? Data and algorithms only refer to the ontological association of different components, and not explicitly to the behaviour of the system. This could give organizations the opportunity to easily circumvent any regulation by claiming that a product is ‘dumb’ software, and thus avoid compliance with any AI-specific requirements.

Even though we understand the desire to simplify things, AI is not simple, nor is its definition. The recent success of AI, and the hype around it, have created a plethora of (pseudo-)definitions, ranging from ultra-simplified ones, as put forward in the Whitepaper, to purely magical ones. Depending on the focus and the context, AI has been referred to as: (i) a technology; (ii) a next step in digitization, in which view everything is AI; (iii) a field of science aimed at modelling intelligence as a means to understand intelligence; or (iv) an all-knowing, all-powerful ‘magic’ tool or entity that happens to us without us having any power to control it.

As we would describe it, AI technology is a piece of software with the following characteristics: it operates autonomously (i.e. without direct user control), its results are statistical (i.e. it does not link cause and effect), it is adaptable (i.e. it adapts its behaviour as it learns more about the context in which it is applied) and it is interactive (i.e. its actions and results affect, and are affected by, us humans and our social and physical environment).

However, most importantly, AI systems are more than just the sum of their software components. AI systems also comprise the socio-technical system around them. When considering governance, the focus should not be just on the technology, but more on the social structures around it: the organizations, people and institutions that create, develop, deploy, use and control it, and the people that are affected by it, such as citizens in their relation to governments, consumers, workers or even society as a whole.

“AI race” versus “AI exploration”

We have said it before: the metaphor of AI as a race, promoted throughout the Whitepaper, is not only wrong but potentially even dangerous. A race implies a finish line and an explicit direction to follow. The idea of an “ultimate algorithm” or an “ultimate AI-ruler” simply feeds into the unscientific narrative of ‘super-intelligence’, damaging public trust in the technology and distracting from real-world governance problems. The field of AI is vast and its potential is far from being fully explored.

Even though we are now seeing many results from the application of a specific type of technique (deep learning, which is roughly based on artificial neural networks), one just needs to look at the past to know that it is unwise to put all one’s eggs in one basket. Deep learning applications are far from being intelligent enough to solve all our problems. Even if such systems excel at identifying patterns, e.g. identifying cats in pictures, or cancer cells, with an accuracy close to or higher than that of humans, the system has no understanding of the meaning of a cat, or a cancer cell. It is only able to attach a label to a specific pattern. And even then, it will still have great difficulties in describing the properties of a cat, let alone be able to use its understanding of cats to understand dogs, or chickens.

Ultimately, trustworthy AI cannot be a choice between an accurate black-box AI system and an explainable but less accurate AI system. We need both. This means that a new generation of AI systems is needed, one that integrates data-driven approaches with knowledge-driven, reasoning-based approaches, with human values and principles at the center. Here European research has an important advantage: since the early days of AI, European researchers have excelled in a variety of approaches to design and verify artificially intelligent systems. Rather than blindly racing with others, European researchers approach the problem as explorers: mapping a wide field of possibilities and plotting promising results, such as in symbolic AI (which can link cause to effect) and hybrid systems.

The Whitepaper, in a one-line statement, acknowledges the need for such systems for the purposes of explainability. But the advantages of hybrid systems go beyond explainability. They include the ability to speed up and/or restrain learning, to validate and verify the machine learning model, and more. The Commission needs to acknowledge the leading European research efforts in this direction, and encourage these approaches. By treating data as the essential component of AI, the Whitepaper excludes non-data-driven approaches, e.g. expert systems, knowledge representation and reasoning, reactive planning, argumentation and others, from being considered AI and, therefore, from being subject to the regulatory framework.
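To make the idea of a hybrid system more concrete, here is a minimal sketch, entirely our own illustration rather than anything proposed in the Whitepaper, in which a learned classifier’s statistical score is checked against explicit symbolic rules before a decision is made; the fraud-scoring scenario, rule names and thresholds are hypothetical assumptions.

```python
# Minimal sketch of a hybrid (data-driven + knowledge-driven) decision step.
# All names, rules and thresholds are hypothetical and purely illustrative.

def learned_fraud_score(transaction: dict) -> float:
    """Stand-in for a trained ML model returning a statistical fraud score in [0, 1]."""
    return min(1.0, transaction["amount"] / 10_000)  # toy heuristic so the sketch runs

# Explicit, human-readable rules that can veto the statistical output and explain why.
VETO_RULES = {
    "verified recurring payment": lambda t: t.get("recurring_verified", False),
    "amount below reporting floor": lambda t: t["amount"] < 1,
}

def hybrid_decision(transaction: dict, threshold: float = 0.8) -> dict:
    score = learned_fraud_score(transaction)                      # data-driven component
    vetoes = [name for name, fires in VETO_RULES.items() if fires(transaction)]
    flag = score >= threshold and not vetoes                      # knowledge-driven check
    explanation = vetoes if vetoes else [f"score {score:.2f} vs threshold {threshold}"]
    return {"flag": flag, "score": score, "explanation": explanation}

print(hybrid_decision({"amount": 9500, "recurring_verified": True}))
# -> not flagged: the symbolic rule overrides the high statistical score, and says why.
```

Even in this toy form, the symbolic layer both constrains the statistical component and produces a human-readable explanation, hinting at the validation and explainability benefits mentioned above.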

Bias and transparency

The focus on data-driven systems extends even further where the Whitepaper discusses bias in relation to data. Fortunately, it dismisses the often-heard argument that machine bias is nothing special because both humans and artefacts can act on bias, by stating that intelligent systems can enshrine, further disseminate and amplify our biases, while also obscuring their existence, and all this without the social control mechanisms that govern human behaviour.

It overlooks, however, that not all biases are the result of low-quality data. The design of any artefact is in itself an accumulation of biased choices, ranging from the inputs considered to the goals it is set to optimize for: does the system optimize for pure efficiency, or does it take the effect on workers and the environment into account? Is the goal of the system to find as many potential fraudsters as possible, or to avoid flagging innocent people? All these choices are in one way or another driven by the inherent biases of the person(s) making them.

In short, suggesting that we can remove all biases in (or even with) AI is wishful thinking at best and an error of language at worst. In either case, for the purposes of any regulatory framework we should not merely focus on technical solutions at the dataset level, but devise socio-technical processes that help us: a) understand the potential legal, ethical and social effects of the AI system and improve our design and implementation choices based on that understanding; b) audit our algorithms and their output to make any biases transparent; and c) continuously monitor the workings of the systems to mitigate the ill effects of any biases.

To this effect, the Whitepaper correctly promotes the need for traceability of the decisions made by the human actors involved in the design, development and deployment of a system. This form of transparency within the social structure helps users (both expert and non-expert) to calibrate their trust in the machine, testers to debug the system, and auditors to investigate incidents and determine accountability and liability. These are all existing approaches in software engineering that we can use, instead of reinventing them for AI. The Whitepaper unfortunately does not acknowledge this broader perspective on transparency, focusing instead on a binary dogma of “opaque high-performing systems versus transparent low-performing systems”. As such, it promotes transparency only for expert technical users, and not for the broader group of non-technical deployers, users and those affected.
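As a concrete illustration of point (b) above, auditing algorithms and their output, the following minimal sketch computes a simple group-level disparity on a model’s decisions. It is a hypothetical example of ours, not a method from the Whitepaper; the data, group labels and the 0.8 rule-of-thumb threshold are assumptions.

```python
# Minimal sketch of a group-level output audit; data and the 0.8 threshold are illustrative.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs -> selection rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, is_selected in decisions:
        totals[group] += 1
        selected[group] += int(is_selected)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest to the highest group selection rate (1.0 = parity)."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values()), rates

# Toy audit of a hiring model's outputs (hypothetical data).
decisions = [("group_a", True)] * 40 + [("group_a", False)] * 60 \
          + [("group_b", True)] * 20 + [("group_b", False)] * 80
ratio, rates = disparate_impact_ratio(decisions)
print(rates, f"disparate impact ratio = {ratio:.2f}")
if ratio < 0.8:  # a commonly cited rule-of-thumb threshold, used here only as an example
    print("Potential bias flagged for human review.")
```

Such an audit makes a bias visible; deciding what to do about it remains a socio-technical question, which is exactly why processes and monitoring matter as much as the metric itself.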

AI does not operate in a lawless world

The Whitepaper acknowledges that AI does not operate in a lawless world, thus ending the discussion on whether AI should be regulated or not and (hopefully) silencing the voices that claim that AI is an unregulated technology (and should stay that way).

Secondly, it emphasizes that AI has an impact on our fundamental rights. This is important, because many of us take our fundamental rights and freedoms for granted. Our freedom of speech and expression, our right to a private life, our right to a fair trial, to fair and open elections, to assembly and demonstration and our right not to be discriminated against: these are all rights that are simply part of our lives. But these are also the rights that are jeopardized by certain types and uses of AI. For example, facial recognition has already been shown to affect our right to freedom of assembly and demonstration, when people in Hong Kong started covering their faces and using lasers to avoid being caught by facial recognition cameras. For this reason, the Council of Europe, the ‘home‘ of the European Convention on Human Rights and the European Court of Human Rights, is currently investigating a binding legal instrument for AI.

Liability

The Whitepaper continues by announcing adjustments to the existing safety and liability regimes. The Commission correctly takes a clear stance on the applicability of existing liability regimes to AI. It further announces that it will build on those regimes to address the new risks AI can create, tackle enforcement lacunae where it is difficult to determine the actual responsible economic operator, and make them adaptable to the changing functionality of AI systems. Persons having suffered harm as a result of an AI system should have the same level of protection and means of redress as persons having suffered harm from any other tool, according to the Whitepaper.

Prior conformity assessment for high-risk AI

A truly eye-catching announcement is the idea of putting in place prior conformity assessments for high-risk AI, which would need to go through rigorous testing and validation before entering the EU internal market. The Commission explicitly mentions that this obligation will apply to all actors irrespective of their location. It is not for nothing that Mark Zuckerberg visited Brussels this week. The conformity assessment would apply to Facebook’s AI applications as well if they were to be deployed in Europe.

Let us look at what is considered to be high-risk AI. According to the Whitepaper, two cumulative elements constitute high-risk AI: (i) a high-risk sector and (ii) high-risk use of the AI application. Only if both requirements are met is the system subject to the prior conformity assessment. The Whitepaper hurries to add that there might be exceptional instances where the use of an AI application is considered high-risk as such, for instance in recruitment and when workers’ rights are affected. It also qualifies biometric recognition as a high-risk application, irrespective of the sector in which it is used.

The list of high-risk sectors is to be exhaustive (whilst periodically reviewed) and the Commission already indicates the following sectors as potentially high-risk: healthcare, transport, energy and parts of the public sector. The second criterion, namely that the AI application is used in a risky manner, is more loosely defined, suggesting that different risk levels could be considered based on the level of impact on the rights of an individual or a company. We would suggest adding society and the environment here.
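To illustrate how the Whitepaper’s cumulative test and its exceptions fit together, here is a minimal sketch of that decision logic. The sector and exception lists echo the Whitepaper’s examples, but the encoding itself is our hypothetical illustration, not a proposed implementation.

```python
# Sketch of the Whitepaper's cumulative high-risk test; lists are illustrative, not exhaustive.
HIGH_RISK_SECTORS = {"healthcare", "transport", "energy", "public sector"}

# Uses treated as high-risk regardless of sector (the exceptions noted above).
ALWAYS_HIGH_RISK_USES = {"recruitment", "workers' rights", "biometric recognition"}

def requires_prior_conformity_assessment(sector: str, use: str, use_is_high_risk: bool) -> bool:
    if use in ALWAYS_HIGH_RISK_USES:
        return True                    # exceptional cases: high-risk irrespective of sector
    return sector in HIGH_RISK_SECTORS and use_is_high_risk   # both cumulative criteria

# A risky use in a low-risk sector escapes the assessment -- the gap discussed below.
print(requires_prior_conformity_assessment("advertising", "targeted advertising", True))  # False
print(requires_prior_conformity_assessment("healthcare", "triage support", True))         # True
```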

While this looks reasonable at first sight (one does not want to put each and every AI application through conformity testing), the chosen system might still have some gaps. We agree that an AI application used to channel the open Wi-Fi signal in a mall would not require prior testing and validation, while that same system used in a military setting or a hospital would (for cybersecurity or privacy reasons).

But what if we look at the opposite situation? Following the logic of the Whitepaper, an AI application (high- or low-risk) used in a low-risk sector would in principle not be subject to the prior conformity assessment. As a thought experiment, let us consider targeted advertising, search engines and movie recommender systems. The Commission will likely qualify advertising, information and entertainment as low-risk sectors, yet targeted advertising has been shown to have a potentially segregating and dividing effect, search engines have been shown to make biased search predictions, and video recommender systems prioritize ‘likes’ over quality and diverse content, amplifying fake news and disturbing footage. If we were to address these particular undesirable effects of AI, they would all (at least) have to count as ‘exceptions’ and be subject to the prior conformity assessment.

At this point we think that the ‘high-risk sector’ requirement might not be the most effective way to achieve what the Commission wants to achieve, which is to avoid subjecting any and all AI to prior conformity testing. This is an element that will undoubtedly receive much attention during the consultation period, and we are confident that the final version will contain a much improved system. We will contribute towards this in any way we can.

The requirements to be met in order for high-risk AI to pass the prior conformity assessment are all derived from the Ethics Guidelines for Trustworthy AI, developed by the High Level Expert Group on AI. They range from robustness, accuracy and reproducibility, to data governance, accountability, transparency and human oversight.

Socio-technical and ‘human-in-command’ approach

While we acknowledge the need for conformity testing of AI and the relevance of all the requirements, we fear that a one-off (or even a regularly repeated) conformity assessment will not suffice to guarantee the trustworthy and human-centric development, deployment and use of AI in a sustainable manner.

In our opinion, trustworthy AI needs a continuous, systematic socio-technical approach, looking at the technology from all perspectives and through various lenses. For policy making, this requires a multidisciplinary approach where policy makers, academics from a variety of fields (AI, data science, law, ethics, philosophy, social sciences, psychology, economics, cybersecurity), social partners, businesses and NGOs work together on an ongoing basis. For organisational strategy, it requires that all levels of an organisation, from management to compliance and from front office to back office, are involved in the process on an ongoing basis.

In this sense, the Whitepaper has a slightly ‘fatalistic’ flavour to it, as if AI ‘overcomes us’, leaving us no other option than to regulate its use. But we do have other options, one of which is the option to decide not to accept a certain type of AI (or AI use) at all, for example because it would create a world that we do not want to live in. This is the ‘human-in-command’ approach to AI that we have been calling for, and that we need to foster.

Biometric recognition

The Whitepaper does not call for a ban on facial recognition. Instead, it delivers a shot across the bow: first, by stating that biometric recognition ((tone of) voice, gait, temperature, heart rate, blood pressure, skin color, odor, and facial features) is already heavily restricted by the GDPR and poses specific risks for human rights; and secondly, by opening the discussion on whether, and if so under what conditions, to allow biometric recognition. As Vice-President Margrethe Vestager explained to a group of journalists recently, the Whitepaper basically says, in very legal language: “let’s pause and figure out if there are any situations, and if so, under what circumstances facial recognition should be authorised”. And she added: “as it stands right now, GDPR would say ‘don’t use it’, because you cannot get consent.”

This is a smart strategy. It gives the European Commission the much needed time to consider the many complicated legal, ethical and social implications of biometric recognition, while at the same time stressing the restrictions under the GDPR and the risk to fundamental rights. As such, the Commission could very well be paving the way for specific regulation of biometric recognition.

Conclusions

The Whitepaper is the European Commission’s first concrete attempt at discussing AI policy beyond the high-level statements of previous Communications. In this sense, the Commission takes up a rule-setting role (rather than a referee role). In our opinion, this is a good first step. To draw an analogy with a game: no matter who is playing, without rules no one wins. Moreover, the potential impact of AI, both positive and negative, is too large to be left outside of democratic oversight. While the ideas of the Commission need further elaboration and depth, the true leap forward would be not only to focus on “Trustworthy AI made in Europe” as an alternative to AI made by the existing tech giants, but to promote trustworthy AI as a competitive advantage, and to incentivize and invest in the institutions, research and frameworks that can set this new AI playing field.

About the authors

Virginia Dignum is professor of Artificial Intelligence at Umeå University, scientific director of the Wallenberg AI, Autonomous Systems and Software Program – Humanities and Society (WASP-HS), co-founder of ALLAI, member of the European High Level Expert group on AI and of the World Economic Forum AI Board, and currently working as an expert advisor for UNICEF.

Catelijne Muller is co-founder and president of ALLAI, member of the European High Level Expert group on AI, Rapporteur on AI for the European Economic and Social Committee, and currently working as an expert advisor for the Council of Europe on AI & Human Rights, Democracy and the Rule of Law.

Andreas Theodorou is a postdoctoral researcher on Responsible AI at Umeå University, a member of the AI4EU consortium, and an expert on the IEEE Ethically Aligned Design Standardisation Initiative.

Read the EU Whitepaper on AI.

This article first appeared on the ALLAI website and is posted here with their permission.




ALLAI is an initiative of Catelijne Muller, Aimee van Wynsberghe and Virginia Dignum