AIhub.org
 

The potential update on the protection of workers through the AI Act

by Lola Montero
25 October 2022




Abstract paint splashes. Photo by Jr Korpa on Unsplash.

In this #5 post of the Symposium “Hitchhikers Guide to Law & Tech”, we continue analyzing the EU’s Digital Strategy and the intersection between law and tech. The EU’s proposed AI Act, if adopted, has the potential to modify the existing legal framework for the protection of workers in the face of the increasing prevalence of AI technologies. This blogpost outlines the significant legal scrutiny and the numerous safeguards that most workers’ data collection and processing activities would need to meet, since they fall within the scope of high-risk AI systems. It closes with a brief reflection on the challenges ahead, and the path that needs to be followed for the desired results to be achieved.

The software techniques that qualify as AI systems within the proposed AI Act are very extensive in scope, as one of the prior posts within the DigiCon Symposium on Law & Tech has highlighted. Some, however, consider that even the requirements for AI systems that qualify as high-risk “are not going far enough”.

One of these high-risk areas comprises “AI systems intended to be used for recruitment or selection of natural persons, notably for advertising vacancies, screening or filtering applications, evaluating candidates in the course of interviews or tests”, as well as those “intended to be used for making decisions on promotion and termination of work-related contractual relationships, for task allocation and for monitoring and evaluating performance and behavior of persons in such relationships” (Annex III to the AI Act, art. 4 (a) and (b), respectively). Thus, if this proposal and its annexes are adopted, AI systems affecting workers will be subjected to the highest degree of scrutiny. Among the numerous safeguards that these AI systems would need to develop, and continuously update, are: a “risk management system” (art. 9), “data governance and management practices” (art. 10), automatic “record-keeping” capabilities (art. 12), “transparency and provision of information to users” (art. 13), “human oversight” (art. 14), and “technical documentation” demonstrating the compliance of the AI system with all of these legal obligations (art. 11).

Moreover, the AI Act specifically addresses one of the most prevalent worries about workplace AI systems: the danger of “even unprejudiced computers and decision-makers” unknowingly generating discriminatory or biased decisions while engaging in data-driven decision making, conditioned by “historical and societal biases” (Williams, Brooks & Shmargad, 2018). To address this issue, the AI Act enacts a specific methodology for the composition of an AI system’s training, validation and testing data. These protocols are meant to establish a sufficient regulatory threshold to prevent these issues from occurring. In broad terms, they will require the training data to be statistically representative of the groups of individuals the high-risk AI system will affect.
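The representativeness idea behind these data governance protocols can be sketched in a few lines of Python. This is only an illustration: the group names, data and 5-point deviation threshold below are hypothetical choices, not figures prescribed by the AI Act.

```python
from collections import Counter

def representation_gaps(training_labels, population_shares):
    """For each group, compute its share of the training data minus its
    share in the affected population (positive = over-represented)."""
    counts = Counter(training_labels)
    total = sum(counts.values())
    return {
        group: counts.get(group, 0) / total - expected
        for group, expected in population_shares.items()
    }

# Hypothetical affected population vs. an unbalanced training set
population = {"group_a": 0.5, "group_b": 0.3, "group_c": 0.2}
training = ["group_a"] * 70 + ["group_b"] * 20 + ["group_c"] * 10

gaps = representation_gaps(training, population)
# Flag groups whose share deviates by more than 5 percentage points
flagged = {g: round(d, 2) for g, d in gaps.items() if abs(d) > 0.05}
print(flagged)  # group_a over-represented, group_b and group_c under-represented
```

An audit of this kind is trivial computationally; the legal difficulty the post describes lies in defining the reference population and justifying the collection of the group attribute at all.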

However, fulfilling some of these minimum requirements is likely to force the high-risk AI system to collect more information than it may have initially foreseen, in order to prevent these possible biases. Unfortunately, recent studies show that the solution is often not as simple as outlawing the collection of a specific sensitive data point, which can at times further entrench instances of discrimination. In their paper, Williams, Brooks & Shmargad (2018) explain the findings of recent empirical studies, which show how “employers, relying on perceptions of higher conviction rates of certain races, used race as a proxy to try to avoid applicants with felony records” when laws prohibited inquiring about applicants’ criminal convictions. Thus, even though criminal conviction history is a sensitive category of data, worthy of enhanced protection under the GDPR, outlawing its transparency may negatively affect vulnerable groups. It is in cases such as these that preventing discrimination may require the collection of more sensitive data, subject to safeguards and strict necessity. The proportionality of such measures will surely be specific to the AI system under consideration and the goal it is tasked with. Nevertheless, the fact that the AI Act contains a mechanism to conduct this balancing of risks against rewards is already an improvement over the status quo.
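The risk-versus-reward balance described above has a practical corollary: detecting disparate outcomes per group requires collecting the sensitive attribute in the first place. A minimal sketch of such an outcome audit follows; the data are hypothetical, and the 0.8 cut-off is the informal “four-fifths rule” from US employment-testing practice, not a threshold set by the AI Act.

```python
def selection_rates(outcomes):
    """outcomes: list of (group, selected) pairs; returns per-group selection rate."""
    totals, selected = {}, {}
    for group, was_selected in outcomes:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + (1 if was_selected else 0)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(outcomes):
    """Ratio of the lowest to the highest group selection rate.
    Ratios below 0.8 are often treated as a red flag."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes: (group, was_selected)
sample = ([("x", True)] * 6 + [("x", False)] * 4
          + [("y", True)] * 3 + [("y", False)] * 7)
print(round(disparate_impact(sample), 2))  # 0.5: well below the 0.8 flag
```

The point is not the arithmetic but the data dependency: without the group labels, this check cannot be run, which is exactly why strictly necessary, safeguarded collection of sensitive data may be the lesser evil.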

Lastly, and of extreme importance, under the AI Act those who do not develop but provide and/or use these AI systems must comply with additional legal obligations and inform designated national competent authorities. This is an appropriate inclusion, as the classification of these AI systems as high-risk is a consequence of their utilization within the workplace, not necessarily of the design of the AI system itself. These market surveillance authorities must have “access to the source code of the AI system” to assess the conformity of high-risk AI systems with the AI Act (art. 64). This is essential, as otherwise employers could escape the scope of this regulatory instrument. However, concerns exist about the capability of Member States to properly assess the compliance of these AI systems, as they would need to acquire the appropriate assets to conduct these assessments, including “human resources and technical tools”.

The EU is currently assessing other regulatory proposals that would further enhance the protection of workers. These include, for example, new liability rules, which would facilitate individuals’ access to remedies when they are harmed by AI systems, as well as a directive specifically for the protection of platform workers. These proposals are all very promising; however, their feasibility, as with the AI Act, still raises concerns. The European Commission predicts the approval of the AI Act “in the second half of 2022”, followed by a minimum of two years before it becomes operational. To make this a reality, the current momentum needs to be maintained. One key to the success of these policies may be found in “regulatory sandboxes”. Business association respondents to consultations on the AI Act have perceived positively the use of “regulatory sandboxes” for the application of AI tools, which the AI Act explicitly proposes in order to reduce compliance costs for new firms and SMEs and to assure their access to fairly priced conformity assessment mechanisms. The proposed measures also include guidance on how to comply with the AI Act, the organisation of “awareness raising activities” (art. 55 (b)) and “a dedicated channel for communication” (art. 55 (c)). Thus, the adoption of this regulation can also allow SMEs to benefit from positive AI innovations. It is through the appropriate transposition of these measures, from written law to operationalised mechanisms, that the EU can become the worldwide “rule-shaper” it aspires to be and, in doing so, make AI a “human-centric” technology, in compliance with fundamental rights.




Lola Montero is a PhD Researcher at the European University Institute.

The Digital Constitutionalist











©2021 - Association for the Understanding of Artificial Intelligence


 











