In early April 2021, a draft EU Regulation on a European Approach to Artificial Intelligence was leaked to the press. The draft had already been attentively commented on by, among others, Dr Michael Veale (UCL Faculty of Laws). The draft Regulation, however, raised many specific concerns about the use of AI at work that needed to be addressed urgently, and I discussed some of them in this blog. I have now updated the same blog to comment on the Proposed Regulation, released today, in the hope that other labour experts will add their analyses.

Recital 36 of the Proposed Regulation mentions that “AI-systems used in employment, workers management and access to self-employment, notably for the recruitment and selection of persons, for making decisions on promotion and termination and for task allocation, monitoring or evaluation of persons in work-related contractual relationships, should also be classified as high-risk, since those systems may appreciably impact future career prospects and livelihoods of these persons”. It also gives heed, albeit very generically, to the potentially discriminatory impact of AI in the world of work and to the risks it poses to workers’ privacy. Unlike the leaked Draft, the final proposal explicitly covers the self-employed and platform workers, regardless of their employment status; this is a step forward.

While classifying AI systems used at work as high-risk is appropriate, the Proposed Regulation is far from sufficient to protect workers adequately.

Firstly, Annex III of the Proposed Regulation mentions: “AI systems intended to be used for recruitment or selection of natural persons, notably for advertising vacancies, screening or filtering applications, evaluating candidates in the course of interviews or tests;” and “AI intended to be used for making decisions on promotion and termination of work-related contractual relationships, for task allocation and for monitoring and evaluating performance and behavior of persons in such relationships”.

As just said, it provides that these systems shall be classified as high-risk and, therefore, subject to specific safeguards. At the same time, it specifies that the conformity of these systems with existing rules and safeguards will only be subject to self-assessment by the provider. This is, disappointingly, a lower level of protection than that afforded for other high-risk systems, which are subject to stricter conformity assessment procedures requiring “the involvement of a notified body”. As already argued when commenting on the Draft, given the extraordinarily severe consequences that AI systems at work can entail, and the particular nature of workplaces, where workers are already subject and vulnerable to their employers’ extensive powers and prerogatives, it is highly worrisome that this proposed provision was not subject to any form of social dialogue at the EU level.

Moreover, the Proposed Regulation seems to take for granted that AI systems used at work should be allowed as long as they comply with the procedural requirements it sets forth. Yet the use of AI to hire, monitor (and, therefore, surveil) and evaluate work “performance and behaviour” is deeply problematic. Several national legislations in the EU ban or severely limit the use of tech tools to monitor workers, and Spain has just introduced new rules granting algorithmic transparency at work. If adopted, the Proposed Regulation risks prevailing over these more restrictive legislations and triggering a deregulatory landslide in labour and industrial relations systems around Europe. This is all the more serious because these national legislations often require the involvement of trade unions and works councils before tools allowing any form of tech-enabled surveillance are introduced, and also partially ban this surveillance. The Proposed Regulation, instead, just like the leaked Draft, never specifically mentions the social partners and their role in regulating AI systems at work.

If the Regulation is not corrected, more protective national legislation risks being overridden by this EU instrument, which would then function as a “ceiling” rather than a “floor” for labour protection.

The Proposed Regulation also provides that high-risk AI systems must be designed to allow human oversight, something already included in the Draft. The Draft, however, provided that the people in charge of this oversight were to be put in a position, among other things, to “decide not to use the high-risk AI system or its outputs in any particular situation without any reason to fear negative consequences.” Commenting on the Draft, I argued that it was problematic that it did not explicitly mention the need to provide managers and supervisors with the specialised training and powers to counter the specific implications of the use of these systems in the context of work. I also argued that, without explicit workplace protection, this provision might not adequately prevent disciplinary action by employers.

The Proposed Regulation, however, no longer even mentions the need to prevent the fear of negative consequences for the human supervisors who reverse or disregard the outputs of high-risk AI systems. In the context of work, this is certainly not enough to ensure effective human oversight!

These, again, are only some of the concerns that the Proposed EU Regulation on AI raises about work and labour protection. It is extremely urgent for the social partners and labour experts to reflect and act on this instrument.
