In April 2021, the European Commission published its first draft of the proposal for a Regulation on Artificial Intelligence (henceforth: the AI Act). One aim of the proposal is to guarantee ‘consistency with existing Union legislation applicable to sectors where high-risk Artificial Intelligence (henceforth: AI) systems are already used or likely to be used in the near future’, which includes the EU social acquis. While some criticism has already been raised about the AI Act, we should ask what ‘consistency’ actually means in this context.

One could argue that ensuring true consistency with EU law means guaranteeing that the way the AI Act will be implemented and applied still allows the other pieces of EU labour law to fulfil their purpose. It is undeniable that the implementation of the AI Act will overlap with various fields of EU law; considering the increasing use of AI technology at work, EU labour law will be one of these fields. To name only one example – and considering that another aim of the AI Act is to make AI safe for its users – let us examine whether the provisions of the AI Act might become an obstacle to the purpose of occupational safety and health (OSH) legislation: protecting workers’ health and safety through a preventive and participatory approach.

The use of algorithmic management software at work has proven to negatively impact workers’ health and safety. Continuous monitoring via wearables, for instance, increases work-related stress while affecting productivity. The way the algorithm allocates tasks and tracks workers affects work organisation and negates workers’ right to appropriate break time, leading to severe physical and psychological stress. Meanwhile, research has shown the positive effects of a ‘participatory algorithmic governance framework’, i.e., a model that takes workers’ well-being into account. Directive 89/391/EEC (the Framework Directive) is the cornerstone of the EU’s OSH legal framework, establishing a general obligation on the employer to ensure the safety and health of workers in every aspect related to work through the application of the principles of prevention. The Directive adopts a worker-centric approach, obliging the employer to consult and inform workers or their representatives. It also grants workers or their representatives the right to appeal to the competent authority if they consider that OSH prevention is inadequate. Workers and their representatives are thus an important part of the elaboration and implementation of preventive measures at work.

Even though the Framework Directive was adopted thirty years ago, it contains provisions that are relevant for the implementation of (high-risk) AI at work as proposed by the Commission’s AI Act. When an employer considers integrating AI software at work, he or she should evaluate to what extent the use of algorithmic management, or its integration within the working environment, will impact workers’ health and safety. According to Art 6(2) Directive 89/391/EEC, the employer shall eliminate or reduce the risks by adapting working methods so as to alleviate work at a predetermined work-rate, as part of a coherent overall prevention policy that covers technology. To base his or her assessment on the potential risks of the AI system, the employer would probably take into consideration all the risks identified by the provider in the course of the risk management evaluation and assessment, which should be communicated to the employer as a user of the AI system (Art 13(3)(iii) AI Act). Indeed, according to Art 9(2)(a) AI Act, the provider should have identified and mitigated the known and foreseeable risks associated with the AI system. Additionally, as a user of the AI, the employer should have been informed by the provider of the residual risks of the AI system (Art 9(4) AI Act). Therefore, the employer should consider the provider’s risk assessment to evaluate the potential impact of the AI system at work.

Some elements of the AI Act proposal might downplay the impact that AI will have once integrated at work. For example, to be considered high-risk AI (and therefore be subject to all the provisions mentioned above), the proposal requires the AI to have a ‘significant harmful impact on health and safety’, which might be too restrictive and lead to AI systems not being qualified as high-risk even though they represent a danger for workers (Recital 27 AI Act). In fact, a significant part of the harmful effect on workers is psychological (e.g., stress due to monitoring). The harmful impact does not appear immediately; it is a gradual process. Moreover, the severity of the harm might vary from one worker to another. Therefore, the phrasing should be replaced by ‘potential significant harmful impact on health and safety’, even if this change leads to a restriction of international trade. Indeed, improving workers’ safety, hygiene and health at work is an objective that should not be subordinated to purely economic considerations (Recital 13 Directive 89/391/EEC).

The concept of intended purpose also raises the question of the scope of the definition of high-risk AI. Art 3(12) AI Act defines ‘intended purpose’ as the use for which an AI system is intended by the provider. Yet some software may have an impact at work simply because it is used in an employment context marked by an imbalance of power. There are already examples of AI technologies developed with the intended purpose of improving driver safety that end up being used to monitor workers once implemented at work. Providers, therefore, should take the employer’s duties into consideration when designing the AI and foreseeing its deployment at work. Similarly, if the AI system is intended to be used at work, the provider cannot ignore its impact on workers’ health and safety. The provider should also take into consideration that the AI should be designed with a view to alleviating monotonous work and work at a predetermined work-rate and to reducing their effects on health.

Thus, the impact on operational work processes or occupational safety and health must be explicitly considered in the ‘risk management system’ required for high-risk AI systems. Providers can contribute to a better and fairer application of AI at work when they develop the software. For example, when they program an AI to allocate tasks, they should guarantee that the goals are realistic – and not necessarily aimed at economic optimisation. They should also design systems in which these goals can be adjusted to individual capacities while avoiding risks of retaliation. For example, providers could combine the allocation of tasks (and target goals) with the analysis of vital signs (e.g., heart rate, skin temperature) and environmental variables (e.g., movements). The idea would be that whenever the vital signs or environmental variables signal that the worker is tired, the AI adjusts the allocation and/or organisation of work to keep the worker safe. The worker could, for instance, be offered the option to either reduce the pace for the next two hours or take a break. Rather than warning the worker that he or she is not ‘quick enough’ to fulfil the predetermined goal, the AI should not pressure the worker further but adjust the goal to a ‘human pace’. The average handle time or target should be left to collective bargaining and discussion at work level; the provider should not be in a position to set target goals that are a matter of work organisation. In fact, the Framework Directive (Art 6(3)(c)) provides that workers and/or their representatives should be consulted when a new technology is implemented at the workplace. Thus, the provider should program the AI in such a way that these kinds of variables can be adjusted at work level.
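
As a purely illustrative sketch – nothing in the AI Act or the Framework Directive prescribes a particular design – such adjustment logic could look as follows. All names, thresholds and parameters here are hypothetical; the point is that they sit in a configuration object so that they can be set at work level (e.g., through the consultation required by Art 6(3)(c) Framework Directive) rather than hard-coded by the provider:

```python
from dataclasses import dataclass

# Hypothetical work-level configuration: these thresholds and rates are
# meant to be adjusted through consultation with workers or their
# representatives, not fixed by the provider.
@dataclass
class WorkRateConfig:
    max_heart_rate: int = 110       # beats per minute
    max_skin_temp: float = 37.5     # degrees Celsius
    reduced_pace_factor: float = 0.7
    reduced_pace_hours: int = 2

def adjust_allocation(heart_rate: int, skin_temp: float,
                      current_rate: float, cfg: WorkRateConfig) -> dict:
    """Propose an adjusted work-rate instead of a 'too slow' warning.

    If vital signs suggest fatigue, the system offers the worker a
    choice (reduce pace or take a break) rather than adding pressure.
    """
    if heart_rate > cfg.max_heart_rate or skin_temp > cfg.max_skin_temp:
        return {
            "action": "offer_choice",
            "options": [
                {"type": "reduce_pace",
                 "new_rate": current_rate * cfg.reduced_pace_factor,
                 "duration_hours": cfg.reduced_pace_hours},
                {"type": "take_break"},
            ],
        }
    return {"action": "keep_rate", "new_rate": current_rate}

# Hypothetical usage with parameters agreed at work level:
decision = adjust_allocation(heart_rate=118, skin_temp=37.0,
                             current_rate=12.0, cfg=WorkRateConfig())
```

The design choice worth noting is that the target rate and the fatigue thresholds are inputs to the system, not properties of the product: this is what would leave the matter of work organisation to collective bargaining, as argued above.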

However, this means that data on workers’ vital signs might be accessible to the employer, which represents a significant risk if left unregulated. The employer should therefore access workers’ data only when the data are aggregated and anonymised; otherwise, there is a risk that the worker will be penalised for being too slow. Similarly, all the data collected at work should be aggregated and anonymised before being communicated to the provider in the context of post-market surveillance (Art 61(1) AI Act).
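
Again purely as a hedged illustration – the AI Act does not prescribe any particular technique, and genuine anonymisation requires considerably more than the minimal step shown here – aggregation before any data leaves the work context could start along these lines:

```python
from statistics import mean

def aggregate_for_reporting(records: list[dict]) -> dict:
    """Aggregate per-worker records into team-level statistics.

    Worker identifiers are dropped before anything is reported, so that
    neither the employer nor the provider (e.g., for post-market
    surveillance under Art 61(1) AI Act) can single out an individual.
    NOTE: dropping identifiers alone is not full anonymisation; a real
    deployment would also need, e.g., minimum group sizes (k-anonymity).
    """
    heart_rates = [r["heart_rate"] for r in records]
    return {
        "n_workers": len(records),            # count only, no identifiers
        "mean_heart_rate": round(mean(heart_rates), 1),
        "max_heart_rate": max(heart_rates),
    }

# Hypothetical usage: aggregate before the report is shared.
report = aggregate_for_reporting([
    {"worker_id": "w1", "heart_rate": 88},
    {"worker_id": "w2", "heart_rate": 102},
])
```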

To conclude, the AI Act will have an impact on the application of OSH legislation when a new technology is implemented at work. This is only one example amongst many (e.g., discrimination law), and still, it is difficult to see to what extent this part of the EU social acquis has seriously been taken into account in the AI Act as proposed in April 2021. With that, we do not mean to argue that the EU should regulate in detail issues of AI that touch upon OSH law or EU non-discrimination law. Yet, through this proposal and through a thorough assessment of the ‘fundamental rights’ implications of AI systems, such ambiguities should be removed – if guaranteeing consistency with these social rights can be seen as a purpose at all, of course.
