-
AI Act and Prevention Regulations
Regulation (EU) 2024/1689 (AI Act) subjects the entire discipline of the employment relationship to a stress test, forcing the interpreter to question whether the national rules governing multiple regulatory spheres are capable of ensuring compliance. Unsurprisingly, there is no shortage of reflection on the impact of the new European rules on health and safety at work, given the need to adapt the domestic system's traditional arrangement of protections, obligations and related responsibilities of the actors of the prevention system[1].
In the field of occupational health and safety, AI systems can in fact serve as a tool for exercising employer powers and prerogatives, as a tool for performing work and, even more specifically, as an individual or collective (so-called intelligent) protection device. In each of these applications, AI can mark an evolution of experience and technology (in Italy, the standard set by Article 2087 of the Italian Civil Code) towards better risk governance, provided that the intermediation of the human factor is not excessively minimised or removed altogether.
Experimentation with these safety management models, in turn, opens the door to new scenarios for assessing the position of the employer, for the purposes of attributing responsibility for accidents and risk events, as well as the position of the worker himself in relation to his possible contributory negligence. These critical issues emerge in particular when the AI system takes on the role of manager or autonomous executor of the work process, while the residual (organisational, managerial, control and spending) powers held by the guarantors of the H&S system remain ill-defined. Moreover, where the worker suffers harm to his physical or psychological integrity as a result of using equipment that incorporates AI systems, it is natural to ask how the liability of the various actors in the supply chain is configured for damage caused by a defective product, piece of equipment or machine, or by an incorrect risk assessment, omitted maintenance or undue tampering with the equipment.
In these cases, however, one has to reckon with the need to avoid strict (no-fault) forms of personal liability and, at the same time, to clarify the level of autonomy of AI systems as well as the residual margin of decision-making and enforcement left in the hands of natural persons.
It should be made clear that the AI Act cannot answer all these questions, because its purpose is to create a single market for AI, ensuring that AI systems placed on that market are safe and respect the fundamental values of the European Union through a balanced reconciliation of social rights and market protection. For this reason, its legal basis lies in the establishment and functioning of the internal market (Articles 114 and 16 TFEU).
The AI Act therefore fits into the complex puzzle of technical harmonisation rules on the requirements for machinery and equipment (including work equipment), standing alongside, as regards worker health and safety, Directive 2006/42/EC (Machinery Directive), soon to be repealed by Regulation 2023/1230/EU (Machinery Regulation). The relationship between these two acts is in turn destined to interact with the provisions protecting the health and safety of working environments. In the Italian legal system, for example, the general provisions of Title I and the technical provisions of Title III of Legislative Decree no. 81/2008 are relevant[2].
The link with product legislation is not surprising, since occupational health and safety law (the protective and preventive discipline) is pervaded by a strong technical component that gives concrete content to the safety obligation and, consequently, delimits the perimeter of civil and criminal liability.
With respect to the interaction between the prevention regulation and the AI Act, there is a fear that the latter may generate, in practice, antinomies between the two regulatory frameworks and actually lower the standards of protection with regard to the use of work equipment and the assessment of the related risks. This is especially so in light of the regulatory architecture of the European act, which focuses on risk management of so-called “high-risk” systems and on constructing a system of obligations aimed at making manufacturers or producers (providers) and, at most, suppliers or first-level users (deployers) responsible. The obligations placed on second-level users, which include employers, are of lesser intensity.
In detail, the regulation devotes particular attention to the discipline of risk management, focusing on “high-risk” systems used in the area of «employment, workers’ management and access to self-employment», in particular for «the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates» and for making «decisions affecting terms of work-related relationships, the promotion or termination of work-related contractual relationships, to allocate tasks based on individual behaviour or personal traits or characteristics or to monitor and evaluate the performance and behaviour of persons in such relationships» (Annex III). While such systems are permitted, their use entails particularly stringent regulatory burdens in the form of appropriate governance and data-management practices. The assessment of compliance with these requirements is entrusted to internal procedures carried out by the provider itself (Art. 19)[3]. Employers using such systems must, more simply, follow the instructions for use and report any serious incidents or malfunctions to the supplier/distributor. Conversely, where the risk to the rights and freedoms of individuals is “limited”, the regulation essentially imposes transparency obligations. Finally, where the risk is “minimal”, self-regulation through the adoption of «codes of conduct» is encouraged.
As a result, the framework for apportioning liability, especially for the use of high-risk systems (Art. 27), being addressed to manufacturers and suppliers, would be too lenient on the employer (user), despite his role as the primary guarantor of worker safety. The obligations in fact mainly concern the supplier, who must: ensure that the system complies with all requirements and has adequate quality management measures in place; draw up the technical documentation for the system; keep the automatically generated logs; and ensure that the system undergoes the relevant conformity assessment procedure[4]. These burdens, in essence, concentrate responsibility for the use of such technologies upstream, on the supplier.
By contrast, the employer would bear only a residual civil liability, confined to the sole case in which he makes «significant changes» to the normal operation of the AI software. Moreover, the liability of producer and user alike would be linked only to risks having a significantly harmful impact on the worker’s health and safety (Article 27). Such an arrangement would therefore produce a system that is poorly harmonised, from the standpoint of workers’ health and safety, with that intended by Directive 89/391/EEC and the rules derived from it.
However, it must be pointed out that the AI Act expressly refers to the remaining technical harmonisation legislation and to the Machinery Directive[5], which will be definitively repealed by the Machinery Regulation as of 20 January 2027. More generally, it is likewise clear that the AI Act cannot exhaustively resolve the vagueness of the prevention obligation arising from the use of these systems. The regulation has an avowedly circumscribed sphere of action, mainly focused on risk management in the commercial sphere and aimed at providing a minimum and complementary level of protection that does not preclude the introduction of rules more favourable to workers, including through collective agreements. These obligations must therefore be supplemented by those arising from other European and national regulations already in force.
-
The AI Act in European digital economy legislation
The AI Act is part of a broader reform project that attempts to shape a “cultural” model of the digital economy, the backbone of which is the safeguarding of European social values and fundamental rights – human rights included – in the new competitive scenarios of global markets, even before those of the world of work.
Within this “anthropocentric” vision[6], the complementarity of Regulation 2016/679/EU (GDPR) on data protection and of Directive 2024/2831/EU on platform workers is evident above all, as is that of the further regulatory initiatives under way on liability regimes related to the use of AI. In the specific area of product regulation, moreover, the Machinery Regulation is the most closely connected act: its intertwining with the AI Act is intended to preserve the previous levels of protection for workers in the use of work equipment, even when that equipment employs AI systems.
More specifically, the AI Act seems to add a further piece to the controversial path towards the identification of labour rights with human rights. It shares with other regulatory acts – the CSRD and CS3D Directives[7] – a vision of business activity that considers shareholder value together with its social externalities, and especially the degree of exposure to risks of human rights violations.
The AI Act has in fact provided for a specific and additional obligation to carry out a fundamental rights impact assessment (Fria), covering workers’ rights among others, for certain first-level deployers of AI systems[8]. The Fria regulatory technique represents one of the most innovative and disruptive profiles of the AI Act in the social sphere, in contrast to the traditional approach of technical product regulation to date. First of all, it is potentially highly relevant for the protection of occupational health and safety and privacy, as well as for anti-discrimination protection. Secondly, the Fria comes in addition to conformity assessment, shifting part of the burden of dealing with the potential negative consequences of AI onto the primary users (first-level deployers) in relation to the specific and real operating context of such systems. Therefore, unlike conformity assessment, and since it need not follow pre-established models and checklists, this obligation could, in the adaptation to the European discipline, be developed in closer connection with existing national provisions on the safety of work equipment, possibly also by providing for the involvement of workers’ representatives.
-
Machinery Regulation and Prevention Regulations
Within the same EU internal-market framework, the Machinery Regulation will apply to systems using AI technologies once the previous Machinery Directive is repealed.
Like the AI Act, it places a particular burden on the manufacturer. This figure, possessing detailed knowledge of the design and production process, holds a position of guarantee that obliges him to assess the conformity of the machine[9] and to define its essential health and safety requirements[10], while making available «precise and comprehensible» information[11] and specific accompanying documentation.
The Machinery Regulation also places burdens on the importer and the distributor[12]: the former as the person who places a product from a third country on the EU market; the latter as a person, other than the manufacturer or importer, who makes a product available on the market. The importer must ensure that the manufacturer has completed the appropriate conformity assessment procedures for the product, assuming responsibility for this himself. The distributor is responsible for verifying that the product is correctly identified and accompanied by the necessary documentation, taking due care in transport and storage so as not to compromise its conformity with the safety requirements.
With regard to the safety components of equipment, the Machinery Regulation stipulates, as did the previous directive, that they are subject to CE marking. In defining safety components, however, it now also includes digital components, software among them, extending the regulation to intangible equipment for the first time (Article 3). Furthermore, with regard to machines that use AI systems, the regulation places on the manufacturer an obligation of risk assessment which, in accordance with the AI Act, must take into account the evolution of the behaviour of machines designed to operate with certain levels of autonomy[13]. In addition, new requirements are imposed to protect workers’ health against risks arising from the dynamics of human-machine interaction.
Looking ahead, such provisions appear particularly onerous for manufacturers. One only has to think of the technical measures to be taken in the face of autonomous machine behaviour, or of the cybersecurity solutions required for machines using AI software and systems connected to data networks. Moreover, with respect to human-machine integration, the safety requirements for moving parts will have to be updated to take into account the most innovative solutions for collaborative applications, as the regulation requires[14].
Given that the commercial regulation of work equipment now straddles the two acts, it is useful to understand how this regulatory interweaving will interact with the prevention regulation. The overall set-up does not seem destined to change, since the AI Act expressly refers to the harmonisation legislation and to the Machinery Directive, which, as of 20 January 2027, will be definitively repealed by the new Machinery Regulation. Machines and products falling within the scope of these provisions must therefore be declared compliant with them, and their use must be integrated into the company’s prevention system according to the national regulations already in force.
However, the Machinery Regulation also applies to older products that have undergone «substantial modifications» at the hands of various users: machines which, having been modified after being placed on the market or put into service, see their safety affected through the creation of a new risk or the increase of an existing one[15]. As with AI systems, such cases entail clear and direct responsibilities for the various users, possibly including employers. In the gradual implementation of the two regulations – AI Act and Machinery Regulation – it will therefore be crucial to establish whether one is dealing with a newly manufactured machine or with a machine that, having been placed on the market under the previous regime, has undergone such substantial modifications over time. For the latter, there is inevitably an obligation to assess the risks to the health and safety of persons (or animals)[16], together with the various obligations incumbent on the economic operators in the supply and use chain, of which the employer himself is a part.
Furthermore, it may be assumed that the Risk Assessment Document, regulated by national prevention rules, will be supplemented with specific technical annotations enabling the guarantors of the prevention system to take into account the evolution of the behaviour of machines designed to operate with different levels of autonomy, on the basis of the manufacturer’s technical indications. This follows from the importance that the self-learning process has acquired upstream, during the design and production of the AI system. In addition, when choosing work equipment, the employer must take into account the specific conditions and characteristics of the work to be performed, the risks present in the working environment, those arising from the use of the machinery and those arising from interference with other equipment already in use[17].
In order to minimise the risks, the employer must then adopt adequate technical and organisational measures to ensure that the equipment: is installed and used in accordance with the instructions for use; is subject to appropriate inspection and maintenance; and is kept up to date with the minimum safety requirements. Furthermore, use of the equipment must be restricted to workers who have received adequate information, training and instruction.
From this brief reconstruction it emerges that the employer’s position of guarantee is highly articulated, being invoked with reference to distinct time segments of the work-organisation process once the equipment has been introduced into the company. It remains clearly distinct from the positions of parties external to the company (designers, manufacturers, suppliers, installers and assemblers), which come into play, respectively, in the phases preceding and accompanying that introduction.
The safety requirements imposed by Regulation 2023/1230/EU and Regulation 2024/1689/EU themselves keep this distinction clear. What is more, with respect to the obligation to inform and train workers[18], the general AI-literacy obligation introduced by Art. 4 of the AI Act may require the training obligations provided for by national regulations to be supplemented with notions of how AI systems work.
Ultimately, the new duties of a technical-procedural nature introduced by the two regulations stand alongside, without absorbing, the more traditional prevention duties. Consequently, the positions of guarantee of the actors involved, along the dividing line between product and social regulation, must be kept quite distinct.
-
The responsibility of the employer and supply chain actors
At this point, the question arises as to whether this regulatory “mosaic” can guarantee a certain delimitation of the prevention obligation and an adequate level of protection of workers’ health and safety.
First of all, it cannot be ruled out that the traditional criteria for attributing H&S liability will be applied in an evolutionary manner by the courts. At the same time, collective bargaining could develop modal rules that circumscribe the tasks of the various safety actors.
Hence, it is necessary to analyse how the traditional preventive rules on parties “external” to the company[19] hold up in court, as well as the criteria for apportioning liability between the latter and the employer, developed over time by national case law. The European discipline has in fact driven the extension of the safety debt to the design, construction and supply phases of machinery destined for use in the working environment.
For its part, in Italy, the inter-subjective division of liability between third parties and the employer has been guided by the principle that, if the latter uses (or causes to be used) a machine that does not comply with the regulations in force, he shares liability with the manufacturer (or with the other parties indicated), unless the defect was unknown and not recognisable with ordinary diligence[20]. It follows that the manufacturer’s liability does not exclude that of the employer who uses the machinery, since the latter is obliged to eliminate sources of danger for the workers called upon to use it[21].
That said, in the case of injuries to psycho-physical integrity attributable to defective machines employing AI systems, the determination of the degree of liability of the employer and of the other holders of positions of guarantee should not disregard these hermeneutical canons. Rather, at trial, the judge may find himself in the particular position of having to assess, as one of the elements informing his decision, the technical classification of the levels of autonomy of the AI system, drawn up at the design, construction and marketing stages. This is in order to understand: whether the algorithmic intermediation of the machine used by the worker can be considered the sole cause of the accident; to what extent the accident was exclusively or concurrently caused by production, design or modification defects; and to what extent it was due to the employer’s failure to comply with his obligations in the phases of risk assessment, use, maintenance and training, or to contributory negligence on the part of the worker that could possibly relieve the employer of his responsibilities.
It is therefore difficult to imagine the employer escaping liability altogether, since the malfunctioning of the machine mediated by the AI system will be legally attributable to him. This position of guarantee could, however, be progressively attenuated where the other causal factors mentioned above prevail.
On the other hand, precisely with regard to damage caused by AI systems as components of machines with an increasing degree of autonomy, the controversial hypothesis of conferring legal personality on AI has arisen, as a remedy for the risk of excessive liability on the part of the employer, manufacturers and suppliers. The 2017 European Parliament Resolution on robots was along these lines[22]. The hypothesis of a legal personality of the machine would not imply its personification, assuming instead a functional (and evidentiary) value. Such a mechanism would allow legal effects to be imputed directly to the machine, lightening the criminal exposure and the compensation burden borne by natural persons, also with a view to greater economic sustainability.
This prospect, not without its perplexities, tends towards a compromise regulatory solution, in any case without relieving the employer, designers, manufacturers and suppliers of their respective prevention obligations. It would be a matter of gauging, on the basis of a case-by-case assessment of the risk, the degree of actual residual human control over the AI, even to the point of admitting more extreme cases in which such control no longer intervenes, or intervenes at too late a stage of the decision-making and management process to sustain a strong causal link between the employer’s conduct and the harmful event.
While awaiting more solid interpretative constructs, the fact remains that the employer, in discharging his prevention obligations, will at the very least have to take into account the different degrees of autonomy and pervasiveness of the AI, as certified by the manufacturer. In this way, when assessing risks, the employer will be able to make probabilistic predictions about the “conduct” of the digitised system, enabling him to draw up appropriate prevention and organisational protocols.
In any case, the prerequisite should lie in the possession of prior and adequate training by the employer, workers and their representatives on the technical specifications of AI, with a view to participatory technological risk management.
______________________
References
[1] For an in-depth analysis, please refer to M. Giovannone, Responsabilità datoriale e prospettive regolative della sicurezza sul lavoro. Una proposta di ricomposizione, Giappichelli, Turin, 2024, p. 161 ff.
[2] These in turn are supplemented by technical standards (Annexes V, VI and VII) and sector-specific regulations.
[3] Only high-risk AI systems used for biometric identification are covered by conformity assessment by a “notified body”.
[4] Arts. 14, 15, 16 and 17.
[5] Recital 26 and Annex V, Part A.
[6] COM(2018) 795 final; COM(2021) 205 final, 2.
[7] Corporate Sustainability Reporting Directive (CSRD, Directive 2022/2464/EU) and Corporate Sustainability Due Diligence Directive (CS3D, Directive 2024/1760/EU). Both are currently being revised at the initiative of the European Commission (so-called Omnibus I package).
[8] Art. 27. In detail, this concerns deployers that are public law bodies or private entities providing public services.
[9] Recital 31, Arts. 10 and 25.
[10] Recital 32.
[11] Recital 39.
[12] Arts. 13, 14, 15 ff.
[13] Annex II, Part B, para. 1.
[14] Annex III, Part B.
[15] Art. 3(16).
[16] Recital 26.
[17] In the Italian legal system, a combined reading of Articles 28 and 71 of Legislative Decree No. 81/2008 is relevant.
[18] In Italy, Articles 37(7) and 73, Legislative Decree No 81/2008.
[19] In Italy, Articles 22, 23, 24 and 72 of Legislative Decree No. 81/2008.
[20] Cf. Criminal Cass., Sec. IV, 27 September 2001, no. 35067.
[21] Cf. Criminal Cass., Sec. IV, 13 January 2006, no. 1216; Criminal Cass., Sec. IV, 9 July 2008, no. 27959.
[22] European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics. In the same vein, European Parliament resolution of 20 January 2021 on artificial intelligence: questions of interpretation and application of international law. Contra, the European Economic and Social Committee in its opinion of 31 May 2017, published on 31 August 2017.
