
Introduction

On February 2, 2025, a significant milestone was reached in protecting workers’ fundamental rights as Chapter I [General Provisions] and Chapter II [Prohibited Practices] of the AI Act became applicable.[1] Two days later, on February 4, 2025, the European Commission approved the draft non-binding guidelines on the practical implementation of the prohibited AI practices laid down in Article 5, Chapter II, of the AI Act.[2]

Of particular relevance to the world of work is the inclusion of AI emotion recognition systems in the workplace amongst the AI practices and systems that pose unacceptable risks and are therefore prohibited under Article 5(1)(f), subject to two limited exceptions. This is a welcome addition to the list of prohibited practices in Article 5, especially since it was not included in the Commission’s 2021 Proposal for a Regulation on Artificial Intelligence.[3]

In light of this recent regulatory development, this blog post presents examples of applications of emotion recognition technologies in workplace settings, examines the scope of the prohibition under Article 5(1)(f), and explores how the two exceptions to this prohibition (‘medical or safety reasons’) should be interpreted, drawing also on the European Commission’s guidelines.

Emotion Recognition Technologies in the Workplace

Emotion monitoring is not an unheard-of practice in the workplace and beyond (e.g. healthcare, advertising, surveillance, and security).[4] It is driven by advancements in the field of affective computing, a multi-disciplinary field of study that researches ‘computer’s capabilities to recognise and interpret human emotions and affective states’[5] as well as to ‘demonstrate emotions’.[6]

In recent years, companies like HireVue, Cognisess, Emotiv, and Cogito have designed and developed technologies that can – or, perhaps more accurately, claim to be able to – provide insights into employees’ emotional and mental states through the algorithmic analysis of a wide range of biometric data. These data include but are not limited to facial micro-expressions, speech patterns and tones, head and body posture, gait, and brain activity.[7]

This information is collected and processed using diverse technologies, including facial and speech recognition software and wearables like smart earbuds, headsets, headbands integrated into hardhats and caps, and chest straps, which measure physiological parameters connected to an individual’s emotional state (e.g., heart rate variability, galvanic skin response, breathing rate).

Selection and recruitment are among the most frequently cited examples of emotion monitoring applications in the workplace.[8] However, the (potential) use of such technologies extends far beyond recruitment. Research is ongoing, and tech companies have started commercializing AI-powered products that are purported to detect workers’ emotional inner states throughout the entire lifecycle of an employment relationship, across various industry sectors and occupations (e.g. for call center operators).[9]

These technologies encompass, for instance, those that monitor workers’ attention, concentration, and energy levels by measuring brain data and other biometric data,[10] track stress levels through the analysis of physiological parameters,[11] and evaluate job engagement, satisfaction, and social interactions at work based on communication patterns (e.g. time spent interacting, physical proximity).[12]  
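To make the data flow behind such claims more tangible, the following is a minimal, purely hypothetical sketch (in Python) of how a wearable-based system might combine physiological readings into a single ‘stress’ indicator. The field names, thresholds, and weights are invented for illustration only; they do not reflect any vendor’s actual product, nor any scientifically validated model of emotion.

```python
from dataclasses import dataclass

@dataclass
class WearableSample:
    """One hypothetical reading from a worker-worn sensor (illustrative fields only)."""
    heart_rate_variability_ms: float   # lower HRV is often read as a stress signal
    galvanic_skin_response_us: float   # higher skin conductance is often read as arousal
    breathing_rate_bpm: float          # faster breathing is often read as tension

def clamp(value: float) -> float:
    """Keep a component within the 0-1 range."""
    return max(0.0, min(1.0, value))

def naive_stress_score(sample: WearableSample) -> float:
    """Combine the signals into a 0-1 'stress' score using arbitrary illustrative weights."""
    hrv = clamp((60.0 - sample.heart_rate_variability_ms) / 60.0)
    gsr = clamp(sample.galvanic_skin_response_us / 20.0)
    breath = clamp((sample.breathing_rate_bpm - 12.0) / 18.0)
    return round((hrv + gsr + breath) / 3.0, 2)

if __name__ == "__main__":
    reading = WearableSample(heart_rate_variability_ms=25.0,
                             galvanic_skin_response_us=14.0,
                             breathing_rate_bpm=22.0)
    score = naive_stress_score(reading)
    print(f"stress score: {score} -> {'elevated' if score > 0.5 else 'normal'}")
```

It is precisely this step of inferring an inner state from biometric signals that brings a system within the scope of Article 5(1)(f) when deployed in the workplace; the criticism discussed below questions whether such a mapping is scientifically meaningful at all.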

That said, the development and deployment of these technologies in work environments have faced criticism and opposition. Scholars from various fields, along with policymakers and civil society organizations, have highlighted the lack of scientific basis behind emotion recognition technologies, including the definitional challenges surrounding the concept of ‘emotions’ and their context- and culture-dependent nature.[13] Moreover, in the context of an inherently power-imbalanced relationship, such as the employment relationship, the use of these technologies could result in violations of workers’ fundamental rights, including the right to health and safety at work and data protection, and lead to biases and discrimination.[14] The AI Act echoes and acknowledges these concerns and risks, with Recital 44 pointing out the ‘limited reliability, lack of specificity and limited generalisability’ of technologies that claim to infer individuals’ states of mind.

In light of the above, what qualifies as an AI emotion recognition system under the AI Act and thus falls within the prohibition in Article 5(1)(f)?

The Prohibition of Emotion Recognition Systems in the Workplace under Article 5(1)(f) AI Act

Adopting a risk-based approach, the AI Act takes a firm stance on AI emotion recognition technologies in the workplace: they are prohibited, with only two exceptions, which will be elaborated on later. Specifically, Article 5(1)(f) reads that “[t]he following AI practices shall be prohibited: […] the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons”.

The Commission’s guidelines note that four cumulative conditions must be met for this prohibition to apply: a) the placing on the market, the putting into service for this specific purpose, or the use; b) AI systems to infer emotions of a natural person; c) area(s) of workplace (and education); d) the exceptions (medical or safety reasons) are not applicable.[15]

Conditions a) and c) leave limited room for debate. In this regard, the Commission’s guidelines clarify that the term ‘use’ in Article 5(1)(f) indicates that the prohibition applies to deployers too, i.e., employers. Additionally, the term ‘workplace’ should be understood broadly, ensuring that AI emotion recognition systems are prohibited in both physical and virtual workplaces, and more generally, throughout the entire employment relationship, from recruitment to dismissal.[16]

That said, one aspect that requires particular attention is the use of AI emotion recognition systems to monitor individuals who are not workers but are present in a work context (e.g. technologies that monitor emotions of call center customers or shop clients). In these cases, the AI systems are not prohibited under Article 5(1)(f).[17] However, given the potentially blurred lines between monitoring customers/clients/etc. and workers, deployers need to ensure that safeguards are put into place to prevent the detection of workers’ emotional states.[18]

Turning to conditions b) and d), defining the scope of the concept ‘AI system to infer emotions of a natural person’ and the two exceptions to the prohibition is not entirely straightforward and will be explored in the following two sections.

Defining an ‘Emotion Recognition System’ under the AI Act: ‘Emotions or Intentions’ vs. ‘Readily Apparent Expressions, Gestures or Movements’, and ‘Physical States’

Under Article 3(39) of the AI Act, an ‘emotion recognition system’ is defined as an AI system ‘for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data’.[19] Recital 18 provides the same definition and includes a non-exhaustive list[20] of emotions or intentions, such as happiness, sadness, anger, surprise, disgust, embarrassment, excitement, shame, contempt, satisfaction, and amusement. Other examples are anxiety, impatience, irritation, and complex emotional states.[21] Given the focus on cognitive states of mind, attentiveness, focus, and boredom are also relevant in this context.[22]

Furthermore, Recital 18 excludes from the definition of ‘emotion or intention’: a) physical states; and b) readily apparent expressions, gestures, or movements, provided the AI system only detects and does not infer information about workers’ emotional states. Therefore, AI systems that detect physical states or readily apparent expressions, gestures, or movements do not qualify as ‘emotion recognition systems’ and are not prohibited under Article 5(1)(f) of the AI Act.[23] That said, while Recital 18 and Article 3(39) draw these distinctions (including between emotions and intentions), the boundaries between these categories are not always clear. Three main considerations can be made in this regard.

First, Article 3(39) and Recital 18 refer to two seemingly distinct concepts – ‘emotions’ and ‘intentions’. However, the AI Act does not further elaborate on this distinction, nor do the Commission’s guidelines provide additional clarity. This ambiguity has not gone unnoticed and has started to attract criticism.[24] While uncertainty remains and further guidance is needed on whether, and if so how, to differentiate between these two terms, what matters for the application and impact of the prohibition in Article 5(1)(f) is that ‘emotions or intentions’ be interpreted broadly, as suggested by the Commission’s guidelines.[25]

In other words, by using both terms, emotions and intentions, the European legislature has included a broad range of AI systems within the prohibition, including those that, based on biometric analysis, claim to be able to detect a worker’s intention to act or refrain from acting in a certain way.[26] This approach aligns with the rationale for prohibiting AI emotion recognition technologies in consequential settings like the workplace, where their use poses an unacceptable risk to individuals’ health and safety and fundamental rights and interests.[27]

Secondly, the distinction between ‘emotions or intentions’ and physical states is not (always) clear-cut, which may affect the scope of application of the prohibition under Article 5(1)(f). Recital 18 lists pain and fatigue as examples of physical states, referencing AI systems used ‘to detect the state of fatigue of professional pilots or drivers for the purpose of preventing accidents’.[28] This suggests that the AI Act provides some flexibility for the development and implementation of AI systems designed to monitor fatigue, particularly in high-risk sectors and occupations (e.g. mining, transportation, and construction). Such technologies could support employers in fulfilling their duty of care obligations under EU and national occupational health and safety legislation.[29] In this regard, for example, EU-OSHA has recently published a case study on one such system: a smart headband that measures fatigue levels and detects microsleep by collecting and processing workers’ brain wave information to trigger on-the-spot alerts in case of danger.[30]
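By way of contrast with emotion inference, a fatigue monitor of the kind described in the EU-OSHA case study detects a physical state and triggers an on-the-spot alert. The sketch below is a purely hypothetical illustration of that kind of alert logic; the threshold, sampling interval, and the stand-in fatigue estimate are invented and are not drawn from the EU-OSHA case study or any real device.

```python
import random
import time

FATIGUE_ALERT_THRESHOLD = 0.8   # invented cut-off, for illustration only

def read_fatigue_estimate() -> float:
    """Stand-in for a device-computed fatigue score between 0 (alert) and 1 (microsleep).
    A real headband would derive this from processed EEG data; here it is random."""
    return random.random()

def monitor(samples: int = 5, interval_s: float = 1.0) -> None:
    """Poll the fatigue estimate and raise an alert when it crosses the threshold."""
    for _ in range(samples):
        estimate = read_fatigue_estimate()
        if estimate >= FATIGUE_ALERT_THRESHOLD:
            print(f"ALERT: fatigue estimate {estimate:.2f} exceeds threshold")
        else:
            print(f"ok: fatigue estimate {estimate:.2f}")
        time.sleep(interval_s)

if __name__ == "__main__":
    monitor()
```

Whether a given product stays on the ‘physical state’ side of this line, rather than inferring emotions from the same biometric stream, is exactly the classification question discussed in the remainder of this section.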

At the same time, however, the AI Act does not take a stance on whether stress should be classified as an emotion or as a physical state. This is an important issue to address as stress, like fatigue, can manifest through both physical and mental symptoms and could therefore be monitored by measuring a number of physiological parameters.[31] The Commission’s guidelines provide some clarity, albeit indirectly, on whether stress qualifies, or could qualify, as a physical state. Specifically, the guidance document indicates that general monitoring of stress levels at the workplace is not permitted and does not fall under the ‘medical or safety reasons’ exception in Article 5(1)(f).[32] This suggests that the guidance document adopts a narrow view of what constitutes a physical state, categorizing stress as an ‘emotion or intention’ under Article 3(39) and permitting the monitoring of stress levels based on biometric data only if the ‘medical or safety reasons’ exception applies (see next section).

Thirdly, while the distinction between ‘emotions or intentions’ and readily apparent expressions, gestures, or movements may seem clear in theory, it can be blurred in practice. Developers and providers may hence exploit this ‘escape route’ provided by the AI Act by classifying an AI system as one that merely detects a readily apparent expression, such as a person smiling,[33] while in reality it identifies and infers information about the individual’s inner emotional and mental state. In response to the Dutch Data Protection Authority’s call for input on the prohibition of AI systems for emotion recognition in the workplace and education, the Electronic Privacy Information Center has raised concerns about this risk.[34]

Exceptions to the Prohibition of AI Emotion Recognition Systems: Medical or Safety Reasons

As mentioned above, there are two exceptions to the prohibition of AI emotion recognition systems: medical or safety reasons. These must be narrowly interpreted,[35] and where applicable, the requirements laid down in other relevant legislation (e.g. data protection law, labor law, and occupational health and safety law) would still apply. When, then, could these exceptions apply?

The Commission’s guidelines exclude the application of the ‘medical reasons’ exception for the general monitoring of stress levels as well as burnout and depression in the workplace. Consequently, the AI Act could put a halt to the development and implementation of these AI emotion monitoring technologies, especially when intended for work environments, such as office settings, where employers’ use of such systems to enhance workers’ general well-being (e.g. as part of corporate wellness programs) is closely linked to the goal of tracking productivity.[36]

Conversely, the ‘safety reasons’ exception might permit AI emotion recognition technologies that measure stress and attention levels based on biometric data in high-risk sectors and professions where workers are at risk of fatal accidents, or serious injuries and health conditions.[37] For instance, this exception could apply to AI systems designed to detect and measure the emotional state of workers in industries like construction and transportation to prevent accidents.[38]

However, this exception does not constitute a blanket ‘approval’ for all AI systems designed for fatigue detection and monitoring. As outlined in the Commission’s guidelines, a proportionality assessment must still be conducted to determine whether there are less invasive ways to reach the same objective (safety). This assessment must also consider the often-blurred boundaries between the ‘benign’ goal to keep workers healthy and safe in the workplace, on the one hand, and surveilling workers for performance and productivity reasons, on the other.[39]

Concluding Remarks

As the possibilities offered by AI-driven technologies increase, the prohibition of AI emotion recognition systems in the workplace under Article 5(1)(f) marks a significant step forward in protecting workers’ fundamental rights and interests. The Commission’s guidelines represent a valuable and necessary addition to understanding how this provision and its different elements should be interpreted.

However, while this guidance document offers important clarifications on the core concepts in Article 5(1)(f), including illustrative examples, room for interpretation remains (e.g. monitoring stress and attention levels in certain professions might fall under the ‘safety reasons’ exception) and may be exploited by tech developers to circumvent the prohibition. A case-by-case assessment in the application of Article 5(1)(f) therefore remains key to ensuring that the prohibition of AI emotion recognition technologies in the workplace is upheld and that exceptions are justified only in limited cases, where the positive outcomes (e.g. protecting workers’ lives and health) outweigh the negative implications for workers’ fundamental rights, including respect for their (mental and physical) health and safety.

______________________

References

[1] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence, OJ L, 2024/1689, 12.7.2024. According to Article 113 of the AI Act, the legislation entered into force on August 2, 2024, and will apply from August 2, 2026. However, the second paragraph of this provision lists three exceptions, including one related to the prohibited AI practices [Chapter II]. See also Recital 179 of the AI Act.   

[2] European Commission, Annex to the Communication to the Commission, Approval of the content of the draft Communication from the Commission – Commission Guidelines on prohibited artificial intelligence practices established by Regulation (EU) 2024/1689 (AI Act), Brussels, 4.2.2025, C(2025) 884 final. At the time of writing (31 March 2025), the guidelines have not yet been formally adopted by the European Commission. They will become applicable from the moment they are formally adopted: see European Commission, Communication to the Commission, Approval of the content of the draft Communication from the Commission – Commission Guidelines on prohibited artificial intelligence practices established by Regulation (EU) 2024/1689 (AI Act), Brussels, 4.2.2025, C(2025) 884 final.

[3] European Commission, Proposal for a Regulation on Artificial Intelligence (Artificial Intelligence Act), Brussels, 21.4.2021, COM(2021) 206 final. In the Commission Proposal, only emotion recognition systems used by law enforcement authorities and in the context of migration, asylum, and border control management were included in the high-risk category (Annex III – High-Risk AI systems Referred to in Article 6(2) – points 6(b) and 7(a)). Civil society organizations and data protection authorities have advocated including a general prohibition of AI emotion recognition systems in the list of prohibited AI systems and practices. In this regard, see, for instance: European Digital Rights, An EU Artificial Intelligence Act for Fundamental Rights – A Civil Society Statement, 30.11.2021, p. 3; EDPB-EDPS, Joint Opinion 5/2021 on the proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), 18 June 2021, para. 35. The 2023 European Parliament amendments on the Proposal for the AI Act included a prohibition of emotion recognition technologies in the workplace in Article 5(1)(point dc) (Amendment 226) and Recital 26 (Amendment 52): Amendments adopted by the European Parliament on 14 June 2023 on the proposal for a regulation of the European Parliament and of the Council on laying down harmonised rules on artificial intelligence (Artificial Intelligence Act).

[4] For some examples see: Electronic Privacy Information Center (EPIC), Comments of the Electronic Privacy Information Center to Autoriteit Persoonsgegevens (NL) – Department for the Coordination of Algorithmic Oversight (DCA), AI Systems for Emotion Recognition in the Areas of Workplace or Education Institutions: Prohibition in EU Regulation 2024/1689 (AI Act), [DCA-2024-02], 17 December 2024, p. 5; Amelia Katirai, Ethical considerations in emotion recognition technologies: a review of the literature (2024) 4 AI and Ethics 927-929; European Commission, Guidelines on prohibited artificial intelligence practices, para. 240.

[5] EDPS, Facial Emotion Recognition, TechDispatch, Issue 1, 2021, p. 1.

[6] Phoebe V. Moore, Gwendolin Barnard et al., Data on Our Minds: Affective Computing at Work (Report, Institute for the Future of Work, November 2024), p. 9.

[7] For other examples, see Amelia Katirai, Ethical considerations in emotion recognition technologies: a review of the literature, p. 929, and European Commission, Guidelines on prohibited artificial intelligence practices, para. 251.

[8] See, e.g., Phoebe V. Moore, Data subjects, digital surveillance, AI and the future of work (Study, European Parliamentary Research Service 2020) p. 23; Richard A. Bales and Katherine V. W. Stone, The Invisible Web at Work: Artificial Intelligence and Electronic Surveillance in the Workplace (2020) 41(1) Berkeley J. Emp. & Lab. L. 1, pp. 12-13; Eurofound (2020), Employee monitoring and surveillance: The challenges of digitalisation, Publications Office of the European Union, Luxembourg, p. 33; Bernard Marr, The Amazing Ways How Unilever Uses Artificial Intelligence to Recruit & Train Thousands of Employees, Dec 14, 2018; Peter Mantello et al., Bosses without a heart: socio-demographic and cross-cultural determinants of attitude toward Emotional AI in the workplace (2023) 38 AI & Society 97-119, p. 98; Peter Mantello and Manh-Tung Ho, Emotional AI and the future of wellbeing in the post-pandemic workplace (2024) 39 AI & Society 1883-1889, p. 1883.

[9] See, e.g., Angelica Salvi del Pero, Peter Wyckoff, and Ann Vourc’h, Using Artificial Intelligence in the workplace: What are the main ethical risks? (2022, OECD Social, Employment and Migration Working Papers No. 273), p. 30; EU-OSHA, Artificial intelligence for worker management: an overview (Report, 2022), p. 22; Tom Simonite, This Call May Be Monitored for Tone and Emotion, WIRED, 19 March 2018.

[10] See, e.g., European Commission: Directorate-General for Employment, Social Affairs and Inclusion, Visionary Analytics, Paliokaitė, A., Christenko, A., Aloisi, A. et al., Study exploring the context, challenges, opportunities, and trends in algorithmic management in the workplace – Final report, Publications Office of the European Union, 2025, https://data.europa.eu/doi/10.2767/5629841, p. 78; Vishal Patel et al., Trends in Workplace Wearable Technologies and Connected-Worker Solutions for Next-Generation Occupational Safety, Health, and Productivity (2022) 4 Adv. Intell. Syst. 2100099, p. 9, 11 and 13; EU-OSHA, Artificial intelligence for worker management, p. 32.

[11] Vishal Patel et al., Trends in Workplace Wearable Technologies, p. 13-14. The authors mention the wireless EEG headset by IMEC, which can detect emotions and attention in real time. IMEC has developed wearables for pain and stress monitoring (e.g. patches, wristbands, and EEG headsets) that analyze an individual’s physiological signals (e.g. heart rate variability, tension in facial muscles, eye movement and pupil dilation, and brain signals). Information available here: Technology for pain and stress monitoring devices | imec. One of IMEC’s products (Imec’s Chill band) was used in 2017 to investigate stress in the work environment.

[12] Vishal Patel et al., Trends in Workplace Wearable Technologies, p. 10 (the authors mention the wearable sociometric badge by Humanyze).

[13] See, e.g., Amelia Katirai, Ethical considerations in emotion recognition technologies: a review of the literature, p. 931 ff; Andreas Häuselmann et al., EU law and emotion data, 2023 11th International Conference on Affective Computing and Intelligent Interaction (ACII), Cambridge, MA, USA, 2023, p. 1-8; Angelica Salvi del Pero, Peter Wyckoff and Ann Vourc’h, Using Artificial Intelligence in the workplace, p. 29; EPIC, Comments of the Electronic Privacy Information Center to Autoriteit Persoonsgegevens (NL), p. 6-11; AccessNow, Joint civil society amendments to the Artificial Intelligence Act.

[14] See, e.g., Amelia Katirai, Ethical considerations in emotion recognition technologies: a review of the literature, p. 932-933; European Parliament resolution of 12 February 2019 on a comprehensive European industrial policy on artificial intelligence and robotics (2018/2088(INI)), para. 13; Sophie Weerts et al., AI systems for Occupational Safety and Health: From Ethical Concerns to Limited Legal Solutions, in M. Janssen et al., Electronic Government. EGOV 2022. Lecture Notes in Computer Science, vol 13391. (2022, Springer, Cham), p. 499-514.

[15] European Commission, Guidelines on prohibited artificial intelligence practices, paras. 242-243.

[16] Ibid., paras. 243, 253-254. For some illustrative examples of prohibited/not-prohibited AI systems see the box on pages 85-86.

[17] Ibid., para. 254 (box).

[18] Ibid., para. 254 (box) and para. 270. Even though not prohibited under Article 5(1)(f), AI emotion recognition systems used for customers/clients can still be considered high-risk systems under the AI Act (see Article 6(2) and Annex III, 1(c)). In this regard, Article 50(3) of the AI Act also sets transparency obligations for deployers.

[19] This blog post focuses on one aspect of this definition: ‘emotions or intentions’. However, it is important to note that Article 3(39) includes other elements that have also raised questions and concerns regarding their definition and scope. One such element is the concept of ‘biometric data’, as the prohibition in Article 5(1)(f) applies only to AI emotion recognition systems that infer emotions or intentions based on biometric data analysis, thereby allowing systems that detect emotions from written text. In this regard, see, e.g. European Commission, Guidelines on prohibited artificial intelligence practices, paras. 250-251; Autoriteit Persoonsgegevens (AP) [Dutch Data Protection Authority] – Department for the Coordination of Algorithmic Oversight (DCA), AI systems for emotion recognition in the areas of workplace and education, Summary of responses and next steps, February 2025, DCA-2025-02, paras. 14-17; Christiane Wendehorst and Yannic Duller, Biometric Recognition and Behaviour Detection – Assessing the ethical aspects of biometric recognition and behavioural detection techniques with a focus on their current and future use in public spaces (European Parliament, Study, 2021), p. 67 ff.

[20] The AI Act does not specify this point; however, it is clarified in the European Commission, Guidelines on prohibited artificial intelligence practices, para. 247.

[21] AP, AI systems for emotion recognition, para. 11 (examples given by the respondents).

[22] European Commission, Guidelines on prohibited artificial intelligence practices, para. 263.

[23] However, they may still be classified as high-risk systems under Annex III, point 4, with all the resulting requirements (Chapter III, Section 2, AI Act) and obligations for providers and deployers (Chapter III, Section 3, AI Act), and their use in the workplace must comply with relevant existing legislation (e.g. GDPR and OHS legislation).

[24]  AP, AI systems for emotion recognition, para. 11.

[25] European Commission, Guidelines on prohibited artificial intelligence practices, para. 247.

[26]  European Commission, Guidelines on prohibited artificial intelligence practices, para. 254 (box), for instance, cites the example of cameras in a supermarket or bank that are used to detect suspicious clients and conclude if somebody is about to commit a robbery.  See also, AP, AI systems for emotion recognition, para. 11.

[27] European Commission, Proposal for a Regulation on Artificial Intelligence, section 5.2.2 (‘The list of prohibited practices in Title II comprises all those AI systems whose use is considered unacceptable as contravening Union values, for instance by violating fundamental rights’).

[28] The same example is used in the European Commission, Guidelines on prohibited artificial intelligence practices, para. 249 (box).

[29] Stefania Marassi, Intelligenza Artificiale e Sicurezza sul Lavoro, in Marco Biasi, Diritto del Lavoro e Intelligenza Artificiale (2024, Giuffrè), p. 217-218.

[30] EU-OSHA, Smart Digital Systems for Improving Workers’ Safety and Health: Smart Headband for Fatigue Risk-Monitoring (Case Study, 2024). On the topic, see also, e.g.: Mohammad Moshawrab et al., Smart Wearables for the Detection of Occupational Physical Fatigue: A Literature Review (2022) 22(19) MDPI Sensors 7472; EU-OSHA, Smart digital monitoring systems for occupational safety and health: uses and challenges (2023, Report), p. 23;  Sina Rasouli et al., Smart Personal Protective Equipment (PPE) for construction safety: A literature review (2024) vol. 170 Safety Science 106368, p. 6 and 9; Angelica Salvi del Pero, Peter Wyckoff and Ann Vourc’h, Using Artificial Intelligence in the workplace, p. 35.

[31] See, e.g., AP, AI systems for emotion recognition, para. 12 (‘Respondents indicate that interpretations such as stress, pain and fatigue can be both an emotion and a physical state’).

[32] European Commission, Guidelines on prohibited artificial intelligence practices, para. 257 and box on page 88.  

[33] For examples of readily apparent expressions, gestures, or movements, see European Commission, Guidelines on prohibited artificial intelligence practices, para. 249, and Recital 18 AI Act.

[34] EPIC, Comments of the Electronic Privacy Information Center to Autoriteit Persoonsgegevens (NL), p. 25-26.

[35] European Commission, Guidelines on prohibited artificial intelligence practices, para. 256.

[36] For examples of research on the topic, see, Lu Han et al., Detecting work-related stress with a wearable device (2017) vol. 90 Computers in Industry 42–49, and Marieke van Vugt, Using Biometric Sensors to Measure Productivity, in C. Sadowski and T. Zimmermann (eds), Rethinking Productivity in Software Engineering (2019, Apress), p. 158-167.

[37] The Commission’s guidelines stress that ‘[t]he notion of safety reasons within this exception should be understood to apply only in relation to the protection of life and health and not to protect other interests, for example property against theft or fraud’ (para. 258).

[38] See, e.g., Sungjoo Hwang et al., Measuring Workers’ Emotional State during Construction Tasks Using Wearable EEG (2018) vol. 144(7) Journal of Construction Engineering and Management 04018050; Kabir Ibrahim et al., Benefits and challenges of wearable safety devices in the construction sector (2025) vol. 14(1) Smart and Sustainable Built Environment 50-71.

[39] European Commission, Guidelines on prohibited artificial intelligence practices, paras. 259-260. In para. 260 and in the box on page 88, the European Commission highlights that when the ‘medical or safety reasons’ exception is used the data cannot be used for another purpose. This is also in line with the respect for the principle of purpose limitation under the GDPR.
