Practice and academic insights into artificial intelligence in work
Accept it: artificial intelligence is changing how we work, and we must adapt. This was the tenor of a discussion amongst a panel of legal practitioners and law academics in Dublin in May 2025. This post summarises that discussion.
What does good use of AI in the work setting look like?
Lawyers tend to be more concerned with the risks or negatives that may arise from a given set of circumstances. Asking them to consider what good use of AI looks like (i.e. looking to the positives of deploying AI systems in the work setting) elicits differing responses. Good use here focuses on how AI systems are deployed by employers. But good use is not a singular concept.
Artificial intelligence can increase productivity by allocating routine tasks to AI systems, thereby freeing workers for other, more time-consuming tasks. AI can also facilitate a different kind of support for employees. These systems can offer employers opportunities to manage workloads and distribute tasks, leading (it may be hoped) to increased morale. Errors may be reduced, which can lead to overall improvements in productivity in addition to the limitation of risks. AI can also enable employers to respond better to reasonable accommodation requests from employees.
AI is not without risk (this is not a pun on the AI Act’s risk-based approach). It may be analogised to the smartphone or social media. Both evidence the mixed contribution technology can have. Both technologies connect people in tremendous ways, thereby facilitating the exchange of ideas. They can also, in the work context, be the bases for disciplinary action against employees for conduct/expression that is inconsistent with employers’ policies or expectations.
Beyond acknowledging risk, some retain their concern with the deployment of AI systems, leading to a relative response: good use depends on what tasks are affected by AI systems, and how these systems are being used. Implicit in the response is the mixed assessment of AI – while there are positives, some sense that the positives may be obscured by ill-advised deployment.
One approach to effecting good use of AI may be to use fundamental rights (such as those in the EU Charter of Fundamental Rights) as a guide in the deployment of AI systems. The idea clearly alludes to the balancing exercise that has long been at the centre of the regulation of work. Even here, though, the situation is far from straightforward. The discussion of deploying AI systems in the work setting has largely centred on employers inserting these systems into their workplaces. What if workers have their own AI that they use for work? (Think of a worker bringing their own laptop to work.) If this practice is not prohibited, then employers will need to develop robust AI policies (that are enforced) regarding both employers’ and workers’ use of AI systems.
i) Good use of AI by workers
Good use of AI is a more dynamic concept than only perceiving it as employers deploying systems into the workplace. We may also consider what employees’ good use of AI systems means. Good use of AI can be a skill for employees. This can entail posing questions that elicit particularised responses. Lawyers will be familiar with this approach, since issue identification has long been a key part of (and skill in) legal practice. Employees will be required to hone their skills in posing questions. This idea of good use of AI systems may superficially suggest that AI will give the answers if the right questions are posed. Workers must bring more to the exercise than this. They will need to know what is missing from the responses, and how to add that missing information or analysis.
Transparency as a panacea
Transparency can be a partner of a fundamental rights approach. EU legislative efforts rely significantly on transparency. Transparency can be found in two forms in laws such as the Platform Work Directive or the AI Act. First, transparency requires the provision of information. Second, transparency means human oversight layered into the process.
i) Information provision as transparency
Compliance with transparency obligations consists, in part, in providing information. The premise is that if individuals (here, employees) have information in advance, they can plan accordingly. Planning, in this context, presumes employees can act on this information in a way that allows them to avoid undesirable situations. The EU seems to have favoured this idea of transparency. See, for example, the Platform to Business Regulation (2019/1150). There can be a question about the extent of transparency, and whether it entails something more: some essence of a right beyond merely being provided with the information.
ii) Human involvement as transparency
Humans having a role in the process of technological decision-making (in/after/before the loop) constitutes another form of transparency. The questions surrounding this form have been raised before with Article 22 of the General Data Protection Regulation (GDPR). The distinction targeted here is between automated decision support (where a person would make the final decision) and automated decision-making (where there is no human judgement involved).
An individual has a right not to have a significant decision made solely by automated means. A legal effect is not defined in Article 22. The Article 29 Working Party (which is now the European Data Protection Board) writes that such effect would include affecting an individual’s “legal rights, such as the freedom to associate with others, vote in an election, or take legal action”. A “similarly significant” effect would, again in the Working Party’s words, “at its most extreme, lead to the exclusion or discrimination of the individual”. The effect includes “decisions that deny someone an employment opportunity or put them at a serious disadvantage”.
Human involvement should entail more than a human routinely applying automatically generated decisions. A human manager may take a passive approach to such a decision (where, for example, an algorithm renders a decision) by simply affirming its conclusion. A “decision” pursuant to this provision has been interpreted as including “a number of acts which may affect the data subject in many ways” (including creditworthiness). Meaningful oversight should include oversight by an individual with the authority and competence to change the decision, as well as analysis of all relevant data. Uber, according to the Amsterdam Court of Appeal, had not established human involvement when, amongst other points, it did not sufficiently set out that all relevant data had been considered, nor the qualifications or knowledge level of the employees who reviewed the results of automated processing. In another case, an in-person conversation with an individual whose conduct was flagged as fraudulent through automated processing satisfied Article 22’s requirement of human intervention.
Collective bargaining on algorithms
Information technology adds to the work of trade unions. But what does it add? Another dimension to bargaining; another term to the employment contract; another working condition to monitor (a condition which can implicate other conditions of work, such as stress (technostress)).
It may be conjectured that the workforce and trade unions are against new technologies. This is the Luddite critique. Sometimes hesitation or wariness may be mistaken for opposition. To give workers the benefit of the doubt, a significant change to the way their work performance is managed can create stresses that induce a range of reactions.
i) Banking Sector Social Partners
The European Social Partners in Banking released their Joint Declaration on Employment Aspects of Artificial Intelligence on 14 May 2024. In it, the Social Partners “confirm[ed] the relevance of social dialogue (including information and consultation) and collective negotiation to steer the significant impacts on workers resulting from the introduction of AI. The European Social Partners will monitor such impacts according to national legislation and customs.” The Social Partners recommend regularly undertaking joint occupational safety and health risk assessments regarding the effects of algorithmic management. Social dialogue is also identified as a means of developing “joint actions to support job transition and ensure re/up-skilling opportunities when profiles are affected by the grown use of AI and other digital technologies.” The Joint Declaration also calls for the maintenance of a “series of individual and collective digital rights”. Moreover, the Social Partners state that the “use of AI in surveillance to monitor employees should be limited, transparent, proportional and in compliance with existing collective agreements and national or local law”. The document reiterates what is found in other EU legislation (such as Article 22 GDPR): that employees “should have the right not to be subject to decisions that affect them legally and significantly based solely and exclusively on automated variables.”
On 4 March 2025, the Bank of Ireland and the Financial Services Union reached an agreement on AI. It draws from the Social Partners’ Joint Declaration. The agreement does not contain the type of clauses one would find in a collective agreement. Instead, it consists of a statement of principles that include: “The Bank reaffirm commitment to the job security and change management agreements with FSU. These agreements will be updated with a note referencing this new AI agreement. It is agreed by the Bank and the FSU that in order to assist with this commitment to job security, all employees will need to be flexible regarding redeployment. If an employee refuses to accept a reasonable alternative role this will be managed in line with the change management agreement.”
The banking example suggests movement amongst the social partners at EU level, which has subsequently prompted national-level action. In the midst of much uncertainty, both the Joint Declaration and the AI agreement are welcome indications of efforts towards dialogue attentive to the impact of technological advances on workers’ jobs.
Further Thoughts
Optimism and scepticism together encapsulate the panel’s perspectives on the impact of AI on work. Without doubt, there is a desire for more concrete steps that extend beyond the rhetoric of a human-centred approach. At this point in time, it may be useful to note the conflicting paces we now face. Technologies increase the speed of task completion. But making space for humans also means facilitating change at a human pace, which, for better or worse, is slower than that of technology.
Note: The event from which these comments are drawn was hosted by the Technology Law & Policy Centre at Maynooth University.
