Introduction

The rise of Artificial Intelligence (AI) in the labour market is fuelling a global debate about its potential to revolutionize recruitment processes, including in the public sector.

Public hiring systems, traditionally founded on open competitions to ensure meritocracy, transparency, and equal access to public employment, may soon be challenged by AI’s ability to automate and streamline at least certain phases of worker selection procedures. Consequently, there is an increasing need to analyse the concrete possibilities for AI integration into public recruitment systems, focusing on both the opportunities and challenges it presents.

In public hiring, the principle of open competition has long been a cornerstone, especially in European countries, where public servants are expected to be evaluated solely on their technical preparation in order to maintain impartiality. Indeed, public hiring systems in Europe aim to ensure that the most qualified candidates are selected, typically through standardized exams, transparent procedures, and impartial evaluations. However, the introduction of AI-based tools in recruitment holds the potential to reshape these systems, offering remarkable opportunities for innovation while simultaneously raising concerns about algorithmic bias and the erosion of fundamental labour rights.

To explore this issue further, it is essential to first analyse the functioning of these algorithms and their applications in the private sector, which is currently leading innovation in this field.

AI Applications in Recruitment

AI tools in recruitment offer a wide array of functionalities. The range of their possible applications—often used in combination and in progression—goes from simple screening of social media job profiles and keyword-matching on candidates’ resumes to assessing personality traits or predicting future job performance through specialized games and video interviews that analyse facial expressions, tone of voice, choice of words and response speed.
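
To make the simplest of these stages concrete, here is a minimal, hypothetical Python sketch of keyword-matching over resumes. The keyword list, weights, and candidate texts are illustrative assumptions, not any vendor’s actual criteria; real systems layer NLP normalisation and ranking models on top of this basic idea.

```python
# Hypothetical keyword-matching screen: all keywords and weights are
# illustrative assumptions, not a real recruitment product's criteria.
JOB_KEYWORDS = {"python": 3, "sql": 2, "project management": 2, "agile": 1}

def score_resume(resume_text: str) -> int:
    """Sum the weights of the job keywords found in a lower-cased resume."""
    text = resume_text.lower()
    return sum(w for kw, w in JOB_KEYWORDS.items() if kw in text)

resumes = {
    "candidate_a": "Experienced in Python and SQL; certified in agile methods.",
    "candidate_b": "Background in marketing and corporate communications.",
}

# Rank candidates by descending keyword score.
ranking = sorted(resumes, key=lambda c: score_resume(resumes[c]), reverse=True)
print(ranking)  # ['candidate_a', 'candidate_b']
```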

These tools can provide significant benefits for both employers and candidates. Employers gain from the automation of time-consuming and costly recruitment procedures, resulting in processes that are significantly faster, more economical, and generally more appealing to applicants. Moreover, thanks to the more targeted selection process, algorithmic recruitment systems tend to identify candidates who are not only more likely to accept the specific job offer, but who also demonstrate strong work performance and high motivation, ultimately reducing turnover costs (AGNIHOTRI, BHATTACHARYA, 2024)[i].

Conversely, candidates may also benefit from faster and more efficient hiring processes, which reduce the need to devote extensive hours to unproductive, rote studying. Additionally, they may have the opportunity to receive detailed feedback in the event of non-selection (AGNIHOTRI, BHATTACHARYA, 2024). However, because recruitment systems shape candidates’ future professional lives, they must adhere to principles of trustworthiness and fairness; doubts on this front can lead some candidates to develop a degree of algorithm aversion, prompting them to prefer a human recruiter or, at the very least, a hybrid system (KEPPLER, 2024)[ii].

In theory, to the advantage of both parties, AI’s objectivity promises to eliminate the unconscious biases that human recruiters often introduce into the hiring process, thereby fostering a more meritocratic system that could enhance employee diversity in terms of educational background and social condition. However, these theoretical advantages are increasingly being questioned, as real-world applications of AI in recruitment have revealed significant flaws.

The functioning of algorithmic recruitment and its biases

AI systems operate through machine learning, leveraging vast quantities of data (the “data set”) to identify hidden patterns, connect inputs with outputs, and make decisions that are not entirely comprehensible or predictable to the human mind. In the context of recruitment, these tools are trained on historical data primarily composed of profiles of current successful employees, and they seek to identify in new candidates the same traits exhibited by previous successful workers in similar roles, traits which may reflect existing biases in society (KELAN, 2024)[iii]. For instance, regarding gender discrimination, traits such as confidence may manifest differently among women, men and non-binary individuals, yet the algorithm is likely to be trained predominantly on male examples. Moreover, if an AI tool is developed using data from a company that has historically hired predominantly male managers, the system may conclude that being male is a crucial factor for success, thereby perpetuating gender discrimination by mistaking correlation for causality and demonstrating an intrinsic lack of holistic contextual understanding.
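
A small synthetic experiment can make this correlation-for-causality point tangible. The sketch below, using entirely fabricated data, trains a standard classifier on “historical hires” in which past recruiters favoured male candidates; the model duly learns gender as a predictive feature. All variable names, distributions, and magnitudes are illustrative assumptions.

```python
# Toy illustration (synthetic data, not any real hiring model) of how
# training on biased historical decisions makes a protected attribute
# look "predictive" of success.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
gender = rng.integers(0, 2, n)       # 1 = male, 0 = female (synthetic)
skill = rng.normal(0, 1, n)          # the genuinely job-relevant signal

# Historical "hired" labels: skill matters, but past recruiters also
# favoured male candidates -- the societal bias baked into the data set.
hired = (skill + 1.5 * gender + rng.normal(0, 0.5, n) > 1).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, gender]), hired)
print(dict(zip(["skill", "gender"], model.coef_[0])))
# The gender coefficient comes out large and positive: the model has
# absorbed the historical preference and will replicate it on new candidates.
```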

AI’s ability to absorb, embed, and replicate societal biases presents a significant risk of self-fulfilling predictions: the system consistently selects a narrow target group of workers while neglecting others who may possess exceptional capabilities. The final outcome may be the relegation of workers to stereotypical social roles, in stark contrast to the principles outlined in the Joint Opinion 5/2021 on the proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), which advocated for the prohibition of AI systems that categorize individuals into clusters.

These dynamics have already resulted in high-profile cases of discrimination within the private sector. Amazon, for example, abandoned its AI hiring tool in 2018, after four years of implementation, upon discovering that it systematically penalized female candidates due to a lack of female representation in its data set (KELAN, 2024). Although the company initially attempted to rectify the issue, it ultimately concluded that there was no feasible way to ensure the complete absence of gender bias. Indeed, while augmenting the data set is one of the most reliable techniques for reducing algorithmic bias, it remains imperfect (ALBAROUDI, MANSOURI, ALAMEER, 2024)[iv].

Further analysing this example, one potential solution could have been to keep the algorithm unaware of certain potentially discriminatory factors (such as gender), implementing a so-called “blind hiring” technique. However, this approach risks reverting the concept of equality to its primordial and outdated definition, one that emphasizes formal rather than substantive equality, and it could lead to problems of gender underrepresentation in certain sectors of the labour market. Consider another instance: AI recruitment systems may be programmed to prioritize the efficiency of the hiring employer. An AI tool designed to reduce future costs, while remaining oblivious to the protective mechanisms provided by legal systems to achieve substantive equality among workers, could place all candidates on a formally equal footing and yet favour individuals less likely to take maternity leave or require additional training, thereby disadvantaging groups such as women or individuals from underrepresented backgrounds.
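
The limits of the “blind hiring” workaround can likewise be sketched with synthetic data: even when the gender column is withheld, a correlated proxy feature lets the model reconstruct the bias. The proxy here is an invented stand-in (think of a gendered hobby or club membership appearing on a CV); all data and names are assumptions for illustration only.

```python
# Sketch of why "blind hiring" can fail: the model never sees gender,
# but a correlated proxy feature reintroduces it. All data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
gender = rng.integers(0, 2, n)                 # hidden from the model
proxy = gender + rng.normal(0, 0.3, n)         # CV signal correlated with gender
skill = rng.normal(0, 1, n)
hired = (skill + 1.5 * gender + rng.normal(0, 0.5, n) > 1).astype(int)

# Train WITHOUT the gender column ("blind") but WITH the innocuous-looking proxy.
blind_model = LogisticRegression().fit(np.column_stack([skill, proxy]), hired)
print(dict(zip(["skill", "proxy"], blind_model.coef_[0])))
# The proxy coefficient is strongly positive: formal blindness to the
# protected attribute does not deliver substantive equality.
```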

All these issues arise within a sector that remains inadequately regulated. As is common in many highly innovative fields, the level of protection applicable has, for some time, been left entirely to the discretion of economic operators due to legislation that inevitably requires time to recognize and regulate phenomena as they manifest in reality.

The European AI Act

Recently, the European Union has taken a leading role in developing comprehensive regulatory frameworks for AI, culminating in the adoption of Regulation No. 2024/1689 (the AI Act). This regulation aims to address the risks posed by AI while simultaneously fostering innovation, with a view to protecting core European social values. However, it presents a more restrained approach in defending basic human rights compared to the original version proposed by the European Parliament and the Council (ALAIMO, 2024)[v], as its legal foundations rest primarily on internal market considerations and the data protection framework (Articles 114 and 16 TFEU), rather than on social values and anti-discrimination law (such as Article 19 TFEU).

The regulation employs both a risk-based and anthropocentric approach.

From a risk-based perspective, AI systems are classified into various categories depending on the level of risk they pose. Specifically, AI systems used in employment, worker management, and recruitment are explicitly classified as high-risk, as underscored in Recital 57 and Article 6(2) of the regulation. This classification arises from their propensity to “perpetuate historical patterns of discrimination”. For recruitment algorithms in particular, this high-risk status is likely to be permanent: reclassification to a lower risk category is effectively ruled out by their intrinsic reliance on profiling, as noted in Recital 53. Consequently, all recruitment algorithms must ensure the accuracy and consistency of their data sets to prevent discrimination against legally protected groups of workers.

These obligations are shared between AI providers and deployers, emphasizing principles of transparency, traceability, accuracy, explainability, and impartiality, all of which fundamentally relate to the broader principle of accountability. More specifically, from August 2026 the regulation’s provisions on high-risk AI systems of this kind become mandatory for all operators, requiring them to comply with specific rules. These include requirements for providers, such as pre-market evaluations, continuous monitoring, and strict safeguards to ensure the accuracy and fairness of datasets, as well as obligations for deployers. In particular, public deployers must carry out a Fundamental Rights Impact Assessment (FRIA) prior to activating their high-risk AI systems, continuously monitor the system’s performance, and suspend its use if it poses a serious risk to fundamental rights, in accordance with the characteristically European precautionary principle.

From a human-centred perspective, the EU aims to uphold the primacy of human decision-making, relegating AI to a merely supportive role. The European Parliament articulated this sentiment in its considerations of March 13, 2024, asserting that AI systems should have “the ultimate aim of increasing human well-being”. This perspective reflects a broader vision in which AI is seen as a powerful tool to enhance human capabilities rather than a replacement for human judgment, at least for the time being. The approach is consistent with the earlier General Data Protection Regulation (GDPR), which, in Article 22, affirms the right (subject to exceptions, such as explicit consent) not to be subject to a decision “based solely on automated processing”, a practice traditionally viewed with greater caution in the European context.

This European framework, combined with the fact that algorithms already applied in other public sector domains—such as criminal justice and policing—are predominantly used to support human decision-making rather than to render fully automated decisions (BUSUIOC, 2024)[vi], limits the scope of this article to hybrid decision-making recruitment algorithms, where ultimate responsibility rests with human actors.

In the realm of mixed decision-making, the process invariably involves a dual layer: mechanisms producing AI outputs, followed by a final human decision based on those outputs. Consequently, it is fundamental to investigate not only the functioning of AI systems in recruitment but also their influence on human decision-making, which could be biased by AI or rely on it too blindly, thereby diminishing individuals’ sense of responsibility for their decisions (BUSUIOC, 2024).

Having delineated the field and established the necessary preliminary remarks, we can now delve into the challenges and specific issues associated with applying the aforementioned AI algorithms to public recruitment.

Issues with AI applications in public recruitment

In the light of the above considerations, it is clear that AI-driven recruitment holds significant potential value for the public sector.

Open competitions are administrative proceedings known to be very long and resource-intensive for both applicants and institutions. Furthermore, they lack efficiency in strategically selecting profiles that meet contemporary needs, as they rely heavily on conventional exams that mainly assess rote learning. AI-based recruitment systems, however, are fast and focus on assessing candidates’ personalities and soft skills, which could enable public administrations to shift away from traditional selection processes toward more employer-driven models (KEPPLER, 2024). This could enrich the recruitment process, at least during the initial stages of the selection procedure.

Nonetheless, algorithmic recruitment presents challenges that are even harder to align with the fundamental principles of public competition and the broader accountability mandate of public administration. In fact, a perception that selection processes are unfair might not only discourage valuable candidates from applying, but could also raise serious concerns regarding public legitimacy (KEPPLER, 2024).

A first significant issue is that not all information about an algorithm’s functioning may be fully accessible. Currently, it is unlikely that public administrations would have the capacity to develop proprietary recruitment algorithms, necessitating reliance on private-sector providers. However, since this is an innovative field, private providers might withhold information under trade secret protections (Trade Secrets Directive 2016/943, Article 3). Allowing critical public-sector operations to be managed by systems lacking full transparency should be deemed unacceptable.

Furthermore, even if full disclosure were achieved, it would not ensure complete understanding. Many of these algorithms, often termed “black boxes”, present an intrinsic opacity: their processes are too complex to trace comprehensively from input to output. This opacity has been widely debated. On the one hand, some AI developers have recently made efforts to reduce it as much as possible, creating (albeit approximate) explanatory models to make such tools accessible to deployers like public administrations, which might otherwise refuse to use them a priori. On the other hand, the opacity may also be an intrinsic feature of AI’s strength: AI tools identify patterns and outputs by autonomously managing, mixing, dividing and processing the data set across an endless series of intermediate neural layers, which are inherently beyond human comprehension.
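
One common form of the “(albeit approximate) explanatory models” mentioned above is a surrogate: a simple, human-readable model trained to mimic a black box’s outputs. The sketch below is a generic illustration under assumed data and feature names, not a description of any specific vendor tool, and it also shows the limit of the approach: the surrogate’s fidelity is partial by construction.

```python
# Surrogate explanation sketch: approximate an opaque model with a
# shallow decision tree so its drivers can be inspected. Feature names
# and the "black box" itself are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 3))                 # [experience, test_score, interview]
y = ((X[:, 0] + 2 * X[:, 1]) > 0).astype(int)  # hidden "true" hiring rule

black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Surrogate: a depth-2 tree trained to reproduce the black box's own labels.
surrogate = DecisionTreeClassifier(max_depth=2).fit(X, black_box.predict(X))
print(export_text(surrogate, feature_names=["experience", "test_score", "interview"]))
# The printed rules approximate, but never fully capture, the black box:
# the residual gap is precisely the opacity discussed above.
```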

Thus, even in the case of a theoretically transparent algorithm, true interpretability and explainability may remain elusive. As a result, achieving effective human supervision and control may not be feasible, which is particularly concerning in the public sector. Controlling the functioning of the algorithm, in this context, means weighing core labour law values that affect the implementation of public policies. Given this, if public administrations are unable to keep up with advances in AI but still intend to leverage these technologies, they risk never having full control over these tools, leading to a phenomenon of “regulatory capture”—especially considering the historical lack of highly technical skills within the public workforce.

Against this background, it is also important to acknowledge that AI-driven personality assessments often rely on “pseudo-sciences”: theories that are not fully scientifically sound, developed through controversial methods, untested against counterevidence, and not thoroughly validated by independent third-party review. These limitations can be particularly harmful to minority candidates, for whom it may be very hard to contest algorithmic discrimination. For instance, factors as seemingly trivial as camera angles, or racially linked elements like accents and culturally specific facial expressions, can distort data accuracy in AI-based interviews (KELAN, 2024, citing TIPPINS, 2015), yet proving this for a single worker can be practically impossible. While some argue that recruiting algorithms, despite their imperfections, nonetheless reduce human biases and the implicit prejudices often directed at minorities, others contend that AI-based recruitment grants employers a licence to discriminate by unjustly excluding candidates on the basis of opaque and difficult-to-challenge criteria.

Conclusions

In light of this situation, several critical questions remain open. The issues presented raise significant concerns about the potential role of AI in public hiring: can AI truly enhance the fairness and efficiency of recruitment processes, or does it risk undermining open competitions irreversibly? Is it possible to strike a balance, perhaps modelled on the EU approach, that safeguards fairness, transparency, and justice in the labour market, fostering an environment where AI innovation can flourish without compromising fundamental workers’ rights and where human judgment remains essential to uphold the integrity and equity of the hiring process? Balancing the opportunities offered by AI with the need to protect workers’ rights and uphold open competition principles is a complex and difficult challenge, but one that public administrations must tackle in order to innovate and remain competitive in the labour market.

Based on the foregoing considerations, the following recommendations are proposed.

First, at present, the most prudent approach for public administrations appears to be relying solely on simpler, more transparent recruiting algorithms, equipped with suitable human-machine interface tools and subject to algorithm audits, while avoiding, for the time being, “black box” tools for personality prediction. AI could nonetheless provide valuable support in fostering a more innovative, employer-driven approach in the public sector by screening potential candidates’ CVs and social media job profiles and inviting the most suitable individuals to register for open competitions. AI-driven chatbots could also assist applicants through the complex stages of open competition registration. Additionally, AI properly trained on inclusive keywords could be used to score candidates’ resumes and qualifications during the initial phase of the process, prior to the formal competition. These methods are accurate, cost-effective, and would free HR departments to focus on more “human” aspects of recruitment; as they pertain to the initial stages, they are also generally more acceptable to participants (ALBAROUDI, MANSOURI, ALAMEER, 2024).
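
As for the algorithm audits recommended above, one widely used screening-outcome check, offered here purely as an illustration, is the “four-fifths” disparate-impact test, a rule of thumb drawn from US employment practice: flag the tool if any group’s selection rate falls below 80% of the best-performing group’s rate. The group labels and counts in this sketch are invented.

```python
# Minimal disparate-impact audit sketch using the "four-fifths" rule of
# thumb. Group names and pass counts are illustrative assumptions.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of a group's applicants that passed the AI screening stage."""
    return selected / applicants

def four_fifths_check(rates: dict) -> bool:
    """Return False (adverse impact flagged) if any group's selection
    rate is below 80% of the highest group's rate."""
    best = max(rates.values())
    return all(rate >= 0.8 * best for rate in rates.values())

rates = {
    "group_a": selection_rate(45, 100),   # 0.45
    "group_b": selection_rate(30, 100),   # 0.30
}
print(rates, "passes:", four_fifths_check(rates))
# 0.30 < 0.8 * 0.45 = 0.36, so the audit flags potential adverse impact
# and the screening stage should be reviewed before further use.
```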

While it is true that these applications represent a limited subset of AI’s broader potential and may, for instance, lack the capability to assess candidates’ soft skills, some may argue that such limitations risk leaving the public sector behind in terms of innovative recruiting techniques. However, even if this were the case, certain trade-offs, such as compromising transparency and accountability, remain unacceptable for the public sector. Rather, as the European Commission’s guidelines supplementing the AI Regulation are anticipated by February 2026, the public sector could assume a leading role in shaping the EU framework for a cautious and responsible use of AI in recruitment, advancing specialized certification and audit models.

More broadly, it is essential that labour law plays its vital role in balancing the opportunities offered by AI with the need to safeguard fundamental workers’ rights, including the right to non-discrimination and access to fair open competitions. Ultimately, the success of AI in public recruitment will depend on labour law’s ability to adapt to these new realities while maintaining its commitment to fairness, transparency, and justice. As AI continues to reshape the labour market, labour law must evolve to shape the technology in turn, ensuring that advancements do not come at the expense of fundamental workers’ rights, particularly in the public sector, where fairness and transparency are paramount.

_________________________

References

[i] A. AGNIHOTRI, S. BHATTACHARYA, Artificial Intelligence for Hiring and Induction: The Unilever Experience, in SAGE Publications: SAGE Business Cases Originals, January 09, 2024.

[ii] F. KEPPLER, No Thanks, Dear AI! Understanding the Effects of Disclosure and Deployment of Artificial Intelligence in Public Sector Recruitment, in Journal of Public Administration Research and Theory, 2024, 34, 39–52, https://doi.org/10.1093/jopart/muad009.

[iii] E. K. KELAN, Algorithmic inclusion: Shaping the predictive algorithms of artificial intelligence in hiring, in Human Resource Management Journal, 2024, 34, 694–707.

[iv] E. ALBAROUDI, T. MANSOURI, A. ALAMEER, A Comprehensive Review of AI Techniques for Addressing Algorithmic Bias in Job Hiring, in AI 2024, 5, 383–404, https://doi.org/10.3390/ai5010019.

[v] A. M. ALAIMO, Il Regolamento sull’Intelligenza Artificiale. Un treno al traguardo con alcuni vagoni rimasti fermi, in Federalismi.it, 25, 2024, 231-248, ISSN 1826-3534.

[vi] M. BUSUIOC, Accountable Artificial Intelligence: Holding Algorithms to Account, in Public Administration Review, 81, 5, 825–836.
