AI and Workplace Bias on Zero Discrimination Day

Zero Discrimination Day, observed annually on 1 March by the United Nations (UN), aims to promote equality before the law and in practice across all UN member states. First celebrated on 1 March 2014, the day was launched by the Joint United Nations Programme on HIV/AIDS (UNAIDS) to combat discrimination against people living with HIV/AIDS. However, discrimination extends far beyond this issue, affecting individuals on the basis of personal characteristics such as gender, age, and disability. These forms of discrimination often intersect, producing compounded and highly individualised experiences. This phenomenon, known as intersectionality, acknowledges that discrimination cannot be understood in isolation, since overlapping identities and power dynamics shape each experience. Discrimination also permeates many areas of life, including education, housing, and employment. In this context, the BIAS project has taken a critical role in addressing emerging forms of discrimination, particularly those channelled through AI systems used in workplace settings.

Bias in the machine: How AI is reshaping workplace discrimination

Over the past decade, newspapers and magazines have reported extensively on cases of AI-driven discrimination in workplace settings, drawing attention to the risks posed by biased technologies. A classic example is the widely discussed “Amazon case”, a key illustration of diversity bias in AI systems. In 2014, Amazon developed an AI-powered recruitment and selection tool designed to evaluate curricula vitae (CVs) and score job applicants. However, the tool quickly revealed gender bias, as it had been trained on historical hiring data that predominantly favoured men in the tech industry. In particular, it penalised CVs containing terms such as “women’s” or those referencing women-only colleges, effectively disadvantaging female job applicants. Although Amazon attempted to remove explicit gender indicators, the tool continued to produce biased and unreliable outcomes. By 2017, the project had been abandoned, though some components of the technology reportedly remain in use for simpler functions such as identifying duplicate profiles.
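
The underlying mechanism is easy to reproduce. The short Python sketch below is a minimal illustration on invented toy CVs using the scikit-learn library, not Amazon’s actual system or data; it shows how a classifier trained on historically skewed hiring decisions learns a negative weight for the token “women”, even though nobody programmed it to discriminate:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy CV snippets with past hiring decisions (1 = hired, 0 = rejected).
# The historical data favours men, so gendered terms become "predictive"
# even though they say nothing about competence.
cvs = [
    "software engineer chess club captain",           # hired
    "backend developer rugby team member",            # hired
    "software engineer women's chess club captain",   # rejected
    "data analyst women's college graduate",          # rejected
]
hired = [1, 1, 0, 0]

vectoriser = CountVectorizer()
X = vectoriser.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

# The learned weight for the token "women" comes out negative: the model
# has absorbed the historical bias rather than any measure of job fit.
idx = vectoriser.vocabulary_["women"]
print(f"weight for 'women': {model.coef_[0][idx]:+.2f}")
```

Simply deleting the offending token, as Amazon reportedly attempted, does not solve the problem: correlated terms such as hobbies, colleges, or phrasing allow the model to rediscover the same pattern.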

While Amazon’s recruitment tool never reached the public, similar AI systems have been commercialised and deployed, exhibiting comparable issues of bias and discrimination. For instance, the non-governmental organisation (NGO) AlgorithmWatch raised concerns about LinkedIn’s hiring service, criticising its lack of transparency and its discriminatory practices. One feature automatically classified candidates as “not a fit” if their profiles indicated residence in a country other than the job posting’s location, excluding such applicants without notifying either them or the recruiters. This highlights the broader risks of opaque AI systems in hiring, where automated decisions can reinforce biases and disadvantage already socially marginalised groups.

From definition to impact: Understanding AI-driven discrimination in the workplace

While these examples clearly illustrate discrimination in the automated workplace, the precise definition of ‘discrimination’ varies across scholarly and policy contexts, influenced by factors such as jurisdiction and societal norms. In the employment context, discrimination is often understood as the unequal treatment of job applicants or workers based on personal characteristics such as gender and disability. This form of differential treatment is widely regarded as negative, as it perpetuates social inequalities, creates unfair disadvantages, and undermines fundamental principles like dignity.

On an individual level, workplace discrimination, whether AI-driven or not, excludes qualified candidates from opportunities, denying them the chance to compete on an equal footing and perpetuating systemic barriers. Beyond the professional realm, its effects extend to overall well-being. Exclusion from employment is likely to undermine an individual’s dignity, autonomy, and capacity for social participation, eroding their sense of self-worth and belonging within society. Furthermore, the financial instability resulting from discrimination can hinder access to housing, strain family dynamics, and increase economic stress. At the same time, the cumulative effects of rejection and marginalisation take a significant toll on mental and physical health, highlighting the far-reaching consequences of workplace inequality.

The organisational impact of AI-driven discrimination is equally profound. Biased AI systems exclude highly qualified candidates, depriving organisations of the diverse perspectives and talents essential for innovation and competitiveness. Public exposure of discriminatory practices can damage an organisation’s reputation, deterring prospective employees, customers, and investors who prioritise fairness and inclusivity. Moreover, organisations that deploy discriminatory systems risk legal and regulatory penalties, including lawsuits, fines, and costly remediation measures. Internally, perceived or actual unfairness can fracture workplace culture, breeding mistrust and resentment among employees and undermining collaboration and teamwork.

From subtle to systemic: Why AI-driven discrimination demands attention

Discrimination driven by biased AI systems often goes unnoticed at first, as it mirrors existing societal inequalities. For instance, research demonstrates that AI systems continue to disadvantage job applicants based on ostensibly neutral factors, such as names or postcodes associated with lower socioeconomic status or particular racial and ethnic groups. This phenomenon, referred to as proxy discrimination or discrimination by association, occurs when seemingly innocuous data points serve as stand-ins for sensitive attributes, from race and ethnicity to political affiliation and religion.
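
A minimal sketch can make proxy discrimination concrete. In the hypothetical Python example below, all data, variable names, and probabilities are invented for illustration; a hiring model is never shown the protected attribute, yet its decisions still diverge across groups because postcode correlates with group membership:

```python
import random
from sklearn.linear_model import LogisticRegression

random.seed(42)
features, labels, groups = [], [], []
for _ in range(2000):
    marginalised = random.random() < 0.5
    # Residential segregation: postcode correlates strongly with group.
    postcode_a = 1 if random.random() < (0.9 if marginalised else 0.1) else 0
    years_exp = random.randint(0, 10)
    # Historically biased outcome: marginalised applicants hired less often.
    hired = int(years_exp > 4 and not (marginalised and random.random() < 0.6))
    features.append([postcode_a, years_exp])  # protected attribute excluded
    labels.append(hired)
    groups.append(marginalised)

# The model is trained only on ostensibly neutral features...
model = LogisticRegression().fit(features, labels)
preds = model.predict(features)

# ...yet its predicted hire rates still diverge by group, because postcode
# stands in for the protected attribute the model was never shown.
for flag, name in [(False, "majority group"), (True, "marginalised group")]:
    rate = sum(int(p) for p, g in zip(preds, groups) if g == flag) / groups.count(flag)
    print(f"{name}: predicted hire rate {rate:.2f}")
```

Dropping the protected attribute from the training data is therefore not enough: the bias simply re-enters through whatever features correlate with it.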

Nonetheless, what makes AI-driven discrimination particularly concerning is the unprecedented scale and efficiency with which these systems operate. Unlike human bias, which can be identified and challenged over time, AI systems process vast datasets and make thousands of decisions in mere seconds. This rapidity enables biased outcomes to spread widely and often imperceptibly, impacting large groups of people and compounding inequalities across various stages of the employment process. Moreover, detecting AI bias poses unique challenges. It transcends traditional power dynamics between individuals and often operates through subtle mechanisms, such as correlations between commuting patterns and job retention rates, which may inadvertently disadvantage candidates from lower-income neighbourhoods.

AI bias can also compound across multiple traits, creating intersectional forms of discrimination. For example, older women of colour may face unique disadvantages due to overlapping stereotypes related to age, gender, and race. Similarly, AI systems can draw on less apparent characteristics, such as data from social media activity, to make decisions that are difficult for humans to interpret or contest. Even physical appearance, often overlooked in discussions of workplace bias, plays a significant role: automated systems can reinforce beauty standards by influencing hiring and promotion decisions, effectively excluding individuals based on factors such as body shape or perceived attractiveness. Such effects are further perpetuated by workplace policies governing uniforms, hairstyles, or footwear, underscoring the need for closer scrutiny of AI’s role in shaping these outcomes.

Addressing discrimination in the automated workplace: Legal opportunities and challenges

This context raises a critical question: what steps should be taken to address discrimination in the automated workplace, particularly when non-discrimination is not just a UN policy objective but a deeply rooted legal and ethical commitment? Within the European Union, equality and non-discrimination are fundamental values and objectives, enshrined in Articles 2 and 3 of the Treaty on European Union, while Article 21 of the Charter of Fundamental Rights explicitly prohibits “[a]ny discrimination based on any ground such as sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age or sexual orientation.”

Accordingly, one natural avenue for addressing AI-driven discrimination lies in legal redress. Numerous scholars have explored this path, critically examining the capacity of anti-discrimination and data protection law to combat diversity bias in workplace AI systems. However, while these legal frameworks provide a foundation, their adequacy and enforcement in the face of rapidly evolving AI technologies remain pressing concerns.

Briefly, anti-discrimination law in the EU employment sector aims to ensure equal treatment and prevent bias in the workplace. It distinguishes between direct and indirect discrimination. Direct discrimination occurs when an individual is treated less favourably than others in comparable situations due to their personal characteristics, such as sexual orientation or religion. This type of discrimination rarely arises in the design and deployment of AI systems, as these technologies seldom score workers or job applicants lower explicitly on the basis of a protected characteristic. Indirect discrimination, on the other hand, is more relevant to AI-driven practices. It refers to situations where an apparently neutral provision, criterion, or practice disproportionately disadvantages individuals or groups with particular characteristics, unless a legitimate aim and appropriate means objectively justify the practice. In the labour market, this could occur when an AI hiring system disproportionately rejects applications from certain groups, such as women or young people, based on criteria that seem neutral but are not justifiably necessary.
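
One practical first-pass screen for such disproportionate effects is simply to compare selection rates across groups. The Python sketch below, on invented outcome data, uses the 0.8 threshold of the “four-fifths rule”, a heuristic drawn from US enforcement practice rather than EU law, which fixes no such ratio; it flags possible indirect discrimination for further investigation and is in no way a legal test:

```python
def selection_rate(decisions):
    """Share of applicants in a group that the system accepted."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower group selection rate to the higher one."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical screening outcomes (1 = shortlisted, 0 = rejected).
men   = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]   # 70% shortlisted
women = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]   # 30% shortlisted

ratio = disparate_impact_ratio(men, women)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.43, well below 0.8
if ratio < 0.8:
    print("Potential indirect discrimination: review the criteria used.")
```

A low ratio does not establish unlawfulness; under EU law the question remains whether the criterion pursues a legitimate aim by appropriate and necessary means. But such checks can surface disparities that opaque systems would otherwise hide.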

However, these legal definitions of discrimination often fail to fully address the complexities of AI-driven bias, particularly because they are tied to exhaustive lists of legally protected personal characteristics. This limitation becomes evident in the aforementioned cases of proxy discrimination, where seemingly neutral data serves as a substitute for protected attributes. Beyond this, AI-driven discrimination is not confined to legally protected characteristics. It can also affect social groups that lack explicit legal recognition. These groups might share easily identifiable traits, such as being single parents or homeless, or they may be defined by patterns that are difficult for humans to discern, such as similar web browsing histories or mouse movements.

The General Data Protection Regulation (EU) 2016/679 (GDPR) aims to protect individuals’ fundamental rights while facilitating the free flow of personal data within the European Union. To achieve this, it imposes obligations on data controllers—entities that determine the purposes and means of data processing—and grants rights to data subjects, the individuals identifiable through their personal data. It also establishes core principles for data processing. Given that AI systems used in workplaces often process personal data, several GDPR provisions could potentially address AI-driven discrimination.

Personal data can be integral to both the creation and operation of AI systems. It may be used to train models or to perform tasks such as categorising, scoring, or making decisions about job applicants, as exemplified by the aforementioned “Amazon case”. However, scholars have identified significant limitations in the GDPR’s capacity to address such scenarios. Key challenges include compliance gaps, enforcement difficulties, and the limited resources and sanctioning powers of Data Protection Authorities (DPAs). Moreover, the GDPR’s scope is restricted to personal data, excluding predictive models that do not identify individuals. While the law’s reliance on open and abstract norms provides flexibility, it also complicates practical application. For example, the complex decision logic behind AI systems can obstruct the right to understand algorithmic decisions.

Another critical tension arises from Article 9 of the GDPR, which strictly regulates the processing of sensitive personal data, such as racial or ethnic origin. While this provision aims to protect job applicants’ rights, it may also hinder efforts to detect and mitigate bias in AI systems. Article 9 prohibits the processing of special categories of data except under specific conditions, such as explicit consent or the necessity to meet employment-related obligations. At first glance, this safeguard could enhance fairness in recruitment by preventing diversity bias, especially when considered alongside principles like data minimisation and purpose limitation under Article 5, which require HR practitioners to collect only relevant information. Yet the same restriction can prevent developers and auditors from collecting the very data needed to test whether a system treats protected groups differently, making bias harder to detect in practice.

From legislation to holistic action: Combating workplace discrimination together in the age of AI

This discussion highlights some of the limitations of existing legal frameworks in addressing the complex and evolving challenges posed by AI systems in the workplace. Nonetheless, there is cautious optimism regarding the implementation of Regulation (EU) 2024/1689 (more commonly known as the AI Act), which establishes harmonised rules for artificial intelligence and acknowledges the profound implications of workplace AI for career trajectories, livelihoods, and workers’ rights. Recital 57 explicitly recognises the potential of AI systems to reinforce historical patterns of social discrimination and to infringe upon fundamental rights, including privacy and data protection. To mitigate these risks, the AI Act classifies AI systems used in HR practices as high-risk (one of the four risk categories defined under its regulatory framework) and imposes stringent obligations on both AI system providers and the employers who deploy these systems, seeking to ensure greater accountability and to address serious concerns, including discrimination.

However, the law is not the only solution to tackle AI-driven discrimination in the workplace. While legal frameworks provide essential protections and accountability mechanisms, they must be complemented by proactive efforts from organisations, AI developers, and society at large. Fostering a culture of awareness and responsibility, where discrimination in all forms is actively identified and challenged, is crucial for creating truly equitable work environments. As we observe Zero Discrimination Day, it is clear that the fight against AI-driven discrimination requires a multifaceted approach—one that combines legal, technological, and ethical measures to protect workers and job applicants and uphold the values of dignity, autonomy, and equality in the digital age. Only through such comprehensive efforts can we hope to create a future where technology works for everyone, without reinforcing existing societal imbalances.