The BIAS project attends the summer school on "Law and Language" at Pavia University

Carlotta Rigotti and Eduard Fosch-Villaronga delivered a lecture on AI and non-discrimination, engaging students with the Debiaser demo.

From 16 to 20 September 2024, the historic city of Pavia (Italy) hosted an intensive summer school on ‘Law and Language,’ bringing together experts and students from various European universities. Co-organized by the University of Pavia and Würzburg University, this gathering provided a unique platform for participants to explore the evolving relationship between law, language, and technology. Topics ranged from honing legal English skills and examining language policies within EU institutions to navigating the complexities of artificial intelligence (AI) as an emerging ‘language’ that the legal system must adapt to and regulate.

Representing Leiden University and the Horizon Europe BIAS project, Carlotta Rigotti and Eduard Fosch-Villaronga led a two-day session on AI and non-discrimination. Their lectures focused on the role of AI within and outside the labor market, addressing the biases that arise in automated decision-making systems and how the law seeks to mitigate these issues.

On the first day, Eduard began with a critical exploration of how AI systems can perpetuate both long-standing and newly emerging forms of discrimination. He outlined how these technologies, despite their promise of neutrality, can replicate societal biases when the data they rely upon is tainted by historical patterns of inequality. A classic example is the ‘Amazon case’, where the tech giant’s recruitment tool was found to discriminate against female candidates: trained on a decade’s worth of resumes, the AI system favored male applicants because of the tech industry’s male dominance. Additionally, Eduard discussed how AI-driven content moderation systems used by social media platforms frequently struggle to discern the context and intent behind user posts. Designed to automatically detect and remove harmful content, they often fail to recognize when speech serves a socially valuable purpose – such as when LGBTQIA+ individuals reclaim slurs historically used against them. Without the ability to interpret such nuances, AI can inadvertently silence marginalized voices rather than protect them.

Following this critical foundation, Carlotta shifted the focus to the legal frameworks available to address AI-driven discrimination. She provided a detailed examination of existing concepts and provisions, particularly highlighting the European Union’s anti-discrimination laws, the General Data Protection Regulation (GDPR), and the recently adopted AI Act. However, she pointed out the fragmented nature of these frameworks, emphasizing that they remain insufficient to address the unique challenges posed by emerging technologies. While the AI Act represents a significant advancement, it still falls short of providing the comprehensive governance necessary to navigate the complexities of AI’s impact, including in the labor market. This is an area currently being explored by Carlotta and Eduard in collaboration with Antonio Aloisi from IE University and Nastazja Potocka-Sionek from the University of Luxembourg, with a new publication set to be released soon.

On the second day, the discussion shifted to the ethical implications of AI, with a focus on the guidelines established by the High-Level Expert Group on AI (AI HLEG) for creating trustworthy AI systems. Carlotta and Eduard guided the students through the core principles of these guidelines, which include – amongst others – transparency, accountability, and fairness. These ethical tenets are essential for ensuring that AI systems do not reinforce discrimination but instead operate in ways that are socially and legally responsible.

The interactive component of the session involved the BIAS project’s Debiaser demo, a tool currently under development to help mitigate diversity bias in the hiring process. Students were divided into groups and tasked with simulating the role of human resources (HR) personnel responsible for ranking job applicants. First, they manually ranked candidates based on their CVs, identifying both mandatory and desirable qualifications for the position. Then, they used the Debiaser demo to assess the same candidates, comparing their initial rankings with those generated by the AI tool.
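To make that comparison step concrete, the sketch below shows one simple way to quantify how closely a manual ranking matches a tool-generated one. Everything here is illustrative: the candidate names, both orderings, and the choice of Kendall’s tau as an agreement measure are assumptions for the example, not part of the Debiaser demo itself.

```python
from itertools import combinations

# Hypothetical rankings (best to worst) of the same candidates:
# one produced manually by an "HR" group, one by an AI tool.
# Names and orderings are illustrative assumptions only.
manual_ranking = ["Ana", "Bilal", "Chiara", "Dana", "Emre"]
tool_ranking = ["Bilal", "Ana", "Dana", "Chiara", "Emre"]

def kendall_tau(rank_a, rank_b):
    """Rank agreement in [-1, 1]: 1 = identical order, -1 = reversed."""
    pos_a = {c: i for i, c in enumerate(rank_a)}
    pos_b = {c: i for i, c in enumerate(rank_b)}
    concordant = discordant = 0
    # Compare every pair of candidates: do the two rankings
    # put them in the same relative order?
    for x, y in combinations(rank_a, 2):
        if (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y]) > 0:
            concordant += 1
        else:
            discordant += 1
    return (concordant - discordant) / (concordant + discordant)

print(f"Rank agreement: {kendall_tau(manual_ranking, tool_ranking):.2f}")
```

A score near 1 means the two rankings largely agree; the candidate pairs that the two rankings order differently are natural starting points for the kind of reflection the exercise aimed to provoke.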

The exercise illuminated the subtle yet significant presence of bias in both manual and AI-assisted decision-making. Students recognized that their manual rankings often unconsciously favored or disregarded certain qualifications because of implicit assumptions. The Debiaser tool, in turn, sometimes replicated those biases and sometimes surfaced them, prompting critical reflection. This practical demonstration underscored the dual potential of AI tools to either exacerbate or mitigate discrimination, emphasizing that their impact largely depends on their design and implementation.

The session concluded with a broader discussion on the future of AI in the labor market, focusing on the tension between its potential benefits and the ethical challenges it presents. Students debated whether AI, in its current form, is capable of fostering truly fair hiring practices or if its inherent limitations might necessitate a more cautious approach. While some saw AI as a promising solution to reduce human error and bias, others expressed concerns about its inability to fully grasp the complex social dynamics that underpin discrimination. 

For the students, this two-day lecture provided not only an in-depth exploration of AI and its legal implications but also a chance to engage with the practical challenges of integrating new technologies into established legal and ethical frameworks. As they departed Pavia, many left with a deeper understanding of the complexities surrounding AI and the law, especially in the labor market – a topic that will undoubtedly continue to shape legal discourse in the years to come. For Carlotta and Eduard, the summer school was an opportunity to highlight the critical work being done through the BIAS project and to underscore the importance of addressing diversity biases in AI systems. Their lectures served as a reminder that the intersection of law, language, and technology is not just an academic exercise, but a vital area of study with real-world implications for society.
