30th September 2024
On September 26 and 27, 2024, the halls of IE University in Madrid buzzed with discussions, debates, and exchanges of ideas during the third edition of Lawtomation Days. Academics, legal experts, policymakers, and representatives from the private sector convened from across the globe to explore the transformative potential of digital and technology-driven ecosystems, considering how rapid technological advancements could foster more sustainable, inclusive, and ethically responsible environments.
Eduard Fosch-Villaronga and Carlotta Rigotti participated in the conference to present their ongoing research on AI transparency in the workplace, conducted in collaboration with Tessa Verhoef from Leiden University. Their working paper aligned closely with one of the event’s key themes: the regulation of algorithmic management and data transparency in employment contexts.
This year’s edition of Lawtomation Days had an ambitious agenda. It sought to explore the role of legal frameworks in shaping automation across a variety of fields – consumer law, criminal law, labor law, public law, private law, and trade law. Beyond merely facilitating automation, the event focused on how law could act as a tool to ensure that technological developments would serve the public good.
The digital ecosystem, after all, was evolving rapidly. New technologies had the power to revolutionise industries, making them more efficient and inclusive. However, they also posed ethical dilemmas and practical challenges, particularly when it came to regulation. The European Union had already adopted significant legislative measures to govern the digital landscape, such as the so-called Digital Services Act and the AI Act. The conference participants aimed to examine whether these laws, with all their promise, were truly up to the task of managing the complexities and risks associated with emerging technologies.
Lawtomation Days was not just a theoretical exercise. It sought to strike a balance between optimism and realism, cutting through the hype and alarmism that often surround conversations about digital innovation. The participants engaged in critical evaluations of regulatory frameworks, exploring their effectiveness, shortcomings, and opportunities for improvement. The goal was to bring much-needed clarity to the conversation about AI’s role in our everyday lives and how the law could adapt to ensure that technological progress did not come at the expense of fundamental rights or societal well-being.
Amid the flurry of panels and presentations, Eduard and Carlotta’s session stood out as a thought-provoking contribution to the dialogue on AI and employment. Their research, part of the BIAS project and conducted in collaboration with Tessa Verhoef from Leiden University, focused on AI transparency in the labor market, examining workers’ and job applicants’ perspectives and attitudes toward the role of AI systems in the workplace.
Carlotta and Eduard opened the session by highlighting the urgency of their research. AI systems, particularly those used in human resources (HR), are rapidly becoming a fixture in modern workplaces. Moreover, the recently adopted Regulation (EU) 2024/1689 (more commonly known as the AI Act) classifies AI systems used in employment contexts as high-risk, placing stringent obligations on both developers and employers. These obligations go beyond existing requirements for data protection and non-discrimination, a subject Carlotta and Eduard had already addressed in a previous publication on fairness and AI in the hiring process.
Eduard elaborated on the concept of transparency, which has long been viewed as a potential remedy for the inherent asymmetry of information and power between employers and employees, as well as job applicants. He pointed out that transparency in AI-driven workplaces is often reduced to mere information disclosure: informing workers that AI is used and how it operates in decision-making processes. However, this is far from sufficient. For transparency to be meaningful, the information provided has to be accessible and relevant, empowering workers to understand how decisions are made. The challenge, he noted, is that AI systems are often characterized by opacity. The complex algorithms and decision-making processes underlying these systems are difficult, if not impossible, for the average employee and job applicant to comprehend. Without meaningful transparency, workers are left in the dark, unable to challenge or understand decisions that could have profound effects on their careers and well-being. Eduard’s emphasis on this point resonated with the audience. The opacity of AI systems is not just a technical issue; it is a matter of fundamental rights.
After Eduard’s remarks, Carlotta moved on to the specifics of their research within the context of the BIAS project. She highlighted that the project’s Consortium had conducted an extensive survey across the European Union, Iceland, Norway, Switzerland, and Turkey, collecting data from nearly 6,000 workers and job applicants who had engaged with AI systems in the workplace. The focus of their current research was to examine the responses related specifically to workers’ and job applicants’ experiences and perceptions of AI transparency.
The findings were both revealing and concerning. While the full paper will be published soon, the authors could already share that more than half of the respondents reported direct interactions with AI in their workplaces; interestingly, about 20% initially overlooked these encounters. However, when prompted with familiar examples – like LinkedIn – many were able to recall their experiences. This highlights a concerning disconnect: even as AI becomes increasingly integrated into the workplace, awareness among employees is often lacking, emphasizing the pressing need for improved transparency and communication. Surprisingly, this lack of awareness was particularly pronounced among Millennials and Gen Z, groups typically viewed as technologically savvy. This raises important questions about the effectiveness of current methods for conveying information about AI systems, suggesting that enhancing digital literacy and developing user-friendly communication strategies are essential for fostering better understanding.
As Carlotta concluded her presentation, she left the audience with a set of early recommendations. To comply effectively with the AI Act’s requirements and rationale, organizations should adopt proactive strategies to inform and educate workers about the role of AI systems in the workplace. Employers and policymakers need to work together to develop clear, accessible guidelines that can bridge the current awareness gap and ensure that AI systems are used ethically and responsibly. This effort should also adopt an intersectional perspective, recognizing that diverse experiences and intersecting identities must be acknowledged and valued. Eduard and Carlotta’s research has laid a solid foundation for future studies, but the vibrant discussions among attendees who have faced similar challenges in their own fields made clear that there is still much work to be done.