20th March 2025
AI is reshaping the hiring process, making recruitment faster and more efficient. But what happens when algorithms unintentionally reinforce discrimination? Studies show that job-matching AI can disadvantage women, misinterpreting slower response times as a lack of interest—ultimately reducing their job opportunities.
To tackle these challenges, LOBA, the Portuguese partner of the BIAS project, hosted a free, hands-on capacity-building session in collaboration with Smart Venice (SVEN), the BIAS partner leading this training. The session is part of the BIAS Capacity-Building series “Shaping Responsible & Inclusive AI in Recruitment”. Held at the premises of Associação Nacional dos Jovens Empresários (ANJE) in Porto on March 17, 2025, the event brought together HR professionals, recruiters, and AI ethics enthusiasts from Claire Joster, an HR firm specializing in executive search, for an insightful discussion on fairness in AI-powered recruitment.
The session kicked off with a deep dive into the concept of bias. Participants engaged in a self-reflection exercise using the Diversity Wheel from Johns Hopkins University, examining how personal identity traits—such as gender, age, and race—have shaped their professional experiences, and then explored these reflections together in an interactive brainstorming session.
This thought-provoking start set the stage for the day’s discussions, emphasizing the need for awareness in AI-driven decision-making.
Smart Venice continued by guiding participants through an engaging presentation on AI fundamentals and its role in recruitment. Topics included:
💡 AI Basics – Understanding machine learning, NLP, and word embeddings
⚖️ Bias in AI – Social stereotypes in word embeddings, automation bias, and AI hallucinations
🔍 Ethical & Legal Frameworks – AI Act, GDPR, anti-discrimination laws, and real-life legal cases like “Schufa”, sparking discussions on how current policies address (or fail to address) bias in automated hiring tools.
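To make the “social stereotypes in word embeddings” point concrete, here is a minimal sketch of how such bias can be measured. The three-dimensional vectors below are hand-crafted for illustration (real embeddings such as word2vec or GloVe are learned from text corpora and have hundreds of dimensions), but the measurement idea is the same: compare a job title’s cosine similarity to gendered words.

```python
import math

# Toy word vectors, invented for illustration only.
embeddings = {
    "man":      [0.9, 0.1, 0.0],
    "woman":    [0.1, 0.9, 0.0],
    "engineer": [0.8, 0.2, 0.1],
    "nurse":    [0.2, 0.8, 0.1],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def gender_lean(word):
    """Positive -> the word sits closer to 'man', negative -> closer to 'woman'."""
    v = embeddings[word]
    return cosine(v, embeddings["man"]) - cosine(v, embeddings["woman"])

for job in ("engineer", "nurse"):
    print(job, round(gender_lean(job), 3))
```

In these toy vectors “engineer” leans male and “nurse” leans female; an AI screening tool built on embeddings with such associations can quietly carry them into ranking decisions.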
The afternoon session focused on real-world case studies of bias in AI-powered hiring tools. Participants explored:
🤖 Algorithmic hiring and the risks associated with AI recruitment tools
🎯 The decoy effect in hiring decisions
⚠️ Algorithmic discrimination and how companies can mitigate it
🔎 The concept of fairness in recruitment
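One simple way to operationalize “fairness in recruitment” is an adverse-impact check on selection rates, such as the four-fifths rule from US EEOC guidelines. The sketch below uses invented outcome data and hypothetical helper names (`selection_rates`, `passes_four_fifths`); it is an illustration of the idea, not a tool used in the session.

```python
from collections import Counter

# Hypothetical screening outcomes: (group, advanced_to_interview) pairs.
# The data is invented purely for illustration.
outcomes = [
    ("women", True), ("women", False), ("women", False), ("women", False),
    ("men", True), ("men", True), ("men", False), ("men", False),
]

def selection_rates(records):
    """Fraction of candidates selected, per group."""
    totals, selected = Counter(), Counter()
    for group, ok in records:
        totals[group] += 1
        if ok:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def passes_four_fifths(records):
    """Every group's selection rate must be at least 80% of the
    highest group's rate (the EEOC four-fifths rule of thumb)."""
    rates = selection_rates(records)
    best = max(rates.values())
    return all(r >= 0.8 * best for r in rates.values())

rates = selection_rates(outcomes)   # women: 0.25, men: 0.5
print(rates, passes_four_fifths(outcomes))
```

Here women are selected at 25% versus 50% for men, well below the 80% threshold, so the check flags possible adverse impact worth investigating.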
To put theory into practice, participants worked in groups with a candidate-ranking AI model developed by the BIAS project with ChatGPT. They got hands-on experience interacting with AI-based HR selection tools, assessing their strengths, weaknesses, and potential biases. The goal? To develop critical thinking when working with AI and to challenge preconceived notions about automated hiring.
The session concluded with a plenary discussion, where participants shared their insights and debated strategies to ensure fairness in AI-powered recruitment.
A huge thank you to Claire Joster for their expertise and active participation, to Smart Venice for leading the training, and to ANJE for providing a fantastic venue. This event was a big step forward in fostering ethical AI in HR, and we look forward to continuing the conversation.
Want to learn more? Stay tuned for future BIAS project initiatives! 🚀