A four-year project, funded by the European Union’s Horizon Europe Research and Innovation programme, that will empower the Artificial Intelligence (AI) and Human Resources Management (HRM) communities by addressing and mitigating algorithmic biases.
Artificial Intelligence (AI) is increasingly deployed in the labour market to recruit, train, and engage employees, or to monitor for infractions that can lead to disciplinary proceedings. One class of such tools is based on Natural Language Processing (NLP), which analyses text to make inferences or decisions. However, NLP-based systems inherit the implicit biases of the models they are built upon. Such bias is often already encoded in the data used to train the models, which carries the stereotypes of our society, and is therefore reflected in the models themselves and in the decisions they inform.
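As a purely illustrative sketch (the toy vectors and function names below are hypothetical and are not part of the BIAS project’s tooling), the following Python snippet shows how stereotyped associations absorbed from training text can surface when a model compares word embeddings: an occupation ends up closer to one gendered pronoun than the other simply because of patterns in the data the model was trained on.

```python
import math

# Hypothetical 3-dimensional word vectors, standing in for what a real NLP
# model might learn from biased training text.
EMBEDDINGS = {
    "engineer": [0.9, 0.1, 0.3],
    "nurse":    [0.1, 0.9, 0.4],
    "he":       [0.8, 0.2, 0.3],
    "she":      [0.2, 0.8, 0.4],
}

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def gender_association(word):
    """Positive values lean towards 'he', negative values towards 'she'."""
    vec = EMBEDDINGS[word]
    return cosine_similarity(vec, EMBEDDINGS["he"]) - cosine_similarity(vec, EMBEDDINGS["she"])

if __name__ == "__main__":
    for occupation in ("engineer", "nurse"):
        print(f"{occupation}: gender association = {gender_association(occupation):+.3f}")
```

With these toy vectors, the sketch prints a positive association for "engineer" and a negative one for "nurse", mirroring the kind of stereotype that debiasing work aims to detect and correct.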
Such biased outputs can lead to unfair decisions that run contrary to the goals of the European Pillar of Social Rights in relation to work and employment and to the United Nations’ Sustainable Development Goals.
Despite a strong desire in Europe to ensure equality in employment, most studies of European labour markets have concluded that discrimination persists on many grounds, such as gender, nationality, or sexual orientation. Addressing how AI used in the labour market either contributes to this discrimination or can help mitigate it is therefore of great importance. That is the main concern of the BIAS project.
Who are the BIAS target groups?
By participating in and being part of:
National Labs
Interviews and surveys
Capacity building and awareness raising sessions
Co-creation workshops
Policymaking activities
Trustworthy AI helix
The Debiaser open-source software