10th December 2024
Artificial Intelligence (AI) is changing our society in deep and pervasive ways. Like all technologies, AI can be leveraged for the good of society but also raises several ethical concerns. As we celebrate Human Rights Day on December 10th, the anniversary of the Universal Declaration of Human Rights (UDHR), it is important to think about how human rights can inform ethical AI.
The UDHR is a milestone document adopted in 1948 that set out, for the first time, the fundamental human rights to be universally protected. The Declaration is composed of thirty equally significant articles founded on the principles of dignity and equality in rights: every human being has the same inalienable rights.
But what do human rights have to do with AI? A lot!
Artificial Intelligence for decision-making and AI-powered recommendation systems are increasingly used in many areas of our lives, including recruitment and employee management, the domains the BIAS project engages with. This use raises numerous ethical questions regarding biased algorithms, discrimination, and transparency. Datasets used to train algorithms can carry historical discrimination and inequalities which, if not properly corrected for, can be amplified by AI and lead to systematic discrimination against certain groups. This can happen not only when biased historical data is used to train the AI, but also in other instances in which context is not taken into account.
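To make this concrete, here is a minimal, purely illustrative sketch (synthetic data; the feature names and numbers are hypothetical) of how a model trained on historically biased hiring decisions can reproduce that bias through a proxy feature, even when group membership is never shown to the model:

```python
# Minimal sketch with synthetic data: a model trained on historically biased
# hiring decisions reproduces that bias, even though "group" is not an input.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)              # 0 = group A, 1 = group B
skill = rng.normal(0, 1, n)                # equally distributed across groups
gap_years = rng.poisson(2, n) + 3 * group  # proxy feature correlated with group

# Historical labels: same skill, but group B was hired less often in the past.
hired = (skill - 1.0 * group + rng.normal(0, 0.5, n)) > 0

# The model never sees "group", only skill and the proxy feature.
X = np.column_stack([skill, gap_years])
model = LogisticRegression().fit(X, hired)

pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted hiring rate = {pred[group == g].mean():.2f}")
# The predicted rates differ by group: the historical disparity is reproduced
# through the proxy feature rather than corrected.
```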
For example, several AI systems match candidates against a job description. This use, however, does not consider that job descriptions can be biased too, for instance by using masculine-coded words. In addition, employees and job candidates should be able to appeal the AI assessments made about them. This is often difficult, however, due to the “black box” effect: algorithms act as “black boxes” into which data is fed and out of which a decision comes, while how the data is processed remains opaque.
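As a simple illustration of the first point, the sketch below flags masculine-coded terms in a job ad before it is fed to a matching system. The word list here is a small, hypothetical subset used only for demonstration, not an established lexicon:

```python
# Minimal sketch (illustrative word list, not an established lexicon): flagging
# masculine-coded terms in a job description before it is used for matching.
import re

MASCULINE_CODED = {"competitive", "dominant", "ninja", "rockstar",
                   "aggressive", "fearless", "driven"}

def flag_masculine_coded(job_description: str) -> list[str]:
    """Return the masculine-coded words found in the job description."""
    words = re.findall(r"[a-z]+", job_description.lower())
    return sorted(set(words) & MASCULINE_CODED)

ad = "We want a competitive, driven rockstar developer to dominate the market."
print(flag_masculine_coded(ad))  # ['competitive', 'driven', 'rockstar']
```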
On this day, it is important to think about how human rights can help us address these issues and promote ethical AI. Algorithmic bias and the discrimination that follows from it touch on various articles of the UDHR, such as Article 2 on non-discrimination, Article 8 on the right to an effective remedy, and Article 23 on the right to work, as well as the overall principle of dignity underlying all human rights. Biased AI systems go against the principle of non-discrimination, but their impact on human rights runs even deeper. Unequal treatment due to discrimination leads to humiliation, and the impossibility of appealing AI decisions leaves people in a state of helplessness, which goes against the principle of dignity. Adopting human rights as a holistic framework for the design of ethical AI can help developers take into consideration the many perspectives and ways in which AI systems can impact people’s lives.
Human rights are not just a checklist to be ticked off; they are universally protected rights of all people that must be safeguarded. Designing AI for equality requires adaptation to context and attention to people’s experiences and situations. Discrimination manifests itself in different ways in different contexts, and AI needs to be adapted accordingly. Adopting human rights as a guiding principle in AI design means leaving behind the idea that we can design universally fair systems and recognizing instead that people have different experiences which call for different solutions. Contextual knowledge and attention to people are central to creating ethical AI.
The BIAS project, with its interdisciplinary team, aims to mitigate bias in a contextualized way and to promote AI that can truly benefit society. Our knowledge is rooted in people’s experiences, investigated through Social Sciences and Humanities research and then translated into technical AI approaches and solutions. Ensuring that everyone involved is guaranteed their fundamental human rights can help us imagine a truly ethical AI. As we celebrate Human Rights Day, we can become more aware of the risks AI poses to human rights, as well as of how human rights can guide us in creating more ethical AI.