Law: Fairness and Discrimination#
Discrimination can generally be characterized as the morally objectionable practice of subjecting a person (or group of persons) to a treatment in some social dimension that, for no good reason, is disadvantageous compared to the treatment awarded to other persons who are in a similar situation, but who belong to another socially salient group.1 Central to this definition is a comparative element: the treatment under consideration is different from treatment received by another, similarly situated person. But what does it mean to be “similarly situated”? This is one of the primary questions (EU) non-discrimination law aims to answer.
EU Non-Discrimination law#
EU law is a form of supranational law: member states of the EU transfer parts of their sovereignty to the EU, which can then legislate in specific fields. The body of EU law comprises, among other things, the foundational treaties, secondary legislation mainly in the form of regulations and directives, and case law. While regulations apply directly within all member states, directives require member states to transpose their content, i.e. to implement it in their own legal system. Directives then leave member states discretion as to how the regulatory aim is to be achieved. In the field of non-discrimination law, directives reflect a so-called minimum harmonization approach, meaning that the law sets common minimum standards that must be achieved by all member states, but still allows individual member states to incorporate stricter measures as long as they comply with the EU treaties.
Regulations and directives are forms of statutory law: written laws that are passed by the EU legislator. Statutory law can’t cover all relevant aspects of all possible cases. Consequently, to be applied in factual cases, the law needs to be interpreted by a court in a judgment. To do so, the Court of Justice of the EU takes into account the “spirit, the general scheme and the wording” of given legal provisions, including their aim as set out in the preamble and the preparatory documents, as well as previous judicial decisions that were rendered in similar cases in the past (case law). In the EU, a mechanism called the preliminary reference procedure allows member state courts to dialogue with the Court of Justice of the European Union (CJEU). Individuals cannot access the CJEU directly, but national courts can refer questions regarding the interpretation and validity of EU law to the CJEU. After receiving the response of the CJEU, the national court then makes the final decision by applying the CJEU’s interpretation of EU law to the specific circumstances in the case at hand.
It is important to note that the law is not made up of static rules. In response to social advancements, new statutory law may be introduced and the interpretation of existing legal norms may change over time as new cases emerge. Over the years, EU non-discrimination law has evolved.
Four main directives make up today’s EU non-discrimination law: the Race Equality Directive 2000/43/EC; the Framework Equality Directive 2000/78/EC; and the Gender Equality Directives 2004/113/EC and 2006/54/EC. Additionally, primary law provisions include Articles 2 and 3(3) of the Treaty on European Union, Articles 8, 10, 19 and 157 of the Treaty on the Functioning of the European Union (the last two corresponding to ex-Article 13 EC and Article 119 EEC) as well as Articles 20, 21 and 23 of the Charter of Fundamental Rights of the EU (the Charter), adopted in 2000 and elevated to the same status as the Treaties in 2009.
Establishing Discrimination#
To understand how EU non-discrimination law operates, we need to first distinguish between the notions of direct and indirect discrimination. This distinction is key because it determines to what extent disparities can be justified: direct discrimination cannot be justified except for a limited number of derogations, whereas prima facie indirect discrimination can be justified much more widely.
Direct discrimination occurs when “one person is treated less favourably than another is, has been or would be treated in a comparable situation on grounds of” a protected characteristic. In other words: protected characteristics cannot be explicitly included in decision-making processes covered by EU non-discrimination law.
Traditionally, direct discrimination prescribes that “likes should be treated alike”, promoting formal equality. A problem with this conceptualization of equality is that it is unable to redress more complex forms of injustice such as proxy discrimination and structural inequality. For example, a rule banning all individuals shorter than 1.70m from applying to jobs with the police will exclude a large majority of women. However, because the selection does not explicitly depend on sex or gender, it does not amount to direct discrimination.
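The disparate effect of such a facially neutral rule can be illustrated with a quick simulation. The height distributions below are assumed round numbers chosen for illustration, not real population statistics:

```python
import random

random.seed(0)

# Assumed, illustrative height distributions in cm (not real statistics).
men = [random.gauss(178, 7) for _ in range(10_000)]
women = [random.gauss(165, 7) for _ in range(10_000)]

THRESHOLD = 170  # the facially neutral rule: applicants must be at least 1.70m


def eligible_rate(heights):
    """Share of the group that passes the height requirement."""
    return sum(h >= THRESHOLD for h in heights) / len(heights)


print(f"men eligible:   {eligible_rate(men):.0%}")
print(f"women eligible: {eligible_rate(women):.0%}")
```

Although the rule never mentions sex, the eligible share of women comes out far smaller than that of men, which is exactly the kind of disparity the indirect discrimination doctrine is meant to capture.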
To complement the legal protection of equality, the Court of Justice has adopted the doctrine of indirect discrimination, which, in certain situations, forbids treating those who are unalike in a like manner. Specifically, indirect discrimination occurs where “an apparently neutral provision, criterion or practice would put persons of a protected group at a particular disadvantage compared with other persons, unless that provision, criterion or practice is objectively justified by a legitimate aim and the means of achieving that aim are appropriate and necessary”.
This conception of equality forbids applying the same rule to legal subjects who are positioned differently. Considering again the application of the same height requirement to male and female candidates, we can see it would fall within the concept of indirect discrimination.
The ban on indirect discrimination has often been described as promoting substantive equality because it creates an obligation to accommodate legally protected differences (for instance height difference resulting from one’s sex) and associated lifestyles (for instance protecting certain religious holidays). Since indirect discrimination focuses on the disadvantageous effects of given rules and practices rather than the inclusion of protected characteristics in given decisions, it allows addressing proxy discrimination that impacts protected groups. To some extent, this creates an obligation for decision-makers to account for the unjust status quo. For example, the gender pay gap is a well-known form of institutionalised discrimination. The practice of using newly recruited employees’ past salaries to decide on their new pay in salary negotiations could be regarded as indirect discrimination on grounds of sex, because it tends to perpetuate the gender pay gap.
From the definitions of direct and indirect discrimination, we can identify four main elements in a discrimination case.
“On grounds of”…#
To determine whether the case is one of direct or indirect discrimination, it is necessary to assess whether a decision was taken “on grounds of” a protected characteristic. When a protected characteristic is explicitly used as a basis for a decision, that decision falls under the notion of direct discrimination. By contrast, if a decision creates a disadvantage to a protected group albeit not targeting that group, it falls within the notion of indirect discrimination.
…”a protected characteristic” in an area covered by EU law (personal and material scope)#
Protected characteristics vary across sectors. For example, the widest protection against discrimination can be found in relation to employment, where discrimination is banned in relation to racial or ethnic origin, sex or gender, religion or belief, disability, age and sexual orientation. In relation to access to goods and services, only racial or ethnic origin and sex or gender are protected characteristics. Although a major concern from a social or moral point of view is that algorithmic systems operate differently based on people’s income or socio-economic background, this form of disadvantage does not fall within the scope of protection offered by EU secondary law. In addition, while discriminatory effects may occur at the intersection of two or more vectors of disadvantage (for example race and gender or age and sexual orientation) 2, the CJEU has so far failed to recognise intersectional discrimination explicitly.
…where there is evidence for “less favourable treatment” or “particular disadvantage”…#
To establish a case of discrimination, an applicant first needs to bring prima facie evidence, i.e. sufficient evidence for a rebuttable presumption of discrimination to be established by the judge. Evidence of prima facie direct discrimination could include, for instance, information about another group or individual of a different protected group being treated more favourably. If such a comparator does not exist, EU law allows applicants to construct a hypothetical comparator. Evidence of prima facie indirect discrimination involves raising a reasonable suspicion that a given disadvantage affects a protected group. This could, but does not have to, involve statistics. By contrast with US law, which relies heavily on statistical evidence, evidence in EU law is much more contextual and rarely relies on statistical comparisons. If prima facie discrimination is established, the burden of proving that discrimination has not occurred shifts to the defendant.
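EU courts do not prescribe a particular statistic, but when numbers are used, prima facie disparity is often summarised as a difference or ratio of selection rates between groups. A minimal sketch, in which the group names and all counts are invented for illustration:

```python
# Hypothetical application outcomes; all counts are invented for illustration.
selected = {"group_a": 48, "group_b": 12}
applied = {"group_a": 100, "group_b": 60}

# Selection rate per group: share of applicants who were selected.
rates = {g: selected[g] / applied[g] for g in applied}

# Two common ways to summarise the disparity between groups:
rate_difference = max(rates.values()) - min(rates.values())
rate_ratio = min(rates.values()) / max(rates.values())

print(rates)
print(f"difference={rate_difference:.2f}, ratio={rate_ratio:.2f}")
```

Neither number is decisive on its own: under EU law such figures would at most support a rebuttable presumption, to be weighed in context by the court.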
…unless there is an “objective justification”#
While direct discrimination is not justifiable in principle (except for a few exceptions provided for by the law), the indirect discrimination doctrine allows for a prima facie discriminatory measure to be “objectively justified” where it fulfils a legitimate aim and passes the so-called proportionality test. The law does not provide concrete guidelines on whether the means to achieve a legitimate aim are necessary and proportionate. Due to the large variety yet small number of cases, the proportionality test cannot be settled in advance based on previous case law. One rule that stands out is that if the same legitimate aim can be achieved through less discriminatory alternatives, those must be used.3 Other than that, however, objective justifications are judged on a case-by-case basis, depending on the significance of the harm and the legitimacy of the aim.
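The “less discriminatory alternative” rule can be read as a model-selection constraint: among alternatives that still serve the legitimate aim, the least disparate one should be preferred. A toy sketch, where the model names, accuracies, disparity figures, and accuracy threshold are all invented:

```python
# Hypothetical candidates: (name, accuracy, selection-rate gap between groups).
candidates = [
    ("model_a", 0.91, 0.30),
    ("model_b", 0.90, 0.12),
    ("model_c", 0.84, 0.02),
]

MIN_ACCURACY = 0.88  # assumed accuracy needed to serve the legitimate aim

# Among the models that still achieve the aim, prefer the least disparate one.
viable = [m for m in candidates if m[1] >= MIN_ACCURACY]
least_disparate = min(viable, key=lambda m: m[2])
print(least_disparate[0])
```

Here model_a would be hard to justify: model_b serves the same aim at nearly the same accuracy with a much smaller disparity, so a less discriminatory alternative exists.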
To illustrate the different components of discrimination in the context of machine learning, let’s consider the following example.
Amazon’s Hiring Algorithm
A commonly cited example of algorithmic bias is a resume selection algorithm that was under development at Amazon in 2017 4. As it turned out, the algorithm penalised words that indicated the applicant’s gender, such as participation in the women’s chess team or attending an all-women’s college. It is important to note that Amazon’s hiring algorithm was not necessarily less accurate for women compared to men. Instead, the main culprit for the disparity was unequal hiring rates: in the past, the company had primarily hired men for technical roles. An important question is why these hiring rates differed. We can identify at least two potential reasons: either the data is a biased measurement of reality or reality is biased.
First, we might be looking at a case of measurement bias: historical hiring decisions are incomplete measurements of actual employee quality. When measurement bias is associated with a sensitive characteristic, in this case gender, the model is likely to replicate the pattern, which can result in an unfair allocation of jobs 5. In other words, the sensitive characteristic is implicitly included as a factor in decision-making. This type of unfairness speaks to the exclusionary function of formal equality: protected characteristics should be excluded from decision-making. Second, gender disparities in hiring rates could in part be explained by disparities in behaviour caused by factors related to structural inequality. For example, women may have been systematically discouraged from pursuing technical roles, resulting in fewer suitable candidates. From this perspective, the wrongness of Amazon’s hiring algorithm can best be considered through the lens of substantive equality.
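The measurement-bias mechanism can be made concrete with a toy simulation: true quality is independent of gender, but the historical hiring labels apply a stricter bar to women. Any model fit to such labels will learn to penalise features correlated with being a woman, even though gender itself is never an input. All distributions and thresholds below are invented for illustration:

```python
import random

random.seed(1)

data = []
for _ in range(10_000):
    gender = random.choice(["m", "f"])
    quality = random.random()  # true quality, independent of gender
    # Proxy feature, e.g. membership of a women's sports team.
    proxy = gender == "f" and random.random() < 0.5
    # Biased historical label: a stricter bar was applied to women.
    hired = quality > (0.5 if gender == "m" else 0.8)
    data.append((proxy, hired))


def hire_rate(rows):
    """Share of candidates in these rows who were hired historically."""
    return sum(hired for _, hired in rows) / len(rows)


with_proxy = [row for row in data if row[0]]
without_proxy = [row for row in data if not row[0]]

# A model trained on these labels would assign the proxy a negative weight,
# because the proxy co-occurs with the biased, lower historical hiring rate.
print(f"hired | proxy present: {hire_rate(with_proxy):.0%}")
print(f"hired | proxy absent:  {hire_rate(without_proxy):.0%}")
```

The gap between the two rates is an artefact of the biased labels, not of candidate quality, yet it is exactly the signal a model optimising label accuracy would pick up.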
How would such a case of algorithmic unfairness be captured by EU discrimination law? According to Amazon, the algorithm was never actually used. For the sake of our argument, however, let’s assume that the algorithm was deployed in the EU. Employment discrimination on the basis of gender clearly falls within the material scope of non-discrimination law. While gender is not used directly as a factor by the algorithm, penalising applicants on the basis of characteristics highly associated with the applicant’s gender can be seen as a form of proxy discrimination that would fall either under the indirect discrimination doctrine or under the direct discrimination doctrine, if the decision criteria used are determined to be “inextricably linked” with sex or gender.
If the Court were to approach the case from the perspective of indirect discrimination, this would raise two further questions. First, we need to determine whether candidates faced a “particular disadvantage”. The Court assesses this question contextually and does not consistently rely on statistics. Second, the indirect discrimination doctrine allows for an objective justification. If Amazon’s hiring algorithm is interpreted as indirect discrimination, the accuracy of the algorithm on a test set may be deemed an acceptable justification in court.6 Without access to information regarding the data collection procedure and machine learning process, it is difficult for applicants to prove whether accuracy – as indicated by the alleged offender – is a good reflection of effectiveness in practice.
Concluding Remarks#
In this section, we have explained the broad outlines of EU non-discrimination law. There are a few key takeaways for computer scientists. First of all, non-discrimination law is highly contextual and almost never provides explicit constraints for fairness metrics. Put differently, you will not find a specific fairness metric and threshold value in statutory law. Second, the CJEU rarely relies on statistical evidence. Instead, legal compliance in the EU is much more about the reasoning and justification of design choices than satisfying a particular fairness constraint. For example, what factors are “relevant” and should (not) play a role in decision-making? What normative standard should be used to determine what should be equal (e.g., formal or substantive equality)?
Note
This section was adapted from Weerts et al.7.
References#
- 1
Kasper Lippert-Rasmussen. The badness of discrimination. Ethical Theory and Moral Practice, 9(2):167–185, 2006.
- 2
Kimberle Crenshaw. Mapping the margins: intersectionality, identity politics, and violence against women of color. Stan. L. Rev., 43:1241, 1990.
- 3
Christa Tobler. Indirect discrimination: a case study into the development of the legal concept of indirect discrimination under EC law. Volume 10. Intersentia nv, 2005.
- 4
Jeffrey Dastin. Amazon scraps secret ai recruiting tool that showed bias against women. In Ethics of Data and Analytics, pages 296–299. Auerbach Publications, 2018.
- 5
Abigail Z. Jacobs and Hanna Wallach. Measurement and fairness. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT '21, 375–385. New York, NY, USA, 2021. Association for Computing Machinery. doi:10.1145/3442188.3445901.
- 6
Jeremias Adams-Prassl, Reuben Binns, and Aislinn Kelly-Lyth. Directly discriminatory algorithms. The Modern Law Review, 2022. doi:10.1111/1468-2230.12759.
- 7
Hilde Weerts, Raphaële Xenidis, Fabien Tarissan, Henrik Palmer Olsen, and Mykola Pechenizkiy. Algorithmic unfairness through the lens of eu non-discrimination law: or why the law is not a decision tree. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 805–816. 2023.