Interdisciplinary Perspectives on Fair-ML
Computer scientists have been productive in developing fairness-aware machine learning algorithms. But which algorithm is most suitable in which scenario?
From a technical perspective, selecting a fair-ml algorithm does not differ much from selecting any other approach in a data scientist's toolbox. Most algorithms can easily be integrated into existing model selection pipelines and evaluated accordingly.
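For instance, a minimal sketch of such an integration, assuming the open-source Fairlearn library and a purely illustrative synthetic dataset (the names X, y, and A below are not from any particular application), could look like this:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from fairlearn.reductions import ExponentiatedGradient, DemographicParity
from fairlearn.metrics import demographic_parity_difference

# Illustrative synthetic data: features X, binary labels y, binary sensitive feature A.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
A = rng.integers(0, 2, size=1000)
y = (X[:, 0] + 0.5 * A + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test, A_train, A_test = train_test_split(
    X, y, A, test_size=0.3, random_state=0
)

# Wrap a standard classifier in a reductions-based mitigation algorithm.
mitigator = ExponentiatedGradient(
    estimator=LogisticRegression(max_iter=1000),
    constraints=DemographicParity(),
)
mitigator.fit(X_train, y_train, sensitive_features=A_train)

# Evaluate like any other model: a performance metric plus a fairness metric.
y_pred = mitigator.predict(X_test)
print("accuracy:", (y_pred == y_test).mean())
print("demographic parity difference:",
      demographic_parity_difference(y_test, y_pred, sensitive_features=A_test))
```

Here a reductions-based technique wraps an ordinary scikit-learn classifier, so the mitigated model can be trained and evaluated much like any other estimator.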
From a practical perspective, various characteristics could be relevant:
Machine learning algorithm. Some fair-ml algorithms are designed specifically for a particular machine learning algorithm, while others are model-agnostic. For example, adversarial learning and leaf relabeling are designed for neural networks and decision trees, respectively, while relabeling and reject option classification can be used with any binary classification algorithm.
Fairness constraints. Is the algorithm designed with one or more particular fairness constraints in mind (e.g., demographic parity or equalized odds) or does it also allow custom constraints?
Sensitive feature. Some algorithms are designed specifically for binary sensitive features, while others also support categorical and numerical sensitive features as well as intersectional groups.
Access to sensitive features at prediction time. Does the algorithm require access to sensitive features at prediction time or only during training? The sketch after this list illustrates the difference.
Computational complexity. Is the algorithm computationally cheap or expensive?
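As a concrete illustration of the prediction-time question, here is a second sketch under the same assumptions as before (Fairlearn, and the synthetic train/test split from the sketch above): a post-processing technique such as ThresholdOptimizer needs the sensitive feature for every prediction, whereas the reductions-based mitigator fitted earlier only needed it during training.

```python
from sklearn.linear_model import LogisticRegression
from fairlearn.postprocessing import ThresholdOptimizer

# Post-processing learns group-specific decision thresholds, so the sensitive
# feature must also be supplied at prediction time.
postprocessor = ThresholdOptimizer(
    estimator=LogisticRegression(max_iter=1000),
    constraints="demographic_parity",
    predict_method="predict_proba",
)
postprocessor.fit(X_train, y_train, sensitive_features=A_train)
y_pred_post = postprocessor.predict(X_test, sensitive_features=A_test)

# The reductions-based mitigator from the previous sketch, by contrast, only
# used the sensitive feature while fitting; its predictions take plain features.
y_pred_reduction = mitigator.predict(X_test)
```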
However, one crucial question remains: does the application of a fairness-aware machine learning algorithm actually lead to fairer outcomes?
In this chapter, we will leverage interdisciplinary insights to help answer this question.
Chapter Summary
Philosophy

We consider fairness metrics and mitigation algorithms through the lens of egalitarianism, a school of thought in political philosophy. We show that different fairness metrics correspond to different understandings of what ought to be equal between groups. However, even when a fair distribution of outcomes corresponds to a particular fairness constraint, enforcing that constraint algorithmically does not always lead to that distribution and can have unintended side effects.
Law

In this section, we explore the requirements set by EU non-discrimination law and how it applies to the fairness of machine learning systems.
Science and Technology Studies
In the translation of a real-world problem to a machine learning task, data scientists and researchers may fall into what Selbst et al. [1] refer to as an abstraction trap. In this section, we discuss several of these traps and what you can do to avoid them.
References
- [1] Andrew D. Selbst, Danah Boyd, Sorelle A. Friedler, Suresh Venkatasubramanian, and Janet Vertesi. Fairness and abstraction in sociotechnical systems. In Proceedings of the 2019 Conference on Fairness, Accountability, and Transparency (FAT* 2019), pages 59–68, 2019. doi:10.1145/3287560.3287598.