Yoan Hermstrüwer

Fairnessprinzipien der algorithmischen Verwaltung

Section: Abhandlungen
Volume 145 (2020) / Issue 3, pp. 479-521 (43 pages)
Published March 12, 2021
DOI 10.1628/aoer-2020-0013
Abstract
This essay explores different fairness principles and their legal implications in the context of government decisions based on machine learning classifications and predictions. In the first part, I discuss the tension arising between the need to achieve individual justice in administrative procedures and the generality of outputs generated by machine learning models. While some discrimination risks are due to biased training data and incorrect generalizations, others result from maximizing predictive accuracy even when the training data are unbiased. The latter case gives rise to a normative conflict known as the accuracy-discrimination tradeoff. In the second part, I explore four dimensions along which fairness can be achieved: procedural fairness, data fairness, algorithmic fairness, and judicial fairness. In exploring these dimensions, I develop three arguments. First, I argue that prevailing black-box conceptions of machine learning models are analytically misleading. Rather, the use of machine learning models can help public authorities diagnose and specify the discrimination risks that government decisions entail. Second, I argue that machine learning models require a more formal conception of equal protection rights. As a consequence, they shed light on some of the doctrinal ambiguities in antidiscrimination law and facilitate principled legal choices when determining the »equality of what«. Finally, I argue that dominant approaches in antidiscrimination law and privacy law that prohibit the use of certain predictors risk aggravating the very discrimination problems they are intended to mitigate. Legal rules should therefore be interpreted so as to authorize machine learning models to use data about protected groups with a view to redressing discrimination against these groups.
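
To make the accuracy-discrimination tradeoff concrete, here is a minimal sketch, not from the article: the synthetic data, the use of scikit-learn's LogisticRegression, and the choice of demographic parity (the gap in positive-decision rates between groups) as the fairness measure are all illustrative assumptions. It compares a model that is blind to a protected attribute with one that sees it, then equalizes decision rates across groups at a measurable accuracy cost.

```python
# Illustrative sketch of the accuracy-discrimination tradeoff on synthetic data.
# All variable names and modeling choices are hypothetical, not the article's.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical protected attribute (0/1 group membership).
group = rng.integers(0, 2, size=n)

# A legitimate predictor that correlates with group membership, so even a
# model that never sees `group` picks up group information indirectly.
score = rng.normal(loc=0.8 * group, scale=1.0, size=n)

# The true outcome depends only on the legitimate predictor.
y = (score + rng.normal(0.0, 0.5, size=n) > 0.4).astype(int)

def evaluate(X, label):
    """Fit a logistic regression; report accuracy and the demographic-parity gap."""
    pred = LogisticRegression().fit(X, y).predict(X)
    acc = (pred == y).mean()
    gap = abs(pred[group == 1].mean() - pred[group == 0].mean())
    print(f"{label:>15}: accuracy={acc:.3f}  parity gap={gap:.3f}")

# "Blind" model: protected attribute excluded, but the correlated predictor
# still transmits group information (proxy discrimination).
evaluate(score.reshape(-1, 1), "blind model")

# "Aware" model: protected attribute included, so group-dependent decision
# rates can be diagnosed explicitly.
evaluate(np.column_stack([score, group]), "aware model")

# Group-aware correction: pick per-group thresholds on predicted probabilities
# so that both groups receive positive decisions at the same overall rate.
proba = (LogisticRegression()
         .fit(score.reshape(-1, 1), y)
         .predict_proba(score.reshape(-1, 1))[:, 1])
target = (proba > 0.5).mean()          # overall positive rate to match per group
pred = np.zeros(n, dtype=int)
for g in (0, 1):
    m = group == g
    pred[m] = (proba[m] > np.quantile(proba[m], 1 - target)).astype(int)
acc = (pred == y).mean()
gap = abs(pred[group == 1].mean() - pred[group == 0].mean())
print(f"{'equalized model':>15}: accuracy={acc:.3f}  parity gap={gap:.3f}")
```

On this synthetic data, the blind model shows a sizable parity gap despite excluding the protected attribute, while the equalized thresholds close the gap at a small accuracy cost. This mirrors the abstract's point that prohibiting protected predictors does not remove proxy discrimination, whereas group-aware use of the data allows the disparity to be diagnosed and corrected.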