Please use this identifier to cite or link to this item: https://doi.org/10.21256/zhaw-20277
Publication type: Conference paper
Type of review: Peer review (publication)
Title: How (not) to measure bias in face recognition networks
Author(s): Glüge, Stefan
Amirian, Mohammadreza
Flumini, Dandolo
Stadelmann, Thilo
et al.: No
DOI: 10.21256/zhaw-20277
10.1007/978-3-030-58309-5_10
Conference proceedings: Artificial Neural Networks in Pattern Recognition
Editor(s) of the parent work: Schilling, Frank-Peter
Stadelmann, Thilo
Conference details: 9th IAPR TC 3 Workshop on Artificial Neural Networks for Pattern Recognition (ANNPR'20), Winterthur, Switzerland, 2-4 September 2020
Publication date: 2 Sep 2020
Series: Lecture Notes in Computer Science
Series number: 12294
Publisher: Springer
Place of publication: Cham
Language: English
Keywords: Deep learning; Convolutional neural network; Fairness
Subject area (DDC): 006: Special computer methods
Abstract: In recent years, Face Recognition (FR) systems have achieved human-like (or better) performance, leading to extensive deployment in large-scale practical settings. Yet, especially in sensitive domains such as FR, we expect algorithms to work equally well for everyone, regardless of age, gender, skin colour and/or origin. In this paper, we investigate a methodology for quantifying the amount of bias in a trained Convolutional Neural Network (CNN) model for FR that is not only intuitively appealing, but has also already been used in the literature to argue for certain debiasing methods. It works by measuring the "blindness" of the model towards certain face characteristics in the embeddings of faces, based on internal cluster validation measures. We conduct experiments on three openly available FR models to determine their bias regarding race, gender and age, and validate the computed scores by comparing their predictions against the actual drop in face recognition performance for minority cases. Interestingly, we could not link a crisp clustering in the embedding space to a strong bias in recognition rates; it is rather the opposite. We therefore offer arguments for the reasons behind this observation and argue for the need for a less naive clustering approach to develop a working measure for bias in FR models.
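The "blindness" idea from the abstract can be sketched as follows: group face embeddings by a protected attribute and compute an internal cluster validation measure over those groups. This is a minimal illustrative sketch, not the paper's actual implementation; the use of the silhouette score, the random toy data, and all variable names are assumptions made here for illustration.

```python
# Hypothetical sketch of the bias measure described in the abstract:
# if face embeddings cluster strongly by a protected attribute, the
# model is NOT "blind" to it. All data below is a random toy stand-in.
import numpy as np
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)

# Toy stand-in for face embeddings (e.g. 512-d CNN outputs) ...
embeddings = rng.normal(size=(200, 512))
# ... and per-face demographic labels (e.g. 0/1 for two groups).
groups = rng.integers(0, 2, size=200)

# Silhouette score lies in [-1, 1]; values near 0 mean the embedding
# space does not separate by the attribute, i.e. the model appears
# "blind" to it under this (naive) measure.
score = silhouette_score(embeddings, groups, metric="cosine")
print(f"cluster validity w.r.t. attribute: {score:.3f}")
```

Note that the paper's key finding cautions against reading such a score naively: a crisp clustering did not correspond to a strong bias in recognition rates, rather the opposite.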
URI: https://digitalcollection.zhaw.ch/handle/11475/20277
Full text version: Accepted version
License (according to publisher's contract): License according to publisher's contract
Department: Life Sciences and Facility Management
School of Engineering
Organisational unit: Institut für Informatik (InIT)
Institut für Angewandte Mathematik und Physik (IAMP)
Institut für Computational Life Sciences (ICLS)
Published as part of the ZHAW project: Libra: A One-Tool Solution for MLD4 Compliance
Appears in collections: Publications Life Sciences and Facility Management
Publications School of Engineering

Files in this item:
File: 2020_Gluege-etal_Bias-in-face-recognition-networks_ANNPR.pdf
Description: Accepted Version
Size: 3.78 MB
Format: Adobe PDF
Glüge, S., Amirian, M., Flumini, D., & Stadelmann, T. (2020). How (not) to measure bias in face recognition networks [Conference paper]. In F.-P. Schilling & T. Stadelmann (Eds.), Artificial Neural Networks in Pattern Recognition. Springer. https://doi.org/10.21256/zhaw-20277


All items in this repository are protected by copyright, unless otherwise indicated.