Please use this identifier to cite or link to this item: https://doi.org/10.21256/zhaw-20277
Full metadata record
DC Field | Value | Language
dc.contributor.author | Glüge, Stefan | -
dc.contributor.author | Amirian, Mohammadreza | -
dc.contributor.author | Flumini, Dandolo | -
dc.contributor.author | Stadelmann, Thilo | -
dc.date.accessioned | 2020-07-20T08:02:59Z | -
dc.date.available | 2020-07-20T08:02:59Z | -
dc.date.issued | 2020-09-02 | -
dc.identifier.uri | https://digitalcollection.zhaw.ch/handle/11475/20277 | -
dc.description.abstract | In recent years, Face Recognition (FR) systems have achieved human-like (or better) performance, leading to extensive deployment in large-scale practical settings. Yet, especially for sensitive domains such as FR, we expect algorithms to work equally well for everyone, regardless of a person's age, gender, skin colour and/or origin. In this paper, we investigate a methodology to quantify the amount of bias in a trained Convolutional Neural Network (CNN) model for FR that is not only intuitively appealing, but has also already been used in the literature to argue for certain debiasing methods. It works by measuring the "blindness" of the model towards certain face characteristics in the embeddings of faces, based on internal cluster validation measures. We conduct experiments on three openly available FR models to determine their bias regarding race, gender and age, and validate the computed scores by comparing their predictions against the actual drop in face recognition performance for minority cases. Interestingly, we could not link a crisp clustering in the embedding space to a strong bias in recognition rates; it is rather the opposite. We therefore offer arguments for the reasons behind this observation and argue for the need for a less naive clustering approach to develop a working measure for bias in FR models. | de_CH
dc.language.iso | en | de_CH
dc.publisher | Springer | de_CH
dc.relation.ispartofseries | Lecture Notes in Computer Science | de_CH
dc.rights | Licence according to publishing contract | de_CH
dc.subject | Deep learning | de_CH
dc.subject | Convolutional neural network | de_CH
dc.subject | Fairness | de_CH
dc.subject.ddc | 006: Special computer methods | de_CH
dc.title | How (not) to measure bias in face recognition networks | de_CH
dc.type | Conference paper | de_CH
dcterms.type | Text | de_CH
zhaw.departement | Life Sciences und Facility Management | de_CH
zhaw.departement | School of Engineering | de_CH
zhaw.organisationalunit | Institut für Angewandte Informationstechnologie (InIT) | de_CH
zhaw.organisationalunit | Institut für Angewandte Mathematik und Physik (IAMP) | de_CH
zhaw.organisationalunit | Institut für Computational Life Sciences (ICLS) | de_CH
zhaw.publisher.place | Cham | de_CH
dc.identifier.doi | 10.21256/zhaw-20277 | -
dc.identifier.doi | 10.1007/978-3-030-58309-5_10 | de_CH
zhaw.conference.details | 9th IAPR TC 3 Workshop on Artificial Neural Networks for Pattern Recognition (ANNPR'20), Winterthur, Switzerland, 2-4 September 2020 | de_CH
zhaw.funding.eu | No | de_CH
zhaw.originated.zhaw | Yes | de_CH
zhaw.parentwork.editor | Schilling, Frank-Peter | -
zhaw.parentwork.editor | Stadelmann, Thilo | -
zhaw.publication.status | acceptedVersion | de_CH
zhaw.series.number | 12294 | de_CH
zhaw.publication.review | Peer review (publication) | de_CH
zhaw.title.proceedings | Artificial Neural Networks in Pattern Recognition | de_CH
zhaw.webfeed | Datalab | de_CH
zhaw.webfeed | Information Engineering | de_CH
zhaw.webfeed | ZHAW digital | de_CH
zhaw.webfeed | High Performance Computing (HPC) | de_CH
zhaw.webfeed | Computer Vision, Perception and Cognition | de_CH
zhaw.webfeed | Predictive Analytics | de_CH
zhaw.funding.zhaw | Libra: A One-Tool Solution for MLD4 Compliance | de_CH
zhaw.author.additional | No | de_CH
zhaw.display.portrait | Yes | de_CH
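The abstract above describes measuring a model's "blindness" towards face characteristics via an internal cluster validation measure computed on face embeddings. As a rough illustration of that idea only (not the authors' exact procedure), the Python sketch below scores how crisply embeddings cluster by a demographic attribute, using the silhouette score as a stand-in for the internal validation measure; all function names, parameters and data are illustrative assumptions.

# Sketch of a clustering-based "blindness" score as described in the
# abstract: an internal cluster validation measure (here, the silhouette
# score) over face embeddings, with a demographic attribute (e.g. race,
# gender, age group) taken as the cluster assignment. A score near 0
# suggests the embedding space is "blind" to the attribute; a score
# near 1 indicates crisp demographic clusters. Illustrative only.
import numpy as np
from sklearn.metrics import silhouette_score

def blindness_score(embeddings, attribute_labels):
    """Silhouette score of FR embeddings w.r.t. a demographic attribute.

    embeddings:       array of shape (n_samples, embedding_dim)
    attribute_labels: array of shape (n_samples,), one group id per sample
    """
    # Cosine distance is a common choice for face-embedding spaces
    # (an assumption here, not taken from the paper).
    return silhouette_score(embeddings, attribute_labels, metric="cosine")

# Toy usage with random data standing in for real face embeddings.
rng = np.random.default_rng(seed=0)
emb = rng.normal(size=(300, 128))        # hypothetical 128-d embeddings
groups = rng.integers(0, 3, size=300)    # hypothetical 3 demographic groups
print(f"Silhouette by demographic group: {blindness_score(emb, groups):.3f}")

Note that, per the paper's own finding, a high (crisp) clustering score could not be linked to a stronger bias in recognition rates; the sketch shows how such a score can be computed, not that it is a valid bias measure.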
Appears in collections:
Publikationen Life Sciences und Facility Management
Publikationen School of Engineering

Files in This Item:
File | Description | Size | Format
2020_Gluege-etal_Bias-in-face-recognition-networks_ANNPR.pdf | Accepted Version | 3.78 MB | Adobe PDF
Glüge, S., Amirian, M., Flumini, D., & Stadelmann, T. (2020). How (not) to measure bias in face recognition networks [Conference paper]. In F.-P. Schilling & T. Stadelmann (Eds.), Artificial Neural Networks in Pattern Recognition. Springer. https://doi.org/10.21256/zhaw-20277
Glüge, S. et al. (2020) ‘How (not) to measure bias in face recognition networks’, in F.-P. Schilling and T. Stadelmann (eds) Artificial Neural Networks in Pattern Recognition. Cham: Springer. Available at: https://doi.org/10.21256/zhaw-20277.
S. Glüge, M. Amirian, D. Flumini, and T. Stadelmann, “How (not) to measure bias in face recognition networks,” in Artificial Neural Networks in Pattern Recognition, Sep. 2020. doi: 10.21256/zhaw-20277.
GLÜGE, Stefan, Mohammadreza AMIRIAN, Dandolo FLUMINI and Thilo STADELMANN, 2020. How (not) to measure bias in face recognition networks. In: Frank-Peter SCHILLING and Thilo STADELMANN (eds.), Artificial Neural Networks in Pattern Recognition. Conference paper. Cham: Springer. 2 September 2020
Glüge, Stefan, Mohammadreza Amirian, Dandolo Flumini, and Thilo Stadelmann. 2020. “How (Not) to Measure Bias in Face Recognition Networks.” Conference paper. In Artificial Neural Networks in Pattern Recognition, edited by Frank-Peter Schilling and Thilo Stadelmann. Cham: Springer. https://doi.org/10.21256/zhaw-20277.
Glüge, Stefan, et al. “How (Not) to Measure Bias in Face Recognition Networks.” Artificial Neural Networks in Pattern Recognition, edited by Frank-Peter Schilling and Thilo Stadelmann, Springer, 2020, https://doi.org/10.21256/zhaw-20277.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.