Please use this identifier to cite or link to this item: https://doi.org/10.21256/zhaw-23113
Full metadata record
dc.contributor.author: Hertweck, Corinna
dc.contributor.author: Heitz, Christoph
dc.date.accessioned: 2021-09-13T08:41:03Z
dc.date.available: 2021-09-13T08:41:03Z
dc.date.issued: 2021-07-07
dc.identifier.isbn: 978-1-6654-3874-2
dc.identifier.uri: https://digitalcollection.zhaw.ch/handle/11475/23113
dc.description.abstract: While the field of algorithmic fairness has brought forth many ways to measure and improve the fairness of machine learning models, these findings are still not widely used in practice. We suspect that one reason for this is that the field has produced a large number of fairness definitions, which are difficult to navigate. The goal of this paper is to provide data scientists with an accessible introduction to group fairness metrics and to give some insight into the philosophical reasoning for caring about these metrics. We do this by considering in which sense socio-demographic groups are compared for making a statement on fairness.
dc.language.iso: en
dc.publisher: IEEE
dc.rights: Licence according to publishing contract
dc.subject: Algorithmic fairness
dc.subject: Group fairness
dc.subject: Statistical parity
dc.subject: Independence
dc.subject: Separation
dc.subject: Sufficiency
dc.subject.ddc: 006: Special computer methods
dc.subject.ddc: 170: Ethics
dc.title: A systematic approach to group fairness in automated decision making
dc.type: Conference paper
dcterms.type: Text
zhaw.departement: School of Engineering
zhaw.organisationalunit: Institut für Datenanalyse und Prozessdesign (IDP)
dc.identifier.doi: 10.1109/SDS51136.2021.00008
dc.identifier.doi: 10.21256/zhaw-23113
zhaw.conference.details: 8th Swiss Conference on Data Science, Lucerne, Switzerland, 9 June 2021
zhaw.funding.eu: No
zhaw.originated.zhaw: Yes
zhaw.pages.end: 6
zhaw.pages.start: 1
zhaw.publication.status: acceptedVersion
zhaw.publication.review: Peer review (publication)
zhaw.title.proceedings: Proceedings of the 8th SDS
zhaw.funding.snf: 187473
zhaw.funding.zhaw: Socially acceptable AI and fairness trade-offs in predictive analytics
zhaw.author.additional: No
zhaw.display.portrait: Yes
Appears in collections: Publikationen School of Engineering
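The subject keywords above name three group fairness criteria (statistical parity/independence, separation, sufficiency). As a rough illustration of what these criteria compare across groups, here is a minimal sketch; it is not taken from the paper, and the toy data, function name, and rate labels are hypothetical.

```python
# Illustrative sketch (not from the paper): the per-group rates that the
# independence, separation, and sufficiency criteria each compare.
import numpy as np

def group_rates(y_true, y_pred, group):
    """Compute, for each group g, the rates the three criteria look at."""
    out = {}
    for g in np.unique(group):
        m = group == g
        yt, yp = y_true[m], y_pred[m]
        out[g] = {
            # Independence (statistical parity): P(Yhat = 1 | G = g)
            "positive_rate": yp.mean(),
            # Separation (e.g. equal true positive rates): P(Yhat = 1 | Y = 1, G = g)
            "tpr": yp[yt == 1].mean() if (yt == 1).any() else np.nan,
            # Sufficiency (e.g. equal positive predictive values): P(Y = 1 | Yhat = 1, G = g)
            "ppv": yt[yp == 1].mean() if (yp == 1).any() else np.nan,
        }
    return out

# Hypothetical toy data with two socio-demographic groups "a" and "b"
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 0])
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

rates = group_rates(y_true, y_pred, group)
for g, r in rates.items():
    print(g, r)
```

A fairness claim under each criterion then amounts to asking whether the corresponding rate is (approximately) equal across groups; which of the three comparisons is the appropriate one is exactly the kind of question the paper addresses.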

Files in This Item:
File: 2021_Hertweck-Heitz_Group-fairness-automated-decision-making.pdf
Description: Accepted Version
Size: 107.87 kB
Format: Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.