Publication type: Working paper – expertise – study
Title: Learning to ignore: fair and task independent representations
Authors: Bödi, Linda Helen
Grabner, Helmut
et al.: No
DOI: 10.21256/zhaw-21602
Extent: 14
Issue Date: 2020
Publisher / Ed. Institution: ZHAW Zürcher Hochschule für Angewandte Wissenschaften
Language: English
Subject (DDC): 006: Special computer methods
Abstract: Training fair machine learning models, aiming for their interpretability, and solving the problem of domain shift have gained a lot of interest in recent years. There is a vast amount of work addressing these topics, mostly in isolation. In this work we show that they can be seen as instances of a common framework of learning invariant representations. The representations should allow the target to be predicted while at the same time being invariant to sensitive attributes which split the dataset into subgroups. Our approach is based on the simple observation that it is impossible for any learning algorithm to differentiate samples if they have the same feature representation. This is formulated as an additional loss (regularizer) enforcing a common feature representation across subgroups. We apply it to learn fair models and to interpret the influence of the sensitive attribute. Furthermore, it can be used for domain adaptation, transferring knowledge and learning effectively from very few examples. In all applications it is essential not only to learn to predict the target, but also to learn what to ignore.
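The abstract describes an additional loss (regularizer) that enforces a common feature representation across the subgroups induced by a sensitive attribute. The paper's exact formulation is not given here, so the following is only a minimal illustrative sketch of that idea: it penalizes the squared distance between the mean feature representations of each subgroup, so the penalty is zero exactly when the subgroups are indistinguishable on average. The function name `invariance_penalty` and the mean-matching choice are assumptions for illustration, not the authors' definition.

```python
import numpy as np

def invariance_penalty(features, groups):
    """Sketch of a subgroup-invariance regularizer (illustrative only).

    Penalizes the squared Euclidean distance between the mean feature
    vectors of the subgroups defined by `groups`. A value of 0 means all
    subgroups share the same average representation, so no learning
    algorithm could separate them based on these means alone.
    """
    # One mean feature vector per subgroup value.
    means = [features[groups == g].mean(axis=0) for g in np.unique(groups)]
    penalty = 0.0
    # Sum pairwise squared distances between subgroup means.
    for i in range(len(means)):
        for j in range(i + 1, len(means)):
            penalty += float(np.sum((means[i] - means[j]) ** 2))
    return penalty
```

In training, a term like this would be added to the prediction loss with a weight controlling the trade-off between accuracy and invariance; richer distribution-matching penalties (e.g. beyond first moments) follow the same pattern.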
License (according to publishing contract): CC BY 4.0: Attribution 4.0 International
Departement: School of Engineering
Organisational Unit: Institute of Data Analysis and Process Design (IDP)
Appears in collections: Publikationen School of Engineering

Files in This Item:
File: 2020_Boedi-Grabner_Learning-to-ignore.pdf (3.81 MB, Adobe PDF)
