Enhancing Fairness of Visual Attribute Predictors

From MaRDI portal
Publication:6404760

arXiv: 2207.05727
MaRDI QID: Q6404760
FDO: Q6404760


Authors: Tobias Hänel, Nishant Kumar, Dmitrij Schlesinger, Mengze Li, Erdem Ünal, Abouzar Eslami, Stefan Gumhold


Publication date: 7 July 2022

Abstract: The performance of deep neural networks for image recognition tasks such as predicting a smiling face is known to degrade with under-represented classes of sensitive attributes. We address this problem by introducing fairness-aware regularization losses based on batch estimates of Demographic Parity, Equalized Odds, and a novel Intersection-over-Union measure. The experiments performed on facial and medical images from CelebA, UTKFace, and the SIIM-ISIC melanoma classification challenge show the effectiveness of our proposed fairness losses for bias mitigation as they improve model fairness while maintaining high classification performance. To the best of our knowledge, our work is the first attempt to incorporate these types of losses in an end-to-end training scheme for mitigating biases of visual attribute predictors. Our code is available at https://github.com/nish03/FVAP.
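The abstract describes fairness-aware regularization losses computed from batch estimates of criteria such as Demographic Parity. The authors' actual implementation is in the linked FVAP repository; as an illustration only, the sketch below shows one plausible way such a penalty could be formed: a soft batch estimate of the Demographic Parity gap added to a standard cross-entropy term. The function names, the binary sensitive attribute, and the weighting parameter `lam` are assumptions for this sketch, not details taken from the paper.

```python
import numpy as np

def demographic_parity_gap(probs, sensitive):
    """Soft batch estimate of the Demographic Parity gap,
    |P(y_hat = 1 | s = 0) - P(y_hat = 1 | s = 1)|,
    using predicted probabilities as soft positive rates.
    (Illustrative sketch; not the paper's exact formulation.)"""
    probs = np.asarray(probs, dtype=float)
    sensitive = np.asarray(sensitive)
    rate_0 = probs[sensitive == 0].mean()  # soft positive rate, group s=0
    rate_1 = probs[sensitive == 1].mean()  # soft positive rate, group s=1
    return abs(rate_0 - rate_1)

def fairness_regularized_loss(probs, labels, sensitive, lam=1.0):
    """Binary cross-entropy plus a lam-weighted fairness penalty,
    mirroring the 'classification loss + fairness regularizer'
    training scheme described in the abstract."""
    probs = np.clip(np.asarray(probs, dtype=float), 1e-7, 1 - 1e-7)
    labels = np.asarray(labels, dtype=float)
    bce = -np.mean(labels * np.log(probs) + (1 - labels) * np.log(1 - probs))
    return bce + lam * demographic_parity_gap(probs, sensitive)
```

Because the penalty is a differentiable function of the predicted probabilities, it can in principle be minimized jointly with the classification loss in end-to-end training, which is the kind of scheme the abstract refers to.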




Has companion code repository: https://github.com/nish03/fvap









This page was built for publication: Enhancing Fairness of Visual Attribute Predictors
