fairmodels: A Flexible Tool For Bias Detection, Visualization, And Mitigation
Publication: 99238
DOI: 10.48550/ARXIV.2104.00507
arXiv: 2104.00507
MaRDI QID: Q99238
Przemysław Biecek, Jakub Wiśniewski
Publication date: 1 April 2021
Abstract: Machine learning decision systems are becoming omnipresent in our lives. From dating apps to rating loan applicants, algorithms affect both our well-being and our future. These systems, however, are not infallible. Moreover, complex predictive models readily learn social biases present in historical data, which can amplify discrimination. If we want to create models responsibly, we need tools for in-depth validation of models, including from the perspective of potential discrimination. This article introduces fairmodels, an R package that helps validate fairness and eliminate bias in classification models in an easy and flexible fashion. The fairmodels package offers a model-agnostic approach to bias detection, visualization, and mitigation. The implemented set of functions and fairness metrics enables model fairness validation from different perspectives. The package includes a series of bias-mitigation methods that aim to reduce discrimination in the model. It is designed not only to examine a single model, but also to facilitate comparisons between multiple models.
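The workflow described in the abstract can be sketched as follows. This is a minimal illustration, not code from the paper: it assumes the `german` credit-scoring dataset shipped with fairmodels, a `DALEX` explainer around a simple logistic regression, and the `fairness_check()` entry point; column names (`Risk`, `Sex`) and the privileged level `"male"` are assumptions for illustration.

```r
library(DALEX)
library(fairmodels)

# Credit-scoring data shipped with the fairmodels package (assumed here)
data("german")
y <- ifelse(german$Risk == "good", 1, 0)

# Any binary classifier works; the approach is model-agnostic
model <- glm(Risk ~ ., data = german, family = "binomial")
explainer <- explain(model, data = subset(german, select = -Risk), y = y)

# Bias detection: compare fairness metrics across groups of a protected attribute
fobject <- fairness_check(explainer,
                          protected  = german$Sex,
                          privileged = "male")

print(fobject)   # textual summary of which fairness metrics pass
plot(fobject)    # visualization of metric ratios relative to the privileged group
```

Because `fairness_check()` accepts several explainers at once, passing multiple models (e.g. `fairness_check(explainer_glm, explainer_rf, ...)`) produces the multi-model comparison the abstract mentions.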
Cited In (1)