Achieving fairness with a simple ridge penalty
From MaRDI portal
Abstract: In this paper we present a general framework for estimating regression models subject to a user-defined level of fairness. We enforce fairness as a model selection step in which we choose the value of a ridge penalty to control the effect of sensitive attributes. We then estimate the parameters of the model conditional on the chosen penalty value. Our proposal is mathematically simple, with a solution that is partly in closed form, and produces estimates of the regression coefficients that are intuitive to interpret as a function of the level of fairness. Furthermore, it is easily extended to generalised linear models, kernelised regression models and other penalties; and it can accommodate multiple definitions of fairness. We compare our approach with the regression model from Komiyama et al. (2018), which implements a provably-optimal linear regression model; and with the fair models from Zafar et al. (2019). We evaluate these approaches empirically on six different data sets, and we find that our proposal provides better goodness of fit and better predictive accuracy for the same level of fairness. In addition, we highlight a source of bias in the original experimental evaluation in Komiyama et al. (2018).
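The core idea in the abstract — a ridge penalty applied only to the coefficients of the sensitive attributes, with a partly closed-form solution — can be illustrated with a minimal sketch. This is a hypothetical simplification of the general approach, not the authors' exact estimator: the function `fair_ridge`, the argument names, and the choice of a diagonal indicator matrix `D` are all assumptions made for illustration.

```python
import numpy as np

def fair_ridge(X, y, sensitive_idx, lam):
    """Ridge regression penalising only the sensitive-attribute coefficients.

    Hypothetical sketch of the idea in the abstract, not the paper's
    exact method.  Solves

        argmin_b ||y - X b||^2 + lam * sum_{j in S} b_j^2,

    whose closed form is (X'X + lam * D)^{-1} X'y, where D is a
    diagonal 0/1 matrix selecting the sensitive columns S.
    """
    n, p = X.shape
    D = np.zeros((p, p))
    for j in sensitive_idx:
        D[j, j] = 1.0  # penalise only these coefficients
    return np.linalg.solve(X.T @ X + lam * D, X.T @ y)

# As lam grows, the sensitive coefficients shrink toward zero while the
# remaining coefficients are estimated essentially without penalty --
# matching the abstract's description of choosing the penalty value to
# control the effect of the sensitive attributes.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=200)
beta_unfair = fair_ridge(X, y, sensitive_idx=[2], lam=0.0)
beta_fair = fair_ridge(X, y, sensitive_idx=[2], lam=1e6)
```

Selecting `lam` then becomes the model-selection step described in the abstract: the user picks the value achieving the desired level of fairness, and the coefficients are estimated conditional on that choice.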
Cites work
- scientific article; zbMATH DE number 3945130 (no title available)
- scientific article; zbMATH DE number 845714 (no title available)
- scientific article; zbMATH DE number 7064055 (no title available)
- scientific article; zbMATH DE number 3385132 (no title available)
- A decision-theoretic generalization of on-line learning and an application to boosting
- A note on a general definition of the coefficient of determination
- Coefficients of determination in logistic regression models -- a new proposal: the coefficient of discrimination
- Divide and conquer kernel ridge regression: a distributed algorithm with minimax optimal rates
- Exponentiated gradient versus gradient descent for linear predictors
- Regularization and Variable Selection Via the Elastic Net
- Ridge Regression: Biased Estimation for Nonorthogonal Problems
- Successive Lagrangian relaxation algorithm for nonconvex quadratic optimization
- Two-parameter ridge regression and its convergence to the eventual pairwise model
Cited in (3)