Tackling algorithmic bias in neural-network classifiers using Wasserstein-2 regularization
From MaRDI portal
Publication: 2103876
DOI: 10.1007/s10851-022-01090-2 · OpenAlex: W4288262450 · Wikidata: Q123246212 · Scholia: Q123246212 · MaRDI QID: Q2103876 · FDO: Q2103876
Authors: Laurent Risser, Alberto González Sanz, Quentin Vincenot, Jean-Michel Loubes
Publication date: 9 December 2022
Published in: Journal of Mathematical Imaging and Vision
Full work available at URL: https://arxiv.org/abs/1908.05783
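As context for the record above: the paper's title refers to penalizing a neural-network classifier with the Wasserstein-2 distance between its output-score distributions across sensitive groups. A minimal illustrative sketch of that idea (not the authors' implementation; the group data and function names here are hypothetical) uses the fact that in one dimension the optimal transport plan matches sorted samples, so the empirical squared Wasserstein-2 distance is a mean of squared differences of order statistics:

```python
import numpy as np

def w2_squared_1d(a, b):
    """Empirical squared Wasserstein-2 distance between two 1-D samples
    of equal size: in 1-D the optimal coupling pairs sorted values."""
    a_sorted = np.sort(np.asarray(a, dtype=float))
    b_sorted = np.sort(np.asarray(b, dtype=float))
    return float(np.mean((a_sorted - b_sorted) ** 2))

# Toy usage: classifier scores for two groups (synthetic data); the
# distance could serve as a fairness penalty added to the training loss.
rng = np.random.default_rng(0)
scores_group0 = rng.normal(0.4, 0.1, size=100)
scores_group1 = rng.normal(0.6, 0.1, size=100)
penalty = w2_squared_1d(scores_group0, scores_group1)
```

In training, a weighted version of `penalty` would be added to the classification loss, pushing the two conditional score distributions together; for unequal group sizes one would compare interpolated quantile functions instead of raw sorted samples.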
Recommendations
- Studying bias in visual features through the lens of optimal transport
- Wasserstein-based fairness interpretability framework for machine learning models
- Achieving fair treatment in algorithmic classification
- Certifying the fairness of KNN in the presence of dataset bias
- scientific article; zbMATH DE number 7064055
Cites Work
- Adaptive subgradient methods for online learning and stochastic optimization
- Optimal transport for applied mathematicians. Calculus of variations, PDEs, and modeling
- Title not available
- Title not available
- Anchor Regression: Heterogeneous Data Meet Causality
- Title not available
- Title not available
- Title not available
- Convergence of a Newton algorithm for semi-discrete optimal transport
- Optimization methods for large-scale machine learning
- Central limit theorems for empirical transportation cost in general dimension
- Empirical optimal transport on countable metric spaces: distributional limits and statistical applications
- Inference for empirical Wasserstein distances on finite spaces
- An algorithm for removing sensitive information: application to race-independent recidivism prediction
- Title not available
- A central limit theorem for Lp transportation cost on the real line with application to fairness assessment in machine learning
Cited In (6)
- A comprehensive and fair comparison of two neural operators (with practical extensions) based on FAIR data
- Central limit theorems for general transportation costs
- An Improved Central Limit Theorem and Fast Convergence Rates for Entropic Transportation Costs
- Nonlinear inverse optimal transport: identifiability of the transport cost from its marginals and optimal values
- Studying bias in visual features through the lens of optimal transport
- Wasserstein-based fairness interpretability framework for machine learning models