Learning models with uniform performance via distributionally robust optimization

Publication: 820804

DOI: 10.1214/20-AOS2004
zbMATH Open: 1473.62019
arXiv: 1810.08750
OpenAlex: W3188960136
MaRDI QID: Q820804
FDO: Q820804

John C. Duchi, Hongseok Namkoong

Publication date: 28 September 2021

Published in: The Annals of Statistics

Abstract: A common goal in statistics and machine learning is to learn models that can perform well against distributional shifts, such as latent heterogeneous subpopulations, unknown covariate shifts, or unmodeled temporal effects. We develop and analyze a distributionally robust stochastic optimization (DRO) framework that learns a model providing good performance against perturbations to the data-generating distribution. We give a convex formulation for the problem, providing several convergence guarantees. We prove finite-sample minimax upper and lower bounds, showing that distributional robustness sometimes comes at a cost in convergence rates. We give limit theorems for the learned parameters, where we fully specify the limiting distribution so that confidence intervals can be computed. On real tasks including generalizing to unknown subpopulations, fine-grained recognition, and providing good tail performance, the distributionally robust approach often exhibits improved performance.


Full work available at URL: https://arxiv.org/abs/1810.08750
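For context on the convex formulation the abstract refers to: for the χ²-type member f(t) = ½(t − 1)² of the Cressie-Read divergence family studied in the paper, the worst-case risk over an f-divergence ball of radius ρ around the empirical distribution reduces to a one-dimensional convex dual problem. The sketch below is a minimal illustration of that dual, not the authors' code; the helper name chi2_dro_loss is hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def chi2_dro_loss(losses, rho):
    """Worst-case expected loss over the ball D_f(P || P_n) <= rho,
    with f(t) = (t - 1)^2 / 2, computed via the scalar dual
        inf_eta  sqrt(1 + 2*rho) * sqrt(E_n[(loss - eta)_+^2]) + eta
    (the k = 2 case of the Cressie-Read duality used in the paper)."""
    losses = np.asarray(losses, dtype=float)

    def dual(eta):
        excess = np.maximum(losses - eta, 0.0)
        return np.sqrt(1.0 + 2.0 * rho) * np.sqrt(np.mean(excess ** 2)) + eta

    # The dual objective is convex and coercive in eta, so a bounded
    # scalar search over a generous interval finds the minimum.
    lo = losses.min() - 10.0 * (np.ptp(losses) + 1.0)
    res = minimize_scalar(dual, bounds=(lo, losses.max() + 1.0),
                          method="bounded")
    return res.fun

# A heavy upper tail pushes the robust risk well above the empirical mean.
sample = np.array([0.10, 0.20, 0.15, 0.30, 2.50])
print(np.mean(sample))             # empirical risk: 0.65
print(chi2_dro_loss(sample, 1.0))  # robust risk at rho = 1: roughly 1.96
```

In a learning loop one would minimize this robust risk over model parameters instead of the empirical mean, which is what yields the uniform performance across subpopulations that the abstract describes.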




Cited In (18)
