Asymptotic optimality in stochastic optimization

From MaRDI portal
Publication:2656586

DOI: 10.1214/19-AOS1831
zbMATH Open: 1461.62146
arXiv: 1612.05612
OpenAlex: W3128015591
MaRDI QID: Q2656586


Authors: Feng Ruan, John C. Duchi


Publication date: 11 March 2021

Published in: The Annals of Statistics

Abstract: We study local complexity measures for stochastic convex optimization problems, providing a local minimax theory analogous to that of Hájek and Le Cam for classical statistical problems. We give complementary optimality results, developing fully online methods that adaptively achieve optimal convergence guarantees. Our results provide function-specific lower bounds and convergence results that make precise a correspondence between statistical difficulty and the geometric notion of tilt-stability from optimization. As part of this development, we show how variants of Nesterov's dual averaging, a stochastic gradient-based procedure, guarantee finite time identification of constraints in optimization problems, while stochastic gradient procedures fail. Additionally, we highlight a gap between problems with linear and nonlinear constraints: standard stochastic-gradient-based procedures are suboptimal even for the simplest nonlinear constraints, necessitating the development of asymptotically optimal Riemannian stochastic gradient methods.
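To make the constraint-identification contrast in the abstract concrete, the sketch below compares projected stochastic gradient descent with a dual-averaging iteration on a toy nonnegatively constrained quadratic problem. This is a minimal, hypothetical illustration, not the paper's experiments or exact algorithms: the objective, the step-size rule eta / sqrt(k), and the names mu, z, and window are all assumptions chosen for the demo.

```python
import numpy as np

# Minimal sketch (hypothetical problem, not from the paper): minimize
#   f(x) = E[ 0.5 * ||x - xi||^2 ],  xi ~ N(mu, I),  subject to x >= 0.
# The solution is x* = max(mu, 0), so coordinates with mu_j < 0 lie
# exactly on the constraint boundary.

rng = np.random.default_rng(0)
mu = np.array([1.0, -0.5, -1.0])        # optimum is [1.0, 0.0, 0.0]
n_steps, eta, window = 2000, 0.5, 500

x_sgd = np.ones_like(mu)                # projected stochastic gradient iterate
x_da = np.ones_like(mu)                 # dual averaging iterate
z = np.zeros_like(mu)                   # running sum of stochastic gradients
on_boundary_sgd = np.zeros_like(mu)     # counts of iterates exactly equal to 0
on_boundary_da = np.zeros_like(mu)

for k in range(1, n_steps + 1):
    xi = mu + rng.standard_normal(mu.shape)
    step = eta / np.sqrt(k)

    # Projected SGD: one noisy gradient step, then project onto {x >= 0}.
    x_sgd = np.maximum(0.0, x_sgd - step * (x_sgd - xi))

    # Dual averaging with squared-Euclidean prox over {x >= 0}:
    #   x_{k+1} = argmin_{x >= 0} <z_k, x> + (sqrt(k) / (2 * eta)) * ||x||^2
    #           = max(0, -eta * z_k / sqrt(k)).
    z += x_da - xi
    x_da = np.maximum(0.0, -eta * z / np.sqrt(k))

    if k > n_steps - window:
        on_boundary_sgd += (x_sgd == 0.0)
        on_boundary_da += (x_da == 0.0)

print("final projected-SGD iterate: ", x_sgd)
print("final dual-averaging iterate:", x_da)
print("fraction of last %d steps exactly on the boundary:" % window)
print("  projected SGD: ", on_boundary_sgd / window)   # < 1 on constrained coords
print("  dual averaging:", on_boundary_da / window)    # = 1 on constrained coords
```

The qualitative point the sketch displays: dual averaging decides whether a coordinate sits on the boundary from the running gradient sum, whose O(k) drift eventually dominates its O(sqrt(k)) noise, so the active coordinates lock onto exactly zero after finitely many iterations; projected SGD makes that decision from a single noisy gradient each step and keeps getting bumped off the boundary.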


Full work available at URL: https://arxiv.org/abs/1612.05612




Cited in 20 documents.


This page was built for publication: Asymptotic optimality in stochastic optimization
