Optimal convergence rates for Nesterov acceleration


DOI: 10.1137/18M1186757
zbMATH Open: 1453.90117
arXiv: 1805.05719
OpenAlex: W2994888550
Wikidata: Q126559838
Scholia: Q126559838
MaRDI QID: Q5206941
FDO: Q5206941


Authors: Jean-François Aujol, Charles Dossal, Aude Rondepierre


Publication date: 19 December 2019

Published in: SIAM Journal on Optimization

Abstract: In this paper, we study the behavior of solutions of the ODE associated with Nesterov acceleration. It has been well known since the pioneering work of Nesterov that the convergence rate O(1/t²) is optimal for the class of convex functions with Lipschitz-continuous gradient. In this work, we show that better convergence rates can be obtained under additional geometrical conditions, such as the Łojasiewicz property. More precisely, we prove the optimal convergence rates that can be obtained depending on the geometry of the function F to minimize. These rates are new, and they shed new light on the behavior of Nesterov acceleration schemes. In particular, we prove that the classical Nesterov scheme may yield convergence rates worse than those of classical gradient descent on sharp functions: for instance, the convergence rate for strongly convex functions is not geometric for the classical Nesterov scheme (whereas it is for the gradient descent algorithm). This shows that applying classical Nesterov acceleration to convex functions without taking the geometric properties of the objective function into account may lead to sub-optimal algorithms.
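
The last claim lends itself to a quick numerical illustration. The ODE referred to in the abstract is commonly written as ẍ(t) + (α/t)ẋ(t) + ∇F(x(t)) = 0, with α = 3 corresponding to the classical Nesterov scheme. The following minimal Python sketch (not code from the paper; the constants μ = 0.05, L = 1, the step size s = 1/L, and the 400-iteration budget are illustrative choices) runs plain gradient descent alongside the classical Nesterov scheme, y_k = x_k + ((k−1)/(k+2))(x_k − x_{k−1}), x_{k+1} = y_k − s∇F(y_k), on the strongly convex quadratic F(x) = (μ/2)x²:

    # Illustrative comparison (assumed setup, not from the paper):
    # F(x) = (mu/2) x^2 is mu-strongly convex; its gradient is mu-Lipschitz
    # (mu <= L), so the conservative step size s = 1/L is valid.
    mu, L = 0.05, 1.0                  # illustrative constants
    s = 1.0 / L                        # standard step size 1/L

    def F(x):
        return 0.5 * mu * x * x

    def grad_F(x):
        return mu * x

    x_gd = 1.0                         # gradient descent iterate
    x_prev = x_cur = 1.0               # Nesterov iterates x_{k-1}, x_k
    best_nest = F(x_cur)               # F(x_k) oscillates under Nesterov; track its minimum

    for k in range(1, 401):
        x_gd -= s * grad_F(x_gd)                            # plain gradient step
        y = x_cur + (k - 1) / (k + 2) * (x_cur - x_prev)    # momentum extrapolation
        x_prev, x_cur = x_cur, y - s * grad_F(y)            # gradient step at the extrapolated point
        best_nest = min(best_nest, F(x_cur))
        if k % 100 == 0:
            print(f"k={k:3d}  F_gd={F(x_gd):.2e}  best F_nesterov={best_nest:.2e}")

On this example gradient descent contracts geometrically, with factor 1 − μ/L per step, while the Nesterov iterates oscillate and their best function value decays only polynomially, so gradient descent eventually overtakes the accelerated scheme, in line with the abstract.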


Full work available at URL: https://arxiv.org/abs/1805.05719






Cited in 32 documents





