Bounds for the tracking error of first-order online optimization methods

DOI: 10.1007/S10957-021-01836-9
zbMATH Open: 1470.90082
arXiv: 2003.02400
OpenAlex: W3136112531
MaRDI QID: Q2032000

Liam Madden, Stephen Becker, Emiliano Dall'Anese

Publication date: 15 June 2021

Published in: Journal of Optimization Theory and Applications

Abstract: This paper investigates online algorithms for smooth time-varying optimization problems, focusing first on methods with constant step-size, momentum, and extrapolation-length. Assuming strong convexity, precise results for the tracking iterate error (the limit supremum of the norm of the difference between the optimal solution and the iterates) of online gradient descent are derived. The paper then considers a general first-order framework, in which a universal lower bound on the tracking iterate error is established. Furthermore, a method using "long-steps" is proposed and shown to achieve the lower bound up to a fixed constant. This method is then compared with online gradient descent on specific examples. Finally, the paper analyzes the effect of regularization when the cost is not strongly convex. With regularization, it is possible to achieve a no-regret bound. The paper ends by testing the accelerated and regularized methods on synthetic time-varying least-squares and logistic regression problems, respectively.
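For concreteness, the tracking iterate error described in words above can be written out as follows; the notation (iterates x_k, time-varying costs f_k with minimizers x_k^*, step size eta) is assumed here for illustration and may differ from the paper's:

```latex
% Tracking iterate error: the limit supremum of the distance between
% the iterate x_k and the current minimizer x_k^* (notation assumed).
\[
  \limsup_{k \to \infty} \, \| x_k - x_k^* \|
\]
% Online gradient descent with a constant step size \eta, in its
% standard form:
\[
  x_{k+1} = x_k - \eta \, \nabla f_k(x_k)
\]
```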


Full work available at URL: https://arxiv.org/abs/2003.02400
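
As a rough illustration of the setting in the abstract's experiments, the sketch below runs constant-step online gradient descent on a synthetic time-varying least-squares cost f_k(x) = 0.5 * ||A x - b_k||^2 and records ||x_k - x_k^*||. The drift model, dimensions, and all names are hypothetical, not taken from the paper:

```python
import numpy as np

# Minimal sketch (assumptions, not the paper's setup): online gradient
# descent with a constant step size tracking the minimizer of a
# time-varying least-squares cost f_k(x) = 0.5 * ||A x - b_k||^2,
# where the target b_k drifts slowly over time.
rng = np.random.default_rng(0)
n, d = 20, 5
A = rng.standard_normal((n, d))           # fixed, full column rank a.s.

eigs = np.linalg.eigvalsh(A.T @ A)
L, mu = eigs.max(), eigs.min()            # smoothness / strong convexity
eta = 2.0 / (L + mu)                      # a standard constant step size

x = np.zeros(d)
errors = []
for k in range(200):
    b_k = np.sin(0.05 * k) * np.ones(n)              # slowly drifting data
    x = x - eta * (A.T @ (A @ x - b_k))              # one online GD step
    x_star, *_ = np.linalg.lstsq(A, b_k, rcond=None) # current minimizer
    errors.append(np.linalg.norm(x - x_star))

# After a transient, the error settles near a floor set by how fast the
# minimizer drifts -- the tracking iterate error the abstract refers to.
print(f"error after 200 steps: {errors[-1]:.4f}")
print(f"max over last 50 steps: {max(errors[-50:]):.4f}")
```

After an initial transient, ||x_k - x_k^*|| stops decaying and hovers near a floor determined by the step size and the drift of the minimizer; under strong convexity, bounds of the kind derived in the paper quantify that floor.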





