Bounds for the tracking error of first-order online optimization methods
DOI: 10.1007/S10957-021-01836-9
zbMATH Open: 1470.90082
arXiv: 2003.02400
OpenAlex: W3136112531
MaRDI QID: Q2032000
FDO: Q2032000
Stephen Becker, Emiliano Dall'Anese, Liam Madden
Publication date: 15 June 2021
Published in: Journal of Optimization Theory and Applications
Abstract: This paper investigates online algorithms for smooth time-varying optimization problems, focusing first on methods with constant step-size, momentum, and extrapolation-length. Assuming strong convexity, precise results for the tracking iterate error (the limit supremum of the norm of the difference between the optimal solution and the iterates) for online gradient descent are derived. The paper then considers a general first-order framework, where a universal lower bound on the tracking iterate error is established. Furthermore, a method using "long-steps" is proposed and shown to achieve the lower bound up to a fixed constant. This method is then compared with online gradient descent for specific examples. Finally, the paper analyzes the effect of regularization when the cost is not strongly convex. With regularization, it is possible to achieve a non-regret bound. The paper ends by testing the accelerated and regularized methods on synthetic time-varying least-squares and logistic regression problems, respectively.
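For orientation, the quantities the abstract refers to can be written out explicitly; the notation below is generic and not necessarily the paper's.

```latex
% Generic notation (not necessarily the paper's): online gradient descent with
% constant step size \eta on a time-varying cost f_k, and the tracking iterate
% error described in the abstract.
\[
  x_{k+1} = x_k - \eta\, \nabla f_k(x_k),
  \qquad
  x_k^{\star} \in \operatorname*{arg\,min}_x f_k(x),
\]
\[
  \text{tracking iterate error} \;=\; \limsup_{k \to \infty} \,\| x_k - x_k^{\star} \|.
\]
```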
Full work available at URL: https://arxiv.org/abs/2003.02400
Keywords: online optimization; Tikhonov regularization; smooth convex optimization; convergence bound; Nesterov acceleration
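A minimal sketch of the setting the abstract describes, assuming a synthetic time-varying least-squares cost with a drifting target (the matrix, step size, and drift schedule below are illustrative, not the paper's experimental setup):

```python
import numpy as np

# Minimal sketch (not the paper's experiments): online gradient descent with a
# constant step size tracking the minimizer of a time-varying least-squares
# cost f_k(x) = 0.5 * ||A x - b_k||^2, where only the target b_k drifts.
rng = np.random.default_rng(0)
n, d = 20, 5
A = rng.standard_normal((n, d))          # fixed design; full column rank a.s.

L = np.linalg.norm(A.T @ A, 2)           # smoothness constant of every f_k
eta = 1.0 / L                            # constant step size

x = np.zeros(d)
tracking_errors = []
for k in range(200):
    b_k = np.sin(0.05 * k) * np.ones(n)                 # slowly drifting target
    x_star = np.linalg.lstsq(A, b_k, rcond=None)[0]     # instantaneous minimizer
    grad = A.T @ (A @ x - b_k)                          # gradient of f_k at x
    x = x - eta * grad                                  # online gradient step
    tracking_errors.append(np.linalg.norm(x - x_star))

# The paper's tracking iterate error is the limit superior of ||x_k - x_k^*||;
# here we simply report the largest error over the tail of the run.
print("approximate asymptotic tracking error:", max(tracking_errors[-50:]))
```

The constant 1/L step size is a common default for smooth costs; the paper itself analyzes how the choice of step size, momentum, and extrapolation length affects the asymptotic tracking error.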
Cites Work
- Introductory lectures on convex optimization. A basic course.
- Adaptive restart for accelerated gradient schemes
- Analysis and Design of Optimization Algorithms via Integral Quadratic Constraints
- First-Order Methods in Optimization
- Linear Coupling: An Ultimate Unification of Gradient and Mirror Descent
- Understanding Machine Learning
- Accelerated and inexact forward-backward algorithms
- Multiuser Optimization: Distributed Algorithms and Error Analysis
- First-order methods of smooth convex optimization with inexact oracle
- Augmented Lagrangians and Applications of the Proximal Point Algorithm in Convex Programming
- Distributed Maximum Likelihood Sensor Network Localization
- Approximation accuracy, gradient methods, and error bound for structured convex optimization
- Online Learning and Online Convex Optimization
- Gradient Convergence in Gradient Methods with Errors
- Logarithmic Regret Algorithms for Online Convex Optimization
- Convex analysis and monotone operator theory in Hilbert spaces
- Performance of first-order methods for smooth convex minimization: a novel approach
- Optimized first-order methods for smooth convex minimization
- Smooth strongly convex interpolation and exact worst-case performance of first-order methods
- Some methods of speeding up the convergence of iteration methods
- Convex optimization: algorithms and complexity
- Gradient methods for nonstationary unconstrained optimization problems
- On lower and upper bounds in smooth and strongly convex optimization
- Lectures on convex optimization
- Convergence Analysis of Saddle-Point Problems in Time-Varying Wireless Systems—Control Theoretical Approach
- Online Learning With Inexact Proximal Online Gradient Descent Algorithms
- Prediction-Correction Algorithms for Time-Varying Constrained Optimization
- A primer on monotone operator methods
- Stability of Over-Relaxations for the Forward-Backward Algorithm, Application to FISTA
- Non-stationary stochastic optimization
- Online Primal-Dual Methods With Measurement Feedback for Time-Varying Convex Optimization
- Robust Accelerated Gradient Methods for Smooth Strongly Convex Functions