Accelerated extra-gradient descent: a novel accelerated first-order method
Publication: 4993286
DOI: 10.4230/LIPICS.ITCS.2018.23
zbMATH Open: 1462.90088
arXiv: 1706.04680
MaRDI QID: Q4993286
FDO: Q4993286
Authors: Jelena Diakonikolas, Lorenzo Orecchia
Publication date: 15 June 2021
Abstract: We provide a novel accelerated first-order method that achieves the asymptotically optimal convergence rate for smooth functions in the first-order oracle model. Until now, Nesterov's Accelerated Gradient Descent (AGD) and variations thereof have been the only methods achieving acceleration in this standard black-box model. In contrast, our algorithm is significantly different from AGD, as it relies on a predictor-corrector approach similar to that used by Mirror-Prox [Nemirovski, 2004] and Extra-Gradient Descent [Korpelevich, 1977] in the solution of convex-concave saddle point problems. For this reason, we dub our algorithm Accelerated Extra-Gradient Descent (AXGD). Its construction is motivated by the discretization of accelerated continuous-time dynamics [Krichene et al., 2015] using classical implicit Euler discretization. Our analysis explicitly shows the effects of discretization through a conceptually novel primal-dual viewpoint. Moreover, we show that the method is quite general: it attains optimal convergence rates for other classes of objectives (e.g., those with generalized smoothness properties, or that are non-smooth and Lipschitz-continuous) using appropriate choices of step lengths. Finally, we present experiments showing that our algorithm matches the performance of Nesterov's method, while appearing more robust to noise in some cases.
Full work available at URL: https://arxiv.org/abs/1706.04680
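For readers unfamiliar with the predictor-corrector pattern the abstract refers to, the Python sketch below illustrates the classical extra-gradient step of Korpelevich applied to smooth convex minimization. This is a minimal illustration of the underlying idea, not the paper's AXGD method (which additionally couples the extra-gradient step with an accelerated primal-dual sequence); the quadratic objective, the step size `eta`, and all function names are assumptions made for the example.

```python
import numpy as np

def extra_gradient(grad, x0, eta, num_iters=100):
    """Classical two-step extra-gradient (predictor-corrector) iteration.

    grad      : callable returning the gradient of a smooth convex f
    x0        : starting point (numpy array)
    eta       : step size, e.g. on the order of 1/L for an L-smooth f
    num_iters : number of predictor-corrector steps

    Note: an illustrative sketch, not the AXGD updates from the paper.
    """
    x = x0.astype(float)
    for _ in range(num_iters):
        x_pred = x - eta * grad(x)    # predictor: tentative gradient step
        x = x - eta * grad(x_pred)    # corrector: re-evaluate gradient at the predictor
    return x

# Example: minimize f(x) = 0.5 * x^T A x - b^T x, whose gradient is A x - b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
grad = lambda x: A @ x - b
x_star = extra_gradient(grad, np.zeros(2), eta=0.2)
print(x_star, np.linalg.solve(A, b))  # the two should roughly agree
```

The corrector step is what distinguishes this family from plain gradient descent: the gradient actually used to move the iterate is evaluated at the predicted point rather than at the current one, which is also the mechanism that makes Mirror-Prox effective for saddle point problems.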
Recommendations
- Convergence rate of \(\mathcal{O}(1/k)\) for optimistic gradient and extragradient methods in smooth convex-concave saddle point problems
- Catalyst acceleration for first-order convex optimization: from theory to practice
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- Optimized first-order methods for smooth convex minimization
- A variational perspective on accelerated methods in optimization
Cites Work
- Smooth minimization of non-smooth functions
- Introductory lectures on convex optimization. A basic course.
- Title not available
- Gradient methods for minimizing composite functions
- Lectures on modern convex optimization. Analysis, algorithms, and engineering applications
- Title not available
- Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- Excessive Gap Technique in Nonsmooth Convex Minimization
- Solving Ordinary Differential Equations I
- First-order methods of smooth convex optimization with inexact oracle
- An optimal method for stochastic composite optimization
- Nearly-linear time algorithms for graph partitioning, graph sparsification, and solving linear systems
- Universal gradient methods for convex optimization problems
- A differential equation for modeling Nesterov's accelerated gradient method: theory and insights
- QIP = PSPACE
- Accelerating the cubic regularization of Newton's method on convex problems
- Using optimization to break the epsilon barrier: a faster and simpler width-independent algorithm for solving positive linear programs in parallel
- A simple, combinatorial algorithm for solving SDD systems in nearly-linear time
- A new approach to computing maximum flows using electrical flows
- Title not available
Cited In (12)
- Potential Function-Based Framework for Minimizing Gradients in Convex and Min-Max Optimization
- Linear coupling: an ultimate unification of gradient and mirror descent
- Computing the best approximation over the intersection of a polyhedral set and the doubly nonnegative cone
- Convergence rate of \(\mathcal{O}(1/k)\) for optimistic gradient and extragradient methods in smooth convex-concave saddle point problems
- Robust accelerated gradient methods for smooth strongly convex functions
- Cyclic Coordinate Dual Averaging with Extrapolation
- Accelerated gradient methods combining Tikhonov regularization with geometric damping driven by the Hessian
- The approximate duality gap technique: a unified theory of first-order methods
- Accelerated gradient methods with absolute and relative noise in the gradient
- Generalized momentum-based methods: a Hamiltonian perspective
- Unified acceleration of high-order algorithms under general Hölder continuity
- No-regret dynamics in the Fenchel game: a unified framework for algorithmic convex optimization