A variational perspective on accelerated methods in optimization
From MaRDI portal
Publication:4646231
DOI: 10.1073/PNAS.1614734113 · zbMATH Open: 1404.90098 · arXiv: 1603.04245 · OpenAlex: W2963430672 · Wikidata: Q37451331 · Scholia: Q37451331 · MaRDI QID: Q4646231 · FDO: Q4646231
Ashia C. Wilson, Andre Wibisono, Michael Jordan
Publication date: 11 January 2019
Published in: Proceedings of the National Academy of Sciences
Abstract: Accelerated gradient methods play a central role in optimization, achieving optimal rates in many settings. While many generalizations and extensions of Nesterov's original acceleration method have been proposed, it is not yet clear what the natural scope of the acceleration concept is. In this paper, we study accelerated methods from a continuous-time perspective. We show that there is a Lagrangian functional, which we call the \emph{Bregman Lagrangian}, that generates a large class of accelerated methods in continuous time, including (but not limited to) accelerated gradient descent, its non-Euclidean extension, and accelerated higher-order gradient methods. We show that the continuous-time limits of all of these methods correspond to traveling the same curve in spacetime at different speeds. From this perspective, Nesterov's technique and many of its generalizations can be viewed as a systematic way to go from the continuous-time curves generated by the Bregman Lagrangian to a family of discrete-time accelerated algorithms.
Full work available at URL: https://arxiv.org/abs/1603.04245
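For reference, a brief sketch of the central construction described in the abstract, following the arXiv preprint linked above (the scaling functions \(\alpha_t, \beta_t, \gamma_t\), the distance-generating function \(h\), and its Bregman divergence \(D_h\) are as used there): the Bregman Lagrangian is
\[
\mathcal{L}(X, V, t) = e^{\alpha_t + \gamma_t}\Big( D_h\big(X + e^{-\alpha_t} V,\, X\big) - e^{\beta_t} f(X) \Big),
\qquad
D_h(y, x) = h(y) - h(x) - \langle \nabla h(x),\, y - x \rangle .
\]
Under the ideal scaling conditions \(\dot\beta_t \le e^{\alpha_t}\) and \(\dot\gamma_t = e^{\alpha_t}\), solutions \(X_t\) of the associated Euler-Lagrange equation satisfy \(f(X_t) - f(x^\ast) \le O(e^{-\beta_t})\); particular choices of the scaling functions recover, in the continuous-time limit, accelerated gradient descent, its non-Euclidean extension, and accelerated higher-order gradient methods.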
Recommendations
- Multiscale analysis of accelerated gradient methods
- A variational formulation of accelerated optimization on Riemannian manifolds
- Understanding the acceleration phenomenon via high-resolution differential equations
- A differential equation for modeling Nesterov's accelerated gradient method: theory and insights
- Generalized momentum-based methods: a Hamiltonian perspective
Cited In (showing first 100 items)
- Online accelerated data-driven learning for optimal feedback control of discrete-time partially uncertain systems
- Dynamical Systems Theory and Algorithms for NP-hard Problems
- Conformal mirror descent with logarithmic divergences
- Linear convergence of forward-backward accelerated algorithms without knowledge of the modulus of strong convexity
- Potential Function-Based Framework for Minimizing Gradients in Convex and Min-Max Optimization
- A new dynamical system with self-adaptive dynamical stepsize for pseudomonotone mixed variational inequalities
- Practical perspectives on symplectic accelerated optimization
- Differential equations in data analysis
- A Recursively Recurrent Neural Network (R2N2) Architecture for Learning Iterative Algorithms
- Hessian Barrier Algorithms for Linearly Constrained Optimization Problems
- Fast gradient method for low-rank matrix estimation
- The regularized submodular maximization via the Lyapunov method
- Accelerated Optimization in the PDE Framework: Formulations for the Manifold of Diffeomorphisms
- Bregman dynamics, contact transformations and convex optimization
- Accelerated Optimization in the PDE Framework Formulations for the Active Contour Case
- From Halpern's fixed-point iterations to Nesterov's accelerated interpretations for root-finding problems
- An ordinary differential equation for modeling Halpern fixed-point Algorithm
- Exploring critical points of energy landscapes: from low-dimensional examples to phase field crystal PDEs
- A second order primal-dual dynamical system for a convex-concave bilinear saddle point problem
- Lagrangian and Hamiltonian dynamics for probabilities on the statistical bundle
- Factor-\(\sqrt{2}\) acceleration of accelerated gradient methods
- Time-adaptive Lagrangian variational integrators for accelerated optimization
- Near-optimal tensor methods for minimizing the gradient norm of convex functions and accelerated primal–dual tensor methods
- Decentralized concurrent learning with coordinated momentum and restart
- Search Direction Correction with Normalized Gradient Makes First-Order Methods Faster
- Essential convergence rate of ordinary differential equations appearing in optimization
- Fast symplectic integrator for Nesterov-type acceleration method
- Convergence rate bounds for the mirror descent method: IQCs, Popov criterion and Bregman divergence
- A stochastic iteratively regularized Gauss-Newton method
- Fast convergence rates and trajectory convergence of a Tikhonov regularized inertial primal-dual dynamical system with time scaling and vanishing damping
- Inertial accelerated augmented Lagrangian algorithms with scaling coefficients to solve exactly and inexactly linearly constrained convex optimization problems
- Perseus: a simple and optimal high-order method for variational inequalities
- Multiscale Analysis of Accelerated Gradient Methods
- No-regret algorithms in on-line learning, games and convex optimization
- No-regret dynamics in the Fenchel game: a unified framework for algorithmic convex optimization
- The Approximate Duality Gap Technique: A Unified Theory of First-Order Methods
- Preconditioned accelerated gradient descent methods for locally Lipschitz smooth objectives with applications to the solution of nonlinear PDEs
- Discrete processes and their continuous limits
- Fast primal-dual algorithm via dynamical system for a linearly constrained convex optimization problem
- Online optimization of switched LTI systems using continuous-time and hybrid accelerated gradient flows
- Triggered gradient tracking for asynchronous distributed optimization
- Explicit stabilised gradient descent for faster strongly convex optimisation
- Inducing strong convergence of trajectories in dynamical systems associated to monotone inclusions with composite structure
- Analysis of Optimization Algorithms via Integral Quadratic Constraints: Nonstrongly Convex Problems
- PDE acceleration: a convergence rate analysis and applications to obstacle problems
- Conformal symplectic and relativistic optimization
- Stochastic Methods for Composite and Weakly Convex Optimization Problems
- Proximal gradient flow and Douglas-Rachford splitting dynamics: global exponential stability via integral quadratic constraints
- First-order methods for convex optimization
- On dissipative symplectic integration with applications to gradient-based optimization
- A primal-dual flow for affine constrained convex optimization
- Novel projection neurodynamic approaches for constrained convex optimization
- Implicit Regularization and Momentum Algorithms in Nonlinearly Parameterized Adaptive Control and Prediction
- A Systematic Approach to Lyapunov Analyses of Continuous-Time Models in Convex Optimization
- Selection dynamics for deep neural networks
- Accelerated variational PDEs for efficient solution of regularized inversion problems
- A Continuous-Time Analysis of Distributed Stochastic Gradient
- Projected Dynamical Systems on Irregular, Non-Euclidean Domains for Nonlinear Optimization
- The Connections Between Lyapunov Functions for Some Optimization Algorithms and Differential Equations
- Convergence Rates of Inertial Primal-Dual Dynamical Methods for Separable Convex Optimization Problems
- Continuous relaxations for the traveling salesman problem
- Accelerated optimization on Riemannian manifolds via discrete constrained variational integrators
- Accelerations for a variety of global optimization methods
- Stochastic mirror descent dynamics and their convergence in monotone variational inequalities
- An \(O(s^r)\)-resolution ODE framework for understanding discrete-time algorithms and applications to the linear convergence of minimax problems
- Unified Acceleration of High-Order Algorithms under General Hölder Continuity
- Neurodynamic optimization approaches with finite/fixed-time convergence for absolute value equations
- Generalized Momentum-Based Methods: A Hamiltonian Perspective
- Adaptive Hamiltonian Variational Integrators and Applications to Symplectic Accelerated Optimization
- Is there an analog of Nesterov acceleration for gradient-based MCMC?
- Accelerated differential inclusion for convex optimization
- Mass-spring-damper networks for distributed optimization in non-Euclidean spaces
- Fractional differential equation approach for convex optimization with convergence rate analysis
- Contractivity of Runge--Kutta Methods for Convex Gradient Systems
- User-friendly guarantees for the Langevin Monte Carlo with inaccurate gradient
- An Optimal High-Order Tensor Method for Convex Optimization
- Accelerated information gradient flow
- A control-theoretic perspective on optimal high-order optimization
- A Variational Formulation of Accelerated Optimization on Riemannian Manifolds
- Iterative ensemble Kalman methods: a unified perspective with some new variants
- Accelerated schemes for a class of variational inequalities
- Accelerating optimization by tracing valley
- From differential equation solvers to accelerated first-order methods for convex optimization
- Revisiting the ODE method for recursive algorithms: fast convergence using quasi stochastic approximation
- Understanding the acceleration phenomenon via high-resolution differential equations
- On the Convergence of Gradient-Like Flows with Noisy Gradient Input
- Robust hybrid zero-order optimization algorithms with acceleration via averaging in time
- How does momentum benefit deep neural networks architecture design? A few case studies
- Exponential convergence of distributed primal-dual convex optimization algorithm without strong convexity
- Accelerating Proximal Markov Chain Monte Carlo by Using an Explicit Stabilized Method
- Accelerations for global optimization covering methods using second derivatives
- Inertial primal-dual dynamics with damping and scaling for linearly constrained convex optimization problems
- Bregman Itoh-Abe methods for sparse optimisation
This page was built for publication: A variational perspective on accelerated methods in optimization