A variational perspective on accelerated methods in optimization
MaRDI portal publication Q4646231
Abstract: Accelerated gradient methods play a central role in optimization, achieving optimal rates in many settings. While many generalizations and extensions of Nesterov's original acceleration method have been proposed, it is not yet clear what the natural scope of the acceleration concept is. In this paper, we study accelerated methods from a continuous-time perspective. We show that there is a Lagrangian functional that we call the \emph{Bregman Lagrangian}, which generates a large class of accelerated methods in continuous time, including (but not limited to) accelerated gradient descent, its non-Euclidean extension, and accelerated higher-order gradient methods. We show that the continuous-time limit of all of these methods corresponds to traveling the same curve in spacetime at different speeds. From this perspective, Nesterov's technique and many of its generalizations can be viewed as a systematic way to go from the continuous-time curves generated by the Bregman Lagrangian to a family of discrete-time accelerated algorithms.
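For context, the central object of the paper is restated below in the paper's notation (consult the published article for the precise hypotheses). Given a convex objective \(f\), a convex distance-generating function \(h\) with Bregman divergence \(D_h(y,x) = h(y) - h(x) - \langle \nabla h(x),\, y - x\rangle\), and smooth scaling functions \(\alpha_t, \beta_t, \gamma_t\), the Bregman Lagrangian is
\[
\mathcal{L}(x, v, t) = e^{\alpha_t + \gamma_t}\Big( D_h\big(x + e^{-\alpha_t} v,\; x\big) - e^{\beta_t} f(x) \Big).
\]
Under the ideal scaling conditions \(\dot{\beta}_t \le e^{\alpha_t}\) and \(\dot{\gamma}_t = e^{\alpha_t}\), solutions of the corresponding Euler-Lagrange equation satisfy \(f(x_t) - f(x^\star) = O(e^{-\beta_t})\); speeding up or slowing down time along the same solution curve trades this continuous-time rate against how readily the dynamics can be discretized into a stable algorithm.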
Recommendations
- Multiscale analysis of accelerated gradient methods
- A variational formulation of accelerated optimization on Riemannian manifolds
- Understanding the acceleration phenomenon via high-resolution differential equations
- A differential equation for modeling Nesterov's accelerated gradient method: theory and insights
- Generalized momentum-based methods: a Hamiltonian perspective
Cited in (first 100 citing items shown)
- High-order symplectic Lie group methods on \(SO(n)\) using the polar decomposition
- Finding extremals of Lagrangian actions
- The connections between Lyapunov functions for some optimization algorithms and differential equations
- A primal-dual flow for affine constrained convex optimization
- Novel projection neurodynamic approaches for constrained convex optimization
- On the convergence of gradient-like flows with noisy gradient input
- The approximate duality gap technique: a unified theory of first-order methods
- A Systematic Approach to Lyapunov Analyses of Continuous-Time Models in Convex Optimization
- Is there an analog of Nesterov acceleration for gradient-based MCMC?
- Generalized momentum-based methods: a Hamiltonian perspective
- Unified acceleration of high-order algorithms under general Hölder continuity
- Scientific article, zbMATH DE number 1268598 (no title available)
- Inducing strong convergence of trajectories in dynamical systems associated to monotone inclusions with composite structure
- User-friendly guarantees for the Langevin Monte Carlo with inaccurate gradient
- A Continuous-Time Analysis of Distributed Stochastic Gradient
- Proximal gradient flow and Douglas-Rachford splitting dynamics: global exponential stability via integral quadratic constraints
- Accelerated first-order primal-dual proximal methods for linearly constrained composite convex programming
- Scientific article, zbMATH DE number 7370590 (no title available)
- Bregman Itoh-Abe methods for sparse optimisation
- Accelerated information gradient flow
- Implicit Regularization and Momentum Algorithms in Nonlinearly Parameterized Adaptive Control and Prediction
- A differential equation for modeling Nesterov's accelerated gradient method: theory and insights
- A control-theoretic perspective on optimal high-order optimization
- Stochastic modified equations and dynamics of stochastic gradient algorithms. I: Mathematical foundations
- Preconditioned accelerated gradient descent methods for locally Lipschitz smooth objectives with applications to the solution of nonlinear PDEs
- Accelerated differential inclusion for convex optimization
- How does momentum benefit deep neural networks architecture design? A few case studies
- Stochastic mirror descent dynamics and their convergence in monotone variational inequalities
- Accelerated optimization on Riemannian manifolds via discrete constrained variational integrators
- PDE acceleration: a convergence rate analysis and applications to obstacle problems
- Selection dynamics for deep neural networks
- Exponential convergence of distributed primal-dual convex optimization algorithm without strong convexity
- Deterministic neural networks optimization from a continuous and energy point of view
- Conformal symplectic and relativistic optimization
- A general system of differential equations to model first-order adaptive algorithms
- Accelerating Proximal Markov Chain Monte Carlo by Using an Explicit Stabilized Method
- Accelerated schemes for a class of variational inequalities
- Accelerating optimization by tracing valley
- Convergence rates of inertial primal-dual dynamical methods for separable convex optimization problems
- Accelerations for global optimization covering methods using second derivatives
- Accelerations for a variety of global optimization methods
- Mass-spring-damper networks for distributed optimization in non-Euclidean spaces
- From differential equation solvers to accelerated first-order methods for convex optimization
- Finding geodesics joining given points
- Iterative ensemble Kalman methods: a unified perspective with some new variants
- First-order methods for convex optimization
- Contractivity of Runge-Kutta methods for convex gradient systems
- Fractional differential equation approach for convex optimization with convergence rate analysis
- Discrete processes and their continuous limits
- Inertial primal-dual dynamics with damping and scaling for linearly constrained convex optimization problems
- Triggered gradient tracking for asynchronous distributed optimization
- Projected dynamical systems on irregular, non-Euclidean domains for nonlinear optimization
- Analysis of optimization algorithms via integral quadratic constraints: nonstrongly convex problems
- Adaptive Hamiltonian variational integrators and applications to symplectic accelerated optimization
- Explicit stabilised gradient descent for faster strongly convex optimisation
- Revisiting the ODE method for recursive algorithms: fast convergence using quasi stochastic approximation
- Fast primal-dual algorithm via dynamical system for a linearly constrained convex optimization problem
- Online optimization of switched LTI systems using continuous-time and hybrid accelerated gradient flows
- Robust hybrid zero-order optimization algorithms with acceleration via averaging in time
- On dissipative symplectic integration with applications to gradient-based optimization
- Accelerated variational PDEs for efficient solution of regularized inversion problems
- Stochastic Methods for Composite and Weakly Convex Optimization Problems
- Scientific article, zbMATH DE number 7306890 (no title available)
- An \(O(s^r)\)-resolution ODE framework for understanding discrete-time algorithms and applications to the linear convergence of minimax problems
- A variational formulation of accelerated optimization on Riemannian manifolds
- An optimal high-order tensor method for convex optimization
- Neurodynamic optimization approaches with finite/fixed-time convergence for absolute value equations
- Understanding the acceleration phenomenon via high-resolution differential equations
- Continuous relaxations for the traveling salesman problem
- Scientific article, zbMATH DE number 7370630 (no title available)
- The regularized submodular maximization via the Lyapunov method
- A new dynamical system with self-adaptive dynamical stepsize for pseudomonotone mixed variational inequalities
- Time-adaptive Lagrangian variational integrators for accelerated optimization
- Accelerated Optimization in the PDE Framework Formulations for the Active Contour Case
- Essential convergence rate of ordinary differential equations appearing in optimization
- Online accelerated data-driven learning for optimal feedback control of discrete-time partially uncertain systems
- Hessian barrier algorithms for linearly constrained optimization problems
- An ordinary differential equation for modeling Halpern fixed-point Algorithm
- Practical perspectives on symplectic accelerated optimization
- Near-optimal tensor methods for minimizing the gradient norm of convex functions and accelerated primal–dual tensor methods
- Decentralized concurrent learning with coordinated momentum and restart
- Accelerated Forward-Backward Optimization using Deep Learning
- Fast symplectic integrator for Nesterov-type acceleration method
- Multiscale analysis of accelerated gradient methods
- From Halpern's fixed-point iterations to Nesterov's accelerated interpretations for root-finding problems
- Differential equations in data analysis
- Exploring critical points of energy landscapes: from low-dimensional examples to phase field crystal PDEs
- Accelerated Optimization in the PDE Framework: Formulations for the Manifold of Diffeomorphisms
- Accelerated extra-gradient descent: a novel accelerated first-order method
- Fast gradient method for low-rank matrix estimation
- Conformal mirror descent with logarithmic divergences
- Dynamical Systems Theory and Algorithms for NP-hard Problems
- No-regret dynamics in the Fenchel game: a unified framework for algorithmic convex optimization
- Bregman dynamics, contact transformations and convex optimization
- A Recursively Recurrent Neural Network (R2N2) Architecture for Learning Iterative Algorithms
- Last-iterate convergence of saddle-point optimizers via high-resolution differential equations
- Linear convergence of forward-backward accelerated algorithms without knowledge of the modulus of strong convexity
- Convergence rate bounds for the mirror descent method: IQCs, Popov criterion and Bregman divergence
- No-regret algorithms in on-line learning, games and convex optimization
- A stochastic iteratively regularized Gauss-Newton method