Convergence of iterates for first-order optimization algorithms with inertia and Hessian driven damping
DOI: 10.1080/02331934.2021.2009828
zbMath: 1519.90260
arXiv: 2107.05943
OpenAlex: W3180018585
MaRDI QID: Q6042225
Zaki Chbani, Hedy Attouch, Jalal Fadili, Hassan Riahi
Publication date: 16 May 2023
Published in: Optimization
Full work available at URL: https://arxiv.org/abs/2107.05943
Keywords: time rescaling; convergence of iterates; Nesterov accelerated gradient method; Hessian driven damping; inertial optimization algorithms
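The paper studies first-order schemes derived from inertial dynamics of the form ẍ(t) + (α/t) ẋ(t) + β ∇²f(x(t)) ẋ(t) + ∇f(x(t)) = 0, where the Hessian-driven damping term can be emulated with gradient differences so that only first-order information is needed. Below is a minimal, hypothetical sketch of such a scheme, not code from the paper: the function name `inertial_hessian_damped`, the toy quadratic, and the precise update rule are illustrative assumptions in the spirit of IGAHD-type algorithms from this literature.

```python
# Illustrative sketch (assumed update rule, not the authors' reference code):
# an inertial gradient method where the Hessian-driven damping
# beta * Hess f(x_k) (x_k - x_{k-1}) is approximated by the gradient
# difference grad f(x_k) - grad f(x_{k-1}), keeping the method first-order.
import numpy as np

def inertial_hessian_damped(grad, x0, s=0.01, alpha=3.0, beta=0.1, iters=500):
    """Minimize a smooth convex f given its gradient `grad`.

    Parameter names (s = step size, alpha = vanishing-damping parameter,
    beta = Hessian-damping parameter) follow common usage in this
    literature; the exact scheme is an assumption for illustration.
    """
    x_prev = np.asarray(x0, dtype=float)
    x = x_prev.copy()
    g_prev = grad(x_prev)
    for k in range(1, iters + 1):
        g = grad(x)
        # Nesterov-style extrapolation with vanishing damping alpha/k,
        # corrected by a gradient-difference (Hessian-damping) term.
        y = (x + (1 - alpha / k) * (x - x_prev)
             - beta * np.sqrt(s) * (g - g_prev))
        x_prev, g_prev = x, g
        x = y - s * grad(y)  # gradient step at the extrapolated point
    return x

# Usage on a toy quadratic f(x) = 0.5 * ||A x - b||^2:
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
x_star = inertial_hessian_damped(lambda x: A.T @ (A @ x - b), x0=np.zeros(2))
```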
Related Items (7)
- Fast convergence of inertial dynamics with Hessian-driven damping under geometry assumptions
- Fast continuous dynamics inside the graph of subdifferentials of nonsmooth convex functions
- An ordinary differential equation for modeling Halpern fixed-point algorithm
- Practical perspectives on symplectic accelerated optimization
- Convergence of inertial dynamics driven by sums of potential and nonpotential operators with implicit Newton-like damping
- Accelerated gradient methods combining Tikhonov regularization with geometric damping driven by the Hessian
- On the effect of perturbations in first-order optimization methods with inertia and Hessian driven damping
Cites Work
- Optimized first-order methods for smooth convex minimization
- Fast convex optimization via inertial dynamics with Hessian driven damping
- On the convergence of the iterates of the "fast iterative shrinkage/thresholding algorithm"
- Weak versus strong convergence of a regularized Newton dynamic for maximal monotone operators
- Asymptotic convergence of nonlinear contraction semigroups in Hilbert space
- A second-order gradient-like dissipative dynamical system with Hessian-driven damping. Application to optimization and mechanics.
- Introductory lectures on convex optimization. A basic course.
- Continuous Newton-like inertial dynamics for monotone inclusions
- Understanding the acceleration phenomenon via high-resolution differential equations
- A control-theoretic perspective on optimal high-order optimization
- Fast convex optimization via inertial dynamics combining viscous and Hessian-driven damping with time rescaling
- First-order optimization algorithms via inertial systems with Hessian driven damping
- Tikhonov regularization of a second order dynamical system with Hessian driven damping
- Accelerated proximal point method for maximally monotone operators
- Convergence rate of inertial forward-backward algorithm beyond Nesterov's rule
- Fast convergence of inertial dynamics and algorithms with asymptotic vanishing viscosity
- The Rate of Convergence of Nesterov's Accelerated Forward-Backward Method is Actually Faster Than $1/k^2$
- Convergence Rates of Inertial Forward-Backward Algorithms
- Asymptotic for a second-order evolution equation with convex potential and vanishing damping term
- Rate of convergence of the Nesterov accelerated gradient method in the subcritical case α ≤ 3
- Finite Convergence of Proximal-Gradient Inertial Algorithms Combining Dry Friction with Hessian-Driven Damping
- Newton-like Inertial Dynamics and Proximal Algorithms Governed by Maximally Monotone Operators
- An Inertial Newton Algorithm for Deep Learning
- Fast Proximal Methods via Time Scaling of Damped Inertial Dynamics
- Weak convergence of the sequence of successive approximations for nonexpansive mappings
- Convex analysis and monotone operator theory in Hilbert spaces