Factor-\(\sqrt{2}\) acceleration of accelerated gradient methods
Publication: 6073850
DOI: 10.1007/s00245-023-10047-9
arXiv: 2102.07366
OpenAlex: W3130654565
MaRDI QID: Q6073850
Chan-Woo Park, Ernest K. Ryu, Jisun Park
Publication date: 18 September 2023
Published in: Applied Mathematics and Optimization
Full work available at URL: https://arxiv.org/abs/2102.07366
Related Items (1)
Cites Work
- Primal-dual subgradient methods for convex problems
- Smooth minimization of non-smooth functions
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- Optimized first-order methods for smooth convex minimization
- Smooth strongly convex interpolation and exact worst-case performance of first-order methods
- The exact information-based complexity of smooth convex minimization
- On the convergence analysis of the optimized gradient method
- On the convergence rate of the Halpern-iteration
- Accelerating the cubic regularization of Newton's method on convex problems
- On optimality of Krylov's information when solving linear operator equations
- Information-based complexity of linear operator equations
- Introductory lectures on convex optimization. A basic course.
- Adaptive restart of the optimized gradient method for convex optimization
- Mirror descent and nonlinear projected subgradient methods for convex optimization.
- On the oracle complexity of smooth strongly convex minimization
- Efficient first-order methods for convex minimization: a constructive approach
- Accelerated proximal point method for maximally monotone operators
- Performance of first-order methods for smooth convex minimization: a novel approach
- A Differential Equation for Modeling Nesterov's Accelerated Gradient Method: Theory and Insights
- Efficiency of Coordinate Descent Methods on Huge-Scale Optimization Problems
- Efficiency of the Accelerated Coordinate Descent Method on Structured Optimization Problems
- Tight Sublinear Convergence Rate of the Proximal Point Algorithm for Maximal Monotone Inclusion Problems
- Analysis and Design of Optimization Algorithms via Integral Quadratic Constraints
- Generalizing the Optimized Gradient Method for Smooth Convex Minimization
- Using Optimization to Obtain a Width-Independent, Parallel, Simpler, and Faster Positive SDP Solver
- Another Look at the Fast Iterative Shrinkage/Thresholding Algorithm (FISTA)
- Relatively Smooth Convex Optimization by First-Order Methods, and Applications
- Linear Coupling: An Ultimate Unification of Gradient and Mirror Descent
- A variational perspective on accelerated methods in optimization
- Katyusha: the first direct acceleration of stochastic gradient methods
- Convergence Rates of the Heavy Ball Method for Quasi-strongly Convex Optimization
- Large-Scale Convex Optimization
- Worst-Case Convergence Analysis of Inexact Gradient and Newton Methods Through Semidefinite Programming Performance Estimation
- Operator Splitting Performance Estimation: Tight Contraction Factors and Optimal Parameter Selection
- Optimal Convergence Rates for Nesterov Acceleration
- Exact Worst-Case Performance of First-Order Methods for Composite Convex Optimization
- Interior Gradient and Proximal Methods for Convex and Conic Optimization
- Convex Analysis
- A Descent Lemma Beyond Lipschitz Gradient Continuity: First-Order Methods Revisited and Applications
- An optimal gradient method for smooth strongly convex minimization