Accelerated First-Order Methods for Convex Optimization with Locally Lipschitz Continuous Gradient
DOI: 10.1137/22M1500496 · zbMATH Open: 1522.90101 · arXiv: 2206.01209 · OpenAlex: W4386291915 · MaRDI QID: Q6046830 · FDO: Q6046830
Authors: author name not available, Zhaosong Lu
Publication date: 6 September 2023
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/2206.01209
Keywords: convex optimization; proximal gradient method; iteration complexity; proximal augmented Lagrangian method; accelerated first-order methods; operation complexity; locally Lipschitz continuous gradient
MSC classification: Convex programming (90C25); Optimality conditions and duality in mathematical programming (90C46); Nonlinear programming (90C30); Numerical methods based on nonlinear programming (49M37)
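For background on the keywords "accelerated first-order methods" and "proximal gradient method", the following is a minimal illustrative sketch of classical FISTA (Beck and Teboulle, cited below) applied to a LASSO instance. It is not the algorithm analyzed in this publication, which treats objectives whose gradient is only locally Lipschitz continuous (so a global stepsize 1/L as used here is generally unavailable and adaptive stepsizes or line searches are needed); the problem data A, b, lam and the constant L are illustrative assumptions.

```python
# Illustrative sketch only: classical FISTA on a LASSO problem
#   minimize  0.5*||A x - b||^2 + lam*||x||_1
# This assumes a globally Lipschitz gradient; it is NOT the method of this paper.
import numpy as np

def fista_lasso(A, b, lam, num_iters=500):
    """Accelerated proximal gradient (FISTA) for the LASSO objective."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth part's gradient
    x = x_prev = np.zeros(A.shape[1])
    y = x.copy()                              # extrapolated point
    t = 1.0                                   # momentum parameter
    for _ in range(num_iters):
        grad = A.T @ (A @ y - b)              # gradient of the smooth term at y
        z = y - grad / L                      # forward (gradient) step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # prox of lam*||.||_1
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0      # Nesterov momentum schedule
        y = x + ((t - 1.0) / t_next) * (x - x_prev)            # extrapolation step
        x_prev, t = x, t_next
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 50))
    x_true = np.zeros(50); x_true[:5] = 1.0
    b = A @ x_true + 0.01 * rng.standard_normal(100)
    x_hat = fista_lasso(A, b, lam=0.1)
    print("recovered support:", np.flatnonzero(np.abs(x_hat) > 1e-3))
```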
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Gradient methods for minimizing composite functions
- Complexity of Variants of Tseng's Modified F-B Splitting and Korpelevich's Methods for Hemivariational Inequalities with Applications to Saddle-point and Convex Optimization Problems
- Monotone Operators and the Proximal Point Algorithm
- Iteration-complexity of first-order penalty methods for convex programming
- Accelerated, Parallel, and Proximal Coordinate Descent
- A Modified Forward-Backward Splitting Method for Maximal Monotone Mappings
- A Forward-Backward Splitting Method for Monotone Inclusions Without Cocoercivity
- Iteration-complexity of first-order augmented Lagrangian methods for convex programming
- Complexity of first-order inexact Lagrangian and penalty methods for conic convex programming
- Adaptive inexact fast augmented Lagrangian methods for constrained convex optimization
- Relatively Smooth Convex Optimization by First-Order Methods, and Applications
- A Descent Lemma Beyond Lipschitz Gradient Continuity: First-Order Methods Revisited and Applications
- An Accelerated Randomized Proximal Coordinate Gradient Method and its Application to Regularized Empirical Risk Minimization
- Nearly optimal first-order methods for convex optimization under gradient norm measure: an adaptive regularization approach
- On the Nonergodic Convergence Rate of an Inexact Augmented Lagrangian Framework for Composite Convex Programming
- Accelerated Bregman proximal gradient methods for relatively smooth convex optimization
- Iteration-Complexity of First-Order Augmented Lagrangian Methods for Convex Conic Programming
Cited In (2)