Proximal-Like Incremental Aggregated Gradient Method with Linear Convergence Under Bregman Distance Growth Conditions
DOI: 10.1287/moor.2019.1047 | zbMATH: 1469.90108 | arXiv: 1711.01136 | OpenAlex: W3037678684 | MaRDI QID: Q4991666
No author found.
Publication date: 3 June 2021
Published in: Mathematics of Operations Research
Full work available at URL: https://arxiv.org/abs/1711.01136
Keywords: linear convergence; relative smoothness; incremental aggregated gradient; Bregman distance growth; Lipschitz-like/convexity condition
MSC classification: Numerical mathematical programming methods (65K05); Convex programming (90C25); Abstract computational complexity for mathematical programming problems (90C60)
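Since this record carries only metadata, the following is a minimal illustrative sketch of the ingredients named in the title and keywords: an incremental aggregated gradient table combined with a Bregman proximal-style update. It is not the authors' exact scheme; it assumes quadratic component functions, no nonsmooth term, and the squared Euclidean norm as the Bregman kernel (so the proximal-like step collapses to a plain gradient step). All function names and parameters below are hypothetical.

```python
import numpy as np

def iag_bregman_sketch(A, b, x0, step=1e-2, epochs=50):
    """Sketch of an incremental aggregated gradient (IAG) loop.

    Assumes f(x) = (1/n) * sum_i 0.5 * (a_i^T x - b_i)^2 and a Euclidean
    Bregman kernel h = 0.5 * ||.||^2, so the Bregman proximal step is an
    ordinary gradient step. Illustrative only, not the paper's method.
    """
    n, _ = A.shape
    x = x0.copy()
    # Table of (possibly stale) component gradients, one per f_i.
    grads = np.array([A[i] * (A[i] @ x - b[i]) for i in range(n)])
    agg = grads.sum(axis=0)  # aggregated gradient: sum_i of stored grad f_i
    for _ in range(epochs):
        for i in range(n):
            # Refresh only component i's gradient at the current iterate.
            new_g = A[i] * (A[i] @ x - b[i])
            agg += new_g - grads[i]
            grads[i] = new_g
            # Bregman proximal-like step with h = 0.5||.||^2:
            # x+ = argmin_u <agg/n, u> + (1/step) * D_h(u, x) = x - step*agg/n
            x = x - (step / n) * agg
    return x

# Hypothetical usage on a synthetic least-squares instance.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5))
b = A @ rng.standard_normal(5)
x_hat = iag_bregman_sketch(A, b, np.zeros(5))
```

With a non-Euclidean kernel h, the last step would instead solve a mirror-descent-type subproblem through the gradient of h; the Euclidean choice is used here only to keep the sketch self-contained.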
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Minimizing finite sums with the stochastic average gradient
- The restricted strong convexity revisited: analysis of equivalence to error bound and quadratic growth
- Approximation accuracy, gradient methods, and error bound for structured convex optimization
- Incremental proximal methods for large scale convex optimization
- Ergodic convergence to a zero of the sum of monotone operators in Hilbert space
- A simplified view of first order methods for optimization
- From error bounds to the complexity of first-order descent methods for convex functions
- Mirror descent and nonlinear projected subgradient methods for convex optimization.
- Nonconvex proximal incremental aggregated gradient method with linear convergence
- New analysis of linear convergence of gradient-type methods via unifying error bound conditions
- On linear convergence of non-Euclidean gradient methods without strong convexity and Lipschitz gradient continuity
- Linear convergence of first order methods for non-strongly convex optimization
- Augmented $\ell_1$ and Nuclear-Norm Models with a Globally Linearly Convergent Algorithm
- Convergence Analysis of a Proximal-Like Minimization Algorithm Using Bregman Functions
- Splitting Algorithms for the Sum of Two Nonlinear Operators
- Monotone Operators and the Proximal Point Algorithm
- First Order Methods Beyond Convexity and Lipschitz Gradient Continuity with Applications to Quadratic Inverse Problems
- First-Order Methods in Optimization
- Relatively Smooth Convex Optimization by First-Order Methods, and Applications
- Global Convergence Rate of Proximal Incremental Aggregated Gradient Methods
- Error Bounds, Quadratic Growth, and Linear Convergence of Proximal Methods
- Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning
- On the Convergence Rate of Incremental Aggregated Gradient Algorithms
- Regularization and Variable Selection Via the Elastic Net
- A Convergent Incremental Gradient Method with a Constant Step Size
- A Descent Lemma Beyond Lipschitz Gradient Continuity: First-Order Methods Revisited and Applications