Linear Convergence of Descent Methods for the Unconstrained Minimization of Restricted Strongly Convex Functions
Publication: 2821800
DOI: 10.1137/140992990
zbMath: 1348.65094
MaRDI QID: Q2821800
Publication date: 23 September 2016
Published in: SIAM Journal on Optimization
Full work available at URL: https://semanticscholar.org/paper/2c8b02a331801460a35cc8a8fe7bf33684f7c656
Keywords: conjugate gradient method; error bound; linear convergence; gradient descent method; Broyden-Fletcher-Goldfarb-Shanno method; restricted strong convexity; quadratic splines
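The keywords point at the paper's central theme: descent methods (gradient descent, conjugate gradient, BFGS) converge linearly on restricted strongly convex objectives, which need not be strongly convex. As a minimal illustration (not taken from the paper), the Python sketch below runs fixed-step gradient descent on a rank-deficient least-squares problem, a standard example of a restricted strongly convex function, and shows the objective gap shrinking by a roughly constant factor per iteration.

```python
# Minimal illustration (not from the paper): fixed-step gradient descent on
# f(x) = 0.5 * ||A x - b||^2 with a rank-deficient A.  f is convex but not
# strongly convex; it is restricted strongly convex relative to its solution
# set, which is enough for linear convergence of the objective gap.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10)) @ rng.standard_normal((10, 30))  # rank <= 10
b = rng.standard_normal(20)

L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of grad f
x_star = np.linalg.lstsq(A, b, rcond=None)[0]  # one minimizer
f_star = 0.5 * np.linalg.norm(A @ x_star - b) ** 2

x = np.zeros(30)
gaps = []
for _ in range(300):
    x -= (A.T @ (A @ x - b)) / L               # step size 1/L
    gaps.append(0.5 * np.linalg.norm(A @ x - b) ** 2 - f_star)

# Successive objective gaps decay by a near-constant ratio, i.e. linearly.
print([round(gaps[k + 1] / gaps[k], 4) for k in range(5)])
```

In this toy case the contraction factor is governed by the smallest nonzero eigenvalue of AᵀA divided by L, in the spirit of the error-bound analyses cited below (e.g., Luo and Tseng's work on feasible descent methods).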
Related Items
- Zeroth-Order Regularized Optimization (ZORO): Approximately Sparse Gradients and Adaptive Sampling
- Adaptively sketched Bregman projection methods for linear systems
- Regularized Kaczmarz Algorithms for Tensor Recovery
- Newton-MR: inexact Newton method with minimum residual sub-problem solver
- On the convergence rate of Fletcher-Reeves nonlinear conjugate gradient methods satisfying strong Wolfe conditions: Application to parameter identification in problems governed by general dynamics
- Linear convergence of the randomized sparse Kaczmarz method
- Extended randomized Kaczmarz method for sparse least squares and impulsive noise problems
- New analysis of linear convergence of gradient-type methods via unifying error bound conditions
Cites Work
- Strongly convex programming for exact matrix completion and robust principal component analysis
- Approximation accuracy, gradient methods, and error bound for structured convex optimization
- Global convergence of a two-parameter family of conjugate gradient methods without line search
- A nonmonotone conjugate gradient algorithm for unconstrained optimization
- On the limited memory BFGS method for large scale optimization
- Efficient generalized conjugate gradient algorithms. I: Theory
- An algorithm for a singly constrained class of quadratic programs subject to upper and lower bounds
- A new algorithm of nonlinear conjugate gradient method with strong convergence
- Fast linearized Bregman iteration for compressive sensing and sparse denoising
- Efficient hybrid conjugate gradient techniques
- Characterization of the subdifferential of some matrix norms
- Error bounds and convergence analysis of feasible descent methods: A general approach
- Error bounds in mathematical programming
- On the characterization of quadratic splines
- Strong conical hull intersection property, bounded linear regularity, Jameson's property \((G)\), and error bounds in convex optimization
- Linearly convergent descent methods for the unconstrained minimization of convex quadratic splines
- A conjugate gradient method for the unconstrained minimization of strictly convex quadratic splines
- Damped techniques for the limited memory BFGS method for large-scale optimization
- Restricted strong convexity and its applications to convergence analysis of gradient-type methods in convex optimization
- Alternative proofs of the convergence properties of the conjugate gradient method
- Nonsmooth analysis of singular values. I: Theory
- Nonsmooth analysis of singular values. II: Applications
- Augmented ℓ₁ and Nuclear-Norm Models with a Globally Linearly Convergent Algorithm
- The Linearized Bregman Method via Split Feasibility Problems: Analysis and Generalizations
- Asynchronous Stochastic Coordinate Descent: Parallelism and Convergence Properties
- Robust principal component analysis?
- Analysis and Generalizations of the Linearized Bregman Method
- Convergence of the linearized Bregman iteration for ℓ₁-norm minimization
- A Singular Value Thresholding Algorithm for Matrix Completion
- Rank-Sparsity Incoherence for Matrix Decomposition
- A Newton Method for Convex Regression, Data Smoothing, and Quadratic Programming with Bounded Constraints
- Sparse and Redundant Representations
- Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization
- Global Convergence of a Class of Quasi-Newton Methods on Convex Problems
- A Tool for the Analysis of Quasi-Newton Methods with Application to Unconstrained Minimization
- Some continuity properties of polyhedral multifunctions
- On the Linear Convergence of Descent Methods for Convex Essentially Smooth Minimization
- Global Convergence Properties of Conjugate Gradient Methods for Optimization
- Convergence conditions for restarted conjugate gradient methods with inaccurate line searches
- Quasi-Newton Methods, Motivation and Theory
- Algorithms for nonlinear constraints that use Lagrangian functions
- Atomic Decomposition by Basis Pursuit
- Numerical Optimization
- Variational Analysis
- Strong Semismoothness of Eigenvalues of Symmetric Matrices and Its Application to Inverse Eigenvalue Problems
- Rate of Convergence of Several Conjugate Gradient Algorithms
- Error Bounds for Piecewise Convex Quadratic Programs and Applications
- Exact Regularization of Polyhedral Norms
- A Nonlinear Conjugate Gradient Method with a Strong Global Convergence Property
- A New Conjugate Gradient Method with Guaranteed Descent and an Efficient Line Search
- Function minimization by conjugate gradients
- The conjugate gradient method in extremal problems
- Linear Convergence of the Conjugate Gradient Method
- Methods of conjugate gradients for solving linear systems