Variational analysis perspective on linear convergence of some first order methods for nonsmooth convex optimization problems
DOI: 10.1007/s11228-021-00591-3 · zbMath: 1484.90082 · OpenAlex: W3174661081 · MaRDI QID: Q2070400
Jin Zhang, Shangzhi Zeng, Jane J. Ye, Xiao-Ming Yuan
Publication date: 24 January 2022
Published in: Set-Valued and Variational Analysis
Full work available at URL: https://doi.org/10.1007/s11228-021-00591-3
Keywords: linear convergence; calmness; machine learning; variational analysis; statistics; metric subregularity; proximal gradient method; proximal alternating linearized minimization; randomized block coordinate proximal gradient method
MSC: Convex programming (90C25); Nonsmooth analysis (49J52); Set-valued and variational analysis (49J53); Methods of reduced gradient type (90C52)
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- On the rate of convergence of the proximal alternating linearized minimization algorithm for convex problems
- Proximal alternating linearized minimization for nonconvex and nonsmooth problems
- On the linear convergence of a proximal gradient method for a class of nonsmooth convex minimization problems
- On directional metric regularity, subregularity and optimality conditions for nonsmooth mathematical programs
- Iteration complexity analysis of block coordinate descent methods
- Approximation accuracy, gradient methods, and error bound for structured convex optimization
- Enhancing sparsity by reweighted \(\ell_1\) minimization
- Fast global convergence of gradient methods for high-dimensional statistical recovery
- A coordinate gradient descent method for nonsmooth separable minimization
- Ergodic convergence to a zero of the sum of monotone operators in Hilbert space
- Error bounds and convergence analysis of feasible descent methods: A general approach
- Introductory lectures on convex optimization. A basic course.
- From error bounds to the complexity of first-order descent methods for convex functions
- A unified approach to error bounds for structured convex optimization problems
- Calculus of the exponent of Kurdyka-Łojasiewicz inequality and its applications to linear convergence of first-order methods
- Verifiable sufficient conditions for the error bound property of second-order cone complementarity problems
- Regularity and conditioning of solution mappings in variational analysis
- New characterizations of Hoffman constants for systems of linear constraints
- New analysis of linear convergence of gradient-type methods via unifying error bound conditions
- Adaptive restart for accelerated gradient schemes
- Random block coordinate descent methods for linearly constrained optimization over networks
- Linear convergence of first order methods for non-strongly convex optimization
- Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function
- Calmness of constraint systems with applications
- Optimization in High Dimensions via Accelerated, Parallel, and Proximal Coordinate Descent
- A Proximal-Gradient Homotopy Method for the Sparse Least-Squares Problem
- Optimization with Sparsity-Inducing Penalties
- Efficiency of Coordinate Descent Methods on Huge-Scale Optimization Problems
- Several Classes of Stationary Points for Rank Regularized Minimization Problems
- Lipschitz Behavior of Solutions to Convex Minimization Problems
- Parallel Random Coordinate Descent Method for Composite Minimization: Convergence Analysis and Error Bounds
- Some continuity properties of polyhedral multifunctions
- On the Linear Convergence of Descent Methods for Convex Essentially Smooth Minimization
- Stability Theory for Systems of Inequalities. Part I: Linear Systems
- Variational Analysis
- Necessary Optimality Conditions for Optimization Problems with Variational Inequality Constraints
- Global Error Bounds for Convex Conic Problems
- First-Order Methods in Optimization
- Sparsity and Smoothness Via the Fused Lasso
- On the Calmness of a Class of Multifunctions
- Constrained Minima and Lipschitzian Penalties in Metric Spaces
- Finite-Dimensional Variational Inequalities and Complementarity Problems
- Approximations to Solutions to Systems of Linear Inequalities
- Error bounds for solutions of linear equations and inequalities
- Metric Subregularity of Piecewise Linear Multifunctions and Applications to Piecewise Linear Multiobjective Optimization
- Error Bounds, Quadratic Growth, and Linear Convergence of Proximal Methods
- Quantitative Convergence Analysis of Iterated Expansive, Set-Valued Mappings
- Mathematical Programs with Geometric Constraints in Banach Spaces: Enhanced Optimality, Exact Penalty, and Sensitivity
- Metric subregularity of the convex subdifferential in Banach spaces
- Model Selection and Estimation in Regression with Grouped Variables
- Simultaneous Regression Shrinkage, Variable Selection, and Supervised Clustering of Predictors with OSCAR
- Convex Analysis
- New Constraint Qualifications for Mathematical Programs with Equilibrium Constraints via Variational Analysis