A unified approach to error bounds for structured convex optimization problems
DOI: 10.1007/s10107-016-1100-9
zbMath: 1380.65109
arXiv: 1512.03518
OpenAlex: W2963254198
MaRDI QID: Q1675267
Anthony Man-Cho So, Zirui Zhou
Publication date: 27 October 2017
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://arxiv.org/abs/1512.03518
Related Items
- A Unified Analysis of Descent Sequences in Weakly Convex Optimization, Including Convergence Rates for Bundle Methods
- Interior quasi-subgradient method with non-Euclidean distances for constrained quasi-convex optimization problems in Hilbert spaces
- Augmented Lagrangian methods for convex matrix optimization problems
- Kurdyka-Łojasiewicz exponent via inf-projection
- A First-Order Optimization Algorithm for Statistical Learning with Hierarchical Sparsity Structure
- Quadratic growth conditions and uniqueness of optimal solution to Lasso
- Linear convergence of first order methods for non-strongly convex optimization
- A block symmetric Gauss-Seidel decomposition theorem for convex composite quadratic programming and its applications
- Quadratic Growth Conditions for Convex Matrix Optimization Problems Associated with Spectral Functions
- Linear convergence of Frank-Wolfe for rank-one matrix recovery without strong convexity
- Linearly-convergent FISTA variant for composite optimization with duality
- A unified approach to synchronization problems over subgroups of the orthogonal group
- Characterization of the Robust Isolated Calmness for a Class of Conic Programming Problems
- An extended Ulm-like method for inverse singular value problems with multiple and/or zero singular values
- Linear Convergence of a Proximal Alternating Minimization Method with Extrapolation for \(\boldsymbol{\ell_1}\)-Norm Principal Component Analysis
- Convergence of the forward-backward algorithm: beyond the worst-case with the help of geometry
- A strict complementarity approach to error bound and sensitivity of solution of conic programs
- On the Estimation Performance and Convergence Rate of the Generalized Power Method for Phase Synchronization
- Retraction-based first-order feasible methods for difference-of-convex programs with smooth inequality and simple geometric constraints
- A Highly Efficient Semismooth Newton Augmented Lagrangian Method for Solving Lasso Problems
- Error bound and isocost imply linear convergence of DCA-based algorithms to D-stationarity
- An efficient augmented Lagrangian method with semismooth Newton solver for total generalized variation
- Extended Newton-type method for inverse singular value problems with multiple and/or zero singular values
- An inexact Riemannian proximal gradient method
- On the Quadratic Convergence of the Cubic Regularization Method under a Local Error Bound Condition
- A family of inexact SQA methods for non-smooth convex minimization with provable convergence guarantees based on the Luo-Tseng error bound property
- On the Simplicity and Conditioning of Low Rank Semidefinite Programs
- RSG: Beating Subgradient Method without Smoothness and Strong Convexity
- Double fused Lasso penalized LAD for matrix regression
- New characterizations of Hoffman constants for systems of linear constraints
- Kurdyka-Łojasiewicz property of zero-norm composite functions
- Calculus of the exponent of Kurdyka-Łojasiewicz inequality and its applications to linear convergence of first-order methods
- Error bounds for non-polyhedral convex optimization and applications to linear convergence of FDM and PGM
- On the linear convergence of forward-backward splitting method. I: Convergence analysis
- Double fused Lasso regularized regression with both matrix and vector valued predictors
- New analysis of linear convergence of gradient-type methods via unifying error bound conditions
- Faster subgradient methods for functions with Hölderian growth
- Spectral Operators of Matrices: Semismoothness and Characterizations of the Generalized Jacobian
- Preconditioned proximal point methods and notions of partial subregularity
- Metric subregularity and/or calmness of the normal cone mapping to the \(p\)-order conic constraint system
- Novel Reformulations and Efficient Algorithms for the Generalized Trust Region Subproblem
- Variational analysis perspective on linear convergence of some first order methods for nonsmooth convex optimization problems
- Convergence rates of forward-Douglas-Rachford splitting method
- Accelerate stochastic subgradient method by leveraging local growth condition
- Convergence rates of subgradient methods for quasi-convex optimization problems
- Quadratic optimization with orthogonality constraint: explicit Łojasiewicz exponent and linear convergence of retraction-based line-search and stochastic variance-reduced gradient methods
- On the R-superlinear convergence of the KKT residuals generated by the augmented Lagrangian method for convex composite conic programming
- On Degenerate Doubly Nonnegative Projection Problems
- Hölderian Error Bounds and Kurdyka-Łojasiewicz Inequality for the Trust Region Subproblem
- Linear convergence of prox-SVRG method for separable non-smooth convex optimization problems under bounded metric subregularity
- Perturbation techniques for convergence analysis of proximal gradient method and other first-order algorithms via variational analysis
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Gradient methods for minimizing composite functions
- On the linear convergence of a proximal gradient method for a class of nonsmooth convex minimization problems
- Fixed point and Bregman iterative methods for matrix rank minimization
- Approximation accuracy, gradient methods, and error bound for structured convex optimization
- Component selection and smoothing in multivariate nonparametric regression
- A coordinate gradient descent method for nonsmooth separable minimization
- Convex multi-task feature learning
- Characterization of the subdifferential of some matrix norms
- Error bounds and convergence analysis of feasible descent methods: A general approach
- Descent methods for convex essentially smooth minimization
- Local behavior of an iterative framework for generalized equations with nonisolated solutions
- Introductory lectures on convex optimization. A basic course.
- Strong conical hull intersection property, bounded linear regularity, Jameson's property \((G)\), and error bounds in convex optimization
- Optimal Hoffman-type estimates in eigenvalue and semidefinite inequality constraints
- Coordinate descent algorithms
- An introduction to a class of matrix cone programming
- A Proximal-Gradient Homotopy Method for the Sparse Least-Squares Problem
- Metric Subregularity and Calmness for Nonconvex Generalized Equations in Banach Spaces
- Fixed-Point Continuation for \(\ell_1\)-Minimization: Methodology and Convergence
- Calmness and Error Bounds for Convex Constraint Systems
- Implicit Functions and Solution Mappings
- The Group Lasso for Logistic Regression
- Some continuity properties of polyhedral multifunctions
- On the Linear Convergence of Descent Methods for Convex Essentially Smooth Minimization
- On the Convergence Rate of Dual Ascent Methods for Linearly Constrained Convex Minimization
- Strong Semismoothness of Eigenvalues of Symmetric Matrices and Its Application to Inverse Eigenvalue Problems
- Error Bounds for Linear Matrix Inequalities
- On Projection Algorithms for Solving Convex Feasibility Problems
- Finite-Dimensional Variational Inequalities and Complementarity Problems
- Error bounds and metric subregularity
- Recovering Low-Rank Matrices From Few Coefficients in Any Basis
- Model Selection and Estimation in Regression with Grouped Variables
- Signal Recovery by Proximal Forward-Backward Splitting
- Convex Analysis