A unified approach to error bounds for structured convex optimization problems
From MaRDI portal
Abstract: Error bounds, which refer to inequalities that bound the distance of vectors in a test set to a given set by a residual function, have proven to be extremely useful in analyzing the convergence rates of a host of iterative methods for solving optimization problems. In this paper, we present a new framework for establishing error bounds for a class of structured convex optimization problems, in which the objective function is the sum of a smooth convex function and a general closed proper convex function. Such a class encapsulates not only fairly general constrained minimization problems but also various regularized loss minimization formulations in machine learning, signal processing, and statistics. Using our framework, we show that a number of existing error bound results can be recovered in a unified and transparent manner. To further demonstrate the power of our framework, we apply it to a class of nuclear-norm regularized loss minimization problems and establish a new error bound for this class under a strict complementarity-type regularity condition. We then complement this result by constructing an example to show that the said error bound could fail to hold without the regularity condition. Consequently, we obtain a rather complete answer to a question raised by Tseng. We believe that our approach will find further applications in the study of error bounds for structured convex optimization problems.
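The abstract's setting — minimizing the sum of a smooth convex function and a closed proper convex function, with a residual function bounding the distance to the solution set — can be illustrated with a small sketch. This is not code from the paper; it is a minimal, assumed example: proximal gradient (ISTA) on a lasso problem, tracking the prox-gradient residual that error-bound analyses commonly use.

```python
import numpy as np

# Minimal sketch (an illustration, not the paper's method): proximal gradient
# on min_x 0.5*||Ax - b||^2 + lam*||x||_1, a structured convex problem of the
# form "smooth convex + closed proper convex". The quantity tracked below,
# r(x) = ||x - prox_{(lam/L)*||.||_1}(x - grad f(x)/L)||,
# is a standard residual function; an error bound asserts dist(x, solution set)
# is bounded by a multiple of r(x) near the solution set, which yields linear
# convergence of methods like this one.

def soft_threshold(z, t):
    # Proximal operator of t*||.||_1 (soft-thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_gradient_lasso(A, b, lam, iters=500):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of grad f
    x = np.zeros(A.shape[1])
    residuals = []
    for _ in range(iters):
        grad = A.T @ (A @ x - b)           # gradient of the smooth part
        x_new = soft_threshold(x - grad / L, lam / L)
        residuals.append(np.linalg.norm(x - x_new))  # prox-gradient residual
        x = x_new
    return x, residuals

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))          # hypothetical test data
b = rng.standard_normal(20)
x, res = prox_gradient_lasso(A, b, lam=0.5)
print(res[0], res[-1])                     # residual shrinks toward zero
```

On such a well-conditioned instance the residual decays geometrically, matching the linear-convergence behavior that the error-bound results surveyed here are used to establish.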
Recommendations
- Approximation accuracy, gradient methods, and error bound for structured convex optimization
- New error bounds and their applications to convergence analysis of iterative algorithms
- scientific article; zbMATH DE number 1328979
- Error bounds revisited
- Error bound results for convex inequality systems via conjugate duality
Cites work
- scientific article; zbMATH DE number 47363
- scientific article; zbMATH DE number 5060482
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- A coordinate gradient descent method for nonsmooth separable minimization
- A proximal-gradient homotopy method for the sparse least-squares problem
- An introduction to a class of matrix cone programming
- Approximation accuracy, gradient methods, and error bound for structured convex optimization
- Calmness and Error Bounds for Convex Constraint Systems
- Characterization of the subdifferential of some matrix norms
- Component selection and smoothing in multivariate nonparametric regression
- Convex Analysis
- Convex multi-task feature learning
- Coordinate descent algorithms
- Descent methods for convex essentially smooth minimization
- Error Bounds for Linear Matrix Inequalities
- Error bounds and convergence analysis of feasible descent methods: A general approach
- Error bounds and metric subregularity
- Error bounds, calmness and their applications in nonsmooth analysis
- Finite-Dimensional Variational Inequalities and Complementarity Problems
- Fixed point and Bregman iterative methods for matrix rank minimization
- Fixed-Point Continuation for $\ell_1$-Minimization: Methodology and Convergence
- Gradient methods for minimizing composite functions
- Implicit Functions and Solution Mappings
- Introductory lectures on convex optimization. A basic course.
- Local behavior of an iterative framework for generalized equations with nonisolated solutions
- Metric subregularity and calmness for nonconvex generalized equations in Banach spaces
- Model Selection and Estimation in Regression with Grouped Variables
- On Projection Algorithms for Solving Convex Feasibility Problems
- On the Convergence Rate of Dual Ascent Methods for Linearly Constrained Convex Minimization
- On the Linear Convergence of Descent Methods for Convex Essentially Smooth Minimization
- On the linear convergence of a proximal gradient method for a class of nonsmooth convex minimization problems
- Optimal Hoffman-type estimates in eigenvalue and semidefinite inequality constraints
- Recovering Low-Rank Matrices From Few Coefficients in Any Basis
- Signal Recovery by Proximal Forward-Backward Splitting
- Some continuity properties of polyhedral multifunctions
- Strong Semismoothness of Eigenvalues of Symmetric Matrices and Its Application to Inverse Eigenvalue Problems
- Strong conical hull intersection property, bounded linear regularity, Jameson's property \((G)\), and error bounds in convex optimization
- The Group Lasso for Logistic Regression
Cited in (59)
- A Unified Analysis of Descent Sequences in Weakly Convex Optimization, Including Convergence Rates for Bundle Methods
- Linear Convergence of a Proximal Alternating Minimization Method with Extrapolation for \(\boldsymbol{\ell_1}\)-Norm Principal Component Analysis
- A highly efficient algorithm for solving exclusive lasso problems
- Kurdyka-Łojasiewicz exponent via Hadamard parametrization
- On degenerate doubly nonnegative projection problems
- Linearly-convergent FISTA variant for composite optimization with duality
- Linear convergence of Frank-Wolfe for rank-one matrix recovery without strong convexity
- On the simplicity and conditioning of low rank semidefinite programs
- Quadratic growth conditions and uniqueness of optimal solution to Lasso
- Hölderian Error Bounds and Kurdyka-Łojasiewicz Inequality for the Trust Region Subproblem
- On the quadratic convergence of the cubic regularization method under a local error bound condition
- Approximation accuracy, gradient methods, and error bound for structured convex optimization
- Interior quasi-subgradient method with non-Euclidean distances for constrained quasi-convex optimization problems in Hilbert spaces
- An extended Ulm-like method for inverse singular value problems with multiple and/or zero singular values
- Kurdyka-Łojasiewicz property of zero-norm composite functions
- Linear convergence of first order methods for non-strongly convex optimization
- Rigorous Error Bounds for the Optimal Value in Semidefinite Programming
- Error bounds for rank constrained optimization problems and applications
- Double fused Lasso regularized regression with both matrix and vector valued predictors
- Linear convergence of prox-SVRG method for separable non-smooth convex optimization problems under bounded metric subregularity
- RSG: Beating Subgradient Method without Smoothness and Strong Convexity
- Spectral operators of matrices: semismoothness and characterizations of the generalized Jacobian
- Convergence of the forward-backward algorithm: beyond the worst-case with the help of geometry
- An inexact Riemannian proximal gradient method
- A highly efficient semismooth Newton augmented Lagrangian method for solving lasso problems
- A First-Order Optimization Algorithm for Statistical Learning with Hierarchical Sparsity Structure
- Convergence rates of subgradient methods for quasi-convex optimization problems
- Novel reformulations and efficient algorithms for the generalized trust region subproblem
- An efficient augmented Lagrangian method with semismooth Newton solver for total generalized variation
- Error bounds for non-polyhedral convex optimization and applications to linear convergence of FDM and PGM
- Faster subgradient methods for functions with Hölderian growth
- scientific article; zbMATH DE number 841069
- New characterizations of Hoffman constants for systems of linear constraints
- Convergence rates of forward-Douglas-Rachford splitting method
- New error bounds and their applications to convergence analysis of iterative algorithms
- Calculus of the exponent of Kurdyka-Łojasiewicz inequality and its applications to linear convergence of first-order methods
- New analysis of linear convergence of gradient-type methods via unifying error bound conditions
- Perturbation techniques for convergence analysis of proximal gradient method and other first-order algorithms via variational analysis
- Characterization of the robust isolated calmness for a class of conic programming problems
- Extended Newton-type method for inverse singular value problems with multiple and/or zero singular values
- A unified approach to synchronization problems over subgroups of the orthogonal group
- On the estimation performance and convergence rate of the generalized power method for phase synchronization
- Error bound and isocost imply linear convergence of DCA-based algorithms to D-stationarity
- Augmented Lagrangian methods for convex matrix optimization problems
- Kurdyka-Łojasiewicz exponent via inf-projection
- Variational analysis perspective on linear convergence of some first order methods for nonsmooth convex optimization problems
- A family of inexact SQA methods for non-smooth convex minimization with provable convergence guarantees based on the Luo-Tseng error bound property
- On the linear convergence of forward-backward splitting method. I: Convergence analysis
- Quadratic optimization with orthogonality constraint: explicit Łojasiewicz exponent and linear convergence of retraction-based line-search and stochastic variance-reduced gradient methods
- A block symmetric Gauss-Seidel decomposition theorem for convex composite quadratic programming and its applications
- Metric subregularity and/or calmness of the normal cone mapping to the \(p\)-order conic constraint system
- Quadratic growth conditions for convex matrix optimization problems associated with spectral functions
- Double fused Lasso penalized LAD for matrix regression
- Preconditioned proximal point methods and notions of partial subregularity
- A strict complementarity approach to error bound and sensitivity of solution of conic programs
- Retraction-based first-order feasible methods for difference-of-convex programs with smooth inequality and simple geometric constraints
- Error Bound and Reduced-Gradient Projection Algorithms for Convex Minimization over a Polyhedral Set
- On the R-superlinear convergence of the KKT residuals generated by the augmented Lagrangian method for convex composite conic programming
- Accelerate stochastic subgradient method by leveraging local growth condition