On the linear convergence of the approximate proximal splitting method for non-smooth convex optimization
From MaRDI portal
Abstract: Consider the problem of minimizing the sum of two convex functions, one smooth and the other non-smooth. In this paper, we introduce a general class of approximate proximal splitting (APS) methods for solving such minimization problems. The APS class includes many well-known algorithms, such as the proximal splitting method (PSM), the block coordinate descent (BCD) method, and approximate gradient projection methods for smooth convex optimization. We establish the linear convergence of APS methods under a local error bound assumption. Since the latter is known to hold for compressive sensing and sparse group LASSO problems, our analysis implies the linear convergence of the BCD method for these problems without a strong convexity assumption.
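The basic proximal splitting iteration that the paper's APS class generalizes can be sketched as follows: a gradient step on the smooth part followed by the proximal operator of the non-smooth part. The sketch below (not the paper's APS method in full generality) applies this to the LASSO problem \(\min_x \tfrac12\|Ax-b\|^2 + \lambda\|x\|_1\), where the prox of the \(\ell_1\) term is soft-thresholding; the function names and the toy data are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (componentwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_splitting(A, b, lam, step, n_iter=500):
    """Proximal gradient iteration for min 0.5*||Ax - b||^2 + lam*||x||_1:
    a gradient step on the smooth quadratic, then the prox of the l1 term."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                 # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Tiny compressive-sensing-style example: recover a sparse vector
# from noiseless random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))
x_true = np.zeros(10)
x_true[0], x_true[3] = 2.0, -1.5
b = A @ x_true
step = 1.0 / np.linalg.norm(A, 2) ** 2           # 1/L, L = Lipschitz const. of the gradient
x_hat = proximal_splitting(A, b, lam=0.1, step=step)
```

For this class of problems (a quadratic plus a polyhedral-prox regularizer) the local error bound the paper assumes is known to hold, which is what yields linear convergence without strong convexity.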
Recommendations
- On the linear convergence of a proximal gradient method for a class of nonsmooth convex minimization problems
- Global convergence of splitting methods for nonconvex composite optimization
- A modified proximal gradient method for a family of nonsmooth convex optimization problems
- scientific article; zbMATH DE number 6453672
- scientific article; zbMATH DE number 7404502
Cites work
- scientific article; zbMATH DE number 3534286
- scientific article; zbMATH DE number 845714
- A coordinate gradient descent method for nonsmooth separable minimization
- A proximal point algorithm for log-determinant optimization with group Lasso regularization
- A unified convergence analysis of block successive minimization methods for nonsmooth optimization
- Approximation accuracy, gradient methods, and error bound for structured convex optimization
- Consistency of the group Lasso and multiple kernel learning
- Convergence of Iterates of an Inexact Matrix Splitting Algorithm for the Symmetric Monotone Linear Complementarity Problem
- Convergence of a block coordinate descent method for nondifferentiable minimization
- Convex Analysis
- Error Bound and Convergence Analysis of Matrix Splitting Algorithms for the Affine Variational Inequality Problem
- Error bounds and convergence analysis of feasible descent methods: A general approach
- Fast global convergence of gradient methods for high-dimensional statistical recovery
- Incremental majorization-minimization optimization with application to large-scale machine learning
- Inexact Newton methods for the nonlinear complementarity problem
- Iteration complexity analysis of block coordinate descent methods
- Model Selection and Estimation in Regression with Grouped Variables
- On the Linear Convergence of Descent Methods for Convex Essentially Smooth Minimization
- On the convergence of a basic iterative method for the implicit complementarity problem
- On the linear convergence of a proximal gradient method for a class of nonsmooth convex minimization problems
- Parallel random coordinate descent method for composite minimization: convergence analysis and error bounds
- Remarks on Convergence of the Matrix Splitting Algorithm for the Symmetric Linear Complementarity Problem
- Signal Recovery by Proximal Forward-Backward Splitting
- Sparse Reconstruction by Separable Approximation
- The Gradient Projection Method for Nonlinear Programming. Part I. Linear Constraints
- The Gradient Projection Method for Nonlinear Programming. Part II. Nonlinear Constraints
- The Group Lasso for Logistic Regression
Cited in (9)
- scientific article; zbMATH DE number 6453672
- A coordinate descent homotopy method for linearly constrained nonsmooth convex minimization
- Global convergence of splitting methods for nonconvex composite optimization
- A unified convergence analysis of block successive minimization methods for nonsmooth optimization
- On the linear convergence of a proximal gradient method for a class of nonsmooth convex minimization problems
- The factor-Lasso and \(k\)-step bootstrap approach for inference in high-dimensional economic applications
- Convergence analysis of an improved Bregman-type Peaceman-Rachford splitting algorithm for nonconvex nonseparable linearly constrained optimization problems
- A block successive upper-bound minimization method of multipliers for linearly constrained convex optimization
- A homotopy alternating direction method of multipliers for linearly constrained separable convex optimization