Linear Convergence of Proximal Gradient Algorithm with Extrapolation for a Class of Nonconvex Nonsmooth Minimization Problems
From MaRDI portal
Publication:2957980
DOI: 10.1137/16M1055323 · zbMath: 1359.90138 · arXiv: 1512.09302 · OpenAlex: W2963118312 · Wikidata: Q57511154 · Scholia: Q57511154 · MaRDI QID: Q2957980
Bo Wen, Xiaojun Chen, Ting Kei Pong
Publication date: 31 January 2017
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/1512.09302
error bound; extrapolation; linear convergence; convex minimization; accelerated gradient method; nonconvex nonsmooth minimization
Numerical mathematical programming methods (65K05) Convex programming (90C25) Nonconvex programming, global optimization (90C26) Nonlinear programming (90C30)
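For readers unfamiliar with the method studied in this paper, it is a proximal gradient scheme with extrapolation (a FISTA-type iteration) for composite problems of the form min_x f(x) + g(x), where f is smooth with Lipschitz gradient (possibly nonconvex) and g is proper, closed and convex. The Python sketch below shows one such iteration on a convex instance, the ℓ1-regularized least-squares problem; the constant extrapolation parameter `beta`, the step size 1/L, and the stopping rule are illustrative assumptions and do not reproduce the specific parameter choices analyzed in the paper.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_grad_extrapolation(A, b, lam, beta=0.3, max_iter=500, tol=1e-8):
    # Proximal gradient method with extrapolation for
    #   min_x 0.5 * ||A x - b||^2 + lam * ||x||_1.
    # beta is a constant extrapolation parameter in [0, 1); illustrative only.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient of the smooth part
    x_prev = x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        y = x + beta * (x - x_prev)        # extrapolation step
        grad = A.T @ (A @ y - b)           # gradient of the smooth part at y
        x_new = soft_threshold(y - grad / L, lam / L)   # proximal gradient step
        x_prev, x = x, x_new
        if np.linalg.norm(x - x_prev) <= tol * max(1.0, np.linalg.norm(x)):
            break
    return x

# Usage on a random instance (hypothetical data, for illustration):
# rng = np.random.default_rng(0)
# A = rng.standard_normal((50, 100)); b = rng.standard_normal(50)
# x_hat = prox_grad_extrapolation(A, b, lam=0.1)
```

In the paper itself the extrapolation parameters are allowed to vary with the iteration count, subject to a uniform bound below a problem-dependent threshold; the fixed beta above is only a simplification for illustration.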
Related Items
Non-convex regularization and accelerated gradient algorithm for sparse portfolio selection ⋮ Convex-Concave Backtracking for Inertial Bregman Proximal Gradient Algorithms in Nonconvex Optimization ⋮ Convergence results of a new monotone inertial forward-backward splitting algorithm under the local Hölder error bound condition ⋮ A note on the Douglas-Rachford splitting method for optimization problems involving hypoconvex functions ⋮ On compositions of special cases of Lipschitz continuous operators ⋮ Convergence of inexact quasisubgradient methods with extrapolation ⋮ A fast proximal iteratively reweighted nuclear norm algorithm for nonconvex low-rank matrix minimization problems ⋮ A proximal algorithm with backtracked extrapolation for a class of structured fractional programming ⋮ A block symmetric Gauss-Seidel decomposition theorem for convex composite quadratic programming and its applications ⋮ Proximal gradient methods for general smooth graph total variation model in unsupervised learning ⋮ Zero-norm regularized problems: equivalent surrogates, proximal MM method and statistical error bound ⋮ A refined convergence analysis of \(\mathrm{pDCA}_{e}\) with applications to simultaneous sparse recovery and outlier detection ⋮ General inertial proximal gradient method for a class of nonconvex nonsmooth optimization problems ⋮ Convergence of the Momentum Method for Semialgebraic Functions with Locally Lipschitz Gradients ⋮ Non-smooth non-convex Bregman minimization: unification and new algorithms ⋮ Double inertial parameters forward-backward splitting method: Applications to compressed sensing, image processing, and SCAD penalty problems ⋮ Sparse and risk diversification portfolio selection ⋮ Inertial projected gradient method for large-scale topology optimization ⋮ Accelerated smoothing hard thresholding algorithms for \(\ell_0\) regularized nonsmooth convex regression problem ⋮ A class of modified accelerated proximal gradient methods for nonsmooth and nonconvex minimization problems ⋮ A non-convex piecewise quadratic approximation of \(\ell_0\) regularization: theory and accelerated algorithm ⋮ An extrapolated proximal iteratively reweighted method for nonconvex composite optimization problems ⋮ Proximal gradient method with extrapolation and line search for a class of non-convex and non-smooth problems ⋮ A smoothing proximal gradient algorithm with extrapolation for the relaxation of \({\ell_0}\) regularization problem ⋮ Error bound and isocost imply linear convergence of DCA-based algorithms to D-stationarity ⋮ A double extrapolation primal-dual algorithm for saddle point problems ⋮ A new piecewise quadratic approximation approach for \(L_0\) norm minimization problem ⋮ A family of inexact SQA methods for non-smooth convex minimization with provable convergence guarantees based on the Luo-Tseng error bound property ⋮ The convergence rate analysis of the symmetric ADMM for the nonconvex separable optimization problems ⋮ The chain rule for VU-decompositions of nonsmooth functions ⋮ Linear convergence of inexact descent method and inexact proximal gradient algorithms for lower-order regularization problems ⋮ Inertial proximal gradient methods with Bregman regularization for a class of nonconvex optimization problems ⋮ Nonconvex proximal incremental aggregated gradient method with linear convergence ⋮ Local linear convergence of the alternating direction method of multipliers for nonconvex separable optimization problems ⋮ Kurdyka-Łojasiewicz property of zero-norm composite functions ⋮ 
Inexact proximal memoryless quasi-Newton methods based on the Broyden family for minimizing composite functions ⋮ Accelerated iterative hard thresholding algorithm for \(l_0\) regularized regression problem ⋮ An accelerated smoothing gradient method for nonconvex nonsmooth minimization in image processing ⋮ An improved linear convergence of FISTA for the LASSO problem with application to CT image reconstruction ⋮ An inexact proximal gradient algorithm with extrapolation for a class of nonconvex nonsmooth optimization problems ⋮ Some modified fast iterative shrinkage thresholding algorithms with a new adaptive non-monotone stepsize strategy for nonsmooth and convex minimization problems ⋮ Inertial proximal incremental aggregated gradient method with linear convergence guarantees ⋮ Linear convergence of proximal incremental aggregated gradient method for nonconvex nonsmooth minimization problems ⋮ Extrapolated smoothing descent algorithm for constrained nonconvex and nonsmooth composite problems ⋮ An accelerated majorization-minimization algorithm with convergence guarantee for non-Lipschitz wavelet synthesis model ⋮ Perturbation techniques for convergence analysis of proximal gradient method and other first-order algorithms via variational analysis
Cites Work
- Smooth minimization of non-smooth functions
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Sparse solutions to random standard quadratic optimization problems
- On the convergence of the iterates of the "fast iterative shrinkage/thresholding algorithm"
- Approximation accuracy, gradient methods, and error bound for structured convex optimization
- A coordinate gradient descent method for linearly constrained smooth optimization and support vector machines training
- Convex analysis and nonlinear optimization. Theory and examples.
- Dual extrapolation and its applications to solving variational inequalities and related problems
- A coordinate gradient descent method for nonsmooth separable minimization
- Error bounds and convergence analysis of feasible descent methods: A general approach
- Introductory lectures on convex optimization. A basic course.
- An algorithm for total variation minimization and applications
- Templates for convex cone problems with applications to sparse signal recovery
- Adaptive restart for accelerated gradient schemes
- Exact matrix completion via convex optimization
- Local Linear Convergence of ISTA and FISTA on the LASSO Problem
- NESTA: A Fast and Accurate First-Order Method for Sparse Recovery
- Decoding by Linear Programming
- Splitting Algorithms for the Sum of Two Nonlinear Operators
- On the Linear Convergence of Descent Methods for Convex Essentially Smooth Minimization
- On the Convergence Rate of Dual Ascent Methods for Linearly Constrained Convex Minimization
- Continuous Characterizations of the Maximum Clique Problem
- Variational Analysis
- A Linearly Convergent Dual-Based Gradient Projection Algorithm for Quadratically Constrained Convex Minimization
- Convex Analysis
- Compressed sensing