A smoothing proximal gradient algorithm with extrapolation for the relaxation of \({\ell_0}\) regularization problem
MaRDI QID: Q2696923
DOI: 10.1007/s10589-022-00446-z
OpenAlex: W4315648069
Publication date: 17 April 2023
Published in: Computational Optimization and Applications
Full work available at URL: https://arxiv.org/abs/2112.01114
Cites Work
- Smooth minimization of non-smooth functions
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- Gradient methods for minimizing composite functions
- Smoothing approach to Nash equilibrium formulations for a class of equilibrium problems with shared complementarity constraints
- On the convergence of the iterates of the ``fast iterative shrinkage/thresholding algorithm''
- Smoothing methods for nonsmooth, nonconvex minimization
- Smoothing technique and its applications in semidefinite optimization
- Error bounds and convergence analysis of feasible descent methods: A general approach
- A smoothing method for a mathematical program with P-matrix linear complementarity constraints
- Nonlinear rescaling vs. smoothing technique in convex optimization
- A new look at smoothing Newton methods for nonlinear complementarity problems and box constrained variational inequalities
- Adaptive restart for accelerated gradient schemes
- iPiano: Inertial Proximal Algorithm for Nonconvex Optimization
- Smoothing and First Order Methods: A Unified Framework
- Linear Convergence of Proximal Gradient Algorithm with Extrapolation for a Class of Nonconvex Nonsmooth Minimization Problems
- Computing B-Stationary Points of Nonsmooth DC Programs
- Smoothing Projected Gradient Method and Its Application to Stochastic Linear Complementarity Problems
- Smoothing Methods and Semismooth Methods for Nondifferentiable Operator Equations
- First-Order Methods in Optimization
- Smoothing accelerated algorithm for constrained nonsmooth convex optimization problems
- Sublinear optimization for machine learning
- Some methods of speeding up the convergence of iteration methods
- Compressed sensing