Nonmonotone Enhanced Proximal DC Algorithms for a Class of Structured Nonsmooth DC Programming
Publication: 5242930
DOI: 10.1137/18M1214342 · zbMath: 1430.90471 · Wikidata: Q126855759 (Scholia: Q126855759) · MaRDI QID: Q5242930
Publication date: 8 November 2019
Published in: SIAM Journal on Optimization
Mathematics Subject Classification: Numerical mathematical programming methods (65K05); Nonconvex programming, global optimization (90C26); Nonlinear programming (90C30)
Related Items (12)
- Difference-of-Convex Algorithms for a Class of Sparse Group $\ell_0$ Regularized Optimization Problems
- Convergence rate analysis of an extrapolated proximal difference-of-convex algorithm
- A proximal subgradient algorithm with extrapolation for structured nonconvex nonsmooth problems
- Convergence of a Class of Nonmonotone Descent Methods for Kurdyka–Łojasiewicz Optimization Problems
- A three-operator splitting algorithm with deviations for generalized DC programming
- Hybrid Algorithms for Finding a D-Stationary Point of a Class of Structured Nonsmooth DC Minimization
- Open issues and recent advances in DC programming and DCA
- Error bound and isocost imply linear convergence of DCA-based algorithms to D-stationarity
- On the superiority of PGMs to PDCAs in nonsmooth nonconvex sparse regression
- Solving nonnegative sparsity-constrained optimization via DC quadratic-piecewise-linear approximations
- First-Order Algorithms for a Class of Fractional Optimization Problems
- A unified Douglas-Rachford algorithm for generalized DC programming
Cites Work
- Nearly unbiased variable selection under minimax concave penalty
- A proximal difference-of-convex algorithm with extrapolation
- DC programming and DCA: thirty years of developments
- DC formulations and algorithms for sparse optimization problems
- DC programming: overview.
- Adaptive restart for accelerated gradient schemes
- Enhanced proximal DC algorithms with extrapolation for a class of structured nonsmooth DC minimization
- Computing B-Stationary Points of Nonsmooth DC Programs
- Two-Point Step Size Gradient Methods
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Sparse Reconstruction by Separable Approximation
- Optimal Joint Base Station Assignment and Beamforming for Heterogeneous Networks
- A New Decomposition Method for Multiuser DC-Programming and Its Applications
- A Randomized Nonmonotone Block Proximal Gradient Method for a Class of Structured Nonlinear Programming
- Difference-of-Convex Learning: Directional Stationarity, Optimality, and Sparsity
- Convex Analysis