Error bound and isocost imply linear convergence of DCA-based algorithms to D-stationarity
From MaRDI portal
Publication: Q2697002
DOI: 10.1007/s10957-023-02171-x
OpenAlex: W4322732110
MaRDI QID: Q2697002
Publication date: 17 April 2023
Published in: Journal of Optimization Theory and Applications
Full work available at URL: https://doi.org/10.1007/s10957-023-02171-x
Cites Work
- Nearly unbiased variable selection under minimax concave penalty
- A coordinate gradient descent method for nonsmooth separable minimization
- Error bounds and convergence analysis of feasible descent methods: A general approach
- Convex analysis approach to d.c. programming: theory, algorithms and applications
- A unified approach to error bounds for structured convex optimization problems
- Convergence analysis of difference-of-convex algorithm with subanalytic data
- A proximal difference-of-convex algorithm with extrapolation
- DC programming and DCA: thirty years of developments
- DC formulations and algorithms for sparse optimization problems
- The DC (Difference of convex functions) programming and DCA revisited with DC models of real world nonconvex optimization problems
- Calculus of the exponent of Kurdyka-Łojasiewicz inequality and its applications to linear convergence of first-order methods
- On the superiority of PGMs to PDCAs in nonsmooth nonconvex sparse regression
- Further properties of the forward-backward envelope with applications to difference-of-convex programming
- A refined convergence analysis of \(\mathrm{pDCA}_{e}\) with applications to simultaneous sparse recovery and outlier detection
- A successive difference-of-convex approximation method for a class of nonconvex nonsmooth optimization problems
- Enhanced proximal DC algorithms with extrapolation for a class of structured nonsmooth DC minimization
- A Unified Convergence Analysis of Block Successive Minimization Methods for Nonsmooth Optimization
- Linear Convergence of Proximal Gradient Algorithm with Extrapolation for a Class of Nonconvex Nonsmooth Minimization Problems
- Computing B-Stationary Points of Nonsmooth DC Programs
- On the Linear Convergence of Descent Methods for Convex Essentially Smooth Minimization
- Error Bound and Convergence Analysis of Matrix Splitting Algorithms for the Affine Variational Inequality Problem
- Variational Analysis
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Finite-Dimensional Variational Inequalities and Complementarity Problems
- Error Bounds, Quadratic Growth, and Linear Convergence of Proximal Methods
- Nonmonotone Enhanced Proximal DC Algorithms for a Class of Structured Nonsmooth DC Programming
- Difference-of-Convex Learning: Directional Stationarity, Optimality, and Sparsity
- The Łojasiewicz Inequality for Nonsmooth Subanalytic Functions with Applications to Subgradient Dynamical Systems