A global two-stage algorithm for non-convex penalized high-dimensional linear regression problems
DOI: 10.1007/s00180-022-01249-w
arXiv: 2111.11801
OpenAlex: W3215217380
MaRDI QID: Q6177008
Publication date: 29 August 2023
Published in: Computational Statistics
Full work available at URL: https://arxiv.org/abs/2111.11801
Keywords: difference of convex functions; global convergence; high-dimensional linear regression; two-stage algorithm; primal dual active set with continuation algorithm
Cites Work
- Coordinate descent algorithms for nonconvex penalized regression, with applications to biological feature selection
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- Nearly unbiased variable selection under minimax concave penalty
- DC approximation approaches for sparse optimization
- One-step sparse estimates in nonconcave penalized likelihood models
- Calculus of the exponent of Kurdyka-Łojasiewicz inequality and its applications to linear convergence of first-order methods
- Least angle regression. (With discussion)
- A unified primal dual active set algorithm for nonconvex sparse recovery
- Calibrating nonconvex penalized regression in ultra-high dimension
- Coordinate descent algorithms for lasso penalized regression
- High-dimensional graphs and variable selection with the Lasso
- Variable selection using MM algorithms
- Atomic Decomposition by Basis Pursuit
- Proximal Newton-Type Methods for Minimizing Composite Functions
- Computing B-Stationary Points of Nonsmooth DC Programs
- Proximity algorithms for image models: denoising
- SparseNet: Coordinate Descent With Nonconvex Penalties
- Adapting to Unknown Smoothness via Wavelet Shrinkage
- Decoding by Linear Programming
- The Primal-Dual Active Set Strategy as a Semismooth Newton Method
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Composite Difference-Max Programs for Modern Statistical Estimation Problems
- The Convergence Guarantees of a Non-Convex Approach for Sparse Recovery
- A Primal Dual Active Set Algorithm With Continuation for Compressed Sensing
- A Highly Efficient Semismooth Newton Augmented Lagrangian Method for Solving Lasso Problems
- A Statistical View of Some Chemometrics Regression Tools
- Sparse Approximate Solutions to Linear Systems
- Sequential Lasso Cum EBIC for Feature Selection With Ultra-High Dimensional Feature Space
- Difference-of-Convex Learning: Directional Stationarity, Optimality, and Sparsity