Multi-stage convex relaxation for feature selection
Publication:2435243
Abstract: A number of recent works have studied the effectiveness of feature selection using the Lasso. It is known that even under the restricted isometry property (RIP), the Lasso does not generally lead to exact recovery of the set of nonzero coefficients, due to the looseness of the convex relaxation. This paper considers the feature selection property of nonconvex regularization, where the solution is computed by a multi-stage convex relaxation scheme. Under appropriate conditions, we show that the local solution obtained by this procedure recovers the set of nonzero coefficients without suffering from the bias of the Lasso relaxation, which complements the parameter estimation results for this procedure.
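The abstract only sketches the scheme; the companion paper "Analysis of multi-stage convex relaxation for sparse regularization" (listed below) develops it as a sequence of re-weighted Lasso problems. The following is a minimal sketch of that idea for a capped-\(\ell_1\) penalty \(\lambda \min(|\beta_j|, \theta)\); the function names, the plain ISTA inner solver, and all parameter values are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def weighted_lasso_ista(X, y, w, beta0=None, n_iter=1000):
    """Solve the weighted Lasso  min_b ||y - X b||^2/(2n) + sum_j w_j |b_j|
    by proximal gradient descent (ISTA); plain but self-contained."""
    n, p = X.shape
    beta = np.zeros(p) if beta0 is None else beta0.copy()
    step = n / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        z = beta - step * (X.T @ (X @ beta - y)) / n
        # proximal step: per-coordinate soft-thresholding at step * w_j
        beta = np.sign(z) * np.maximum(np.abs(z) - step * w, 0.0)
    return beta

def multi_stage_relaxation(X, y, lam=0.1, theta=0.1, n_stages=5):
    """Multi-stage convex relaxation for the capped-l1 penalty
    sum_j lam * min(|b_j|, theta): each stage solves a weighted Lasso
    whose weights come from the previous stage's solution."""
    p = X.shape[1]
    w = np.full(p, lam)                    # stage 1 reduces to the standard Lasso
    beta = None
    for _ in range(n_stages):
        beta = weighted_lasso_ista(X, y, w, beta0=beta)
        # Coefficients that cleared the threshold theta are no longer penalized,
        # which removes the shrinkage bias of the convex (Lasso) relaxation.
        w = np.where(np.abs(beta) > theta, 0.0, lam)
    return beta

if __name__ == "__main__":
    # Synthetic sparse regression: 5 nonzero coefficients out of 50.
    rng = np.random.default_rng(0)
    n, p, s = 200, 50, 5
    X = rng.standard_normal((n, p))
    beta_true = np.zeros(p)
    beta_true[:s] = 1.0
    y = X @ beta_true + 0.1 * rng.standard_normal(n)
    beta_hat = multi_stage_relaxation(X, y)
    print("recovered support:", np.flatnonzero(np.abs(beta_hat) > 1e-6))
```

In well-conditioned instances the re-weighting typically stabilizes after two or three stages, once the estimated support stops changing; stage 1 alone is the ordinary (biased) Lasso estimate.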
Cites work
- scientific article, zbMATH DE number 5957408 (title unavailable)
- scientific article, zbMATH DE number 845714 (title unavailable)
- Adaptive Forward-Backward Greedy Algorithm for Learning Sparse Representations
- Analysis of multi-stage convex relaxation for sparse regularization
- Decoding by Linear Programming
- Enhancing sparsity by reweighted \(\ell _{1}\) minimization
- High-dimensional graphs and variable selection with the Lasso
- Nearly unbiased variable selection under minimax concave penalty
- On the conditions used to prove oracle results for the Lasso
- One-step sparse estimates in nonconcave penalized likelihood models
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using \(\ell_{1}\)-Constrained Quadratic Programming (Lasso)
- Simultaneous analysis of Lasso and Dantzig selector
- Some sharp performance bounds for least squares regression with \(L_1\) regularization
- Sparsity in penalized empirical risk minimization
- Sparsity oracle inequalities for the Lasso
- Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators
- The Adaptive Lasso and Its Oracle Properties
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
Cited in (21)
- Separating variables to accelerate non-convex regularized optimization
- Gaining Outlier Resistance With Progressive Quantiles: Fast Algorithms and Theoretical Studies
- Pathwise coordinate optimization for sparse learning: algorithm and theory
- Sparse classification: a scalable discrete optimization perspective
- Structured nonconvex and nonsmooth optimization: algorithms and iteration complexity analysis
- Relaxed sparse eigenvalue conditions for sparse estimation via non-convex regularized regression
- Analysis of multi-stage convex relaxation for sparse regularization
- An extrapolated proximal iteratively reweighted method for nonconvex composite optimization problems
- Towards statistically provable geometric 3D human pose recovery
- Smoothing neural network for \(L_0\) regularized optimization problem with general convex constraints
- Variable selection and parameter estimation with the Atan regularization method
- Efficient nonconvex sparse group feature selection via continuous and discrete optimization
- A unifying framework of high-dimensional sparse estimation with difference-of-convex (DC) regularizations
- A solution approach for cardinality minimization problem based on fractional programming
- On semiparametric exponential family graphical models
- Weak Signal Identification and Inference in Penalized Likelihood Models for Categorical Responses
- Proximal gradient method with automatic selection of the parameter by automatic differentiation
- Calibrating nonconvex penalized regression in ultra-high dimension
- Strong oracle optimality of folded concave penalized estimation
- Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
- Estimation of sparse covariance matrix via non-convex regularization