Multi-stage convex relaxation for feature selection
DOI: 10.3150/12-BEJ452 · zbMATH Open: 1359.62293 · arXiv: 1106.0565 · OpenAlex: W2963172671 · MaRDI QID: Q2435243
Author: Tong Zhang
Publication date: 4 February 2014
Published in: Bernoulli
Abstract: A number of recent works have studied the effectiveness of feature selection using the Lasso. It is known that under the restricted isometry property (RIP), the Lasso does not generally lead to exact recovery of the set of nonzero coefficients, due to the looseness of the convex relaxation. This paper considers the feature selection property of nonconvex regularization, where the solution is given by a multi-stage convex relaxation scheme. Under appropriate conditions, we show that the local solution obtained by this procedure recovers the set of nonzero coefficients without suffering from the bias of the Lasso relaxation, which complements the parameter estimation results for this procedure.
Full work available at URL: https://arxiv.org/abs/1106.0565
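The scheme described in the abstract admits a short illustration. The sketch below assumes the capped-$\ell_1$ instantiation of multi-stage convex relaxation, with penalty $\lambda \sum_j \min(|\beta_j|, \alpha)$: the first stage is an ordinary Lasso, and each subsequent stage re-solves a weighted Lasso in which coordinates whose current estimate exceeds $\alpha$ are no longer penalized. The solver (plain ISTA), the parameter values, and all function names are illustrative choices for this sketch, not the paper's reference implementation.

```python
import numpy as np

def weighted_lasso_ista(X, y, weights, n_iter=500):
    """Minimize 0.5/n * ||y - X b||^2 + sum_j weights[j] * |b[j]| by ISTA."""
    n, p = X.shape
    step = n / np.linalg.norm(X, 2) ** 2  # 1 / Lipschitz constant of the gradient
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n
        z = b - step * grad
        # Proximal step: soft-thresholding with per-coordinate thresholds.
        b = np.sign(z) * np.maximum(np.abs(z) - step * weights, 0.0)
    return b

def multi_stage_capped_l1(X, y, lam=0.1, alpha=0.5, n_stages=5):
    """Stage 1 is an ordinary Lasso; each later stage re-solves a weighted
    Lasso that no longer penalizes coordinates whose previous estimate
    exceeded alpha, removing the shrinkage bias on large coefficients."""
    p = X.shape[1]
    weights = lam * np.ones(p)  # uniform weights: stage 1 is plain Lasso
    b = np.zeros(p)
    for _ in range(n_stages):
        b = weighted_lasso_ista(X, y, weights)
        weights = lam * (np.abs(b) <= alpha)  # re-relax around current solution
    return b

# Toy usage: a sparse signal whose support should be recovered exactly.
rng = np.random.default_rng(0)
n, p, k = 200, 50, 5
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:k] = 3.0
y = X @ beta_true + 0.1 * rng.standard_normal(n)
b_hat = multi_stage_capped_l1(X, y)
print(sorted(np.flatnonzero(np.abs(b_hat) > 1e-3)))  # expected for this seed: [0, 1, 2, 3, 4]
```

The reweighting step is where the Lasso bias disappears in this sketch: a coordinate that is confidently large is left unpenalized in every later stage, so its estimate approaches the least-squares value on the selected support.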
Mathematics Subject Classification:
- Linear regression; mixed models (62J05)
- Learning and adaptive systems in artificial intelligence (68T05)
- Ridge regression; shrinkage estimators (Lasso) (62J07)
Cites Work
- Nearly unbiased variable selection under minimax concave penalty
- The Adaptive Lasso and Its Oracle Properties
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Title not available
- One-step sparse estimates in nonconcave penalized likelihood models
- On the conditions used to prove oracle results for the Lasso
- Simultaneous analysis of Lasso and Dantzig selector
- High-dimensional graphs and variable selection with the Lasso
- Analysis of multi-stage convex relaxation for sparse regularization
- Title not available
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using $\ell_1$-Constrained Quadratic Programming (Lasso)
- Enhancing sparsity by reweighted $\ell_1$ minimization
- Sparsity oracle inequalities for the Lasso
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Decoding by Linear Programming
- Some sharp performance bounds for least squares regression with $L_1$ regularization
- Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators
- Sparsity in penalized empirical risk minimization
- Adaptive Forward-Backward Greedy Algorithm for Learning Sparse Representations
Cited In (21)
- Gaining Outlier Resistance With Progressive Quantiles: Fast Algorithms and Theoretical Studies
- Pathwise coordinate optimization for sparse learning: algorithm and theory
- Sparse classification: a scalable discrete optimization perspective
- Structured nonconvex and nonsmooth optimization: algorithms and iteration complexity analysis
- Relaxed sparse eigenvalue conditions for sparse estimation via non-convex regularized regression
- Analysis of multi-stage convex relaxation for sparse regularization
- An extrapolated proximal iteratively reweighted method for nonconvex composite optimization problems
- Towards statistically provable geometric 3D human pose recovery
- Smoothing neural network for $L_0$ regularized optimization problem with general convex constraints
- Variable selection and parameter estimation with the Atan regularization method
- Efficient nonconvex sparse group feature selection via continuous and discrete optimization
- A unifying framework of high-dimensional sparse estimation with difference-of-convex (DC) regularizations
- On semiparametric exponential family graphical models
- A solution approach for cardinality minimization problem based on fractional programming
- Weak Signal Identification and Inference in Penalized Likelihood Models for Categorical Responses
- Proximal gradient method with automatic selection of the parameter by automatic differentiation
- Calibrating nonconvex penalized regression in ultra-high dimension
- Strong oracle optimality of folded concave penalized estimation
- Estimation of sparse covariance matrix via non-convex regularization
- Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
- Separating variables to accelerate non-convex regularized optimization