Support recovery without incoherence: a case for nonconvex regularization
Publication: Q682289
DOI: 10.1214/16-AOS1530
zbMATH Open: 1385.62008
arXiv: 1412.5632
OpenAlex: W2964346891
MaRDI QID: Q682289
Authors: Po-Ling Loh, Martin J. Wainwright
Publication date: 14 February 2018
Published in: The Annals of Statistics
Abstract: We demonstrate that the primal-dual witness proof method may be used to establish variable selection consistency and \(\ell_\infty\)-bounds for sparse regression problems, even when the loss function and/or regularizer are nonconvex. Using this method, we derive two theorems concerning support recovery and \(\ell_\infty\)-guarantees for the regression estimator in a general setting. Our results provide rigorous theoretical justification for the use of nonconvex regularization: For certain nonconvex regularizers with vanishing derivative away from the origin, support recovery consistency may be guaranteed without requiring the typical incoherence conditions present in \(\ell_1\)-based methods. We then derive several corollaries that illustrate the wide applicability of our method to analyzing composite objective functions involving losses such as least squares, nonconvex modified least squares for errors-in-variables linear regression, the negative log likelihood for generalized linear models, and the graphical Lasso. We conclude with empirical studies to corroborate our theoretical predictions.
Full work available at URL: https://arxiv.org/abs/1412.5632
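As an illustration of the setting the abstract describes, the sketch below fits least squares with the MCP regularizer, one example of a nonconvex regularizer whose derivative vanishes away from the origin, via composite gradient descent. This is not code from the paper: the data, penalty parameters (`lam`, `gamma`), step-size rule, and iteration count are arbitrary choices for the demo.

```python
# Minimal sketch (illustrative, not from the paper): sparse linear regression
# with the MCP penalty, solved by composite (proximal) gradient descent.
import numpy as np

def mcp_prox(z, lam, gamma, eta):
    """Proximal operator of the MCP penalty, applied coordinate-wise.

    MCP: p(t) = lam*|t| - t^2/(2*gamma)  for |t| <= gamma*lam,
         p(t) = gamma*lam^2 / 2          for |t| >  gamma*lam.
    The derivative of p vanishes for |t| >= gamma*lam -- the "vanishing
    derivative away from the origin" property the paper shows removes the
    need for incoherence conditions. Requires gamma > eta so the proximal
    subproblem is strongly convex.
    """
    soft = np.sign(z) * np.maximum(np.abs(z) - eta * lam, 0.0)
    inner = soft / (1.0 - eta / gamma)       # closed form on |z| <= gamma*lam
    return np.where(np.abs(z) > gamma * lam, z, inner)  # identity beyond it

def fit_mcp(X, y, lam=0.1, gamma=3.0, n_iter=500):
    """Composite gradient descent for least squares + MCP."""
    n, p = X.shape
    eta = 1.0 / np.linalg.eigvalsh(X.T @ X / n).max()   # step = 1/L
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n
        beta = mcp_prox(beta - eta * grad, lam, gamma, eta)
    return beta

rng = np.random.default_rng(0)
n, p, k = 200, 50, 5                     # arbitrary demo dimensions
beta_true = np.zeros(p)
beta_true[:k] = 1.0
X = rng.standard_normal((n, p))
y = X @ beta_true + 0.1 * rng.standard_normal(n)
beta_hat = fit_mcp(X, y)
print("recovered support:", np.flatnonzero(np.abs(beta_hat) > 1e-3))
```

On this synthetic design the printed support should match the first \(k\) coordinates; note that, unlike the Lasso, the MCP prox leaves large coordinates unshrunk, which is the mechanism behind the paper's incoherence-free support recovery guarantees.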
Recommendations
- Sparse recovery via nonconvex regularized \(M\)-estimators over \(\ell_q\)-balls
- Regularized \(M\)-estimators with nonconvexity: statistical and algorithmic theory for local optima
- A class of null space conditions for sparse recovery via nonconvex, non-separable minimizations
- A general theory of concave regularization for high-dimensional sparse estimation problems
- Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
MSC classifications
- Asymptotic properties of parametric estimators (62F12)
- Ridge regression; shrinkage estimators (Lasso) (62J07)
Cited In (46)
- Numerical characterization of support recovery in sparse regression with correlated design
- An unbiased approach to compressed sensing
- Sparse M-estimators in semi-parametric copula models
- Sparse recovery via nonconvex regularized \(M\)-estimators over \(\ell_q\)-balls
- Bias versus non-convexity in compressed sensing
- Sparse regression: scalable algorithms and empirical performance
- Finite-sample analysis of \(M\)-estimators using self-concordance
- Inference in high dimensional linear measurement error models
- High dimensional generalized linear models for temporal dependent data
- Robust High-Dimensional Regression with Coefficient Thresholding and Its Application to Imaging Data Analysis
- Difference-of-convex learning: directional stationarity, optimality, and sparsity
- The finite sample properties of sparse M-estimators with pseudo-observations
- Sparse classification: a scalable discrete optimization perspective
- I-LAMM for sparse learning: simultaneous control of algorithmic complexity and statistical error
- Bayesian regularization for graphical models with unequal shrinkage
- On high-dimensional Poisson models with measurement error: hypothesis testing for nonlinear nonconvex optimization
- Consistency bounds and support recovery of d-stationary solutions of sparse sample average approximations
- Nonbifurcating Phylogenetic Tree Inference via the Adaptive LASSO
- Second-order optimality conditions and improved convergence results for regularization methods for cardinality-constrained optimization problems
- High‐dimensional sparse multivariate stochastic volatility models
- Byzantine-robust distributed sparse learning for \(M\)-estimation
- Title not available
- A discussion on practical considerations with sparse regression methodologies
- Bayesian Estimation of Gaussian Conditional Random Fields
- Adaptive Huber trace regression with low-rank matrix parameter via nonconvex regularization
- Markov neighborhood regression for statistical inference of high-dimensional generalized linear models
- Which bridge estimator is the best for variable selection?
- Efficient learning with a family of nonconvex regularizers by redistributing nonconvexity
- Oracle inequalities for local and global empirical risk minimizers
- Inference for high-dimensional linear expectile regression with de-biasing method
- Iteratively reweighted \(\ell_1\)-penalized robust regression
- On the sign consistency of the Lasso for the high-dimensional Cox model
- Regularized \(M\)-estimators with nonconvexity: statistical and algorithmic theory for local optima
- Almost sure uniqueness of a global minimum without convexity
- Comment: Feature Screening and Variable Selection via Iterative Ridge Regression
- Low-Rank Regression Models for Multiple Binary Responses and their Applications to Cancer Cell-Line Encyclopedia Data
- On uniqueness guarantees of solution in convex regularized linear inverse problems
- Statistical analysis of sparse approximate factor models
- Fully polynomial-time randomized approximation schemes for global optimization of high-dimensional minimax concave penalized generalized linear models
- High-dimensional robust approximated \(M\)-estimators for mean regression with asymmetric data
- Structure learning of sparse directed acyclic graphs incorporating the scale-free property
- An ensemble EM algorithm for Bayesian variable selection
- Approximate message passing for nonconvex sparse regularization with stability and asymptotic analysis
- Penalized wavelet nonparametric univariate logistic regression for irregular spaced data
- A class of null space conditions for sparse recovery via nonconvex, non-separable minimizations
- A general theory of concave regularization for high-dimensional sparse estimation problems