Support recovery without incoherence: a case for nonconvex regularization
Abstract: We demonstrate that the primal-dual witness proof method may be used to establish variable selection consistency and \(\ell_\infty\)-bounds for sparse regression problems, even when the loss function and/or regularizer are nonconvex. Using this method, we derive two theorems concerning support recovery and \(\ell_\infty\)-guarantees for the regression estimator in a general setting. Our results provide rigorous theoretical justification for the use of nonconvex regularization: for certain nonconvex regularizers with vanishing derivative away from the origin, support recovery consistency may be guaranteed without requiring the typical incoherence conditions present in \(\ell_1\)-based methods. We then derive several corollaries that illustrate the wide applicability of our method to analyzing composite objective functions involving losses such as least squares, nonconvex modified least squares for errors-in-variables linear regression, the negative log likelihood for generalized linear models, and the graphical Lasso. We conclude with empirical studies to corroborate our theoretical predictions.
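The class of regularizers the abstract refers to, those whose derivative vanishes away from the origin, includes the minimax concave penalty (MCP). The following is a minimal illustrative sketch, not the paper's estimator or proof technique: it fits a sparse linear model with an MCP penalty by composite (proximal) gradient descent, using the closed-form firm-thresholding prox of MCP. All parameter choices (\(\lambda\), \(\gamma\), the synthetic design) are hypothetical and chosen only so the example runs.

```python
import numpy as np

def mcp_prox(z, lam, gamma, eta):
    """Prox of eta * MCP penalty (firm thresholding); requires gamma > eta.

    MCP: p(t) = lam*|t| - t^2/(2*gamma) for |t| <= gamma*lam,
         p(t) = gamma*lam^2/2            for |t| >  gamma*lam,
    so its derivative is exactly zero for |t| > gamma*lam.
    """
    out = np.zeros_like(z)
    a = np.abs(z)
    mid = (a > eta * lam) & (a <= gamma * lam)   # shrink-and-rescale region
    big = a > gamma * lam                         # no shrinkage: unbiased region
    out[mid] = np.sign(z[mid]) * (a[mid] - eta * lam) / (1.0 - eta / gamma)
    out[big] = z[big]
    return out

def mcp_regression(X, y, lam, gamma=3.0, iters=500):
    """Composite gradient descent on (1/2n)||y - X b||^2 + MCP_{lam,gamma}(b)."""
    n, p = X.shape
    eta = 1.0 / np.linalg.eigvalsh(X.T @ X / n).max()  # step = 1/L
    beta = np.zeros(p)
    for _ in range(iters):
        grad = X.T @ (X @ beta - y) / n
        beta = mcp_prox(beta - eta * grad, lam, gamma, eta)
    return beta

# Synthetic check of support recovery (illustrative, not from the paper).
rng = np.random.default_rng(0)
n, p, k = 200, 50, 5
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:k] = 1.0
y = X @ beta_true + 0.1 * rng.standard_normal(n)

beta_hat = mcp_regression(X, y, lam=0.15)
support = np.flatnonzero(np.abs(beta_hat) > 1e-6)
```

Because the true coefficients land in the flat region of the penalty (\(|\beta_j| > \gamma\lambda\)), the MCP prox applies no shrinkage there, so the nonzero estimates are nearly unbiased; this is the practical payoff of a penalty with vanishing derivative that the paper's theory addresses.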
Recommendations
- Sparse recovery via nonconvex regularized \(M\)-estimators over \(\ell_q\)-balls
- Regularized \(M\)-estimators with nonconvexity: statistical and algorithmic theory for local optima
- A class of null space conditions for sparse recovery via nonconvex, non-separable minimizations
- A general theory of concave regularization for high-dimensional sparse estimation problems
- Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
Cited in (46)
- Efficient learning with a family of nonconvex regularizers by redistributing nonconvexity
- Sparse M-estimators in semi-parametric copula models
- Byzantine-robust distributed sparse learning for \(M\)-estimation
- Bayesian Estimation of Gaussian Conditional Random Fields
- A class of null space conditions for sparse recovery via nonconvex, non-separable minimizations
- Inference for high-dimensional linear expectile regression with de-biasing method
- Robust High-Dimensional Regression with Coefficient Thresholding and Its Application to Imaging Data Analysis
- Iteratively reweighted \(\ell_1\)-penalized robust regression
- On the sign consistency of the Lasso for the high-dimensional Cox model
- Numerical characterization of support recovery in sparse regression with correlated design
- Approximate message passing for nonconvex sparse regularization with stability and asymptotic analysis
- Sparse recovery via nonconvex regularized \(M\)-estimators over \(\ell_q\)-balls
- High‐dimensional sparse multivariate stochastic volatility models
- Bias versus non-convexity in compressed sensing
- Oracle inequalities for local and global empirical risk minimizers
- Sparse regression: scalable algorithms and empirical performance
- scientific article; zbMATH DE number 7370571
- Bayesian regularization for graphical models with unequal shrinkage
- Adaptive Huber trace regression with low-rank matrix parameter via nonconvex regularization
- High-dimensional robust approximated \(M\)-estimators for mean regression with asymmetric data
- Finite-sample analysis of \(M\)-estimators using self-concordance
- Inference in high dimensional linear measurement error models
- A general theory of concave regularization for high-dimensional sparse estimation problems
- High dimensional generalized linear models for temporal dependent data
- Consistency bounds and support recovery of d-stationary solutions of sparse sample average approximations
- Markov neighborhood regression for statistical inference of high-dimensional generalized linear models
- Which bridge estimator is the best for variable selection?
- An unbiased approach to compressed sensing
- Structure learning of sparse directed acyclic graphs incorporating the scale-free property
- The finite sample properties of sparse M-estimators with pseudo-observations
- Comment: Feature Screening and Variable Selection via Iterative Ridge Regression
- Nonbifurcating Phylogenetic Tree Inference via the Adaptive LASSO
- Almost sure uniqueness of a global minimum without convexity
- Statistical analysis of sparse approximate factor models
- On uniqueness guarantees of solution in convex regularized linear inverse problems
- A discussion on practical considerations with sparse regression methodologies
- Regularized \(M\)-estimators with nonconvexity: statistical and algorithmic theory for local optima
- Difference-of-convex learning: directional stationarity, optimality, and sparsity
- I-LAMM for sparse learning: simultaneous control of algorithmic complexity and statistical error
- Sparse classification: a scalable discrete optimization perspective
- An ensemble EM algorithm for Bayesian variable selection
- On high-dimensional Poisson models with measurement error: hypothesis testing for nonlinear nonconvex optimization
- Second-order optimality conditions and improved convergence results for regularization methods for cardinality-constrained optimization problems
- Low-Rank Regression Models for Multiple Binary Responses and their Applications to Cancer Cell-Line Encyclopedia Data
- Fully polynomial-time randomized approximation schemes for global optimization of high-dimensional minimax concave penalized generalized linear models
- Penalized wavelet nonparametric univariate logistic regression for irregular spaced data
This page was built for publication: Support recovery without incoherence: a case for nonconvex regularization