Support recovery without incoherence: a case for nonconvex regularization
Abstract: We demonstrate that the primal-dual witness proof method may be used to establish variable selection consistency and \(\ell_\infty\)-bounds for sparse regression problems, even when the loss function and/or regularizer are nonconvex. Using this method, we derive two theorems concerning support recovery and \(\ell_\infty\)-guarantees for the regression estimator in a general setting. Our results provide rigorous theoretical justification for the use of nonconvex regularization: For certain nonconvex regularizers with vanishing derivative away from the origin, support recovery consistency may be guaranteed without requiring the typical incoherence conditions present in \(\ell_1\)-based methods. We then derive several corollaries that illustrate the wide applicability of our method to analyzing composite objective functions involving losses such as least squares, nonconvex modified least squares for errors-in-variables linear regression, the negative log likelihood for generalized linear models, and the graphical Lasso. We conclude with empirical studies to corroborate our theoretical predictions.
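The key property invoked in the abstract, a nonconvex regularizer whose derivative vanishes away from the origin, is satisfied by, for example, the minimax concave penalty (MCP). A minimal sketch of this property (the paper treats a general class of regularizers; MCP is just one illustrative instance, and the parameter names `lam` and `gamma` are ours):

```python
import numpy as np

def mcp_penalty(t, lam, gamma):
    """MCP: rho(t) = lam*|t| - t^2/(2*gamma) for |t| <= gamma*lam,
    constant gamma*lam^2/2 beyond that threshold."""
    a = np.abs(t)
    return np.where(a <= gamma * lam,
                    lam * a - a ** 2 / (2 * gamma),
                    gamma * lam ** 2 / 2)

def mcp_derivative(t, lam, gamma):
    """Derivative of MCP with respect to |t|: lam - |t|/gamma on
    [0, gamma*lam], and exactly zero away from the origin."""
    a = np.abs(t)
    return np.where(a <= gamma * lam, lam - a / gamma, 0.0)

lam, gamma = 1.0, 3.0
# Near the origin the penalty behaves like the Lasso (slope ~ lam);
# beyond |t| = gamma*lam the derivative is zero, so large coefficients
# incur no shrinkage bias -- the feature exploited in the paper.
print(mcp_derivative(0.5, lam, gamma))  # 1 - 0.5/3
print(mcp_derivative(5.0, lam, gamma))  # 0.0
```

Unlike the \(\ell_1\) penalty, whose derivative stays at \(\lambda\) everywhere, MCP's flat tail is what allows support recovery without the usual incoherence conditions.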
Recommendations
- Sparse recovery via nonconvex regularized \(M\)-estimators over \(\ell_q\)-balls
- Regularized \(M\)-estimators with nonconvexity: statistical and algorithmic theory for local optima
- A class of null space conditions for sparse recovery via nonconvex, non-separable minimizations
- A general theory of concave regularization for high-dimensional sparse estimation problems
- Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
Cited in (46 documents)
- Penalized wavelet nonparametric univariate logistic regression for irregular spaced data
- A general theory of concave regularization for high-dimensional sparse estimation problems
- Numerical characterization of support recovery in sparse regression with correlated design
- Sparse recovery via nonconvex regularized \(M\)-estimators over \(\ell_q\)-balls
- An unbiased approach to compressed sensing
- Bias versus non-convexity in compressed sensing
- Sparse regression: scalable algorithms and empirical performance
- Inference in high dimensional linear measurement error models
- Finite-sample analysis of \(M\)-estimators using self-concordance
- Sparse M-estimators in semi-parametric copula models
- High dimensional generalized linear models for temporal dependent data
- Robust High-Dimensional Regression with Coefficient Thresholding and Its Application to Imaging Data Analysis
- Difference-of-convex learning: directional stationarity, optimality, and sparsity
- The finite sample properties of sparse M-estimators with pseudo-observations
- Sparse classification: a scalable discrete optimization perspective
- I-LAMM for sparse learning: simultaneous control of algorithmic complexity and statistical error
- Bayesian regularization for graphical models with unequal shrinkage
- Consistency bounds and support recovery of d-stationary solutions of sparse sample average approximations
- On high-dimensional Poisson models with measurement error: hypothesis testing for nonlinear nonconvex optimization
- Second-order optimality conditions and improved convergence results for regularization methods for cardinality-constrained optimization problems
- Nonbifurcating Phylogenetic Tree Inference via the Adaptive LASSO
- High‐dimensional sparse multivariate stochastic volatility models
- Byzantine-robust distributed sparse learning for \(M\)-estimation
- Scientific article (zbMATH DE number 7370571; no title available)
- A discussion on practical considerations with sparse regression methodologies
- Bayesian Estimation of Gaussian Conditional Random Fields
- Which bridge estimator is the best for variable selection?
- Adaptive Huber trace regression with low-rank matrix parameter via nonconvex regularization
- Markov neighborhood regression for statistical inference of high-dimensional generalized linear models
- Oracle inequalities for local and global empirical risk minimizers
- Efficient learning with a family of nonconvex regularizers by redistributing nonconvexity
- On the sign consistency of the Lasso for the high-dimensional Cox model
- Iteratively reweighted \(\ell_1\)-penalized robust regression
- Inference for high-dimensional linear expectile regression with de-biasing method
- Regularized \(M\)-estimators with nonconvexity: statistical and algorithmic theory for local optima
- Almost sure uniqueness of a global minimum without convexity
- Comment: Feature Screening and Variable Selection via Iterative Ridge Regression
- Low-Rank Regression Models for Multiple Binary Responses and their Applications to Cancer Cell-Line Encyclopedia Data
- On uniqueness guarantees of solution in convex regularized linear inverse problems
- Statistical analysis of sparse approximate factor models
- High-dimensional robust approximated M-estimators for mean regression with asymmetric data
- Structure learning of sparse directed acyclic graphs incorporating the scale-free property
- Fully polynomial-time randomized approximation schemes for global optimization of high-dimensional minimax concave penalized generalized linear models
- An ensemble EM algorithm for Bayesian variable selection
- Approximate message passing for nonconvex sparse regularization with stability and asymptotic analysis
- A class of null space conditions for sparse recovery via nonconvex, non-separable minimizations