A general theory of concave regularization for high-dimensional sparse estimation problems
Publication: 5965310
DOI: 10.1214/12-STS399
zbMath: 1331.62353
arXiv: 1108.4988
MaRDI QID: Q5965310
Publication date: 3 March 2016
Published in: Statistical Science
Full work available at URL: https://arxiv.org/abs/1108.4988
Keywords: global solution, approximate solution, variable selection, oracle inequality, concave regularization, local solution, sparse recovery
Related Items
- Robust Signal Recovery for High-Dimensional Linear Log-Contrast Models with Compositional Covariates
- A review of distributed statistical inference
- REMI: REGRESSION WITH MARGINAL INFORMATION AND ITS APPLICATION IN GENOME-WIDE ASSOCIATION STUDIES
- Regularized projection score estimation of treatment effects in high-dimensional quantile regression
- Bayesian Estimation of Gaussian Conditional Random Fields
- Fitting sparse linear models under the sufficient and necessary condition for model identification
- Global solutions to folded concave penalized nonconvex learning
- Best subset selection via a modern optimization lens
- \(\ell_0\)-regularized high-dimensional accelerated failure time model
- Sparse recovery via nonconvex regularized \(M\)-estimators over \(\ell_q\)-balls
- On the strong oracle property of concave penalized estimators with infinite penalty derivative at the origin
- GSDAR: a fast Newton algorithm for \(\ell_0\) regularized generalized linear models with statistical guarantee
- De-biasing the Lasso with degrees-of-freedom adjustment
- Random subspace method for high-dimensional regression with the \texttt{R} package \texttt{regRSM}
- High-Dimensional Learning Under Approximate Sparsity with Applications to Nonsmooth Estimation and Regularized Neural Networks
- Hard Thresholding Regularised Logistic Regression: Theory and Algorithms
- Nonlinear Variable Selection via Deep Neural Networks
- Distributed testing and estimation under sparse high dimensional models
- Bias versus non-convexity in compressed sensing
- The Spike-and-Slab LASSO
- Variable selection and parameter estimation with the Atan regularization method
- Homogeneity detection for the high-dimensional generalized linear model
- Principal components adjusted variable screening
- The use of random-effect models for high-dimensional variable selection problems
- Conditional sure independence screening by conditional marginal empirical likelihood
- Balanced estimation for high-dimensional measurement error models
- In defense of LASSO
- Oracle inequalities for the lasso in the Cox model
- Almost sure uniqueness of a global minimum without convexity
- Relaxed sparse eigenvalue conditions for sparse estimation via non-convex regularized regression
- A doubly sparse approach for group variable selection
- Folded concave penalized sparse linear regression: sparsity, statistical performance, and algorithmic theory for local solutions
- Quantile regression for additive coefficient models in high dimensions
- On high-dimensional Poisson models with measurement error: hypothesis testing for nonlinear nonconvex optimization
- Simultaneous feature selection and outlier detection with optimality guarantees
- Sparse signal reconstruction via the approximations of \(\ell_0\) quasinorm
- \(L_0\)-regularization for high-dimensional regression with corrupted data
- Sparse and robust estimation with ridge minimax concave penalty
- Adaptive bridge estimator for Cox model with a diverging number of parameters
- Subspace learning by \(\ell^0\)-induced sparsity
- A convex-nonconvex strategy for grouped variable selection
- Matrix completion with nonconvex regularization: spectral operators and scalable algorithms
- Retire: robust expectile regression in high dimensions
- Robust High-Dimensional Regression with Coefficient Thresholding and Its Application to Imaging Data Analysis
- Nonconvex penalized reduced rank regression and its oracle properties in high dimensions
- Model selection in high-dimensional quantile regression with seamless \(L_0\) penalty
- Communication-efficient distributed estimation for high-dimensional large-scale linear regression
- Sparse estimation via lower-order penalty optimization methods in high-dimensional linear regression
- High-dimensional composite quantile regression: optimal statistical guarantees and fast algorithms
- Goodness-of-Fit Tests for High Dimensional Linear Models
- Calibrating nonconvex penalized regression in ultra-high dimension
- Estimation and inference for precision matrices of nonstationary time series
- On the finite-sample analysis of \(\Theta\)-estimators
- An unbiased approach to compressed sensing
- Estimation and variable selection with exponential weights
- Time-varying Hazards Model for Incorporating Irregularly Measured, High-Dimensional Biomarkers
- A two-stage regularization method for variable selection and forecasting in high-order interaction model
- High-dimensional grouped folded concave penalized estimation via the LLA algorithm
- Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
- A Tuning-free Robust and Efficient Approach to High-dimensional Regression
- Going beyond oracle property: selection consistency and uniqueness of local solution of the generalized linear model
- Robust low-rank multiple kernel learning with compound regularization
- I-LAMM for sparse learning: simultaneous control of algorithmic complexity and statistical error
- Truncated \(L^1\) Regularized Linear Regression: Theory and Algorithm
- Penalized least squares estimation with weakly dependent data
- Tuning parameter selection for the adaptive LASSO in the autoregressive model
- Confidence Intervals for Low Dimensional Parameters in High Dimensional Linear Models
- Learning latent variable Gaussian graphical model for biomolecular network with low sample complexity
- On a monotone scheme for nonconvex nonsmooth optimization with applications to fracture mechanics
- Tractable ADMM schemes for computing KKT points and local minimizers for \(\ell_0\)-minimization problems
- Sorted concave penalized regression
- Strong oracle optimality of folded concave penalized estimation
- Endogeneity in high dimensions
- Bayesian Bootstrap Spike-and-Slab LASSO
- A unified primal dual active set algorithm for nonconvex sparse recovery
- Introduction to the special issue on sparsity and regularization methods
- Estimating structured high-dimensional covariance and precision matrices: optimal rates and adaptive estimation
- Variance prior forms for high-dimensional Bayesian variable selection
- OR Forum—An Algorithmic Approach to Linear Regression
- Iteratively reweighted \(\ell_1\)-penalized robust regression
- Asymptotic normality and optimalities in estimation of large Gaussian graphical models
- High-dimensional linear model selection motivated by multiple testing
- Majorized proximal alternating imputation for regularized rank constrained matrix completion
- Second-order Stein: SURE for SURE and other applications in high-dimensional inference
- Dynamic variable selection with spike-and-slab process priors
- A Simple Method for Estimating Interactions Between a Treatment and a Large Number of Covariates
- Simultaneous Variable and Covariance Selection With the Multivariate Spike-and-Slab LASSO
- Nonnegative estimation and variable selection under minimax concave penalty for sparse high-dimensional linear regression models
- Smoothing Newton method for \(\ell^0\)-\(\ell^2\) regularized linear inverse problem
- A theoretical understanding of self-paced learning
- Fast Best Subset Selection: Coordinate Descent and Local Combinatorial Optimization Algorithms
- Accelerated Stochastic Algorithms for Nonconvex Finite-Sum and Multiblock Optimization
- Weighted thresholding homotopy method for sparsity constrained optimization
- Nonbifurcating Phylogenetic Tree Inference via the Adaptive LASSO
- Joint feature screening for ultra-high-dimensional sparse additive hazards model by the sparsity-restricted pseudo-score estimator
- Sample average approximation with sparsity-inducing penalty for high-dimensional stochastic programming
- Robust moderately clipped LASSO for simultaneous outlier detection and variable selection
- High-dimensional linear regression with hard thresholding regularization: theory and algorithm
Cites Work
- Coordinate descent algorithms for nonconvex penalized regression, with applications to biological feature selection
- Nearly unbiased variable selection under minimax concave penalty
- The Adaptive Lasso and Its Oracle Properties
- Statistics for high-dimensional data. Methods, theory and applications.
- The Dantzig selector and sparsity oracle inequalities
- \(\ell_{1}\)-penalization for mixture regression models
- Comments on: \(\ell_{1}\)-penalization for mixture regression models
- Some sharp performance bounds for least squares regression with \(L_1\) regularization
- Near-ideal model selection by \(\ell _{1}\) minimization
- One-step sparse estimates in nonconcave penalized likelihood models
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Lasso-type recovery of sparse representations for high-dimensional data
- Estimating the dimension of a model
- Persistence in high-dimensional linear predictor selection and the virtue of overparametrization
- Asymptotics for Lasso-type estimators.
- Nonconcave penalized likelihood with a diverging number of parameters.
- Least angle regression. (With discussion)
- On the conditions used to prove oracle results for the Lasso
- Simultaneous analysis of Lasso and Dantzig selector
- High-dimensional generalized linear models and the lasso
- Sparsity oracle inequalities for the Lasso
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- High-dimensional graphs and variable selection with the Lasso
- Variable selection using MM algorithms
- Atomic Decomposition by Basis Pursuit
- Square-root lasso: pivotal recovery of sparse signals via conic programming
- SparseNet: Coordinate Descent With Nonconvex Penalties
- Scaled sparse linear regression
- Decoding by Linear Programming
- Just relax: convex programming methods for identifying sparse signals in noise
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- A new approach to variable selection in least squares problems
- Shifting Inequality and Recovery of Sparse Signals
- A Statistical View of Some Chemometrics Regression Tools
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using \(\ell_1\)-Constrained Quadratic Programming (Lasso)
- Minimax Rates of Estimation for High-Dimensional Linear Regression Over \(\ell_q\)-Balls
- Adaptive Forward-Backward Greedy Algorithm for Learning Sparse Representations
- Smoothly Clipped Absolute Deviation on High Dimensions
- Some Comments on \(C_p\)