Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using $\ell _{1}$-Constrained Quadratic Programming (Lasso)
DOI: 10.1109/TIT.2009.2016018
zbMath: 1367.62220
MaRDI QID: Q4975847
Publication date: 8 August 2017
Published in: IEEE Transactions on Information Theory
Mathematics Subject Classification:
Ridge regression; shrinkage estimators (Lasso) (62J07)
Quadratic programming (90C20)
Signal theory (characterization, reconstruction, filtering, etc.) (94A12)
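The paper establishes sharp sample-size thresholds for exact support recovery of a k-sparse vector in dimension p from n noisy linear measurements via the Lasso: for the standard Gaussian ensemble, recovery succeeds with high probability once n exceeds roughly 2k log(p − k) (with an appropriately scaled regularization parameter) and fails below that scaling. The sketch below is not part of this record; it reproduces that experiment with NumPy and scikit-learn, and the problem sizes, noise level, and regularization constant are illustrative assumptions rather than values from the paper.

```python
# Minimal sketch (illustrative, not from the record): the noisy
# sparsity-recovery experiment the paper analyzes, solved with
# scikit-learn's Lasso.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
p, k = 512, 8          # ambient dimension and sparsity level
theta = 1.5            # rescaled sample size; the paper predicts exact support
                       # recovery for theta > 1 and failure for theta < 1
n = int(theta * 2 * k * np.log(p - k))

beta = np.zeros(p)
support = np.sort(rng.choice(p, size=k, replace=False))
beta[support] = rng.choice([-1.0, 1.0], size=k)  # +/-1 spikes on a random support

X = rng.standard_normal((n, p))                  # standard Gaussian ensemble
sigma = 0.5
y = X @ beta + sigma * rng.standard_normal(n)

# Regularization of order sigma * sqrt(log p / n), matching the theory's scaling
alpha = 2 * sigma * np.sqrt(np.log(p) / n)
fit = Lasso(alpha=alpha, fit_intercept=False, max_iter=50_000).fit(X, y)

print("true support:     ", support)
print("recovered support:", np.flatnonzero(fit.coef_ != 0))
```

Re-running with theta below 1 typically misses or adds support indices, illustrating the sharp-threshold behavior the title refers to.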
Related Items
Worst possible sub-directions in high-dimensional models
Nonnegative-Lasso and application in index tracking
A general family of trimmed estimators for robust high-dimensional data analysis
Nonnegative adaptive Lasso for ultra-high dimensional regression models and a two-stage method applied in financial modeling
Exploiting prior knowledge in compressed sensing to design robust systems for endoscopy image recovery
Sparse Laplacian shrinkage with the graphical Lasso estimator for regression problems
Best subset selection via a modern optimization lens
Learning high-dimensional Gaussian linear structural equation models with heterogeneous error variances
Model selection consistency of Lasso for empirical data
A reproducing kernel Hilbert space approach to high dimensional partially varying coefficient model
Penalized logspline density estimation using total variation penalty
Conditional score matching for high-dimensional partial graphical models
Sparse high-dimensional linear regression. Estimating squared error and a phase transition
Adaptive log-density estimation
SLOPE is adaptive to unknown sparsity and asymptotically minimax
De-biasing the Lasso with degrees-of-freedom adjustment
AIC for the Lasso in generalized linear models
Compressed history matching: Exploiting transform-domain sparsity for regularization of nonlinear dynamic data integration problems
A posterior probability approach for gene regulatory network inference in genetic perturbation data
Sparse linear models and \(l_1\)-regularized 2SLS with high-dimensional endogenous regressors and instruments
A unifying framework of high-dimensional sparse estimation with difference-of-convex (DC) regularizations
On the sign consistency of the Lasso for the high-dimensional Cox model
Multivariate factorizable expectile regression with application to fMRI data
The adaptive Lasso in high-dimensional sparse heteroscedastic models
Statistical significance in high-dimensional linear models
Stability
On constrained and regularized high-dimensional regression
Generalized Kalman smoothing: modeling and algorithms
Asymptotic properties of Lasso+mLS and Lasso+Ridge in sparse high-dimensional linear regression
Adjusting for high-dimensional covariates in sparse precision matrix estimation by \(\ell_1\)-penalization
Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization
Optimal variable selection in multi-group sparse discriminant analysis
\(\ell_{1}\)-penalization for mixture regression models
Sharp recovery bounds for convex demixing, with applications
Discussion: Latent variable graphical model selection via convex optimization
Rejoinder: Latent variable graphical model selection via convex optimization
Autoregressive process modeling via the Lasso procedure
Sharp support recovery from noisy random measurements by \(\ell_1\)-minimization
Two are better than one: fundamental parameters of frame coherence
Sobolev duals for random frames and \(\varSigma \varDelta \) quantization of compressed sensing measurements
Coordinate ascent for penalized semiparametric regression on high-dimensional panel count data
Minimax risks for sparse regressions: ultra-high dimensional phenomenons
Estimating networks with jumps
The Lasso problem and uniqueness
Bootstrap inference for network construction with an application to a breast cancer microarray study
On the conditions used to prove oracle results for the Lasso
Self-concordant analysis for logistic regression
Detection boundary in sparse regression
The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso)
High-dimensional covariance estimation by minimizing \(\ell _{1}\)-penalized log-determinant divergence
Least squares after model selection in high-dimensional sparse models
The log-linear group-lasso estimator and its asymptotic properties
Sign-constrained least squares estimation for high-dimensional regression
Oracle inequalities and optimal inference under group sparsity
Bayesian augmented Lagrangian algorithm for system identification
Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
A new perspective on least squares under convex constraint
CAM: causal additive models, high-dimensional order search and penalized regression
A necessary and sufficient condition for exact sparse recovery by \(\ell_1\) minimization
On the residual empirical process based on the ALASSO in high dimensions and its functional oracle property
Oracle inequalities for high-dimensional prediction
Accelerating a Gibbs sampler for variable selection on genomics data with summarization and variable pre-selection combining an array DBMS and R
On semidefinite relaxations for the block model
Interpreting latent variables in factor models via convex optimization
Additive model selection
Penalized least squares estimation with weakly dependent data
The generalized Lasso problem and uniqueness
Covariate Selection in High-Dimensional Generalized Linear Models With Measurement Error
Improved variable selection with forward-lasso adaptive shrinkage
Sparse principal component based high-dimensional mediation analysis
Rate optimal estimation and confidence intervals for high-dimensional regression with missing covariates
Estimation of (near) low-rank matrices with noise and high-dimensional scaling
High-dimensional Ising model selection using \(\ell _{1}\)-regularized logistic regression
Estimating time-varying networks
On asymptotically optimal confidence regions and tests for high-dimensional models
A distribution-based Lasso for a general single-index model
Estimation of high-dimensional graphical models using regularized score matching
Latent variable graphical model selection via convex optimization
Model selection with mixed variables on the Lasso path
Variable selection with Hamming loss
Consistent multiple changepoint estimation with fused Gaussian graphical models
Sparse recovery via differential inclusions
Iteratively reweighted \(\ell_1\)-penalized robust regression
Subspace clustering of high-dimensional data: a predictive approach
Factor-Adjusted Regularized Model Selection
Evaluating visual properties via robust HodgeRank
Second-order Stein: SURE for SURE and other applications in high-dimensional inference
Principled sure independence screening for Cox models with ultra-high-dimensional covariates
Sparse classification: a scalable discrete optimization perspective
Provable training set debugging for linear regression
In defense of the indefensible: a very naïve approach to high-dimensional inference
Feature selection for data integration with mixed multiview data
The all-or-nothing phenomenon in sparse linear regression
Spatially relaxed inference on high-dimensional linear models
Sparse hierarchical regression with polynomials
Information criteria bias correction for group selection
Penalized wavelet estimation and robust denoising for irregular spaced data
High-dimensional linear regression with hard thresholding regularization: theory and algorithm
LASSO risk and phase transition under dependence
Revisiting feature selection for linear models with FDR and power guarantees
Investigating competition in financial markets: a sparse autologistic model for dynamic network data
Variable Selection With Second-Generation P-Values
An algorithm for quadratic ℓ1-regularized optimization with a flexible active-set strategy
Recovering Structured Signals in Noise: Least-Squares Meets Compressed Sensing
Compressive Classification: Where Wireless Communications Meets Machine Learning
The Noise Collector for sparse recovery in high dimensions
Nonnegative elastic net and application in index tracking
High-dimensional change-point estimation: combining filtering with convex optimization
Rates of convergence of the adaptive elastic net and the post-selection procedure in ultra-high dimensional sparse models
Large-scale multivariate sparse regression with applications to UK Biobank
Adaptive Bayesian SLOPE: Model Selection With Incomplete Data
Robust post-selection inference of high-dimensional mean regression with heavy-tailed asymmetric or heteroskedastic errors
Monte Carlo Simulation for Lasso-Type Problems by Estimator Augmentation
A convex optimization framework for the identification of homogeneous reaction systems
Adaptive multi-penalty regularization based on a generalized Lasso path
Statistical inference for model parameters in stochastic gradient descent
Sparse high-dimensional regression: exact scalable algorithms and phase transitions
Perspective functions: proximal calculus and applications in high-dimensional statistics
On estimation error bounds of the Elastic Net when p ≫ n
A robust high dimensional estimation of a finite mixture of the generalized linear model
Review of Bayesian selection methods for categorical predictors using JAGS
High-dimensional dynamic systems identification with additional constraints
Pairwise sparse + low-rank models for variables of mixed type
Quadratic growth conditions and uniqueness of optimal solution to Lasso
Asymptotic Theory of \(\boldsymbol\ell_1\)-Regularized PDE Identification from a Single Noisy Trajectory
L1-norm-based principal component analysis with adaptive regularization
Predictor ranking and false discovery proportion control in high-dimensional regression
Online sparse identification for regression models
Learning rates for partially linear functional models with high dimensional scalar covariates
Debiasing the debiased Lasso with bootstrap
A simple homotopy proximal mapping algorithm for compressive sensing
When Ramanujan meets time-frequency analysis in complicated time series analysis
Support union recovery in high-dimensional multivariate regression
Statistical analysis of sparse approximate factor models
Recovery of partly sparse and dense signals
Fundamental limits of exact support recovery in high dimensions
Multi-stage convex relaxation for feature selection
Optimal Sparse Linear Prediction for Block-missing Multi-modality Data Without Imputation
RIPless compressed sensing from anisotropic measurements
Sparse directed acyclic graphs incorporating the covariates
A relaxed-PPA contraction method for sparse signal recovery
Which bridge estimator is the best for variable selection?
Independently Interpretable Lasso for Generalized Linear Models
An unbiased approach to compressed sensing
Estimation and variable selection with exponential weights
A Bootstrap Lasso + Partial Ridge Method to Construct Confidence Intervals for Parameters in High-dimensional Sparse Linear Models
Consistency of \(\ell_1\) recovery from noisy deterministic measurements
Sparse regression: scalable algorithms and empirical performance
A discussion on practical considerations with sparse regression methodologies
A look at robustness and stability of \(\ell_1\)-versus \(\ell_0\)-regularization: discussion of papers by Bertsimas et al. and Hastie et al.
Rejoinder: ``Sparse regression: scalable algorithms and empirical performance''
A Tuning-free Robust and Efficient Approach to High-dimensional Regression
A framework for solving mixed-integer semidefinite programs
Adaptive Huber Regression
CHAOTIC ANALOG-TO-INFORMATION CONVERSION: PRINCIPLE AND RECONSTRUCTABILITY WITH PARAMETER IDENTIFIABILITY
A significance test for the lasso
Discussion: ``A significance test for the lasso''
Rejoinder: ``A significance test for the lasso''
Pivotal estimation via square-root lasso in nonparametric regression
Truncated $L^1$ Regularized Linear Regression: Theory and Algorithm
A Tight Bound of Hard Thresholding
High-Dimensional Sparse Additive Hazards Regression
Sparse semiparametric discriminant analysis
High-dimensional variable screening and bias in subsequent inference, with an empirical comparison
Iterative reweighted noninteger norm regularizing SVM for gene expression data classification
Confidence Intervals for Low Dimensional Parameters in High Dimensional Linear Models
A global homogeneity test for high-dimensional linear regression
A numerical exploration of compressed sampling recovery
Prediction error bounds for linear regression with the TREX
Boosting with structural sparsity: a differential inclusion approach
Prediction and estimation consistency of sparse multi-class penalized optimal scoring
Variable selection via adaptive false negative control in linear regression
Sorted concave penalized regression
Minimax posterior convergence rates and model selection consistency in high-dimensional DAG models based on sparse Cholesky factors
Low Complexity Regularization of Linear Inverse Problems
BAYESIAN HYPER-LASSOS WITH NON-CONVEX PENALIZATION
Model Selection for High-Dimensional Quadratic Regression via Regularization
Variable Selection for Nonparametric Learning with Power Series Kernels
Approximate support recovery of atomic line spectral estimation: a tale of resolution and precision
High-dimensional regression in practice: an empirical study of finite-sample prediction, variable selection and ranking
A two-stage sequential conditional selection approach to sparse high-dimensional multivariate regression models
Asymptotic theory of the adaptive sparse group Lasso
Simple expressions of the LASSO and SLOPE estimators in low-dimension
A Mixed-Integer Fractional Optimization Approach to Best Subset Selection
On the Use of the Lasso for Instrumental Variables Estimation with Some Invalid Instruments
Robust controllability assessment and optimal actuator placement in dynamic networks
Sparsistency and agnostic inference in sparse PCA
Stability Selection
On model selection consistency of regularized M-estimators
Lasso penalized semiparametric regression on high-dimensional recurrent event data via coordinate descent
Minimax-optimal nonparametric regression in high dimensions
Sparse learning via Boolean relaxations
A survey on compressive sensing: classical results and recent advancements
Variable selection, monotone likelihood ratio and group sparsity
On high-dimensional Poisson models with measurement error: hypothesis testing for nonlinear nonconvex optimization
Bayesian group selection in logistic regression with application to MRI data analysis
Byzantine-robust distributed sparse learning for \(M\)-estimation
A Critical Review of LASSO and Its Derivatives for Variable Selection Under Dependence Among Covariates
Distributed Sparse Composite Quantile Regression in Ultrahigh Dimensions
A communication-efficient method for ℓ0 regularization linear regression models
Global debiased DC estimations for biased estimators via pro forma regression
Nonparametric Functional Graphical Modeling Through Functional Additive Regression Operator
Clustering High-Dimensional Data via Feature Selection
Sparse approximation over the cube
Optimal false discovery control of minimax estimators
Densely connected sub-Gaussian linear structural equation model learning via \(\ell_1\)- and \(\ell_2\)-regularized regressions
Topological techniques in model selection
An ensemble EM algorithm for Bayesian variable selection
Scalable Bayesian high-dimensional local dependence learning
A power analysis for Model-X knockoffs with \(\ell_p\)-regularized statistics
Complexity analysis of Bayesian learning of high-dimensional DAG models and their equivalence classes
Distributed Decoding From Heterogeneous 1-Bit Compressive Measurements
Robust inference for high-dimensional single index models
A generalized knockoff procedure for FDR control in structural change detection
Nearly Dimension-Independent Sparse Linear Bandit over Small Action Spaces via Best Subset Selection
Robust High-Dimensional Regression with Coefficient Thresholding and Its Application to Imaging Data Analysis
A unified precision matrix estimation framework via sparse column-wise inverse operator under weak sparsity
On sparsity-inducing methods in system identification and state estimation
Robust Signal Recovery for High-Dimensional Linear Log-Contrast Models with Compositional Covariates
Sparse and simple structure estimation via prenet penalization
A joint estimation for the high-dimensional regression modeling on stratified data
Sparse quadratic classification rules via linear dimension reduction
Structured sparsity through convex optimization
A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
A general theory of concave regularization for high-dimensional sparse estimation problems
Performance bounds for parameter estimates of high-dimensional linear models with correlated errors