Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using $\ell_1$-Constrained Quadratic Programming (Lasso)
DOI: 10.1109/TIT.2009.2016018
zbMath: 1367.62220
MaRDI QID: Q4975847
Publication date: May 2009
Published in: IEEE Transactions on Information Theory
MSC classifications:
Ridge regression; shrinkage estimators (Lasso) (62J07)
Quadratic programming (90C20)
Signal theory (characterization, reconstruction, filtering, etc.) (94A12)
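The publication analyzes exact support recovery for the Lasso, the \(\ell_1\)-penalized quadratic program

\[
\hat{\beta} \in \arg\min_{\beta \in \mathbb{R}^p} \left\{ \frac{1}{2n} \lVert y - X\beta \rVert_2^2 + \lambda_n \lVert \beta \rVert_1 \right\},
\]

where \(y = X\beta^* + w\) for a \(k\)-sparse \(\beta^* \in \mathbb{R}^p\) and Gaussian noise \(w\). For the standard Gaussian design, the paper establishes a sharp threshold: support recovery succeeds with high probability when \(n > 2k\log(p-k)\) and fails below this scaling (up to the precise constants, regularization choice, and minimum-signal conditions stated in the paper).

A minimal simulation sketch of this threshold, assuming a standard Gaussian design and using scikit-learn's coordinate-descent Lasso; every parameter choice below (dimensions, signal strength, noise level, regularization constant) is an illustrative assumption, not a value taken from the paper:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
p, k = 512, 8                      # ambient dimension and sparsity level
n = int(2.5 * k * np.log(p - k))   # sample size above the ~2k*log(p-k) threshold

# k-sparse ground truth with well-separated nonzero entries
beta_star = np.zeros(p)
support = rng.choice(p, size=k, replace=False)
beta_star[support] = 1.0

# standard Gaussian design and additive Gaussian noise
X = rng.standard_normal((n, p))
sigma = 0.25
y = X @ beta_star + sigma * rng.standard_normal(n)

# regularization on the scale sigma * sqrt(log(p) / n); the constant 2.0
# is a heuristic choice rather than one prescribed by the paper
lam = 2.0 * sigma * np.sqrt(np.log(p) / n)
fit = Lasso(alpha=lam, max_iter=50_000).fit(X, y)

recovered = set(np.flatnonzero(fit.coef_))
print("exact support recovery:", recovered == set(support))
```

Rerunning with \(n\) well below \(2k\log(p-k)\) should make recovery fail on most draws, illustrating the failure half of the threshold.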
Related Items (first 100 shown)
Investigating competition in financial markets: a sparse autologistic model for dynamic network data
Variable Selection With Second-Generation P-Values
An algorithm for quadratic ℓ1-regularized optimization with a flexible active-set strategy
Recovering Structured Signals in Noise: Least-Squares Meets Compressed Sensing
Compressive Classification: Where Wireless Communications Meets Machine Learning
The Noise Collector for sparse recovery in high dimensions
Nonnegative elastic net and application in index tracking
High-dimensional change-point estimation: combining filtering with convex optimization
Rates of convergence of the adaptive elastic net and the post-selection procedure in ultra-high dimensional sparse models
Large-scale multivariate sparse regression with applications to UK Biobank
Adaptive Bayesian SLOPE: Model Selection With Incomplete Data
Robust post-selection inference of high-dimensional mean regression with heavy-tailed asymmetric or heteroskedastic errors
Monte Carlo Simulation for Lasso-Type Problems by Estimator Augmentation
A convex optimization framework for the identification of homogeneous reaction systems
Adaptive multi-penalty regularization based on a generalized Lasso path
Statistical inference for model parameters in stochastic gradient descent
Sparse high-dimensional regression: exact scalable algorithms and phase transitions
Perspective functions: proximal calculus and applications in high-dimensional statistics
On estimation error bounds of the Elastic Net when p ≫ n
A robust high dimensional estimation of a finite mixture of the generalized linear model
Review of Bayesian selection methods for categorical predictors using JAGS
High-dimensional dynamic systems identification with additional constraints
Pairwise sparse + low-rank models for variables of mixed type
Quadratic growth conditions and uniqueness of optimal solution to Lasso
Asymptotic Theory of \(\boldsymbol{\ell}_1\)-Regularized PDE Identification from a Single Noisy Trajectory
L1-norm-based principal component analysis with adaptive regularization
Predictor ranking and false discovery proportion control in high-dimensional regression
Online sparse identification for regression models
Learning rates for partially linear functional models with high dimensional scalar covariates
Debiasing the debiased Lasso with bootstrap
A simple homotopy proximal mapping algorithm for compressive sensing
When Ramanujan meets time-frequency analysis in complicated time series analysis
Support union recovery in high-dimensional multivariate regression
Statistical analysis of sparse approximate factor models
Recovery of partly sparse and dense signals
Fundamental limits of exact support recovery in high dimensions
Multi-stage convex relaxation for feature selection
Optimal Sparse Linear Prediction for Block-missing Multi-modality Data Without Imputation
RIPless compressed sensing from anisotropic measurements
Sparse directed acyclic graphs incorporating the covariates
A relaxed-PPA contraction method for sparse signal recovery
Which bridge estimator is the best for variable selection?
Independently Interpretable Lasso for Generalized Linear Models
An unbiased approach to compressed sensing
Estimation and variable selection with exponential weights
A Bootstrap Lasso + Partial Ridge Method to Construct Confidence Intervals for Parameters in High-dimensional Sparse Linear Models
Consistency of \(\ell_1\) recovery from noisy deterministic measurements
Sparse regression: scalable algorithms and empirical performance
A discussion on practical considerations with sparse regression methodologies
A look at robustness and stability of \(\ell_1\)- versus \(\ell_0\)-regularization: discussion of papers by Bertsimas et al. and Hastie et al.
Rejoinder: "Sparse regression: scalable algorithms and empirical performance"
A Tuning-free Robust and Efficient Approach to High-dimensional Regression
A framework for solving mixed-integer semidefinite programs
Adaptive Huber Regression
CHAOTIC ANALOG-TO-INFORMATION CONVERSION: PRINCIPLE AND RECONSTRUCTABILITY WITH PARAMETER IDENTIFIABILITY
A significance test for the lasso
Discussion: "A significance test for the lasso"
Rejoinder: "A significance test for the lasso"
Pivotal estimation via square-root lasso in nonparametric regression
Truncated $L^1$ Regularized Linear Regression: Theory and Algorithm
A Tight Bound of Hard Thresholding
High-Dimensional Sparse Additive Hazards Regression
Sparse semiparametric discriminant analysis
High-dimensional variable screening and bias in subsequent inference, with an empirical comparison
Iterative reweighted noninteger norm regularizing SVM for gene expression data classification
Confidence Intervals for Low Dimensional Parameters in High Dimensional Linear Models
A global homogeneity test for high-dimensional linear regression
A numerical exploration of compressed sampling recovery
Prediction error bounds for linear regression with the TREX
Boosting with structural sparsity: a differential inclusion approach
Prediction and estimation consistency of sparse multi-class penalized optimal scoring
Variable selection via adaptive false negative control in linear regression
Sorted concave penalized regression
Minimax posterior convergence rates and model selection consistency in high-dimensional DAG models based on sparse Cholesky factors
Low Complexity Regularization of Linear Inverse Problems
BAYESIAN HYPER-LASSOS WITH NON-CONVEX PENALIZATION
Model Selection for High-Dimensional Quadratic Regression via Regularization
Variable Selection for Nonparametric Learning with Power Series Kernels
Approximate support recovery of atomic line spectral estimation: a tale of resolution and precision
High-dimensional regression in practice: an empirical study of finite-sample prediction, variable selection and ranking
A two-stage sequential conditional selection approach to sparse high-dimensional multivariate regression models
Asymptotic theory of the adaptive sparse group Lasso
Simple expressions of the LASSO and SLOPE estimators in low-dimension
A Mixed-Integer Fractional Optimization Approach to Best Subset Selection
On the Use of the Lasso for Instrumental Variables Estimation with Some Invalid Instruments
Robust controllability assessment and optimal actuator placement in dynamic networks
Sparsistency and agnostic inference in sparse PCA
Stability Selection
On model selection consistency of regularized M-estimators
Lasso penalized semiparametric regression on high-dimensional recurrent event data via coordinate descent
Minimax-optimal nonparametric regression in high dimensions
Sparse learning via Boolean relaxations