Information-Theoretic Limits on Sparsity Recovery in the High-Dimensional and Noisy Setting

DOI: 10.1109/TIT.2009.2032816
zbMath: 1367.94106
arXiv: math/0702301
OpenAlex: W2141556672
MaRDI QID: Q4974108

Martin J. Wainwright

Publication date: 8 August 2017

Published in: IEEE Transactions on Information Theory

Full work available at URL: https://arxiv.org/abs/math/0702301



Related Items

Fundamental barriers to high-dimensional regression with convex penalties
Sparse high-dimensional linear regression. Estimating squared error and a phase transition
Iterative algorithm for discrete structure recovery
Variable Selection With Second-Generation P-Values
Goodness-of-fit tests for high-dimensional Gaussian linear models
Compressive Classification: Where Wireless Communications Meets Machine Learning
Nonlinear Variable Selection via Deep Neural Networks
On stepwise pattern recovery of the fused Lasso
Sparse microwave imaging: principles and applications
Penalised robust estimators for sparse and high-dimensional linear models
Sparsity enabled cluster reduced-order models for control
Sharp recovery bounds for convex demixing, with applications
Distributed Recursive Estimation under Heavy-Tail Communication Noise
Capturing ridge functions in high dimensions from point queries
Support union recovery in high-dimensional multivariate regression
Supervised homogeneity fusion: a combinatorial approach
Sharp support recovery from noisy random measurements by \(\ell_1\)-minimization
Concentration of posterior model probabilities and normalized \({L_0}\) criteria
Sobolev duals for random frames and \(\varSigma \varDelta \) quantization of compressed sensing measurements
Recovery of partly sparse and dense signals
Fundamental limits of exact support recovery in high dimensions
Inferring large graphs using \(\ell_1\)-penalized likelihood
Minimax risks for sparse regressions: ultra-high dimensional phenomenons
On the asymptotic properties of the group lasso estimator for linear models
Honest variable selection in linear and logistic regression models via \(\ell _{1}\) and \(\ell _{1}+\ell _{2}\) penalization
Detection boundary in sparse regression
The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso)
Structured, sparse regression with application to HIV drug resistance
Which bridge estimator is the best for variable selection?
Sparse regression and support recovery with \(\mathbb{L}_2\)-boosting algorithms
Sparse regression: scalable algorithms and empirical performance
A discussion on practical considerations with sparse regression methodologies
Influences of preconditioning on the mutual coherence and the restricted isometry property of Gaussian/Bernoulli measurement matrices
Confidence Intervals for Low Dimensional Parameters in High Dimensional Linear Models
A numerical exploration of compressed sampling recovery
Greedy variance estimation for the LASSO
High-dimensional Gaussian model selection on a Gaussian design
Minimax posterior convergence rates and model selection consistency in high-dimensional DAG models based on sparse Cholesky factors
High-dimensional regression with unknown variance
A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
Tight conditions for consistency of variable selection in the context of high dimensionality
Variable selection consistency of Gaussian process regression
Approximation of generalized ridge functions in high dimensions
Sparse classification: a scalable discrete optimization perspective
The all-or-nothing phenomenon in sparse linear regression
Fast Best Subset Selection: Coordinate Descent and Local Combinatorial Optimization Algorithms
High-dimensional model recovery from random sketched data by exploring intrinsic sparsity
Preconditioning for orthogonal matching pursuit with noisy and random measurements: the Gaussian case
Sparse regression at scale: branch-and-bound rooted in first-order optimization
Minimax-optimal nonparametric regression in high dimensions
Sparse learning via Boolean relaxations