Lasso-type recovery of sparse representations for high-dimensional data
DOI: 10.1214/07-AOS582 · zbMATH Open: 1155.62050 · arXiv: 0806.0145 · OpenAlex: W3106266785 · Wikidata: Q105584243 · Scholia: Q105584243 · MaRDI QID: Q1002157 · FDO: Q1002157
Authors: Nicolai Meinshausen, Bin Yu
Publication date: 25 February 2009
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/0806.0145
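The paper concerns ℓ2-consistent recovery of sparse coefficient vectors by the Lasso when the number of predictors exceeds the sample size. As a minimal illustration of that setting (not the authors' code; the dimensions, noise level, and penalty value below are arbitrary assumptions), a plain coordinate-descent Lasso recovering a sparse signal with \(p > n\):

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator S(z, t) = sign(z) * max(|z| - t, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_sweeps=200):
    """Coordinate descent for (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n   # per-coordinate curvature
    r = y.copy()                        # residual y - Xb (b = 0 initially)
    for _ in range(n_sweeps):
        for j in range(p):
            rho = X[:, j] @ r / n + col_sq[j] * b[j]
            b_new = soft_threshold(rho, lam) / col_sq[j]
            r += X[:, j] * (b[j] - b_new)   # keep residual in sync
            b[j] = b_new
    return b

# Sparse high-dimensional toy problem: p > n, only s nonzero coefficients.
rng = np.random.default_rng(0)
n, p, s = 100, 200, 5
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:s] = 3.0
y = X @ beta + 0.5 * rng.standard_normal(n)

b_hat = lasso_cd(X, y, lam=0.2)
```

With a suitable penalty level, the estimate is sparse and close to the true coefficient vector in ℓ2-norm even though the design is underdetermined, which is the kind of recovery guarantee the paper analyzes.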
Recommendations
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Necessary and sufficient conditions for variable selection consistency of the Lasso in high dimensions
- Adaptive Lasso for sparse high-dimensional regression models
- On the sensitivity of the Lasso to the number of predictor variables
MSC Classification
- Asymptotic properties of parametric estimators (62F12)
- Nonparametric regression and quantile regression (62G08)
- Linear regression; mixed models (62J05)
- Statistical ranking and selection procedures (62F07)
- Ridge regression; shrinkage estimators (Lasso) (62J07)
Cites Work
- The Adaptive Lasso and Its Oracle Properties
- Least angle regression. (With discussion)
- Simultaneous analysis of Lasso and Dantzig selector
- High-dimensional generalized linear models and the lasso
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- High-dimensional graphs and variable selection with the Lasso
- Model Selection and Estimation in Regression with Grouped Variables
- Persistence in high-dimensional linear predictor-selection and the virtue of overparametrization
- Asymptotics for Lasso-type estimators.
- Sparsity oracle inequalities for the Lasso
- A theory for multiresolution signal decomposition: the wavelet representation
- The Group Lasso for Logistic Regression
- Asymptotics of sample eigenstructure for a large dimensional spiked covariance model
- Optimally sparse representation in general (nonorthogonal) dictionaries via \(\ell_1\) minimization
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Relaxed Lasso
- Local operator theory, random matrices and Banach spaces.
- Decoding by Linear Programming
- Stable recovery of sparse overcomplete representations in the presence of noise
- Just relax: convex programming methods for identifying sparse signals in noise
- Aggregation for Gaussian regression
- For most large underdetermined systems of linear equations the minimal \(\ell_1\)-norm solution is also the sparsest solution
- Adaptive Lasso for Cox's proportional hazards model
- Recovery of Exact Sparse Representations in the Presence of Bounded Noise
- Greed is Good: Algorithmic Results for Sparse Approximation
- Sparse representations in unions of bases
- Using circulant symmetry to model featureless objects
- The resolution of closely adjacent spectral lines
Cited In (only showing first 100 items)
- Identifying small mean-reverting portfolios
- Learning sparse classifiers with difference of convex functions algorithms
- Adaptive estimation of the baseline hazard function in the Cox model by model selection, with high-dimensional covariates
- Estimation and variable selection in partial linear single index models with error-prone linear covariates
- Some sharp performance bounds for least squares regression with \(L_1\) regularization
- On the conditions used to prove oracle results for the Lasso
- Grouping strategies and thresholding for high dimensional linear models
- Regression on manifolds: estimation of the exterior derivative
- Lazy lasso for local regression
- Least squares after model selection in high-dimensional sparse models
- Sign-constrained least squares estimation for high-dimensional regression
- Adaptive Dantzig density estimation
- Influence measures and stability for graphical models
- Non-asymptotic oracle inequalities for the Lasso and group Lasso in high dimensional logistic model
- Sub-optimality of some continuous shrinkage priors
- Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization
- Simultaneous analysis of Lasso and Dantzig selector
- Selection of variables and dimension reduction in high-dimensional non-parametric regression
- High-dimensional generalized linear models and the lasso
- An analysis of penalized interaction models
- Multiresolution functional ANOVA for large-scale, many-input computer experiments
- The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso)
- \(\ell_1\)-penalized quantile regression in high-dimensional sparse models
- Graphical-model based high dimensional generalized linear models
- Minimax-optimal nonparametric regression in high dimensions
- A two-step method for estimating high-dimensional Gaussian graphical models
- SLOPE is adaptive to unknown sparsity and asymptotically minimax
- Estimation and variable selection with exponential weights
- High-dimensional variable screening and bias in subsequent inference, with an empirical comparison
- Near-ideal model selection by \(\ell _{1}\) minimization
- A comparison of the Lasso and marginal regression
- Variable selection in high-dimensional partly linear additive models
- Estimation for high-dimensional linear mixed-effects models using \(\ell_1\)-penalization
- \(\ell _{1}\)-regularized linear regression: persistence and oracle inequalities
- A new perspective on least squares under convex constraint
- On the residual empirical process based on the ALASSO in high dimensions and its functional oracle property
- Honest variable selection in linear and logistic regression models via \(\ell _{1}\) and \(\ell _{1}+\ell _{2}\) penalization
- On the asymptotic properties of the group lasso estimator for linear models
- Thresholding-based iterative selection procedures for model selection and shrinkage
- AIC for the Lasso in generalized linear models
- Consistency of Bayesian linear model selection with a growing number of parameters
- \(L_1\)-regularized least squares for support recovery of high dimensional single index models with Gaussian designs
- Two tales of variable selection for high dimensional regression: Screening and model building
- Nearly unbiased variable selection under minimax concave penalty
- Structured estimation for the nonparametric Cox model
- Regression analysis of locality preserving projections via sparse penalty
- Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators
- Stability Selection
- Extensions of stability selection using subsamples of observations and covariates
- The Lasso as an \(\ell _{1}\)-ball model selection procedure
- On asymptotically optimal confidence regions and tests for high-dimensional models
- Correlated variables in regression: clustering and sparse estimation
- Strong consistency of Lasso estimators
- Calibrating nonconvex penalized regression in ultra-high dimension
- Multiscale Change Point Inference
- Lasso Inference for High-Dimensional Time Series
- High-dimensional additive modeling
- Rates of convergence of the adaptive LASSO estimators to the oracle distribution and higher order refinements by the bootstrap
- Statistical significance in high-dimensional linear models
- Oracle efficient variable selection in random and fixed effects panel data models
- Confidence Intervals for Low Dimensional Parameters in High Dimensional Linear Models
- Regularization for Cox's proportional hazards model with NP-dimensionality
- General nonexact oracle inequalities for classes with a subexponential envelope
- High-dimensional regression with noisy and missing data: provable guarantees with nonconvexity
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
- Pivotal estimation via square-root lasso in nonparametric regression
- \(\ell_{1}\)-penalization for mixture regression models
- Autoregressive process modeling via the Lasso procedure
- Variable selection in high-dimensional quantile varying coefficient models
- Performance bounds for parameter estimates of high-dimensional linear models with correlated errors
- Estimation in high-dimensional linear models with deterministic design matrices
- High-dimensional variable selection
- Support union recovery in high-dimensional multivariate regression
- \(\ell_1\)-regularization of high-dimensional time-series models with non-Gaussian and heteroskedastic errors
- Oracle inequalities for the lasso in the Cox model
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Least angle and \(\ell _{1}\) penalized regression: a review
- A cluster elastic net for multivariate regression
- Semiparametric efficiency bounds for high-dimensional models
- Semi-analytic resampling in Lasso
- Discussion: One-step sparse estimates in nonconcave penalized likelihood models
- Variable selection in nonparametric additive models
- A general theory of concave regularization for high-dimensional sparse estimation problems
- Discussion of: ``Grouping strategies and thresholding for high dimensional linear models''
- Recovery of partly sparse and dense signals
- The smooth-Lasso and other \(\ell _{1}+\ell _{2}\)-penalized methods
- Regularization and the small-ball method. I: Sparse recovery
- Model selection consistency of Lasso for empirical data
- Generalized Kalman smoothing: modeling and algorithms
- Sharp support recovery from noisy random measurements by \(\ell_1\)-minimization
- Greedy variance estimation for the LASSO
- An \(L_1\)-regularized logistic model for detecting short-term neuronal interactions
- Penalised robust estimators for sparse and high-dimensional linear models
- High dimensional single index models
- Cross-validation with confidence
- Robust machine learning by median-of-means: theory and practice
- Minimization of \(L_1\) over \(L_2\) for Sparse Signal Recovery with Convergence Guarantee
- The Lasso for High Dimensional Regression with a Possible Change Point
- Convex biclustering
- A Critical Review of LASSO and Its Derivatives for Variable Selection Under Dependence Among Covariates