Asymptotic properties of Lasso+mLS and Lasso+Ridge in sparse high-dimensional linear regression
Abstract: We study the asymptotic properties of Lasso+mLS and Lasso+Ridge under the sparse high-dimensional linear regression model: Lasso selects the predictors, and then modified Least Squares (mLS) or Ridge estimates their coefficients. First, we propose a valid inference procedure for parameter estimation based on parametric residual bootstrap after Lasso+mLS and Lasso+Ridge. Second, we derive the asymptotic unbiasedness of Lasso+mLS and Lasso+Ridge. More specifically, we show that their biases decay at an exponential rate and that they can achieve the oracle convergence rate of \(s/n\) (where \(s\) is the number of nonzero regression coefficients and \(n\) is the sample size) for mean squared error (MSE). Third, we show that Lasso+mLS and Lasso+Ridge are asymptotically normal. They have an oracle property in the sense that they can select the true predictors with probability converging to 1, and the estimates of the nonzero parameters have the same asymptotic normal distribution that they would have if the zero parameters were known in advance. In fact, our analysis is not limited to adopting Lasso in the selection stage, but applies to any other model selection criterion whose probability of selecting wrong models decays at an exponential rate.
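To make the two-stage procedure and the bootstrap concrete, here is a minimal Python sketch assuming numpy and scikit-learn. It is an illustration, not the paper's implementation: plain OLS stands in for the paper's modified Least Squares (mLS), and the tuning choices (lasso_alpha, ridge_alpha, the number of bootstrap replicates B, and the seed) are assumptions made for the example.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression, Ridge

def lasso_plus_second_stage(X, y, lasso_alpha=0.1, ridge_alpha=1.0, use_ridge=False):
    """Stage 1: Lasso selects a support; Stage 2: refit on that support only."""
    # Model y = X beta + noise, no intercept, as in the sparse linear model.
    support = np.flatnonzero(
        Lasso(alpha=lasso_alpha, fit_intercept=False).fit(X, y).coef_
    )
    beta = np.zeros(X.shape[1])
    if support.size == 0:
        return beta, support
    # Plain OLS stands in here for the paper's modified Least Squares (mLS);
    # use_ridge=True gives the Lasso+Ridge variant.
    refit = (Ridge(alpha=ridge_alpha, fit_intercept=False) if use_ridge
             else LinearRegression(fit_intercept=False))
    refit.fit(X[:, support], y)
    beta[support] = refit.coef_
    return beta, support

def residual_bootstrap(X, y, B=500, seed=0, **stage_kwargs):
    """Residual bootstrap after the two-stage fit: resample centered residuals,
    regenerate responses from the fitted model, and re-run the full procedure."""
    n, p = X.shape
    beta_hat, _ = lasso_plus_second_stage(X, y, **stage_kwargs)
    resid = y - X @ beta_hat
    resid = resid - resid.mean()  # center so bootstrap errors have mean zero
    rng = np.random.default_rng(seed)
    boot = np.empty((B, p))
    for b in range(B):
        y_star = X @ beta_hat + rng.choice(resid, size=n, replace=True)
        boot[b], _ = lasso_plus_second_stage(X, y_star, **stage_kwargs)
    return beta_hat, boot
```

Empirical quantiles of boot[:, j] then give percentile-type bootstrap confidence intervals for the j-th coefficient, which is the kind of inference the abstract's bootstrap procedure targets.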
Recommendations
- Adaptive Lasso for sparse high-dimensional regression models
- Least squares after model selection in high-dimensional sparse models
- Adaptive Lasso in high-dimensional settings
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Asymptotic properties of lasso in high-dimensional partially linear models
Cites work
- scientific article; zbMATH DE number 5957408 (no title available)
- scientific article; zbMATH DE number 3841086 (no title available)
- scientific article; zbMATH DE number 708500 (no title available)
- scientific article; zbMATH DE number 1104922 (no title available)
- scientific article; zbMATH DE number 845714 (no title available)
- A perturbation method for inference on regularized regression estimates
- A unified approach to model selection and sparse recovery using regularized least squares
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
- Adaptive Lasso for sparse high-dimensional regression models
- Asymptotic expansions for the power of distribution free tests in the two-sample problem
- Asymptotic properties of bridge estimators in sparse high-dimensional regression models
- Asymptotics for Lasso-type estimators.
- Bootstrap methods: another look at the jackknife
- Bootstrapping Lasso estimators
- Bootstrapping regression models
- Contributions of empirical and quantile processes to the asymptotic theory of goodness-of-fit tests. (With comments)
- Greed is Good: Algorithmic Results for Sparse Approximation
- High-dimensional generalized linear models and the lasso
- High-dimensional graphs and variable selection with the Lasso
- Honest variable selection in linear and logistic regression models via \(\ell _{1}\) and \(\ell _{1}+\ell _{2}\) penalization
- Lasso-type recovery of sparse representations for high-dimensional data
- Least angle regression. (With discussion)
- Least squares after model selection in high-dimensional sparse models
- MODEL SELECTION AND INFERENCE: FACTS AND FICTION
- Minimax Rates of Estimation for High-Dimensional Linear Regression Over $\ell_q$-Balls
- Model selection consistency of Dantzig selector
- On the distribution of the adaptive LASSO estimator
- Pathwise coordinate optimization
- Persistence in high-dimensional linear predictor selection and the virtue of overparametrization
- Rates of convergence of the adaptive LASSO estimators to the oracle distribution and higher order refinements by the bootstrap
- Recovery of Exact Sparse Representations in the Presence of Bounded Noise
- Regularization and Variable Selection Via the Elastic Net
- Relaxed Lasso
- Ridge Regression: Biased Estimation for Nonorthogonal Problems
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using $\ell _{1}$-Constrained Quadratic Programming (Lasso)
- Simultaneous analysis of Lasso and Dantzig selector
- Some asymptotic theory for the bootstrap
- Sparsity oracle inequalities for the Lasso
- Stable recovery of sparse overcomplete representations in the presence of noise
- Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators
- The Adaptive Lasso and Its Oracle Properties
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- \(\ell_1\)-penalized quantile regression in high-dimensional sparse models
Cited in (26)
- A bootstrap Lasso+partial ridge method to construct confidence intervals for parameters in high-dimensional sparse linear models
- High-dimensional inference: confidence intervals, \(p\)-values and R-software \texttt{hdi}
- Hierarchical inference for genome-wide association studies: a view on methodology with software
- Estimation in the presence of heteroskedasticity of unknown form: a Lasso-based approach
- A two-stage bridge estimator for regression models with endogeneity based on control function method
- Markov Neighborhood Regression for High-Dimensional Inference
- Projection-based Inference for High-dimensional Linear Models
- Change points detection and parameter estimation for multivariate time series
- Statistical Inference for High-Dimensional Models via Recursive Online-Score Estimation
- Post-model-selection inference in linear regression models: an integrated review
- Ridge regression revisited: debiasing, thresholding and bootstrap
- Random weighting in LASSO regression
- Exploiting Disagreement Between High-Dimensional Variable Selectors for Uncertainty Visualization
- Lasso meets horseshoe: a survey
- Asymptotically faster estimation of high-dimensional additive models using subspace learning
- Lasso in Infinite dimension: application to variable selection in functional multivariate linear regression
- Bootstrap confidence regions based on M-estimators under nonstandard conditions
- scientific article; zbMATH DE number 7626707 (no title available)
- Kernel meets sieve: post-regularization confidence bands for sparse additive model
- Confidence intervals for parameters in high-dimensional sparse vector autoregression
- Rejoinder on: ``High-dimensional simultaneous inference with the bootstrap''
- Comments on: ``High-dimensional simultaneous inference with the bootstrap''
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- High-dimensional simultaneous inference with the bootstrap
- Beyond support in two-stage variable selection
- Rates of convergence of the adaptive elastic net and the post-selection procedure in ultra-high dimensional sparse models