Consistent tuning parameter selection in high dimensional sparse linear regression
DOI: 10.1016/J.JMVA.2011.03.007
zbMATH Open: 1216.62103
OpenAlex: W2078536949
MaRDI QID: Q548648
FDO: Q548648
Publication date: 29 June 2011
Published in: Journal of Multivariate Analysis
Full work available at URL: https://doi.org/10.1016/j.jmva.2011.03.007
variable selection; high dimensionality; sure independence screening; adaptive elastic net; Bayesian information criterion
Bayesian inference (62F15) Asymptotic properties of nonparametric inference (62G20) Linear regression; mixed models (62J05) Estimation in multivariate analysis (62H12)
Cites Work
- Estimating the dimension of a model
- The Adaptive Lasso and Its Oracle Properties
- Least angle regression. (With discussion)
- Extended Bayesian information criteria for model selection with large model spaces
- Ideal spatial adaptation by wavelet shrinkage
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Title not available
- Lasso-type recovery of sparse representations for high-dimensional data
- Sure Independence Screening for Ultrahigh Dimensional Feature Space
- Regularization and Variable Selection Via the Elastic Net
- Shrinkage tuning parameter selection with a diverging number of parameters
- A Selective Overview of Variable Selection in High Dimensional Feature Space (Invited Review Article)
- Tuning parameter selectors for the smoothly clipped absolute deviation method
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Nonconcave penalized likelihood with a diverging number of parameters.
- The risk inflation criterion for multiple regression
- Model selection in irregular problems: Applications to mapping quantitative trait loci
- A Model Selection Approach for the Identification of Quantitative Trait Loci in Experimental Crosses
- Title not available
- On the adaptive elastic net with a diverging number of parameters
- For most large underdetermined systems of linear equations the minimal \(\ell_1\)-norm solution is also the sparsest solution
- Forward Regression for Ultra-High Dimensional Variable Screening
Cited In (17)
- A Unified Framework for Change Point Detection in High-Dimensional Linear Models
- A robust and efficient variable selection method for linear regression
- Globally adaptive quantile regression with ultra-high dimensional data
- A modified information criterion for tuning parameter selection in 1d fused LASSO for inference on multiple change points
- Smooth predictive model fitting in regression
- Model selection in sparse high-dimensional vine copula models with an application to portfolio risk
- Penalized estimation of threshold auto-regressive models with many components and thresholds
- Tuning parameter selection in sparse regression modeling
- Variables selection using \(\mathcal{L}_0\) penalty
- Assessing Tuning Parameter Selection Variability in Penalized Regression
- Sparse group fused Lasso for model segmentation: a hybrid approach
- Regularized latent class analysis with application in cognitive diagnosis
- Variable selection and parameter estimation via WLAD-SCAD with a diverging number of parameters
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Title not available
- Cross-Validation With Confidence
- Tuning parameter selection for penalised empirical likelihood with a diverging number of parameters
Recommendations
- Tuning parameter selection in sparse regression modeling
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- A stepwise regression method and consistent model selection for high-dimensional sparse linear models
- A study on tuning parameter selection for the high-dimensional lasso
- Variable selection in high-dimensional sparse multiresponse linear regression models
- Sparse estimation via lower-order penalty optimization methods in high-dimensional linear regression
- Sparse high-dimensional linear regression. Estimating squared error and a phase transition
- Nearly optimal minimax estimator for high-dimensional sparse linear regression
- Estimating a sparse reduction for general regression in high dimensions
- Linear Regression With a Sparse Parameter Vector