A study of error variance estimation in Lasso regression
From MaRDI portal
Publication:3465093
DOI: 10.5705/ss.2014.042 · zbMath: 1372.62023 · arXiv: 1311.5274 · OpenAlex: W2963121741 · MaRDI QID: Q3465093
Stephen Reid, Robert Tibshirani, Jerome H. Friedman
Publication date: 28 January 2016
Published in: Statistica Sinica
Full work available at URL: https://arxiv.org/abs/1311.5274
Related Items
Sparse matrix linear models for structured high-throughput data
Bayesian jackknife empirical likelihood for the error variance in linear regression models
Doubly debiased Lasso: high-dimensional inference under hidden confounding
Model Selection With Lasso-Zero: Adding Straw to the Haystack to Better Find Needles
Projection-based Inference for High-dimensional Linear Models
Degrees of freedom for piecewise Lipschitz estimators
Projective inference in high-dimensional problems: prediction and feature selection
A Critical Review of LASSO and Its Derivatives for Variable Selection Under Dependence Among Covariates
A sequential modeling approach for predicting clinical outcomes with repeated measures
Densely connected sub-Gaussian linear structural equation model learning via \(\ell_1\)- and \(\ell_2\)-regularized regressions
Robust variable selection and estimation via adaptive elastic net S-estimators for linear regression
High-dimensional simultaneous inference with the bootstrap
Prediction error after model search
Selective Inference for Hierarchical Clustering
Adapting to unknown noise level in sparse deconvolution
Goodness-of-Fit Tests for High Dimensional Linear Models
Variational Bayes for High-Dimensional Linear Regression With Sparse Priors
An algorithm for the multivariate group lasso with covariance estimation
A High-dimensional Focused Information Criterion
Debiasing the Lasso: optimal sample size for Gaussian designs
Variable Selection Using a Smooth Information Criterion for Distributional Regression Models
Greedy variance estimation for the LASSO
Quasi-Bayesian estimation of large Gaussian graphical models
High-dimensional inference: confidence intervals, \(p\)-values and R-software \texttt{hdi}
Minimax posterior convergence rates and model selection consistency in high-dimensional DAG models based on sparse Cholesky factors
Inference without compatibility: using exponential weighting for inference on a parameter of a linear model
A study on tuning parameter selection for the high-dimensional lasso
A permutation approach for selecting the penalty parameter in penalized model selection
In defense of the indefensible: a very naïve approach to high-dimensional inference
Confidence intervals for parameters in high-dimensional sparse vector autoregression
Two-sample testing of high-dimensional linear regression coefficients via complementary sketching
Uses Software