High-dimensional variable selection
DOI: 10.1214/08-AOS646
zbMath: 1173.62054
arXiv: 0704.1139
OpenAlex: W3103643510
Wikidata: Q37362959 (Scholia: Q37362959)
MaRDI QID: Q834336
Kathryn Roeder, Larry Alan Wasserman
Publication date: 19 August 2009
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/0704.1139
Mathematics Subject Classification:
- Asymptotic properties of parametric estimators (62F12)
- Estimation in multivariate analysis (62H12)
- Ridge regression; shrinkage estimators (Lasso) (62J07)
- Linear regression; mixed models (62J05)
- Applications of statistics to biology and medical sciences; meta analysis (62P10)
- Point estimation (62F10)
Related Items
- On the impact of model selection on predictor identification and parameter inference
- LOL selection in high dimension
- The revisited knockoffs method for variable selection in L1-penalized regressions
- Gaussian Bayesian network comparisons with graph ordering unknown
- Projection-based high-dimensional sign test
- Iterative algorithm for discrete structure recovery
- Self-semi-supervised clustering for large scale data with massive null group
- Asymptotics for high dimensional regression \(M\)-estimates: fixed design results
- A unified theory of confidence regions and testing for high-dimensional estimating equations
- Variable Selection With Second-Generation P-Values
- Post-model-selection inference in linear regression models: an integrated review
- An ensemble learning method for variable selection: application to high-dimensional data and missing values
- The benefit of group sparsity in group inference with de-biased scaled group Lasso
- Screening-based Bregman divergence estimation with NP-dimensionality
- Thresholding least-squares inference in high-dimensional regression models
- Thresholding tests based on affine Lasso to achieve non-asymptotic nominal level and high power under sparse and dense alternatives in high dimension
- Estimation for High-Dimensional Linear Mixed-Effects Models Using ℓ1-Penalization
- The predictive power of the business and bank sentiment of firms: a high-dimensional Granger causality approach
- Projection-based Inference for High-dimensional Linear Models
- Estimation and Inference of Heterogeneous Treatment Effects using Random Forests
- Robust stability best subset selection for autocorrelated data based on robust location and dispersion estimator
- An optimal projection test for zero multiple correlation coefficient in high-dimensional normal data
- The Holdout Randomization Test for Feature Selection in Black Box Models
- Statistical significance in high-dimensional linear models
- The geometry of least squares in the 21st century
- Conditional Test for Ultrahigh Dimensional Linear Regression Coefficients
- Estimating and testing conditional sums of means in high dimensional multivariate binary data
- Hierarchical inference for genome-wide association studies: a view on methodology with software
- Predictor ranking and false discovery proportion control in high-dimensional regression
- Debiasing the debiased Lasso with bootstrap
- Tests for high-dimensional single-index models
- Variable selection procedures from multiple testing
- Classifier variability: accounting for training and testing
- High-dimensional simultaneous inference with the bootstrap
- Testing covariates in high dimension linear regression with latent factors
- Statistical learning and selective inference
- SLOPE-adaptive variable selection via convex optimization
- Sharp support recovery from noisy random measurements by \(\ell_1\)-minimization
- Goodness-of-Fit Tests for High Dimensional Linear Models
- Fundamental limits of exact support recovery in high dimensions
- Honest variable selection in linear and logistic regression models via \(\ell_1\) and \(\ell_1+\ell_2\) penalization
- The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso)
- UPS delivers optimal phase diagram in high-dimensional variable selection
- Which bridge estimator is the best for variable selection?
- Exact tests via multiple data splitting
- Bayesian high-dimensional screening via MCMC
- Dynamic tilted current correlation for high dimensional variable screening
- Empirical likelihood test for high dimensional linear models
- Covariate assisted screening and estimation
- Penalized weighted composite quantile regression in the linear regression model with heavy-tailed autocorrelated errors
- Selecting massive variables using an iterated conditional modes/medians algorithm
- High-dimensional inference in misspecified linear models
- Feature selection in finite mixture of sparse normal linear models in high-dimensional feature space
- Debiased Inference on Treatment Effect in a High-Dimensional Model
- Accelerating a Gibbs sampler for variable selection on genomics data with summarization and variable pre-selection combining an array DBMS and R
- Convex and non-convex regularization methods for spatial point processes intensity estimation
- Debiasing the Lasso: optimal sample size for Gaussian designs
- Selective inference with a randomized response
- Controlling the false-discovery rate by procedures adapted to the length bias of RNA-seq
- Beyond support in two-stage variable selection
- A significance test for the lasso
- Discussion: ``A significance test for the lasso''
- Rejoinder: ``A significance test for the lasso''
- Projection tests for high-dimensional spiked covariance matrices
- High-dimensional variable screening and bias in subsequent inference, with an empirical comparison
- Inference under Fine-Gray competing risks model with high-dimensional covariates
- Two-directional simultaneous inference for high-dimensional models
- A global homogeneity test for high-dimensional linear regression
- On asymptotically optimal confidence regions and tests for high-dimensional models
- Confidence intervals for high-dimensional inverse covariance estimation
- High-dimensional inference: confidence intervals, \(p\)-values and R-software \texttt{hdi}
- Bootstrapping and sample splitting for high-dimensional, assumption-lean inference
- Endogeneity in high dimensions
- Factor-adjusted multiple testing of correlations
- False Discovery Rate Control Under General Dependence By Symmetrized Data Aggregation
- Detection of gene-gene interactions using multistage sparse and low-rank regression
- Variable selection after screening: with or without data splitting?
- Variable selection with Hamming loss
- Two-sample spatial rank test using projection
- Variable selection for longitudinal data with high-dimensional covariates and dropouts
- Multicarving for high-dimensional post-selection inference
- Tolerance intervals from ridge regression in the presence of multicollinearity and high dimension
- Tight conditions for consistency of variable selection in the context of high dimensionality
- Inference for \(L_2\)-boosting
- Selective inference via marginal screening for high dimensional classification
- Selection of the Regularization Parameter in Graphical Models Using Network Characteristics
- Exact model comparisons in the plausibility framework
- Sure independence screening in the presence of missing data
- Principled sure independence screening for Cox models with ultra-high-dimensional covariates
- A scalable nonparametric specification testing for massive data
- Two-Stage Procedures for High-Dimensional Data
- In defense of the indefensible: a very naïve approach to high-dimensional inference
- Mining events with declassified diplomatic documents
- Spatially relaxed inference on high-dimensional linear models
- Network differential connectivity analysis
- Spectral analysis of high-dimensional time series
- A knockoff filter for high-dimensional selective inference
- Markov Neighborhood Regression for High-Dimensional Inference
- Covariate Information Number for Feature Screening in Ultrahigh-Dimensional Supervised Problems
- Causal Interaction in Factorial Experiments: Application to Conjoint Analysis
- Statistical Inference for High-Dimensional Models via Recursive Online-Score Estimation
- Compositional knockoff filter for high-dimensional regression analysis of microbiome data
- Cellwise outlier detection with false discovery rate control
- A Critical Review of LASSO and Its Derivatives for Variable Selection Under Dependence Among Covariates
- Automatic bias correction for testing in high-dimensional linear models
- Screening-assisted dynamic multiple testing with false discovery rate control
- Threshold Selection in Feature Screening for Error Rate Control
- Feature Screening with Latent Responses
- A Dirichlet-Tree Multinomial Regression Model for Associating Dietary Nutrients with Gut Microorganisms
- Structure learning of exponential family graphical model with false discovery rate control
- Predictive quantile regression with mixed roots and increasing dimensions: the ALQR approach
- Double bias correction for high-dimensional sparse additive hazards regression with covariate measurement errors
- A new data adaptive elastic net predictive model using hybridized smoothed covariance estimators with information complexity
- Simultaneous test for linear model via projection
- Controlling False Discovery Rate Using Gaussian Mirrors
- Scalable and efficient inference via CPE
- Data-driven selection of the number of change-points via error rate control
- Comparing dependent undirected Gaussian networks
- False Discovery Rate Control via Data Splitting
- Inference for high-dimensional linear models with locally stationary error processes
- A generalized knockoff procedure for FDR control in structural change detection
- Inference for sparse linear regression based on the leave-one-covariate-out solution path
- Derandomizing Knockoffs
- Cross-Validation With Confidence
- Kernel Meets Sieve: Post-Regularization Confidence Bands for Sparse Additive Model
- Feature Screening for Network Autoregression Model
- Consistent parameter estimation for Lasso and approximate message passing
- High-dimensional linear model selection motivated by multiple testing
- A stepwise regression algorithm for high-dimensional variable selection
- Robust Variable and Interaction Selection for Logistic Regression and General Index Models
- Covariate-Assisted Ranking and Screening for Large-Scale Two-Sample Inference
- Discussion: ``A significance test for the lasso''
- Discussion: ``A significance test for the lasso''
- Discussion: ``A significance test for the lasso''
- Discussion: ``A significance test for the lasso''
- Discussion: ``A significance test for the lasso''
- High-dimensional statistical inference via DATE
Cites Work
- The Adaptive Lasso and Its Oracle Properties
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Lasso-type recovery of sparse representations for high-dimensional data
- Relaxed Lasso
- Persistence in high-dimensional linear predictor-selection and the virtue of overparametrization
- Least angle regression. (With discussion)
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- Approximation and learning by greedy algorithms
- Boosting for high-dimensional linear models
- High-dimensional graphs and variable selection with the Lasso
- Uniform consistency in causal inference
- Greed is Good: Algorithmic Results for Sparse Approximation
- Just relax: convex programming methods for identifying sparse signals in noise
- For most large underdetermined systems of linear equations the minimal \(\ell_1\)-norm solution is also the sparsest solution