Confidence Intervals and Hypothesis Testing for High-Dimensional Regression
From MaRDI portal
Publication: 2934105
zbMATH: 1319.62145
arXiv: 1306.3171
MaRDI QID: Q2934105
Adel Javanmard, Andrea Montanari
Publication date: 8 December 2014
Full work available at URL: https://arxiv.org/abs/1306.3171
MSC classifications:
- Asymptotic properties of parametric estimators (62F12)
- Parametric tolerance and confidence regions (62F25)
- Ridge regression; shrinkage estimators (Lasso) (62J07)
- Applications of statistics to biology and medical sciences; meta analysis (62P10)
- Genetics and epigenetics (92D10)
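For context, the publication's central technique is the debiased (de-sparsified) Lasso: correct the Lasso estimate by a one-step term built from the residuals, yielding an asymptotically Gaussian estimator from which per-coordinate confidence intervals follow. The snippet below is a minimal illustrative sketch, not the paper's exact procedure: it assumes an i.i.d. standard-normal design (population covariance Σ = I), so the identity matrix is used in place of the decorrelating matrix M that the paper obtains by convex optimization.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Synthetic sparse regression problem (illustrative assumptions:
# standard-normal design, noise sd = 1, 5 active coefficients).
rng = np.random.default_rng(0)
n, p, s = 200, 50, 5
X = rng.standard_normal((n, p))
theta = np.zeros(p)
theta[:s] = 2.0
y = X @ theta + rng.standard_normal(n)

# Lasso fit (no intercept, since the model has none).
theta_hat = Lasso(alpha=0.1, fit_intercept=False).fit(X, y).coef_

# Debiasing step: theta_u = theta_hat + (1/n) * M X^T (y - X theta_hat).
# With Sigma = I, taking M = I is a valid (simplified) choice.
M = np.eye(p)
resid = y - X @ theta_hat
theta_u = theta_hat + M @ X.T @ resid / n

# Plug-in 95% confidence intervals: theta_u_i +/- 1.96 * se_i,
# with se_i = sigma_hat * sqrt((M Sigma_hat M^T)_{ii} / n).
sigma_hat = np.linalg.norm(resid) / np.sqrt(n)
Sigma_hat = X.T @ X / n
se = sigma_hat * np.sqrt(np.diag(M @ Sigma_hat @ M.T) / n)
ci_low, ci_high = theta_u - 1.96 * se, theta_u + 1.96 * se
```

Unlike the raw Lasso coefficients, the debiased estimates are not shrunk toward zero, so standard normal-theory intervals and p-values can be read off coordinate-wise.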
Related Items (only the first 100 items are listed)
- Powerful knockoffs via minimizing reconstructability
- Worst possible sub-directions in high-dimensional models
- Covariate-adjusted inference for differential analysis of high-dimensional networks
- Statistical inference in sparse high-dimensional additive models
- Significance testing in non-sparse high-dimensional linear models
- Familywise error rate control via knockoffs
- Mathematical foundations of machine learning. Abstracts from the workshop held March 21--27, 2021 (hybrid meeting)
- Testability of high-dimensional linear models with nonsparse structures
- Inference for low-rank tensors -- no need to debias
- Confidence intervals for high-dimensional partially linear single-index models
- Recent advances in statistical methodologies in evaluating program for high-dimensional data
- Exact post-selection inference, with application to the Lasso
- Sparse matrix linear models for structured high-throughput data
- A unified theory of confidence regions and testing for high-dimensional estimating equations
- Semi-supervised empirical risk minimization: using unlabeled data to improve prediction
- High-dimensional sufficient dimension reduction through principal projections
- De-biasing the Lasso with degrees-of-freedom adjustment
- Doubly robust semiparametric inference using regularized calibrated estimation with high-dimensional data
- The asymptotic distribution of the MLE in high-dimensional logistic models: arbitrary covariance
- Post-model-selection inference in linear regression models: an integrated review
- Meta-analytic Gaussian network aggregation
- The benefit of group sparsity in group inference with de-biased scaled group Lasso
- Geometric inference for general high-dimensional linear inverse problems
- Thresholding tests based on affine Lasso to achieve non-asymptotic nominal level and high power under sparse and dense alternatives in high dimension
- Regression analysis for microbiome compositional data
- AIC for the Lasso in generalized linear models
- Doubly debiased Lasso: high-dimensional inference under hidden confounding
- Distributed testing and estimation under sparse high dimensional models
- Hierarchical correction of \(p\)-values via an ultrametric tree running Ornstein-Uhlenbeck process
- Sparse linear models and \(l_1\)-regularized 2SLS with high-dimensional endogenous regressors and instruments
- Single-index composite quantile regression for ultra-high-dimensional data
- A unifying framework of high-dimensional sparse estimation with difference-of-convex (DC) regularizations
- Variable selection for high dimensional Gaussian copula regression model: an adaptive hypothesis testing procedure
- Robust post-selection inference of high-dimensional mean regression with heavy-tailed asymmetric or heteroskedastic errors
- Statistical inference for model parameters in stochastic gradient descent
- Efficient estimation of linear functionals of principal components
- Hierarchical inference for genome-wide association studies: a view on methodology with software
- Controlling the false discovery rate via knockoffs
- Inference for high-dimensional instrumental variables regression
- Debiasing the debiased Lasso with bootstrap
- Detangling robustness in high dimensions: composite versus model-averaged estimation
- High-dimensional simultaneous inference with the bootstrap
- Rejoinder on: ``High-dimensional simultaneous inference with the bootstrap''
- SLOPE-adaptive variable selection via convex optimization
- Statistical unfolding of elementary particle spectra: empirical Bayes estimation and bias-corrected uncertainty quantification
- Asymptotically honest confidence regions for high dimensional parameters by the desparsified conservative Lasso
- Relaxing the assumptions of knockoffs by conditioning
- Analysis of phenotypic- and estimated breeding values (EBV) to dissect the genetic architecture of complex traits in a Scots pine three-generation pedigree design
- Ill-posed estimation in high-dimensional models with instrumental variables
- High-dimensional inference in misspecified linear models
- Robust regression with compositional covariates
- Innovated scalable efficient inference for ultra-large graphical models
- ROCKET: robust confidence intervals via Kendall's tau for transelliptical graphical models
- Uniformly valid post-regularization confidence regions for many functional parameters in z-estimation framework
- Debiasing the Lasso: optimal sample size for Gaussian designs
- Adaptive estimation of high-dimensional signal-to-noise ratios
- Online rules for control of false discovery rate and false discovery exceedance
- On sequential confidence estimation of parameters of stochastic dynamical systems with conditionally Gaussian noises
- Selective inference with a randomized response
- Uniformly valid confidence sets based on the Lasso
- Usage of the GO estimator in high dimensional linear models
- A significance test for the lasso
- Discussion: ``A significance test for the lasso''
- Rejoinder: ``A significance test for the lasso''
- Nonparametric inference via bootstrapping the debiased estimator
- Group Inference in High Dimensions with Applications to Hierarchical Testing
- Rate optimal estimation and confidence intervals for high-dimensional regression with missing covariates
- A global homogeneity test for high-dimensional linear regression
- Testing Endogeneity with High Dimensional Covariates
- Model-assisted inference for treatment effects using regularized calibrated estimation with high-dimensional data
- On asymptotically optimal confidence regions and tests for high-dimensional models
- Lasso Inference for High-Dimensional Time Series
- Distribution-Free Predictive Inference For Regression
- Confidence intervals for high-dimensional inverse covariance estimation
- Testing for high-dimensional network parameters in auto-regressive models
- Variable selection via adaptive false negative control in linear regression
- High-dimensional inference: confidence intervals, \(p\)-values and R-software \texttt{hdi}
- Inference in high dimensional linear measurement error models
- Optimal sparsity testing in linear regression model
- Inference without compatibility: using exponential weighting for inference on a parameter of a linear model
- Flexible and Interpretable Models for Survival Data
- Semiparametric efficiency bounds for high-dimensional models
- Non-asymptotic error controlled sparse high dimensional precision matrix estimation
- Multicarving for high-dimensional post-selection inference
- Honest confidence regions and optimality in high-dimensional precision matrix estimation
- The de-biased group Lasso estimation for varying coefficient models
- Second-order Stein: SURE for SURE and other applications in high-dimensional inference
- The distribution of the Lasso: uniform control over sparse balls and adaptive parameter tuning
- Augmented minimax linear estimation
- Scale calibration for high-dimensional robust regression
- High-dimensional inference for linear model with correlated errors
- In defense of the indefensible: a very naïve approach to high-dimensional inference
- Some perspectives on inference in high dimensions
- A convex programming solution based debiased estimator for quantile with missing response and high-dimensional covariables
- Confidence intervals for parameters in high-dimensional sparse vector autoregression
- Spatially relaxed inference on high-dimensional linear models
- Network differential connectivity analysis
- Asymptotic normality of robust \(M\)-estimators with convex penalty
- Design of c-optimal experiments for high-dimensional linear models
- High-dimensional linear models with many endogenous variables
This page was built for publication: Confidence Intervals and Hypothesis Testing for High-Dimensional Regression