Uniformly valid confidence intervals post-model-selection
Abstract: We suggest general methods to construct asymptotically uniformly valid confidence intervals post-model-selection. The constructions are based on principles recently proposed by Berk et al. (2013). In particular, the candidate models used can be misspecified, the target of inference is model-specific, and coverage is guaranteed for any data-driven model selection procedure. After developing a general theory, we apply our methods to practically important situations where the candidate set of models, from which a working model is selected, consists of fixed-design homoskedastic or heteroskedastic linear models, or of binary regression models with general link functions. In an extensive simulation study, we find that the proposed confidence intervals perform remarkably well, even when compared to existing methods that are tailored only for specific model selection procedures.
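The Berk et al. (2013) principle underlying this construction can be illustrated with a small Monte Carlo sketch: in a fixed-design homoskedastic linear model, each submodel/coordinate pair yields a t-statistic that is a linear functional of the error vector, and a simultaneous constant K (the "PoSI" constant) is chosen so that the maximal |t| over all pairs is covered with probability 1 - alpha. The sketch below is a hypothetical illustration of that idea with a random design and known error variance, not the paper's actual procedure; all names and dimensions are assumptions.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fixed design: n observations, p candidate regressors.
n, p, alpha = 50, 3, 0.05
X = rng.standard_normal((n, p))

# For every nonempty submodel M and coordinate j in M, collect the unit
# vector l such that the (known-sigma) t-statistic equals l' eps.
directions = []
for k in range(1, p + 1):
    for M in itertools.combinations(range(p), k):
        XM = X[:, M]
        # Rows of the pseudo-inverse give the least-squares functionals.
        for row in np.linalg.pinv(XM):
            directions.append(row / np.linalg.norm(row))
L = np.array(directions)          # one unit vector per (M, j) pair

# Monte Carlo approximation of the simultaneous (PoSI-type) constant K:
# the (1 - alpha) quantile of max_{M,j} |l' eps| with eps ~ N(0, I_n).
reps = 20000
eps = rng.standard_normal((reps, n))
max_abs_t = np.abs(eps @ L.T).max(axis=1)
K = np.quantile(max_abs_t, 1 - alpha)

# K exceeds the single-coordinate normal quantile (about 1.96 for
# alpha = 0.05): the price of coverage under arbitrary model selection.
print(round(K, 2))
```

Because K is calibrated against the maximum over all submodel/coordinate pairs, the resulting intervals betahat ± K·se cover the model-specific target no matter which data-driven rule picked the working model.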
Recommendations
- Valid confidence intervals for post-model-selection predictors
- Upper bounds on the minimum coverage probability of confidence intervals in regression after model selection
- On various confidence intervals post-model-selection
- Valid post-selection inference in model-free linear regression
- Valid post-selection inference
Cites work
- scientific article; zbMATH DE number 3336465
- scientific article; zbMATH DE number 3346000
- scientific article; zbMATH DE number 3090543
- A Heteroskedasticity-Consistent Covariance Matrix Estimator and a Direct Test for Heteroskedasticity
- A Note on Infinitely Divisible Random Vectors
- Active sets of predictors for misspecified logistic regression
- Asymptotic Validity of F Tests for the Ordinary Linear Model and the Multiple Correlation Model
- Can one estimate the unconditional distribution of post-model-selection estimators?
- Confidence intervals for low dimensional parameters in high dimensional linear models
- Confidence sets based on sparse estimators are necessarily large
- Exact post-selection inference, with application to the Lasso
- Inference on treatment effects after selection among high-dimensional controls
- Least angle regression. (With discussion)
- Model selection and inference: facts and fiction
- Maximum Likelihood Estimation of Misspecified Models
- Maximum likelihood estimation in misspecified generalized linear models
- Model selection principles in misspecified models
- On asymptotically optimal confidence regions and tests for high-dimensional models
- On the Large-Sample Minimal Coverage Probability of Confidence Intervals After Model Selection
- On the existence and uniqueness of the maximum likelihood estimates for certain generalized linear models
- On the length of post-model-selection confidence intervals conditional on polyhedral constraints
- On various confidence intervals post-model-selection
- Performance limits for estimators of the risk or distribution of shrinkage-type estimators, and some general lower risk-bound results
- Post-selection inference for \(\ell_1\)-penalized likelihood models
- Robust regression: Asymptotics, conjectures and Monte Carlo
- Spherical Cap Packing Asymptotics and Rank-Extreme Detection
- Sufficient Conditions for the Consistency of Maximum Likelihood Estimation Despite Misspecification of Distribution in Multinomial Discrete Choice Models
- The finite-sample distribution of post-model-selection estimators and uniform versus nonuniform approximations
- Uniform asymptotic inference and the bootstrap after model selection
- Uniformly valid confidence intervals post-model-selection
- Valid confidence intervals for post-model-selection predictors
- Valid post-selection inference
Cited in (19)
- On various confidence intervals post-model-selection
- Simultaneous high-probability bounds on the false discovery proportion in structured, regression and online settings
- Uniform-in-submodel bounds for linear regression in a model-free framework
- Uniformly valid confidence intervals post-model-selection
- Post hoc confidence bounds on false positives using reference families
- Bootstrapping and sample splitting for high-dimensional, assumption-lean inference
- Asymptotic properties of the maximum likelihood and cross validation estimators for transformed Gaussian processes
- On the length of post-model-selection confidence intervals conditional on polyhedral constraints
- Post-model-selection inference in linear regression models: an integrated review
- Two sources of poor coverage of confidence intervals after model selection
- Post-selection inference via algorithmic stability
- Conditional predictive inference post model selection
- Valid post-selection inference in model-free linear regression
- Valid confidence intervals for post-model-selection predictors
- Bounds in \(L^1\) Wasserstein distance on the normal approximation of general M-estimators
- Optimal finite sample post-selection confidence distributions in generalized linear models
- Lasso Inference for High-Dimensional Time Series
- Assumption Lean Regression
- A (tight) upper bound for the length of confidence intervals with conditional coverage