Distributionally robust and generalizable inference
From MaRDI portal
Publication: 6145146
Abstract: We discuss recently developed methods that quantify the stability and generalizability of statistical findings under distributional changes. In many practical problems, the data are not drawn i.i.d. from the target population: unobserved sampling bias, batch effects, or unknown associations may inflate the variance relative to i.i.d. sampling. Reliable statistical inference therefore has to account for these types of variation. We review two methods that quantify distributional stability from a single dataset. The first computes the sensitivity of a parameter under worst-case distributional perturbations, revealing which types of shift pose a threat to external validity. The second treats distributional shifts as random, which allows assessing average rather than worst-case robustness; based on a stability analysis of multiple estimators on a single dataset, it integrates both sampling and distributional uncertainty into a single confidence interval.
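The abstract does not spell out formulas, so the following is a minimal numerical sketch of the two ideas it describes, under stated assumptions that are mine, not the authors': the worst-case analysis is illustrated with the standard first-order bound for a mean over a KL-divergence ball of radius `rho`, and the random-shift analysis with a simple spread-across-estimators proxy added to the sampling variance. Both are stand-ins for the actual procedures in the paper.

```python
import numpy as np

# Illustrative data: one observed sample from some source population.
rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=5000)

# Method 1 (worst case): first-order sensitivity of the mean over all
# distributions Q within a KL-divergence ball of radius rho around P:
#   sup_{KL(Q||P) <= rho} E_Q[X]  ~  E_P[X] + sqrt(2 * rho * Var_P(X))
rho = 0.05  # assumed perturbation budget, chosen for illustration
worst_case_shift = np.sqrt(2 * rho * x.var(ddof=1))
wc_lower = x.mean() - worst_case_shift
wc_upper = x.mean() + worst_case_shift

# Method 2 (random shifts): evaluate several estimators of the same
# location target on the single dataset; their spread serves as a crude
# proxy for distributional uncertainty, added to the sampling variance.
xs = np.sort(x)
k = int(0.1 * x.size)  # 10% trimming fraction
estimates = np.array([
    x.mean(),         # sample mean
    np.median(x),     # sample median
    xs[k:-k].mean(),  # 10% trimmed mean
])
sampling_var = x.var(ddof=1) / x.size       # variance of the sample mean
distributional_var = estimates.var(ddof=1)  # spread across estimators
se = np.sqrt(sampling_var + distributional_var)
ci = (estimates.mean() - 1.96 * se, estimates.mean() + 1.96 * se)
```

The resulting interval `ci` is wider than a purely sampling-based interval whenever the estimators disagree, which is the qualitative behavior the abstract describes for the second method.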
Cites work
- Untitled scientific article, zbMATH DE number 3954047
- Untitled scientific article, zbMATH DE number 3753890
- Untitled scientific article, zbMATH DE number 6982327
- A Robust Version of the Probability Ratio Test
- A significance test for the lasso
- A unified theory of confidence regions and testing for high-dimensional estimating equations
- Admissibility in partial conjunction testing
- Anchor Regression: Heterogeneous Data Meet Causality
- Asymptotic Statistics
- Asymptotic evaluation of certain Markov process expectations for large time—III
- Bounds on the conditional and average treatment effect with unobserved confounding factors
- Causal inference by using invariant prediction: identification and confidence intervals. With discussion and authors' reply
- Causal inference for statistics, social, and biomedical sciences. An introduction
- Causality. Models, reasoning, and inference
- Conditional variance penalties and domain shift robustness
- Confidence intervals for low dimensional parameters in high dimensional linear models
- Discussion of big Bayes stories and BayesBag
- Elements of causal inference. Foundations and learning algorithms
- Exact post-selection inference, with application to the Lasso
- High-dimensional inference: confidence intervals, \(p\)-values and R-software \texttt{hdi}
- Identification of Causal Effects Using Instrumental Variables
- Instrumental variables: an econometrician's perspective
- Invariance, causality and robustness
- Making sense of sensitivity: extending omitted variable bias
- Maximin effects in inhomogeneous large-scale data
- On asymptotically optimal confidence regions and tests for high-dimensional models
- Robust Estimation of a Location Parameter
- Robust optimization-methodology and applications
- Screening for Partial Conjunction Hypotheses
- Sensitivity analysis for certain permutation inferences in matched observational studies
- Sensitivity analysis for inverse probability weighting estimators via the percentile bootstrap
- Stability
- Stability Selection
- Theory and applications of robust optimization
- Uncertainty quantification for the horseshoe (with discussion)
- Valid post-selection inference
- Veridical data science
- \(p\)-values for high-dimensional regression