Model assessment tools for a model false world
From MaRDI portal
Abstract: A standard goal of model evaluation and selection is to find a model that approximates the truth well while at the same time being as parsimonious as possible. In this paper we emphasize the point of view that, viewed realistically, the models under consideration are almost always false, and so model adequacy should be analyzed from that point of view. We investigate this issue in large samples by looking at a model credibility index, which is designed to serve as a one-number summary measure of model adequacy. We define the index to be the maximum sample size at which samples from the model and those from the true data-generating mechanism are nearly indistinguishable. We use standard notions from hypothesis testing to make this definition precise, and we use data subsampling to estimate the index. We show that the definition leads to some new ways of viewing models as flawed but useful. The concept is an extension of the work of Davies [Statist. Neerlandica 49 (1995) 185--245].
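The idea in the abstract can be illustrated with a small sketch. The paper's exact procedure is not reproduced here; this is an assumed, simplified version in which the credibility index is estimated as the largest subsample size at which a two-sample Kolmogorov-Smirnov test still fails to distinguish subsamples of the data from samples drawn from the fitted model. The function name `credibility_index`, the candidate sizes, and the rejection-rate threshold are all illustrative choices, not from the paper.

```python
import numpy as np
from scipy import stats

def credibility_index(data, model_sampler, sizes, alpha=0.05, n_rep=200, seed=0):
    """Illustrative estimate of a model credibility index.

    For each candidate subsample size n, compare subsamples of the data
    with samples from the fitted model via a two-sample Kolmogorov-Smirnov
    test.  The index is the largest n at which the rejection rate stays
    close to the nominal level alpha, i.e. at which model and truth are
    still nearly indistinguishable.
    """
    rng = np.random.default_rng(seed)
    index = 0
    for n in sizes:
        rejections = 0
        for _ in range(n_rep):
            sub = rng.choice(data, size=n, replace=False)  # data subsample
            sim = model_sampler(n, rng)                    # model sample
            if stats.ks_2samp(sub, sim).pvalue < alpha:
                rejections += 1
        # Heuristic cutoff: tolerate rejection rates up to twice alpha.
        if rejections / n_rep <= 2 * alpha:
            index = n
        else:
            break
    return index

# Example: a normal model fitted to mildly skewed (gamma) data -- a model
# that is false but close, so it should survive moderate sample sizes.
rng = np.random.default_rng(1)
data = rng.gamma(shape=20.0, scale=1.0, size=5000)   # true mechanism
mu, sigma = data.mean(), data.std()                  # fitted normal model
sampler = lambda n, g: g.normal(mu, sigma, size=n)
n_star = credibility_index(data, sampler, sizes=[25, 50, 100, 200, 400, 800])
```

Because the true mechanism is only mildly non-normal, the test has little power at small subsample sizes, so the estimated index is positive; a grossly wrong model would be rejected already at the smallest sizes and receive an index near zero.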
Cites work
- scientific article, zbMATH DE number 1817585 (no title available)
- scientific article, zbMATH DE number 1249686 (no title available)
- scientific article, zbMATH DE number 847272 (no title available)
- scientific article, zbMATH DE number 3092166 (no title available)
- scientific article, zbMATH DE number 3103824 (no title available)
- An analysis of variance test for normality (complete samples)
- Approximation Theorems of Mathematical Statistics
- Building and using semiparametric tolerance regions for parametric multinomial models
- Data features
- Graphical Display of Two-Way Contingency Tables
- Hypothesis testing when a nuisance parameter is present only under the alternative: Linear model case
- Model choice in generalised linear models: a Bayesian approach via Kullback-Leibler projections
- On Hadamard differentiability in \(k\)-sample semiparametric models -- with applications to the assessment of structural relationships
- One-sided inference about functionals of a density
- Science and Statistics
- Some Difficulties of Interpretation Encountered in the Application of the Chi-Square Test
- Some Methodological Aspects of Validation of Models in Nonparametric Regression
- Some properties of incomplete U-statistics
- Subsampling
- Testing for independence in a two-way table: New interpretations of the chi-square statistic
- The Focused Information Criterion
Cited in (9)
- Accuracy index of statistical models: axiomatic approach
- Improving cross-validated bandwidth selection using subsampling-extrapolation techniques
- Box-constrained monotone approximations to Lipschitz regularizations, with applications to robust testing
- Generalized Pareto processes and fund liquidity risk
- On approximate validation of models: a Kolmogorov-Smirnov-based approach
- A contamination model for the stochastic order
- Statistical properties of simple random-effects models for genetic heritability
- Optimal selection of sample-size dependent common subsets of covariates for multi-task regression prediction
- A general class of linearly extrapolated variance estimators
This page was built for publication: Model assessment tools for a model false world (MaRDI item Q907957)