Limits of learning about a categorical latent variable under prior near-ignorance
From MaRDI portal
Publication:962884
Abstract: In this paper, we consider Walley's coherent theory of (epistemic) uncertainty, in which beliefs are represented through sets of probability distributions, and we focus on the problem of modeling prior ignorance about a categorical random variable. In this setting, it is a known result that a state of prior ignorance is not compatible with learning. To overcome this problem, another state of beliefs, called near-ignorance, has been proposed. Near-ignorance resembles ignorance very closely, in that it satisfies principles that can arguably be regarded as necessary in a state of ignorance, while still allowing learning to take place. This paper provides new and substantial evidence that near-ignorance, too, cannot really be regarded as a way out of the problem of starting statistical inference under very weak beliefs. The key to this result is a setting in which the variable of interest is latent. We argue that such a setting is by far the most common case in practice, and we provide, for the case of categorical latent variables (and general manifest variables), a condition that, if satisfied, prevents learning from taking place under prior near-ignorance. This condition is shown to be easily satisfied even in the most common statistical problems. We regard these results as strong evidence against the possibility of adopting a condition of prior near-ignorance in real statistical problems.
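As a concrete illustration of the near-ignorance setting the abstract refers to (a sketch added here, not material from the paper), the imprecise Dirichlet model of the cited work by Walley/Bernard updates a near-ignorance prior set {Dirichlet(s·t) : t in the open simplex} on observed counts; the posterior probability of category i then lies in [n_i/(N+s), (n_i+s)/(N+s)]. The function name `idm_interval` and the choice s = 2 are illustrative. The paper's point is that when the categorical variable is latent rather than directly observed, the analogous posterior intervals can fail to narrow at all, so this kind of learning does not take place.

```python
def idm_interval(counts, i, s=2.0):
    """Posterior lower/upper probability of category i under the
    imprecise Dirichlet model with hyperparameter s, given observed
    counts for each category of the (observable) variable."""
    N = sum(counts)
    return counts[i] / (N + s), (counts[i] + s) / (N + s)

# Before any observation the interval is vacuous, [0, 1],
# reflecting near-ignorance:
print(idm_interval([0, 0], 0))    # (0.0, 1.0)

# With data the interval narrows, i.e. learning takes place:
print(idm_interval([3, 1], 0))    # (0.5, 0.8333...)
print(idm_interval([30, 10], 0))  # narrower still
```

With a latent variable of interest, the counts above are not observed; only manifest variables are, and the paper's condition characterizes when the resulting posterior bounds stay vacuous.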
Recommendations
Cites work
- scientific article (zbMATH DE number 48344; title unavailable)
- scientific article (zbMATH DE number 735230; title unavailable)
- scientific article (zbMATH DE number 1484400; title unavailable)
- scientific article (zbMATH DE number 2109186; title unavailable)
- scientific article (zbMATH DE number 845703; title unavailable)
- An introduction to the imprecise Dirichlet model for multinomial data
- Latent Variable Modeling of Diagnostic Accuracy
- The Selection of Prior Distributions by Formal Rules
- Theory and Applications of Models of Computation
Cited in (10)
- An aggregation framework based on coherent lower previsions: application to Zadeh's paradox and sensor networks
- The Bayesian who knew too much
- Imprecise probability models for learning multinomial distributions from data. Applications to learning credal networks
- Imprecise probabilities for representing ignorance about a parameter
- Belief function and multivalued mapping robustness in statistical estimation
- Comments on "Imprecise probability models for learning multinomial distributions from data. Applications to learning credal networks"
- Rejoinder on "Imprecise probability models for learning multinomial distributions from data. Applications to learning credal networks"
- Credal ensembles of classifiers
- A continuous updating rule for imprecise probabilities
- Learning imprecise probability models: conceptual and practical challenges