Algorithmic identification of probabilities is hard
Abstract: Suppose that we are given an infinite binary sequence which is random for a Bernoulli measure of parameter \(p\). By the law of large numbers, the frequency of zeros in the sequence tends to \(p\), and thus we can get better and better approximations of \(p\) as we read the sequence. We study in this paper a similar question, but from the viewpoint of inductive inference. We suppose now that \(p\) is a computable real, but one asks for more: as we are reading more and more bits of our random sequence, we have to eventually guess the exact parameter \(p\) (in the form of a Turing code). Can one do such a thing uniformly on all sequences that are random for computable Bernoulli measures, or even on a 'large enough' fraction of them? In this paper, we give a negative answer to this question. In fact, we prove a very general negative result which extends far beyond the class of Bernoulli measures.
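The approximation observation in the abstract can be made concrete. Below is a minimal Python sketch, not from the paper: it samples a Bernoulli(\(p\)) sequence and prints the frequency of zeros over growing prefixes, which tends to \(p\) by the law of large numbers. The parameter value, seed, and prefix lengths are arbitrary illustrative choices; note that the paper's negative result concerns the strictly harder task of eventually outputting an exact Turing code for \(p\), not merely approximating it.

```python
import random

def frequency_of_zeros(bits):
    """Empirical frequency of zeros in a finite list of bits."""
    return bits.count(0) / len(bits)

# Illustrative assumptions (not from the paper): p is the probability
# of a zero; p = 0.3, the seed, and the prefix lengths are arbitrary.
p = 0.3
random.seed(0)
sequence = [0 if random.random() < p else 1 for _ in range(100_000)]

# By the law of large numbers, the prefix frequencies tend to p.
for n in (10, 100, 1_000, 10_000, 100_000):
    print(f"first {n:>6} bits: frequency of zeros = "
          f"{frequency_of_zeros(sequence[:n]):.4f}")
```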
Cites work
- scientific article; zbMATH DE number 1460545
- Algorithmic tests and randomness with respect to a class of measures
- An introduction to Kolmogorov complexity and its applications
- Classical recursion theory. Vol. II
- Comparison of identification criteria for machine inductive inference
- Constructive equivalence relations on computable probability measures
- Identification of probabilities
- Learning recursive functions: A survey
- Uniform test of algorithmic randomness over a general space
- Weak completeness in \(\text{E}\) and \(\text{E}_{2}\)
Cited in (5)