Algorithmic complexity bounds on future prediction errors

From MaRDI portal
Publication:865627

DOI: 10.1016/j.ic.2006.10.004
zbMATH Open: 1107.68044
arXiv: cs/0701120
OpenAlex: W2139750407
Wikidata: Q58012403
Scholia: Q58012403
MaRDI QID: Q865627
FDO: Q865627

Alexey Chernov, Marcus Hutter, Jürgen Schmidhuber

Publication date: 20 February 2007

Published in: Information and Computation

Abstract: We bound the future loss when predicting any (computably) stochastic sequence online. Solomonoff finitely bounded the total deviation of his universal predictor $M$ from the true distribution $\mu$ by the algorithmic complexity of $\mu$. Here we assume that we are at a time $t > 1$ and have already observed $x = x_1 \ldots x_t$. We bound the future prediction performance on $x_{t+1} x_{t+2} \ldots$ by a new variant of the algorithmic complexity of $\mu$ given $x$, plus the complexity of the randomness deficiency of $x$. The new complexity is monotone in its condition, in the sense that it can only decrease if the condition is prolonged. We also briefly discuss potential generalizations to Bayesian model classes and to classification problems.
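The abstract's starting point is Solomonoff's classical result: the cumulative deviation of the universal predictor $M$ from the true measure $\mu$ is finitely bounded by (a constant times) the complexity $K(\mu)$; for a Bayes mixture over a hypothesis class, the complexity term takes the form $\ln w_\mu^{-1}$, the log inverse prior weight of $\mu$. The following is a minimal numerical sketch of that total bound for a finite Bernoulli class, not code from the paper; the hypothesis class, uniform prior, true parameter, and horizon are all illustrative assumptions.

```python
# Minimal sketch (not from the paper) of a Solomonoff-style total bound:
# the cumulative squared deviation of a Bayes mixture predictor from the
# true measure mu stays bounded by roughly ln(1/w(mu)), where w(mu) is
# the prior weight of mu -- the mixture analogue of the complexity K(mu).
import math
import random

random.seed(0)

# Illustrative hypothesis class: Bernoulli(theta) for a few parameters.
thetas = [0.1, 0.3, 0.5, 0.7, 0.9]
prior = [1.0 / len(thetas)] * len(thetas)   # uniform prior weights w(nu)

true_theta = 0.7                            # the "true" mu (in the class)
T = 2000                                    # prediction horizon

posterior = prior[:]
total_sq_dev = 0.0                          # sum_t (M(1|x<t) - mu(1|x<t))^2

for t in range(T):
    # Mixture prediction M(x_t = 1 | x_1 ... x_{t-1}).
    m_pred = sum(w * th for w, th in zip(posterior, thetas))
    total_sq_dev += (m_pred - true_theta) ** 2

    # Sample the next bit from mu, then do the Bayesian posterior update.
    bit = 1 if random.random() < true_theta else 0
    likes = [th if bit == 1 else 1.0 - th for th in thetas]
    posterior = [w * l for w, l in zip(posterior, likes)]
    z = sum(posterior)
    posterior = [w / z for w in posterior]

# The bound on the *expected* total deviation; a single sampled
# trajectory only approximates it, but typically stays well below it.
bound = math.log(len(thetas))               # ln(1/w(mu)) for uniform prior
print(f"cumulative squared deviation: {total_sq_dev:.4f}")
print(f"ln(1/w(mu)) bound:            {bound:.4f}")
```

On typical runs the posterior concentrates on the true parameter quickly, so the cumulative deviation plateaus well below $\ln 5 \approx 1.61$ no matter how large the horizon is; this horizon-independence is the sense in which Solomonoff's bound is "total", and the paper's contribution is an analogous bound on the loss still to come after an observed prefix $x$.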


Full work available at URL: https://arxiv.org/abs/cs/0701120






Cited In (13)





This page was built for publication: Algorithmic complexity bounds on future prediction errors
