A philosophical treatise of universal induction
DOI: 10.3390/E13061076
zbMATH Open: 1296.03007
arXiv: 1105.5721
OpenAlex: W2165552039
Wikidata: Q55951231
Scholia: Q55951231
MaRDI QID: Q400871
FDO: Q400871
Authors: Samuel Rathmanner, Marcus Hutter
Publication date: 26 August 2014
Published in: Entropy
Full work available at URL: https://arxiv.org/abs/1105.5721
Keywords: Bayes rule; inductive inference; Kolmogorov complexity; Occam's razor; sequence prediction; Solomonoff induction; algorithmic information theory; black raven paradox; confirmation theory; philosophical issues; Solomonoff prior
MSC classifications:
- Probability and inductive logic (03B48)
- Algorithmic information theory (Kolmogorov complexity, etc.) (68Q30)
- Logic in the philosophy of science (03A10)
Cites Work
- Merging of Opinions with Increasing Information
- A formal theory of inductive inference. Part I
- An essay towards solving a problem in the doctrine of chances. By the late Rev. Mr. Bayes, F. R. S. communicated by Mr. Price, in a letter to John Canton, A. M. F. R. S.
- Information-theoretic asymptotics of Bayes methods
- Clustering by Compression
- The Similarity Metric
- Title not available
- On the Complexity of Finite Sequences
- An introduction to Kolmogorov complexity and its applications
- Complexity-based induction systems: Comparisons and convergence theorems
- Title not available
- Universal artificial intelligence. Sequential decisions based on algorithmic probability.
- Adaptive online prediction by following the perturbed leader
- R. A. Fisher on the history of inverse probability. With comments by Robin L. Plackett and G. A. Barnard and a rejoinder by the author
- Open problems in universal induction & intelligence
- A complete theory of everything (will be subjective)
- On universal prediction and Bayesian confirmation
- Convergence and loss bounds for Bayesian sequence prediction
- DOI: 10.1162/1532443041827952
- The context-tree weighting method: basic properties
- Algorithmic complexity bounds on future prediction errors
Cited In (12)
- Bridging algorithmic information theory and machine learning: a new approach to kernel learning
- On Martin-Löf (non-)convergence of Solomonoff's universal mixture
- Probabilities on sentences in an expressive logic
- Solomonoff induction violates Nicod's criterion
- Hydrozip: how hydrological knowledge can be used to improve compression of hydrological data
- Logically reliable inductive inference
- On Martin-Löf Convergence of Solomonoff’s Mixture
- On the computability of Solomonoff induction and AIXI
- A generalized characterization of algorithmic probability
- On the computability of Solomonoff induction and knowledge-seeking
- Putnam's diagonal argument and the impossibility of a universal learning machine
- A circuit complexity formulation of algorithmic information theory