A quantitative Occam's razor (Q789829)
From MaRDI portal
Property / author: Rafael D. Sorkin
Property / reviewed by: Kurt Marti
Property / MaRDI profile type: MaRDI publication profile
Property / arXiv ID: astro-ph/0511780
Property / cites work: Probability and the Interpretation of Quantum Mechanics
Property / cites work: Logical basis for information theory and probability theory
Property / cites work: Q3260839
Property / cites work: Q5588555
Property / cites work: Estimating the dimension of a model
Property / cites work: Q5668779
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | A quantitative Occam's razor | scientific article | |
Statements
A quantitative Occam's razor (English)
Publication year: 1983
Given empirical data \(D=\{(x_i,y_i): i=1,\dots,N\}\), the problem is to find a "theory" \(T\), i.e. an assignment, to each pair \((x,y)\in D\), of a number \(P(y;x)\) representing the hypothetical probability of \(y\) given \(x\). If \(-I(T)\) is the logarithm of the (unnormalized) "prior probability" of the theory \(T\) and \(-I(D\mid T)=\sum_{i=1}^{N}\log P(y_i;x_i)\) is the log probability of \(D\) according to \(T\), then \(p(T)=\exp(-I(D\mid T)-I(T))\) is the "posterior probability" of \(T\) given the data \(D\). Mainly for nonlinear least-squares regression, the theory \(T\) considered is the one that best fits the given data \(D\), i.e. the one that maximizes \(p(T)\).
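A minimal numerical sketch of the razor described above, assuming a Gaussian noise model for \(P(y;x)\) with known width, a polynomial model class, and a flat cost of 8 bits per fitted parameter for \(I(T)\); all three choices are illustrative assumptions, not taken from the paper. Candidate theories of increasing complexity are scored by \(\log p(T)=-I(D\mid T)-I(T)\), and the highest score wins.

```python
import numpy as np

def log_posterior(residuals, sigma, prior_info):
    # log p(T) = -I(D|T) - I(T), where -I(D|T) = sum_i log P(y_i; x_i).
    # Here P(y; x) is taken to be Gaussian with known width sigma (an assumption).
    log_likelihood = np.sum(
        -0.5 * (residuals / sigma) ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi))
    )
    return log_likelihood - prior_info

# Toy data from a straight line plus noise (purely illustrative).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=x.size)
sigma = 0.1  # assumed known noise level

# Candidate polynomial "theories" of increasing complexity.
for degree in (1, 2, 5):
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    prior_info = (degree + 1) * 8.0 * np.log(2.0)  # assumed cost: 8 bits per parameter
    print(f"degree {degree}: log p(T) = {log_posterior(residuals, sigma, prior_info):.2f}")
```

Higher-degree fits gain a little log likelihood but pay a larger prior cost \(I(T)\), so the simplest adequate model typically ends up with the largest \(p(T)\), which is the razor at work.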
Keywords: Occam; entropy; prior probability; posterior probability; nonlinear least-squares regression