Learning a tree-structured Ising model in order to make predictions (Q2196190)
scientific article
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | Learning a tree-structured Ising model in order to make predictions | scientific article | |
Statements
Learning a tree-structured Ising model in order to make predictions (English)
28 August 2020
The objective of this paper is to show that learning a model that makes accurate predictions is possible even when structure learning is not. To realize this objective, the authors introduce a loss function that evaluates learning algorithms based on the accuracy of low-order marginals; specifically, the small-set total variation between the true distribution \(P\) and the learned distribution \(Q\) is used. The main result gives lower and upper bounds on the number of samples needed to learn a tree-structured Ising model so that the \(L^{(2)}\) loss is small, which in this setting is equivalent to having accurate pairwise marginals. The main result concerns the maximum likelihood tree (also called the Chow-Liu tree, see [\textit{C. K. Chow} and \textit{C. N. Liu}, IEEE Trans. Inf. Theory 14, 462--467 (1968; Zbl 0165.22305)]). At the end of the paper, numerical simulations demonstrate the performance of the Chow-Liu algorithm in terms of both the probability of incorrect recovery of the underlying structure (zero-one loss) and the \(L^{(2)}\) loss.
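For orientation, the small-set total variation of order \(k\) is, informally, the worst-case total variation distance over marginals on small subsets of variables; a common formalization (the paper's exact normalization may differ) is
\[ L^{(k)}(P,Q) \;=\; \max_{S \subseteq [p],\; |S| \le k} \big\| P_S - Q_S \big\|_{\mathrm{TV}}, \]
where \(P_S\) denotes the marginal of \(P\) on the coordinates in \(S\); for \(k = 2\), a small loss means accurate pairwise marginals.

The Chow-Liu tree mentioned above is the maximum-weight spanning tree of the complete graph whose edge weights are empirical pairwise mutual informations. The following is a minimal Python sketch of that construction for binary samples (not the authors' code; function names and the 0/1 encoding are illustrative assumptions).

```python
import numpy as np

def empirical_mutual_information(x, y):
    """Plug-in estimate of I(X;Y) for two 0/1-valued sample vectors."""
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            p_ab = np.mean((x == a) & (y == b))
            p_a, p_b = np.mean(x == a), np.mean(y == b)
            if p_ab > 0:
                mi += p_ab * np.log(p_ab / (p_a * p_b))
    return mi

def chow_liu_tree(samples):
    """samples: (n, p) array of 0/1 observations; returns tree edges (i, j)."""
    _, p = samples.shape
    # Pairwise mutual-information edge weights.
    w = np.zeros((p, p))
    for i in range(p):
        for j in range(i + 1, p):
            w[i, j] = w[j, i] = empirical_mutual_information(samples[:, i], samples[:, j])
    # Prim's algorithm for the maximum-weight spanning tree.
    in_tree, edges = {0}, []
    while len(in_tree) < p:
        best = max(((i, j) for i in in_tree for j in range(p) if j not in in_tree),
                   key=lambda e: w[e])
        edges.append(best)
        in_tree.add(best[1])
    return edges

# Example: 500 samples of 5 binary variables -> a spanning tree on {0, ..., 4}.
rng = np.random.default_rng(0)
print(chow_liu_tree(rng.integers(0, 2, size=(500, 5))))
```

Fitting the edge parameters of the tree Ising model (for example, by matching the empirical pairwise marginals along the selected edges) would then complete the learned distribution \(Q\).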
high-dimensional statistics
model selection
Markov random fields
Ising model
prediction
tree model
maximum likelihood tree
Chow-Liu tree