Statistical inference and Monte Carlo algorithms. (With discussion) (Q1372568)

    Statements

    Statistical inference and Monte Carlo algorithms. (With discussion) (English)
    Publication date: 5 April 1998
    Computations and statistics have always been intertwined; in particular, applied statistics has relied on computing to implement its solutions to real data problems. Here we look at another part of the relationship between statistics and computation, and examine a small part of how the two theories are not only intertwined, but have influenced each other.

    With the explosion of Monte Carlo methods, particularly those using Markov chain algorithms such as the Gibbs sampler, the distinction between the statistical model and the algorithmic model has blurred. This is particularly evident in the examples of Section 3, where the statistical model is typically a hierarchical model, while the computational algorithm is based on a set of conditional distributions. We will see that the manner in which we view the model can have a large impact on the validity of the statistical inference, so it is important to consider the statistical model that underlies the Monte Carlo algorithm.

    We can also turn things around. When one uses a Monte Carlo algorithm to do a calculation, it is common to process the output by taking an average. However, the output of a Monte Carlo algorithm can be viewed as data, with the algorithm itself playing the part of a statistical model; taking a naive average may then not be the most effective way of processing the output. In Section 4 we look at this question and investigate the effect of classical decision theory on output from the Accept-Reject algorithm. We consider the resulting improvements as post-simulation processing of a generated sample: the improved estimators are statistically superior to the original estimator, although they may be computationally inferior in that they take more computer time. This latter concern can also be addressed with estimators that offer statistical improvement while requiring only a slight increase in computational effort. We emphasize that our approach, and in particular the optimizations involved in deriving some of the improved estimators, is based on statistical rather than computational principles. The overall goal of the statistician is to process samples in an optimal way and to make the best inference possible; doing so requires treating an algorithm as a statistical model and, as far as possible, ignoring the computational issues.

    Another consideration in the interplay of statistical theory and algorithms is the prospect of using the structure of the algorithm to construct an optimal procedure more efficiently. We illustrate this in Section 5 with three examples. These examples use the Gibbs sampler and show that we can exploit the iterative nature of the algorithm to implement procedures that are sometimes computationally feasible and can result in an optimal inference. We end the paper with a short discussion section.
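    To make the Section 3 theme concrete, that the statistical model is hierarchical while the algorithm is defined by a set of conditional distributions, here is a minimal Gibbs sampler sketch in Python. The Beta-Binomial model, parameter values, and function names are illustrative assumptions, not the paper's examples; the last lines also preview the Rao-Blackwell idea by averaging the conditional means E[X | theta] = n*theta rather than the X draws themselves.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical hierarchical model (an illustrative choice, not the
    # paper's example):
    #   X | theta ~ Binomial(n, theta),   theta ~ Beta(a, b).
    # The Gibbs sampler is driven entirely by the two full conditionals:
    #   X | theta ~ Binomial(n, theta)
    #   theta | X ~ Beta(a + X, b + n - X)

    def gibbs(n=16, a=2.0, b=4.0, iters=20_000):
        theta = 0.5                               # arbitrary starting value
        xs = np.empty(iters)
        thetas = np.empty(iters)
        for i in range(iters):
            x = rng.binomial(n, theta)            # draw from X | theta
            theta = rng.beta(a + x, b + n - x)    # draw from theta | X
            xs[i] = x
            thetas[i] = theta
        return xs, thetas

    n = 16
    xs, thetas = gibbs(n=n)

    # Naive estimate of E[X]: average the X draws.
    naive = xs.mean()
    # Rao-Blackwellized estimate: average E[X | theta] = n * theta instead.
    rao_blackwell = (n * thetas).mean()
    print(naive, rao_blackwell)   # both near n * a / (a + b) = 16/3
    ```

    By the Rao-Blackwell theorem, averaging the conditional expectations rather than the raw draws cannot increase the variance, which is one sense in which a naive average is not the optimal way to process the sampler's output.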
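    The Section 4 theme, that Monte Carlo output is data and a naive average may not be the best way to process it, can be sketched with a toy Accept-Reject example. The target, proposal, bound, and test function below are illustrative assumptions; the recycling estimator shown is a simple importance-weighting stand-in for the paper's Rao-Blackwellized post-processing, not the estimator derived there, but it makes the same point: the rejected candidates still carry information.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Target: Beta(2, 2) density f on (0, 1); proposal g = Uniform(0, 1).
    # f(y) = 6 y (1 - y) <= 1.5, so Accept-Reject applies with bound M = 1.5.
    f = lambda y: 6.0 * y * (1.0 - y)
    M = 1.5

    N = 50_000
    y = rng.uniform(size=N)            # candidates from g
    u = rng.uniform(size=N)
    accepted = y[u <= f(y) / M]        # standard Accept-Reject

    h = lambda y: y**2                 # estimate E_f[h(Y)] (true value 0.3)

    # Naive estimator: average h over the accepted draws only.
    naive = h(accepted).mean()

    # Recycling estimator: reweight ALL N candidates by f/g (here g = 1 on
    # (0, 1)). A simplified stand-in for the paper's improved estimators.
    w = f(y)
    recycled = (w * h(y)).sum() / w.sum()

    print(naive, recycled)
    ```

    The recycled estimator uses every candidate the algorithm generated, so for the same simulation budget it typically has smaller variance than the naive average, at the cost of a little extra post-simulation computation.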
    Markov chain Monte Carlo
    Gibbs sampling
    Rao-Blackwell theorem
    improper priors
    accept-reject algorithm