Monte Carlo methods in Bayesian computation (Q1968817)

From MaRDI portal
scientific article

    Statements

    Monte Carlo methods in Bayesian computation (English)
    15 March 2000
    Bayesian computation (BC) means the computation of quantities of interest from a posterior distribution, such as its normalizing constant, its moments, etc. Only in rare cases, e.g. when a conjugate prior is available, is this easy to perform; in most cases the problem has high computational complexity. Basic Monte Carlo (BMC) methods can be rather inaccurate tools for solving such numerical problems. Their further development has produced advanced techniques such as Gibbs sampling and the Metropolis-Hastings algorithm, which are collectively known as Markov chain Monte Carlo (MCMC). MCMC is the tool for BC, and by BC the authors understand sampling from a posterior distribution by MCMC.

    The book contains 10 chapters. Chapter 1 gives an outline of what follows and four interesting practical examples, whose treatment runs through the book like a red thread. Chapter 2 gives the reader an excellent survey of MCMC, with many variants and propositions on their improvement and convergence. Chapter 3 shows how BMC can be used for parameter estimation from a posterior distribution. Chapter 4 deals with the estimation of posterior densities of parameters of interest (emphasized here as marginal densities), e.g. by kernel estimators. The determination of the normalizing constant of a posterior distribution is a special problem, to which Chapter 5 is dedicated. Chapters 6 to 9 deal with constrained parameter problems, the calculation of credible intervals and highest posterior density intervals, the comparison of nonnested models, and variable selection, respectively. Further interesting topics are summarized in the final Chapter 10.

    The authors form an excellent team of specialists in the field. They write in a mathematically exact style, without superfluous sentences. A special advantage of the text is the constant reference to the work of other authors and to previous chapters and formulas of the book, which makes reading easy.
    The proofs of the theorems are collected in an appendix to each chapter, followed by numerous exercises, written in a precise manner and vivid language, with hints; their solution, of course, is left to the reader. There is also a thorough bibliography of almost 300 items; even the most recent book by \textit{Gelfand} and \textit{Smith} (2000) is mentioned. No major misprints were noticed; only on p. 323, line 13 from below, it should be checked whether the second ``\(=\)'' sign should be a ``\(<\)''. The book is ``intended as a graduate textbook or a reference book for a one-semester course at the advanced master's or Ph.D. level''; the reviewer would rather recommend it ``as a useful reference book for applied or theoretical researchers as well as practitioners'' (quotations from the authors' annotation).
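    The review mentions the Metropolis-Hastings algorithm as a core MCMC technique for sampling from a posterior distribution. As a minimal illustrative sketch (not taken from the book under review), a random-walk Metropolis sampler for the posterior mean of a normal model could look as follows; the data values, prior scale, and proposal step size are assumptions chosen for illustration.

    ```python
    import math
    import random

    def log_posterior(mu, data):
        """Unnormalized log posterior: N(0, 10^2) prior, N(mu, 1) likelihood."""
        lp = -mu * mu / (2.0 * 10.0 ** 2)                    # log prior (up to a constant)
        lp += sum(-(x - mu) ** 2 / 2.0 for x in data)        # log likelihood (up to a constant)
        return lp

    def metropolis(data, n_iter=5000, step=0.5, seed=0):
        """Random-walk Metropolis: symmetric Gaussian proposal, log-scale accept test."""
        rng = random.Random(seed)
        mu, samples = 0.0, []
        for _ in range(n_iter):
            proposal = mu + rng.gauss(0.0, step)
            # accept with probability min(1, posterior ratio)
            if math.log(rng.random() + 1e-300) < log_posterior(proposal, data) - log_posterior(mu, data):
                mu = proposal
            samples.append(mu)
        return samples

    data = [1.8, 2.1, 2.4, 1.9, 2.2]       # hypothetical observations
    draws = metropolis(data)
    burned = draws[2000:]                   # discard burn-in
    posterior_mean = sum(burned) / len(burned)
    ```

    With the nearly flat prior assumed here, the estimated posterior mean settles close to the sample mean of the data (about 2.08); the normalizing constant of the posterior, whose estimation Chapter 5 addresses, is never needed because only the ratio of posterior values enters the accept step.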
    Markov chain Monte Carlo method
    Bayesian parameter estimation
    textbook
    computational complexity
    basic Monte Carlo methods
    convergence
    exercises
