Tools for statistical inference. Observed data and data augmentation methods (Q1188819)


    Language: English
    Label: Tools for statistical inference. Observed data and data augmentation methods
    Description: scientific article

    Statements

    Title: Tools for statistical inference. Observed data and data augmentation methods (English)
    Publication date: 17 September 1992
    The purpose of the book under review is to give a survey of methods for the Bayesian or likelihood-based analysis of data. The author distinguishes between two types of methods: observed data methods and data augmentation methods. Observed data methods are applied directly to the likelihood or posterior density of the observed data. Data augmentation methods exploit the special ``missing'' data structure of the problem: they rely on an augmentation of the data which simplifies the likelihood or posterior density.

    The book consists of six sections. In the first, the Introduction, examples involving censored regression data, randomized response, latent class analysis and hierarchical models are presented to motivate the problems, and the techniques considered in the book are outlined.

    In section 2, Observed data techniques -- normal approximation, the likelihood function, the posterior density function and the maximum likelihood method are discussed and illustrated. Next, normal-based inference is considered from both the frequentist and the Bayesian points of view. Finally, the highest posterior density region of a given probability content is defined and used to motivate the significance level from the Bayesian point of view.

    ``Observed data techniques'' is the title of section 3. Here approximations based on numerical integration, Laplace expansions, Monte Carlo methods, composition and importance sampling are studied. The method of composition, in particular, is useful for constructing samples distributed according to \(J(y)=\int f(y\mid x)g(x)\,dx\), where \(g(x)\) and \(f(y\mid x)\) are given densities; the method is illustrated by constructing the predictive distribution. The importance sampling method is used to approximate \(J(y)\) when one cannot sample directly from \(g(x)\). Minimal sketches of both techniques are given after this review.

    Sections 4--6 (whose titles are, respectively, The EM algorithm; Data augmentation; and The Gibbs sampler) review the data augmentation methods. The principle of data augmentation states: ``Augment the observed data \(Y\) with latent data \(Z\) so that the augmented posterior distribution \(p(\theta\mid Y,Z)\) is `simple'. Make use of this simplicity in maximizing/marginalizing, calculating/sampling the observed posterior \(p(\theta\mid Y)\).'' Several algorithms make use of this principle. The simplest of them is the EM algorithm, which provides the mean of a normal approximation to the likelihood or the posterior density, while the Louis modification specifies the scale. The Poor Man's Data Augmentation algorithm allows for a non-normal approximation to the likelihood or posterior density. The Data Augmentation and Gibbs sampler approaches are iterative algorithms which, under certain regularity conditions, provide inference based on the entire posterior distribution. The SIR algorithm is a noniterative algorithm based on importance sampling ideas. Sketches of the EM, data augmentation and SIR ideas also follow below. All stated results are illustrated by examples.
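    The method of composition can be made concrete with a short sketch. The following Python fragment draws from \(J(y)=\int f(y\mid x)g(x)\,dx\) by first sampling \(x\sim g\) and then \(y\sim f(\cdot\mid x)\); the particular densities (\(g=N(0,1)\), \(f(y\mid x)=N(x,0.25)\)) are hypothetical choices for illustration, not taken from the book.

        import numpy as np

        rng = np.random.default_rng(0)

        # Method of composition: to draw y ~ J(y) = \int f(y|x) g(x) dx,
        # first draw x ~ g, then draw y from f(.|x).
        def sample_composition(n):
            x = rng.normal(loc=0.0, scale=1.0, size=n)  # x ~ g(x) = N(0, 1)
            y = rng.normal(loc=x, scale=0.5)            # y | x ~ f(y|x) = N(x, 0.25)
            return y                                    # marginally, y ~ J(y)

        draws = sample_composition(10_000)
        # Here J(y) = N(0, 1 + 0.25), so the sample variance should be near 1.25.
        print(draws.mean(), draws.var())

    Applied with \(g\) a posterior density and \(f\) the sampling density of a future observation, the same two steps produce draws from the predictive distribution, which is how the book illustrates the method.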
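    When \(g\) cannot be sampled directly, \(J(y)\) can instead be approximated by importance sampling: draw \(x\) from a proposal density \(h\) and weight \(f(y\mid x)\) by \(g(x)/h(x)\). The sketch below reuses the hypothetical densities above with a Student-\(t\) proposal; all concrete choices are assumptions made for illustration.

        import numpy as np
        from scipy.stats import norm, t

        rng = np.random.default_rng(1)

        # Importance sampling estimate of J(y) = \int f(y|x) g(x) dx when g(x)
        # cannot be sampled directly: draw x from a proposal h, weight by g/h.
        def j_hat(y, n=50_000):
            x = t.rvs(df=3, size=n, random_state=rng)          # x ~ h(x) = t with 3 df
            w = norm.pdf(x) / t.pdf(x, df=3)                   # weights g(x) / h(x)
            return np.mean(w * norm.pdf(y, loc=x, scale=0.5))  # average of w * f(y|x)

        # With g = N(0, 1) and f(y|x) = N(x, 0.25), J is N(0, 1.25); compare:
        print(j_hat(0.0), norm.pdf(0.0, scale=1.25 ** 0.5))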
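    The EM algorithm can be sketched on a small missing-data problem of the kind the book uses for motivation: estimating a normal mean from right-censored data. The data, the censoring point and the known unit variance below are all hypothetical assumptions; the E-step imputes each censored value by its conditional expectation and the M-step averages the augmented data.

        import numpy as np
        from scipy.stats import norm

        # Hypothetical data: y_i ~ N(theta, 1), right-censored at c = 1.0;
        # four values were observed and three are known only to exceed c.
        c = 1.0
        obs = np.array([-0.3, 0.1, 0.8, 0.4])
        n_cens = 3
        n = len(obs) + n_cens

        theta = 0.0
        for _ in range(100):
            # E-step: replace each censored value by E[Z | Z > c, theta]
            # = theta + phi(c - theta) / (1 - Phi(c - theta)) for unit variance.
            alpha = c - theta
            ez = theta + norm.pdf(alpha) / norm.sf(alpha)
            # M-step: the complete-data MLE of theta is the augmented-data mean.
            theta = (obs.sum() + n_cens * ez) / n

        print(theta)  # converges to the MLE of theta under censoring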
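    The data augmentation principle itself can be sketched on the same hypothetical censored-normal model: the censored values play the role of the latent data \(Z\), and a chained, Gibbs-type scheme alternates between drawing \(Z\mid\theta,Y\) (a truncated normal) and \(\theta\mid Y,Z\) (its simple complete-data posterior, here under an assumed flat prior). This deliberately conflates the Data Augmentation and Gibbs sampler algorithms into their simplest single-imputation form.

        import numpy as np
        from scipy.stats import truncnorm

        rng = np.random.default_rng(2)

        c = 1.0
        obs = np.array([-0.3, 0.1, 0.8, 0.4])  # observed (uncensored) values
        n_cens = 3                              # values censored at c
        n = len(obs) + n_cens

        theta = 0.0
        for _ in range(1_000):
            # Imputation step: Z | theta, Y is normal truncated to (c, inf);
            # scipy's truncnorm takes standardized bounds (c - loc) / scale.
            z = truncnorm.rvs(a=c - theta, b=np.inf, loc=theta, scale=1.0,
                              size=n_cens, random_state=rng)
            # Posterior step: theta | Y, Z is N(augmented mean, 1/n) under a
            # flat prior and known unit variance (both assumptions here).
            theta = rng.normal((obs.sum() + z.sum()) / n, (1.0 / n) ** 0.5)

        print(theta)  # after burn-in, successive values are draws from p(theta | Y)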
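    Finally, the SIR (sampling/importance resampling) idea is noniterative: draw a large sample from a proposal, weight by target over proposal, and resample with probabilities proportional to the weights. The target and proposal densities below are again hypothetical stand-ins.

        import numpy as np
        from scipy.stats import norm, t

        rng = np.random.default_rng(3)

        m = 20_000
        x = t.rvs(df=3, size=m, random_state=rng)             # draws from proposal h = t_3
        w = norm.pdf(x, loc=2.0, scale=0.5) / t.pdf(x, df=3)  # weights target / proposal
        idx = rng.choice(m, size=2_000, p=w / w.sum())        # resample prop. to weights
        resampled = x[idx]                                    # approximately ~ N(2, 0.25)

        print(resampled.mean(), resampled.std())  # should be near 2.0 and 0.5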
    Keywords

    observed data methods
    data augmentation methods
    censored regression data
    randomized response
    latent class analysis
    hierarchical models
    likelihood function
    posterior density
    maximum likelihood method
    normal-based inference
    highest posterior density region
    significance level
    approximations
    numerical integration
    Laplace expansions
    Monte Carlo
    composition
    importance sampling
    predictive distribution
    EM algorithm
    Gibbs sampler
    latent data
    normal approximation
    Poor Man's Data Augmentation algorithm
    non-normal approximation
    iterative algorithms
    SIR algorithm
