Improving SAMC using smoothing methods: Theory and applications to Bayesian model selection problems (Q834357)

    Statements

    Improving SAMC using smoothing methods: Theory and applications to Bayesian model selection problems (English)
    19 August 2009
    Stochastic approximation Monte Carlo (SAMC) has recently been proposed by \textit{F. Liang, C. Liu} and \textit{R. J. Carroll} [J. Am. Stat. Assoc. 102, No. 477, 305--320 (2007; Zbl 1226.65002)] as a general simulation and optimization algorithm. Let \(f(x)=c \psi(x)\), \(x\in X\), denote the target probability density/mass function, let \(E_1,\dots,E_m\) denote a partition of \(X\), and let \(c\) be a constant. SAMC seeks to sample from the trial distribution \[ f_\omega(x)\propto \sum_{i=1}^m \psi(x) \left(\pi_i/\omega_i \right) I(x \in E_i), \] with \(\omega_i=\int_{E_i} \psi(x)\, dx\), \(\pi_i>0\), \(\sum_{i=1}^m \pi_i=1\). A suitable partition of the sample space and an appropriate choice of the prespecified probabilities \(\{\pi_i\}\) largely overcome the local-trap problem from which reversible jump Markov chain Monte Carlo (RJMCMC) suffers when the landscape of \(f(x)\) is rugged. The estimate of each unknown weight \(\omega_i\) is updated by a stochastic approximation step whenever SAMC draws a sample from the corresponding subregion \(E_i\), \(i=1,\dots,m\). In this article, smoothing SAMC (SSAMC) is proposed, which employs multiple samples and smoothed estimates of all the \(\{\omega_i\}\) at each iteration. The new algorithm is tested on a change-point identification example. The numerical results indicate that SSAMC significantly outperforms SAMC and RJMCMC for model selection problems. A rigorous proof of the convergence of the general algorithm is established under verifiable conditions.
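    As an illustration of the iteration summarized above, the following is a minimal Python sketch of plain SAMC: a Metropolis-Hastings move against the trial density \(f_\omega\) followed by a stochastic approximation update of the log-weights. The target \(\psi\), the partition, the gain sequence and all parameter values are hypothetical and not taken from the paper; SSAMC would additionally draw several samples per iteration and smooth the subregion frequencies before updating.
\begin{verbatim}
import numpy as np

def psi(x):
    # Hypothetical rugged target: mixture of two well-separated Gaussians.
    return np.exp(-0.5 * (x + 4.0) ** 2) + np.exp(-0.5 * (x - 4.0) ** 2)

def region(x, edges):
    # Index i of the subregion E_i containing x.
    return int(np.clip(np.searchsorted(edges, x) - 1, 0, len(edges) - 2))

def samc(n_iter=50000, m=10, t0=1000.0, prop_sd=1.0, seed=0):
    rng = np.random.default_rng(seed)
    edges = np.linspace(-8.0, 8.0, m + 1)   # partition E_1, ..., E_m
    pi = np.full(m, 1.0 / m)                # desired sampling frequencies
    log_omega = np.zeros(m)                 # running log-weight estimates
    x = 0.0
    i = region(x, edges)
    for t in range(1, n_iter + 1):
        # MH step targeting f_omega(x) ∝ psi(x) * pi_i / omega_i on E_i
        # (with uniform pi the pi_i terms cancel in the ratio).
        y = x + prop_sd * rng.normal()
        if edges[0] <= y <= edges[-1]:
            j = region(y, edges)
            log_ratio = (np.log(psi(y)) - log_omega[j]) \
                        - (np.log(psi(x)) - log_omega[i])
            if np.log(rng.uniform()) < log_ratio:
                x, i = y, j
        # Stochastic approximation update with gain gamma_t -> 0.
        gamma = t0 / max(t0, t)
        e = np.zeros(m)
        e[i] = 1.0                  # SSAMC would use a smoothed frequency
        log_omega += gamma * (e - pi)
        log_omega -= log_omega.mean()   # fix the arbitrary additive constant
    return log_omega, edges

if __name__ == "__main__":
    log_omega, edges = samc()
    print(np.round(log_omega, 2))
\end{verbatim}
    The key design point of SAMC is visible in the update line: subregions visited more often than their target frequency \(\pi_i\) have their weight raised, which penalizes revisiting them and drives the sampler out of local traps.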
    Markov chain Monte Carlo
    reversible jump
    smoothing
    stochastic approximation Monte Carlo