Estimation and annealing for Gibbsian fields (Q1106598)

    Statements

    Estimation and annealing for Gibbsian fields (English)
    1988
    Consider a Gibbs distribution \(\pi_{\theta}(x)=\exp(-\langle\alpha(x),\theta\rangle)Z(\theta)^{-1}\) on a finite configuration space. The maximum likelihood estimator \(\theta_*\) is given as the solution of \(E_{\theta}[\alpha]=\alpha(x)\). A direct evaluation of \(E_{\theta}[\alpha]\), or a calculation of \(\theta_*\) by the Robbins-Monro algorithm, is not feasible here because no simple method for simulating \(\pi_{\theta}\) is available. The very nice main idea of the paper is to use a stochastic gradient algorithm driven by the so-called Gibbs sampler, an inhomogeneous Markov chain \(P_{\theta}^{n,n+1}\) converging weakly to \(\pi_{\theta}\). It is shown that the sequence of iterates \(\theta(n)\) of this algorithm converges asymptotically to \(\theta_*\) if the step size is not too small. Practical aspects of implementing this algorithm are also discussed. Finally, it is shown that the annealing algorithm continues to work with \(\theta_*\) replaced by \(\theta(n)\), i.e. \(P^{n,n+1}_{\theta(n)/T(n)}\) converges weakly to the uniform distribution on \(\{x\mid \langle\alpha(x),\theta_*\rangle \text{ is minimal}\}\) if \(T(n)\to 0\) slowly. These results have applications in image processing.
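    The review gives the algorithm only in outline. The following Python sketch illustrates the three ingredients on a toy Ising-type model with a scalar \(\theta\): a Gibbs sampler for \(\pi_{\theta}\), a stochastic gradient step in which \(E_{\theta}[\alpha]\) is replaced by \(\alpha\) evaluated at the current state of the sampler, and an annealing loop at temperature \(T(n)\to 0\). Every concrete choice here (the grid, the statistic alpha, the step-size schedule a/n, the logarithmic cooling schedule) is an illustrative assumption, not taken from the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy setting (illustrative only): an Ising-type field of +/-1 spins on
        # an N x N grid, scalar theta, and sufficient statistic
        # alpha(x) = -(sum of neighbouring spin products), so that
        # pi_theta(x) is proportional to exp(-<alpha(x), theta>).
        N = 8

        def alpha(x):
            """Sufficient statistic: negative sum of neighbouring spin products."""
            return -(np.sum(x[:, :-1] * x[:, 1:]) + np.sum(x[:-1, :] * x[1:, :]))

        def gibbs_sweep(x, theta):
            """One sweep of the Gibbs sampler: resample every site from its
            full conditional under pi_theta (one step of P_theta^{n,n+1})."""
            for i in range(N):
                for j in range(N):
                    s = sum(x[i + di, j + dj]
                            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                            if 0 <= i + di < N and 0 <= j + dj < N)
                    # P(x_ij = +1 | rest) = 1 / (1 + exp(-2 * theta * s))
                    x[i, j] = 1 if rng.random() < 1.0 / (1.0 + np.exp(-2.0 * theta * s)) else -1
            return x

        def estimate_theta(x_obs, n_iter=2000, a=0.05):
            """Stochastic gradient ascent on the log-likelihood, with
            E_theta[alpha] replaced by alpha at the current sampler state."""
            theta = 0.0
            x = rng.choice(np.array([-1, 1]), size=(N, N))
            a_obs = alpha(x_obs)
            for n in range(1, n_iter + 1):
                x = gibbs_sweep(x, theta)
                # gradient of log pi_theta(x_obs) is E_theta[alpha] - alpha(x_obs)
                theta += (a / n) * (alpha(x) - a_obs)
            return theta

        def anneal(theta, n_iter=3000, c=4.0):
            """Annealing with the estimated parameter: run the Gibbs sampler
            for pi_{theta / T(n)} with a slowly decreasing temperature."""
            x = rng.choice(np.array([-1, 1]), size=(N, N))
            for n in range(1, n_iter + 1):
                T = c / np.log(n + 1)  # logarithmic cooling, T(n) -> 0 slowly
                x = gibbs_sweep(x, theta / T)
            return x  # approximately a minimizer of <alpha(x), theta>

    A quick sanity check of the sketch is to generate synthetic "data" by running gibbs_sweep a few hundred times at a known theta, then confirm that estimate_theta recovers a nearby value before feeding the estimate to anneal.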
    Keywords: weak convergence; Gibbs distribution; finite configuration space; maximum likelihood estimator; stochastic gradient algorithm; Gibbs sampler; inhomogeneous Markov chain; annealing algorithm; image processing