Adaptive importance sampling on discrete Markov chains (Q1305417)

scientific article

    Language: English
    Label: Adaptive importance sampling on discrete Markov chains
    Description: scientific article

    Statements

    Adaptive importance sampling on discrete Markov chains (English)
    22 March 2000
    In modelling particle transport through a medium, the path of a particle behaves as a transient Markov chain. The authors are interested in characteristics of the particle's movement that depend on its starting state and take the form of a ``score'' accumulated with each transition. The main purpose of this work is to prove that, under certain conditions, adaptive importance sampling for discrete Markov chains with scoring converges exponentially. These conditions include that the state space is finite, that the vector of expected scores conforms to a linear model, and that the simulation contains sufficiently many replications of the initial states. The examples presented show that this exponential convergence can occur with a reasonably small number of simulation runs.
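
    What follows is a minimal Python sketch of the kind of adaptive scheme the review describes: each stage estimates the vector of expected scores from simulation and plugs that estimate into an approximately zero-variance transition kernel for the next stage, so that the likelihood-ratio weighted estimator's variance shrinks as the estimates improve. The chain P, the score matrix S, the kernel q(i, j) proportional to p(i, j)(s(i, j) + mu_hat(j)), and the stage/replication counts are illustrative assumptions for this sketch, not the authors' numerical examples.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical transient chain on states {0, 1, 2} with absorbing state 3.
    P = np.array([[0.2, 0.5, 0.2, 0.1],
                  [0.3, 0.1, 0.4, 0.2],
                  [0.1, 0.3, 0.2, 0.4],
                  [0.0, 0.0, 0.0, 1.0]])
    # Nonnegative score collected on each transition i -> j (also hypothetical).
    S = np.array([[1.0, 2.0, 0.5, 3.0],
                  [0.5, 1.0, 2.0, 1.0],
                  [2.0, 0.5, 1.0, 0.5],
                  [0.0, 0.0, 0.0, 0.0]])
    ABSORB, TRANSIENT = 3, [0, 1, 2]

    # Exact expected accumulated scores mu, for checking convergence:
    # mu = (I - Q)^{-1} r, with Q the transient block of P and r the expected
    # one-step scores.
    Q = P[:3, :3]
    r = (P[:3, :] * S[:3, :]).sum(axis=1)
    mu_exact = np.linalg.solve(np.eye(3) - Q, r)

    def one_run(start, mu_hat):
        """Simulate one path under the biased kernel; return the weighted score."""
        value = np.append(mu_hat, 0.0)   # remaining expected score; 0 at absorption
        i, total, weight = start, 0.0, 1.0
        while i != ABSORB:
            # Approximate zero-variance kernel:
            # q(i, j) proportional to p(i, j) * (s(i, j) + mu_hat(j)).
            q = P[i] * (S[i] + value)
            q /= q.sum()
            j = rng.choice(4, p=q)
            weight *= P[i, j] / q[j]     # accumulate the likelihood ratio
            total += weight * S[i, j]    # importance-weighted score of this step
            i = j
        return total

    # Adaptive stages: each stage re-estimates mu from fresh replications of
    # every initial state and plugs the estimate back into the sampling kernel.
    mu_hat = np.ones(3)                  # crude initial guess
    for stage in range(5):
        runs = np.array([[one_run(i, mu_hat) for _ in range(200)] for i in TRANSIENT])
        mu_hat = runs.mean(axis=1)
        print(f"stage {stage}: max error = {np.abs(mu_hat - mu_exact).max():.2e}")

    As mu_hat approaches the true expected scores, the biased kernel approaches the zero-variance solution and the per-stage estimates tighten rapidly, which is the mechanism behind the exponential convergence proved in the paper.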
    adaptive procedures
    exponential convergence
    Monte Carlo method
    particle transport
    zero-variance solution
    importance sampling
    discrete Markov chains
    numerical examples