A sequential Monte Carlo approach to computing tail probabilities in stochastic models (Q657700)
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | A sequential Monte Carlo approach to computing tail probabilities in stochastic models | scientific article | |
Statements
A sequential Monte Carlo approach to computing tail probabilities in stochastic models (English)
10 January 2012
In complex stochastic models, it is often difficult to evaluate probabilities of events. When an analytical approach is impossible, Monte Carlo methods provide a practical alternative. In the present paper, the authors introduce a sequential importance sampling and resampling (SISR) procedure that attains a weaker form of asymptotic optimality, namely logarithmic efficiency. The SISR procedure for computing probabilities of rare events is closely related to interacting particle systems and dynamic importance sampling methods. By making use of martingale representations of the sequential Monte Carlo estimators, it is shown how the resampling weights can be chosen to yield logarithmically efficient Monte Carlo estimates of large deviation probabilities for multidimensional Markov random walks.

The following is a short description of the contents of the paper. In the introduction, the problem of calculating probabilities of rare events is presented, together with a brief review of stochastic methods for solving it.

In Section 2, the \(\sigma\)-field generated by \(n\) random variables \(Y_{1}, Y_{2}, \dots, Y_{n}\) on a probability space \((\Omega, {\mathcal F}, P)\) is considered. For the computation of the probability \(\alpha = P(Y_{n} \in \Gamma)\), a direct Monte Carlo method based on i.i.d. random vectors is proposed; it provides the quantity \(\hat{\alpha}_{D}\) as an estimate of the probability \(\alpha\). A SISR procedure is then introduced to construct the quantity \(\hat{\alpha}_{B}\) as a Monte Carlo estimate of \(\alpha\); here an important role is played by the so-called resampling weights. The martingale decomposition of \(\hat{\alpha}_{B} - \alpha\) is obtained and used to estimate the standard error of \(\hat{\alpha}_{B}\). A further estimator \(\hat{\alpha}_{R}\) of \(\alpha\) is constructed, and the martingale representation of \(\hat{\alpha}_{R} - \alpha\) is given.

In Section 3, the logarithmic efficiency of SISR for the Monte Carlo computation of small tail probabilities is studied. The notion of asymptotic optimality of an importance sampling measure is recalled. The resampling weights for the SISR estimate \(\hat{\alpha}_{B}\) are chosen, and two conditions, stated in terms of exceedance probabilities, which guarantee logarithmic efficiency are presented. In Example 1, a concrete realization of the SISR procedure and its logarithmic efficiency are developed. In Subsection 3.1, a heuristic principle for efficient SISR procedures is studied. Theorem 1 shows that the estimates \(\hat{\alpha}_{B}\) and \(\hat{\alpha}_{R}\) of \(\alpha\) are logarithmically efficient. The heuristic principle is used in Example 2 to construct a logarithmically efficient SISR procedure. Theorem 2 provides the resampling weights for logarithmically efficient simulation of \(\hat{\alpha}_{B}\) and \(\hat{\alpha}_{R}\). In Subsection 3.2, the SISR procedure is extended to Markov additive processes; Example 3 extends Example 1, and Theorem 3 extends Theorems 1 and 2, to this setting. In Theorem 4, under a given assumption, the choice of resampling weights which provides logarithmic efficiency is presented. Subsection 3.3 discusses how the basic idea of Examples 1 and 2 can be extended to more general rare events and more general stochastic sequences.

In Section 4, two examples are used to illustrate Theorems 1 and 4. The paper finishes with an appendix, in which some useful equalities are proved.
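To make the resampling mechanism concrete, the following is a minimal sketch (in Python) of one SISR estimator of a tail probability \(P(S_n \geq b)\) for a sum of i.i.d. standard normal increments. The exponential-tilting resampling weight \(e^{\theta x_k}\) with \(\theta = b/n\), the per-particle correction factor, and all names in the code are assumptions of this illustration, not taken from the paper.

```python
"""Minimal SISR sketch for a tail probability alpha = P(S_n >= b),
where S_n = X_1 + ... + X_n with i.i.d. N(0, 1) increments.

Particles are propagated under the original dynamics and resampled at
every step with positive resampling weights; each particle carries the
correction factor that keeps the final estimate unbiased.  The
exponential-tilting weight exp(theta * x_k) with theta = b / n is a
heuristic choice assumed for this illustration.
"""
import math

import numpy as np

rng = np.random.default_rng(0)


def sisr_tail_prob(n, b, m, theta=None):
    """SISR estimate of P(X_1 + ... + X_n >= b) using m particles."""
    if theta is None:
        theta = b / n                 # tilting parameter (assumption)
    s = np.zeros(m)                   # particle positions S_k
    h = np.ones(m)                    # per-particle correction factors
    for _ in range(n):
        x = rng.standard_normal(m)    # propagate under the original law
        s = s + x
        w = np.exp(theta * x)         # resampling weights w_k > 0
        w_bar = w.mean()
        # Multinomial resampling proportional to w; multiplying by
        # w_bar / w undoes the selection bias in expectation.
        idx = rng.choice(m, size=m, p=w / w.sum())
        s, h = s[idx], h[idx] * (w_bar / w[idx])
    return float(np.mean(h * (s >= b)))


def direct_mc(n, b, m):
    """Plain Monte Carlo estimate of the same probability."""
    return float(np.mean(rng.standard_normal((m, n)).sum(axis=1) >= b))


if __name__ == "__main__":
    n, b, m = 30, 15.0, 20_000
    exact = 0.5 * math.erfc(b / math.sqrt(2 * n))   # S_n ~ N(0, n)
    print("exact    :", exact)
    print("SISR     :", sisr_tail_prob(n, b, m))
    print("direct MC:", direct_mc(n, b, m))
```

Resampling proportionally to \(e^{\theta x_k}\) steers the surviving particles toward the exceedance region, while the carried factor \(\bar{w}_k / w_k\) removes the selection bias in expectation, so the estimator stays unbiased for any positive choice of resampling weights; the tilt \(\theta = b/n\) matches the large-deviation change of measure for Gaussian increments.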
exceedance probabilities
large deviations
logarithmic efficiency
sequential importance sampling and resampling