\(\varepsilon\)-strong simulation of the Brownian path (Q1932226)

    Statements

    \(\varepsilon\)-strong simulation of the Brownian path (English)
    17 January 2013
    The authors present an iterative sampling procedure that produces upper and lower simple stochastic processes (built on the \(2^n\) dyadic sub-intervals of the initial interval) enveloping Brownian paths almost surely. The bounding processes, and consequently the associated Brownian paths, are simulated without any discretization error. The distance between the bounding processes (with respect to the supremum norm or the \(L^1\)-norm) is shown to go to zero as \(n\) goes to infinity, and the rate of convergence in \(L^1\) is of order \(O\big((2^n)^{-1/2}\big)\).

    The procedure relies on the law of the extrema of a Brownian bridge \(X\) and requires information on these extrema together with the starting and ending points of the bridge. Given an interval \([s,t]\) with midpoint \(t^{\star} = (s+t)/2\), the initial and terminal values \(X_s\) and \(X_t\), and the ranges of the extrema of \(X\) on \([s,t]\) (denoted by \([L^{\downarrow}_{s,t}, L^{\uparrow}_{s,t}]\) for the minimum and \([U^{\downarrow}_{s,t}, U^{\uparrow}_{s,t}]\) for the maximum), a middle point \(x = X_{t^{\star}}(\omega)\) is sampled. The simulation of the middle point is based on an acceptance-rejection method using the distribution of \(X_{t^{\star}}\) given the initial and terminal values of \(X\) and the ranges of its extrema; the corresponding density is shown to be the (Gaussian) density of \(X_{t^{\star}} \mid X_s, X_t\) multiplied by the probability that the minimum and the maximum of the Brownian bridge lie in \([L^{\downarrow}_{s,t}, L^{\uparrow}_{s,t}]\) and \([U^{\downarrow}_{s,t}, U^{\uparrow}_{s,t}]\), respectively, given \(X_{t^{\star}} = x\). Once \(x = X_{t^{\star}}(\omega)\) is sampled, the allowed range for the minimum on \([s,t^{\star}]\) is updated by deciding whether \([L^{\downarrow}_{s,t^{\star}}, L^{\uparrow}_{s,t^{\star}}] = [L^{\downarrow}_{s,t}, L^{\uparrow}_{s,t} \wedge X_{t^{\star}} ]\) or \([L^{\uparrow}_{s,t} \wedge X_{t^{\star}}, X_s \wedge X_{t^{\star}}]\), and the range for the minimum on \([t^{\star},t]\) is updated by deciding whether \([L^{\downarrow}_{t^{\star},t}, L^{\uparrow}_{t^{\star},t}] = [L^{\downarrow}_{s,t}, L^{\uparrow}_{s,t} \wedge X_{t^{\star}} ]\) or \([L^{\uparrow}_{s,t} \wedge X_{t^{\star}}, X_t \wedge X_{t^{\star}}]\). The allowed ranges for the maximum on the intervals \([s,t^{\star}]\) and \([t^{\star},t]\) are updated in a similar way. These ranges are refined until their width is no greater than \(\sqrt{(t-s)/2}\), which guarantees the convergence of the bounding paths at minimal computational cost. Applying this procedure to each of the \(2^n\) sub-intervals of \([s,t]\) allows one to define the dominating processes.

    From the application viewpoint, the procedure makes it possible to estimate path-dependent expectations by unbiased Monte Carlo estimators, without any discretization error. Numerical applications are carried out for the pricing of path-dependent options; the comparison with the standard Euler approximation on the considered examples shows that the procedure provides unbiased and accurate estimates in a reasonable amount of time.
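    As a sketch of this factorization (assuming a standard Brownian motion with unit diffusion coefficient; the symbols \(\varphi\), \(m_{s,t}\) and \(M_{s,t}\) are introduced here only for illustration), the density targeted by the acceptance-rejection step can be written, up to a normalizing constant, as
\[
\pi(x) \;\propto\; \varphi\!\left(x;\ \frac{X_s+X_t}{2},\ \frac{t-s}{4}\right)
\, \operatorname{P}\!\left(m_{s,t}\in[L^{\downarrow}_{s,t}, L^{\uparrow}_{s,t}],\; M_{s,t}\in[U^{\downarrow}_{s,t}, U^{\uparrow}_{s,t}] \;\middle|\; X_s,\ X_{t^{\star}}=x,\ X_t\right),
\]
    where \(\varphi(\cdot\,;\mu,\sigma^2)\) denotes the Gaussian density with mean \(\mu\) and variance \(\sigma^2\), and \(m_{s,t}\), \(M_{s,t}\) denote the minimum and the maximum of the bridge on \([s,t]\). The Gaussian factor acts as the proposal density and the conditional layer probability as the acceptance probability.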
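    The following minimal Python sketch (not taken from the paper) illustrates the structure of this acceptance-rejection step: the midpoint is proposed from the Gaussian law of \(X_{t^{\star}}\) given the endpoints and accepted with the conditional layer probability. The helper layered_bridge_probability is a hypothetical placeholder (here it simply returns 1), whereas in the paper this acceptance probability is evaluated through the law of the extrema of the Brownian bridge.

import numpy as np

def bridge_midpoint_proposal(rng, s, t, x_s, x_t):
    """Propose X_{t*} from its Gaussian law given only the endpoint values
    X_s and X_t (standard Brownian motion, unit diffusion coefficient)."""
    t_star = 0.5 * (s + t)
    mean = 0.5 * (x_s + x_t)
    var = (t_star - s) * (t - t_star) / (t - s)  # equals (t - s) / 4
    return t_star, rng.normal(mean, np.sqrt(var))

def layered_bridge_probability(s, t, x_s, x_t, x_mid, lower_layer, upper_layer):
    """Placeholder for the probability that the bridge minimum lies in
    lower_layer and its maximum in upper_layer, given the three pinned
    values; the paper evaluates this quantity exactly via the law of the
    extrema of the Brownian bridge. Here it is stubbed out."""
    return 1.0  # stub: always accept

def sample_midpoint(rng, s, t, x_s, x_t, lower_layer, upper_layer):
    """Acceptance-rejection step: propose the midpoint from the Gaussian
    bridge density and accept it with the conditional layer probability."""
    while True:
        t_star, x_mid = bridge_midpoint_proposal(rng, s, t, x_s, x_t)
        accept_prob = layered_bridge_probability(
            s, t, x_s, x_t, x_mid, lower_layer, upper_layer
        )
        if rng.uniform() < accept_prob:
            return t_star, x_mid

# Example call with arbitrary illustrative values for the endpoints and layers:
rng = np.random.default_rng(0)
t_star, x_mid = sample_midpoint(
    rng, s=0.0, t=1.0, x_s=0.0, x_t=0.5,
    lower_layer=(-1.0, 0.0), upper_layer=(0.5, 1.5),
)

    Once a midpoint is accepted, the layers for the minimum and the maximum on the two half-intervals are updated as described above and the bisection is applied recursively.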
    Brownian bridge
    option pricing
    path dependent options
    iterative algorithm
    intersection layer
    pathwise convergence
    unbiased sampling
    numerical examples
    Monte Carlo estimators

    Identifiers
