Quantitative error estimates for a least-squares Monte Carlo algorithm for American option pricing (Q354190)
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | Quantitative error estimates for a least-squares Monte Carlo algorithm for American option pricing | scientific article | |
Statements
Quantitative error estimates for a least-squares Monte Carlo algorithm for American option pricing (English)
18 July 2013
The paper deals with error estimates for a least-squares Monte Carlo algorithm for American option pricing. Because of their effectiveness for the high-dimensional problems that arise in approximating American option prices, least-squares Monte Carlo algorithms such as that of \textit{F. A. Longstaff} and \textit{E. S. Schwartz} [``Valuing American options by simulation: a simple least-squares approach'', Rev. Finan. Stud. 14, No. 1, 113--147 (2001; \url{doi:10.1093/rfs/14.1.113})] have become very popular. The author shows that, under suitable conditions, Monte Carlo error estimates for the Longstaff-Schwartz algorithm can be established for linear as well as nonlinear approximation mechanisms. The main result states that if the payoff process is almost surely bounded, and if the approximation architecture used in the Longstaff-Schwartz algorithm is an arbitrary set of \(L^{2}\)-functions of finite Vapnik-Chervonenkis dimension, then the expected \(L^{2}\)-convergence error is \(O(\sqrt{\log(N)/N})\), where \(N\) is the number of simulated Monte Carlo sample paths. If the underlying asset price process is Markov, the author shows that these results hold without further restrictions. Under suitable conditions, the results are extended to payoff processes that are bounded only in some \(L^{p}\)-norm, \(2 < p < \infty\). Moreover, the results admit all linear, finite-dimensional approximation schemes as well as all underlying and payoff processes. A further contribution concerns the overall error of the regression procedure: the author gives overall estimates of the expected \(L^{2}\)-error for the Longstaff-Schwartz algorithm with finite-dimensional polynomial approximation. The results are very general when nonlinear approximations are considered, and they apply directly to nonlinear sets of functions whose Vapnik-Chervonenkis dimension admits known numerical bounds. The author also presents overall error estimates for the Longstaff-Schwartz algorithm with a neural network approximation; these estimates imply that a relative growth of \(o(\sqrt{N})\) of the state space dimension is sufficient for convergence as \(N \rightarrow \infty\).
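To make the setting concrete, the following sketch implements the classical Longstaff-Schwartz least-squares Monte Carlo method for a Bermudan put under geometric Brownian motion, using a polynomial regression basis as the linear, finite-dimensional approximation scheme. It is a minimal illustration of the algorithm analysed in the paper, not the author's implementation: the model dynamics, all parameter values (strike, rate, volatility, polynomial degree, number of paths), and the NumPy-based coding are illustrative assumptions.

```python
# A minimal Longstaff-Schwartz sketch for a Bermudan put under geometric
# Brownian motion. All parameters and the polynomial basis degree are
# illustrative assumptions, not taken from the paper under review.
import numpy as np

def lsm_bermudan_put(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                     n_steps=50, n_paths=100_000, degree=3, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # Simulate N independent GBM paths; N = n_paths plays the role of the
    # sample size in the O(sqrt(log(N)/N)) error estimate.
    z = rng.standard_normal((n_paths, n_steps))
    log_paths = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
    S = S0 * np.exp(log_paths)                   # S[:, j] = price at time (j+1)*dt
    payoff = np.maximum(K - S, 0.0)              # put payoff at each exercise date

    # Dynamic programming backward in time: cashflow holds the realised payoff
    # from following the (approximate) optimal stopping rule from step j onward.
    cashflow = payoff[:, -1].copy()
    for j in range(n_steps - 2, -1, -1):
        cashflow *= np.exp(-r * dt)              # discount one step back
        itm = payoff[:, j] > 0.0                 # regress only on in-the-money paths
        if not np.any(itm):
            continue
        # Least-squares regression of discounted future cashflows on a
        # polynomial basis in the current asset price.
        coeffs = np.polyfit(S[itm, j], cashflow[itm], degree)
        continuation = np.polyval(coeffs, S[itm, j])
        exercise = payoff[itm, j] > continuation # exercise where immediate payoff wins
        idx = np.where(itm)[0][exercise]
        cashflow[idx] = payoff[idx, j]
    return np.exp(-r * dt) * cashflow.mean()     # discount back to time 0

print(f"Estimated Bermudan put price: {lsm_bermudan_put():.4f}")
```

In this sketch, enlarging the regression basis (here, the polynomial degree) reduces the approximation error but increases the statistical error for fixed `n_paths`; the paper's results quantify how this statistical error decays in the number of simulated paths.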
least-squares Monte Carlo
Longstaff-Schwartz algorithm
American options
dynamic programming
statistical learning