A harmonic function technique for the optimal stopping of diffusions
Publication:3108367
DOI: 10.1080/17442508.2010.498915 · zbMath: 1241.60022 · MaRDI QID: Q3108367
Sören Christensen, Albrecht Irle
Publication date: 3 January 2012
Published in: Stochastics
Full work available at URL: https://doi.org/10.1080/17442508.2010.498915
60G40: Stopping times; optimal stopping problems; gambling theory
60J60: Diffusion processes
62L15: Optimal stopping in statistics
Related Items
- Anscombe's model for sequential clinical trials revisited
- Optimal time to exchange two baskets
- A method for pricing American options using semi-infinite linear programming
- On the continuous and smooth fit principle for optimal stopping problems in spectrally negative Lévy models
- A measure approach for continuous inventory models: discounted cost criterion
- Value function and optimal rule on the optimal stopping problem for continuous-time Markov processes
- Timing in the presence of directional predictability: optimal stopping of skew Brownian motion
- Multidimensional investment problem
- An optimal stopping problem for jump diffusion logistic population model
- Optimal stopping with random exercise lag
- Multisource Bayesian sequential binary hypothesis testing problem
- Resolvent-techniques for multiple exercise problems
- Optimal decision under ambiguity for diffusion processes
- On the solution of general impulse control problems using superharmonic functions
Cites Work
- Existence and explicit determination of optimal stopping times
- On optimal timing of investment when cost components are additive and follow geometric diffusions
- Optimal time to invest when the price processes are geometric Brownian motions
- On the optimal stopping problem for one-dimensional diffusions
- Construction of the value function and optimal rules in optimal stopping of one-dimensional diffusions
- Optimal stopping of one-dimensional diffusions
- The generalized perpetual American exchange-option problem
- Optimal stopping in a Markov process