The Bellman equation for control of the running max of a diffusion and applications to look-back options
From MaRDI portal
Publication:4711143
DOI: 10.1080/00036819308840158 ⋮ zbMath: 0788.49027 ⋮ OpenAlex: W1997312835 ⋮ Wikidata: Q58148358 (Scholia: Q58148358) ⋮ MaRDI QID: Q4711143
Publication date: 25 June 1992
Published in: Applicable Analysis
Full work available at URL: https://doi.org/10.1080/00036819308840158
Microeconomic theory (price theory and economic markets) (91B24) ⋮ Economic growth models (91B62) ⋮ Optimal stochastic control (93E20) ⋮ Diffusion processes (60J60) ⋮ Viscosity solutions to Hamilton-Jacobi equations in optimal control and differential games (49L25)
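The title's two central objects, the running max of a diffusion and look-back options, are linked by the standard fact that a floating-strike look-back call pays the difference between the running maximum of the underlying and its terminal value. As a minimal illustration (not code from the paper; all names and parameter values here are hypothetical defaults), a Monte Carlo sketch of that payoff for a geometric Brownian motion might look like:

```python
import numpy as np

def lookback_call_mc(s0=100.0, r=0.05, sigma=0.2, T=1.0,
                     n_steps=252, n_paths=10_000, seed=0):
    """Estimate E[e^{-rT} (M_T - S_T)] for a floating-strike look-back call,
    where M_T = max_{0<=t<=T} S_t is the running max of a GBM path.
    Illustrative sketch only; parameters are hypothetical defaults."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # Log-increments of risk-neutral GBM: (r - sigma^2/2) dt + sigma dW
    z = rng.standard_normal((n_paths, n_steps))
    log_inc = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    paths = s0 * np.exp(np.cumsum(log_inc, axis=1))
    # Running max along each path, including the initial value s0
    running_max = np.maximum(np.max(paths, axis=1), s0)
    payoff = running_max - paths[:, -1]  # floating-strike look-back call
    return float(np.exp(-r * T) * payoff.mean())
```

The payoff is nonnegative by construction, since the running max dominates the terminal value; the paper studies the associated Bellman (dynamic programming) equation for controlling such running-max functionals rather than this simulation approach.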
Related Items
An approximation scheme for uncertain minimax optimal control problems ⋮ Infinite Horizon Stochastic Optimal Control Problems with Running Maximum Cost ⋮ Optimal Tracking Portfolio with a Ratcheting Capital Benchmark ⋮ Dynamic programming and error estimates for stochastic control problems with maximum cost ⋮ Approximative Policy Iteration for Exit Time Feedback Control Problems Driven by Stochastic Differential Equations using Tensor Train Format
Cites Work
- Fully nonlinear oblique derivative problems for nonlinear second-order elliptic PDE's
- Total risk aversion and the pricing of options
- Uniqueness for first-order Hamilton-Jacobi equations and Hopf formula
- The Bellman equation for minimizing the maximum cost
- Optimal control of diffusion processes and Hamilton-Jacobi-Bellman equations. Part I: The dynamic programming principle and applications
- Optimal control of diffusion processes and Hamilton-Jacobi-Bellman equations. Part 2: Viscosity solutions and uniqueness
- Optimal Control of the Running Max
- A Stochastic Control Approach to the Pricing of Options