Dynamic programming and error estimates for stochastic control problems with maximum cost
DOI: 10.1007/s00245-014-9255-3
zbMath: 1311.93086
OpenAlex: W2016107460
MaRDI QID: Q2340992
Athena Picarelli, Olivier Bokanowski, Hasnaa Zidani
Publication date: 21 April 2015
Published in: Applied Mathematics and Optimization
Full work available at URL: https://doi.org/10.1007/s00245-014-9255-3
Keywords: error estimates; dynamic programming; stochastic optimal control; Hamilton-Jacobi equations; reachable sets; maximum cost; lookback options; oblique Neumann boundary condition
MSC classification:
- Stochastic ordinary differential equations (aspects of stochastic analysis) (60H10)
- Dynamic programming in optimal control and differential games (49L20)
- Nonlinear parabolic equations (35K55)
- Dynamic programming (90C39)
- Optimal stochastic control (93E20)
- Error bounds for initial value and initial-boundary value problems involving PDEs (65M15)
- Viscosity solutions to Hamilton-Jacobi equations in optimal control and differential games (49L25)
- Existence of optimal solutions to problems involving randomness (49J55)
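For orientation, here is a minimal sketch of the running-maximum control problem suggested by the title, keywords, and cited works on control of the running max; the notation (drift $b$, diffusion $\sigma$, cost $g$, auxiliary variable $y$) is an illustrative assumption and not the paper's exact formulation:
\[
  v(t,x) \;=\; \inf_{u \in \mathcal{U}} \, \mathbb{E}\Big[ \max_{s \in [t,T]} g\big(X^{t,x,u}_s\big) \Big],
  \qquad
  dX_s = b(X_s,u_s)\,ds + \sigma(X_s,u_s)\,dW_s, \quad X_t = x .
\]
A standard device (under these assumptions) is to augment the state with the running maximum, $Y_s = y \vee \max_{r \in [t,s]} g(X_r)$, and study the auxiliary value function $w(t,x,y) = \inf_{u} \mathbb{E}\big[ Y^{t,x,y,u}_T \big]$, which then satisfies a Hamilton-Jacobi-Bellman equation on the domain $\{ g(x) \le y \}$ with an oblique (Neumann-type) derivative condition $-\partial_y w = 0$ on the boundary $\{ g(x) = y \}$, consistent with the keywords "maximum cost" and "oblique Neumann boundary condition".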
Cites Work
- Probability methods for approximations in stochastic control and for elliptic equations
- Fully nonlinear oblique derivative problems for nonlinear second-order elliptic PDE's
- Viscosity solutions of fully nonlinear second-order elliptic partial differential equations
- Error estimates for a stochastic impulse control problem
- On a discrete approximation of the Hamilton-Jacobi equation of dynamic programming
- Neumann type boundary conditions for Hamilton-Jacobi equations
- Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton-Jacobi formulations
- Convergent difference schemes for nonlinear parabolic equations and mean curvature motion
- Stochastic targets with mixed diffusion processes and viscosity solutions.
- Semi-Lagrangian schemes for linear and fully non-linear diffusion equations
- Reachability and Minimal Times for State Constrained Nonlinear Problems without Any Controllability Assumption
- Weak Dynamic Programming Principle for Viscosity Solutions
- Optimal Control of the Running Max
- Error estimates for stochastic differential games: the adverse stopping case
- Error bounds for monotone approximation schemes for parabolic Hamilton-Jacobi-Bellman equations
- On oblique derivative problems for fully nonlinear second-order elliptic partial differential equations on nonsmooth domains
- Fully nonlinear Neumann type boundary conditions for first-order Hamilton–Jacobi equations
- User’s guide to viscosity solutions of second order partial differential equations
- Optimal Control on the $L^\infty $ Norm of a Diffusion Process
- The viability theorem for stochastic differential inclusions
- Consistency of Generalized Finite Difference Schemes for the Stochastic HJB Equation
- On the convergence rate of approximation schemes for Hamilton-Jacobi-Bellman Equations
- Convergence of numerical schemes for parabolic equations arising in finance theory
- An approximation scheme for the optimal control of diffusion processes
- The Bellman equation for control of the running max of a diffusion and applications to look-back options
- Some Estimates for Finite Difference Approximations
- Stochastic Target Problems, Dynamic Programming, and Viscosity Solutions
- A general Hamilton-Jacobi framework for non-linear state-constrained control problems
- Error Bounds for Monotone Approximation Schemes for Hamilton-Jacobi-Bellman Equations