Dynamic programming and error estimates for stochastic control problems with maximum cost
DOI: 10.1007/s00245-014-9255-3 · zbMATH Open: 1311.93086 · OpenAlex: W2016107460 · MaRDI QID: Q2340992 · FDO: Q2340992
Athena Picarelli, Hasnaa Zidani, Olivier Bokanowski
Publication date: 21 April 2015
Published in: Applied Mathematics and Optimization
Full work available at URL: https://doi.org/10.1007/s00245-014-9255-3
Recommendations
- Infinite horizon stochastic optimal control problems with running maximum cost
- Error estimates for second order Hamilton-Jacobi-Bellman equations. Approximation of probabilistic reachable sets
- Dynamic Programming Principle for One Kind of Stochastic Recursive Optimal Control Problem and Hamilton–Jacobi–Bellman Equation
- Optimal Control of the Running Max
- Error estimation and adaptive discretization for the discrete stochastic Hamilton-Jacobi-Bellman equation
Keywords: dynamic programming; error estimates; Hamilton-Jacobi equations; stochastic optimal control; reachable sets; maximum cost; lookback options; oblique Neumann boundary condition
MSC classifications: Dynamic programming (90C39); Stochastic ordinary differential equations (aspects of stochastic analysis) (60H10); Nonlinear parabolic equations (35K55); Existence of optimal solutions to problems involving randomness (49J55); Dynamic programming in optimal control and differential games (49L20); Viscosity solutions to Hamilton-Jacobi equations in optimal control and differential games (49L25); Error bounds for initial value and initial-boundary value problems involving PDEs (65M15); Optimal stochastic control (93E20)
Cites Work
- Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton-Jacobi formulations
- On a discrete approximation of the Hamilton-Jacobi equation of dynamic programming
- Semi-Lagrangian schemes for linear and fully non-linear diffusion equations
- Reachability and Minimal Times for State Constrained Nonlinear Problems without Any Controllability Assumption
- Weak Dynamic Programming Principle for Viscosity Solutions
- Error estimates for stochastic differential games: the adverse stopping case
- Error bounds for monotone approximation schemes for parabolic Hamilton-Jacobi-Bellman equations
- User’s guide to viscosity solutions of second order partial differential equations
- Consistency of Generalized Finite Difference Schemes for the Stochastic HJB Equation
- On the convergence rate of approximation schemes for Hamilton-Jacobi-Bellman Equations
- An approximation scheme for the optimal control of diffusion processes
- Error Bounds for Monotone Approximation Schemes for Hamilton--Jacobi--Bellman Equations
- Error estimates for a stochastic impulse control problem
- Viscosity solutions of fully nonlinear second-order elliptic partial differential equations
- Convergent difference schemes for nonlinear parabolic equations and mean curvature motion
- Optimal Control of the Running Max
- The viability theorem for stochastic differential inclusions 2
- Stochastic Target Problems, Dynamic Programming, and Viscosity Solutions
- Convergence of numerical schemes for parabolic equations arising in finance theory
- Stochastic targets with mixed diffusion processes and viscosity solutions.
- Paul Wilmott on quantitative finance. 3 Vols. With CD-ROM
- Some Estimates for Finite Difference Approximations
- Neumann type boundary conditions for Hamilton-Jacobi equations
- A general Hamilton-Jacobi framework for non-linear state-constrained control problems
- Probability methods for approximations in stochastic control and for elliptic equations
- Optimal Control on the $L^\infty$ Norm of a Diffusion Process
- On oblique derivative problems for fully nonlinear second-order elliptic partial differential equations on nonsmooth domains
- Fully nonlinear Neumann type boundary conditions for first-order Hamilton–Jacobi equations
- Fully nonlinear oblique derivative problems for nonlinear second-order elliptic PDE's
- The Bellman equation for control of the running max of a diffusion and applications to look-back options
Cited In (15)
- Error estimates for second order Hamilton-Jacobi-Bellman equations. Approximation of probabilistic reachable sets
- Error estimation and adaptive discretization for the discrete stochastic Hamilton-Jacobi-Bellman equation
- An Approximation Scheme for Semilinear Parabolic PDEs with Convex and Coercive Hamiltonians
- A level-set approach to the control of state-constrained McKean-Vlasov equations: application to renewable energy storage and portfolio selection
- Improved Dynamic Programming Methods for Optimal Control of Lumped-Parameter Stochastic Systems
- State-constrained stochastic optimal control problems via reachability approach
- Zubov's method for controlled diffusions with state constraints
- Infinite Horizon Stochastic Optimal Control Problems with Running Maximum Cost
- Some regularity and convergence results for parabolic Hamilton-Jacobi-Bellman equations in bounded domains
- Allocation of Control Points in Stochastic Dynamic-Programming Models
- A monotone scheme for \(\mathrm{G}\)-equations with application to the explicit convergence rate of robust central limit theorem
- Dynamic programming and value-function approximation in sequential decision problems: error analysis and numerical results
- Dynamic programming principle for stochastic control problems driven by general Lévy noise
- An approximation scheme for uncertain minimax optimal control problems
- Optimal Tracking Portfolio with a Ratcheting Capital Benchmark