Discrete dynamic programming and viscosity solutions of the Bellman equation
Publication: 1121521
DOI: 10.1016/S0294-1449(17)30020-3 · zbMath: 0674.49028 · OpenAlex: W2604207521 · MaRDI QID: Q1121521
Italo Capuzzo-Dolcetta, Maurizio Falcone
Publication date: 1989
Published in: Annales de l'Institut Henri Poincaré. Analyse Non Linéaire
Full work available at URL: http://www.numdam.org/item?id=AIHPC_1989__S6__161_0
MSC classification:
- Dynamic programming in optimal control and differential games (49L20)
- Numerical methods based on necessary conditions (49M05)
Related Items
- Approximation of control problems involving ordinary and impulsive controls
- An Algorithm to Construct Subsolutions of Convex Optimal Control Problems
- A splitting algorithm for Hamilton-Jacobi-Bellman equations
- Numerical schemes for investment models with singular transactions
- Numerical approximation of the \(H_\infty\) norm of nonlinear systems
- Optimal times for constrained nonlinear control problems without local controllability
- Nonsmooth semipermeable barriers, Isaacs' equation, and application to a differential game with one target and two players
- Galerkin approximations of the generalized Hamilton-Jacobi-Bellman equation
- On the Rate of Convergence for Monotone Numerical Schemes for Nonlocal Isaacs Equations
- On the Convergence of an Approximation Scheme for the Viscosity Solutions of the Bellman Equation Arising in a Stochastic Optimal Control Problem
- Using dynamic programming with adaptive grid scheme for optimal control problems in economics
- A comparison theorem for a piecewise Lipschitz continuous Hamiltonian and application to Shape-from-Shading problems
- On the rate of convergence of approximation schemes for Bellman equations associated with optimal stopping time problems
- An approximation scheme for the optimal control of diffusion processes
- A tree structure algorithm for optimal control problems with state constraints
- Approximate solutions to the time-invariant Hamilton-Jacobi-Bellman equation
- On the relation between discounted and average optimal value functions
- Discrete approximation of the viscous HJ equation
- Discrete feedback stabilization of semilinear control systems
- Approximation of the viability kernel
- Dynamic programming using radial basis functions
Cites Work
- A numerical approach to the infinite horizon problem of deterministic control theory
- Approximation schemes for viscosity solutions of Hamilton-Jacobi equations
- Probability methods for approximations in stochastic control and for elliptic equations
- Approximate solutions of the Bellman equation of deterministic control theory
- On a system of first-order quasi-variational inequalities connected with the optimal switching problem
- On a discrete approximation of the Hamilton-Jacobi equation of dynamic programming
- Uniqueness of viscosity solutions of Hamilton-Jacobi equations revisited
- Infinite horizon optimal control. Theory and applications
- An approximation method for stochastic control problems with partial observation of the state - a method for constructing \(\varepsilon\)-optimal controls
- Stochastic optimal control. The discrete time case
- An Approximation Scheme for the Minimum Time Function
- Two Approximations of Solutions of Hamilton-Jacobi Equations
- On Deterministic Control Problems: an Approximation Procedure for the Optimal Cost II. The Nonstationary Case
- Differential Games, Optimal Control and Directional Derivatives of Viscosity Solutions of Bellman’s and Isaacs’ Equations
- Deterministic Impulse Control Problems
- Optimal Control with State-Space Constraint I
- Viscosity Solutions of Hamilton-Jacobi Equations
- Discontinuous solutions of deterministic optimal stopping time problems
- Optimal Switching for Ordinary Differential Equations
- Exit Time Problems in Optimal Control and Vanishing Viscosity Method
- Existence de Solution et Algorithme de Résolution Numérique, de Problème de Contrôle Optimal de Diffusion Stochastique Dégénérée ou Non
- On the dynamic programming inequalities associated with the deterministic optimal stopping problem in discrete and continuous time
- On the Convergence of Policy Iteration in Stationary Dynamic Programming
- A Boundary Value Problem for the Minimum-Time Function
- Mathematical programming and the control of Markov chains
- An explicit procedure for discretizing continuous, optimal control problems