Approximate solutions of the Bellman equation of deterministic control theory
From MaRDI portal
Publication: 802134
DOI: 10.1007/BF01442176
zbMath: 0553.49024
OpenAlex: W2088807940
MaRDI QID: Q802134
Italo Capuzzo-Dolcetta, Hitoshi Ishii
Publication date: 1984
Published in: Applied Mathematics and Optimization
Full work available at URL: https://doi.org/10.1007/bf01442176
Keywords: viscosity solution; convergence rate; infinite horizon discounted optimal control problem; time discretized approximation
Dynamic programming in optimal control and differential games (49L20); Existence theories for optimal control problems involving partial differential equations (49J20); Discrete approximations in optimal control (49M25); Generalized solutions to partial differential equations (35D99)
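The keywords point at the paper's central construction: approximating the value function of an infinite-horizon discounted control problem through a time-discretized Bellman equation of the form v_h(x) = min_a [ h f(x,a) + (1 - lam h) v_h(x + h b(x,a)) ]. A minimal sketch of this kind of scheme (the dynamics, running cost, grid, and control set below are illustrative choices, not taken from the paper):

```python
# Time-discretized Bellman (value) iteration on a 1D state grid.
# Scheme: v_h(x) = min_a [ h*f(x,a) + (1 - lam*h) * v_h(x + h*b(x,a)) ].
# Dynamics b, cost f, discount lam, grid and control set are illustrative.
import numpy as np

lam, h = 1.0, 0.05                 # discount rate and time step (lam*h < 1)
xs = np.linspace(-2.0, 2.0, 201)   # state grid, clamped at its endpoints
acts = np.array([-1.0, 0.0, 1.0])  # finite set of admissible controls

def b(x, a):   # drift: a controlled integrator, dx/dt = a
    return a

def f(x, a):   # running cost: steer the state toward the origin
    return x**2 + 0.1 * a**2

v = np.zeros_like(xs)
for _ in range(2000):              # contraction with factor (1 - lam*h)
    # next states for every (state, control) pair, kept inside the grid
    xn = np.clip(xs[:, None] + h * b(xs[:, None], acts[None, :]), xs[0], xs[-1])
    vn = np.interp(xn, xs, v)      # evaluate current v_h at the next states
    v_new = (h * f(xs[:, None], acts[None, :]) + (1 - lam * h) * vn).min(axis=1)
    if np.max(np.abs(v_new - v)) < 1e-10:
        v = v_new
        break
    v = v_new
# v now approximates the discounted infinite-horizon value function on xs
```

Since the update is a contraction with factor (1 - lam*h), the fixed-point iteration converges geometrically; the paper's contribution is the rate at which v_h itself converges to the viscosity solution of the continuous Bellman equation as h tends to 0.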
Related Items
Approximation of control problems involving ordinary and impulsive controls, A splitting algorithm for Hamilton-Jacobi-Bellman equations, Error Estimates for a Tree Structure Algorithm Solving Finite Horizon Control Problems, Deterministic impulse control problems: two discrete approximations of the quasi-variational inequality, Fast computational procedure for solving multi-item single-machine lot scheduling optimization problems, On the role of computation in economic theory, Hamilton-Jacobi Equations With Singular Boundary Conditions on a free Boundary and Applications to Differential Games, Error estimates for approximation schemes of effective Hamiltonians arising in stochastic homogenization of Hamilton-Jacobi equations, A differential game of unlimited duration, Generalized solutions of partial differential equations of the first order. The invariance of graphs relative to differential inclusions, Discrete dynamic programming and viscosity solutions of the Bellman equation, Galerkin approximations of the generalized Hamilton-Jacobi-Bellman equation, A theoretical measure technique for determining 3D symmetric nearly optimal shapes with a given center of mass, Discontinuous solutions of deterministic optimal stopping time problems, Continuous and impulse controls differential game in finite horizon with Nash-equilibrium and application, On the time discretization of stochastic optimal control problems: The dynamic programming approach, HJB-RBF based approach for the control of PDEs, Degenerate First-Order Quasi-variational Inequalities: An Approach to Approximate the Value Function, Error estimates for a finite difference scheme associated with Hamilton-Jacobi equations on a junction, Hamilton–Jacobi–Bellman Equations, Homogenization of Hamilton–Jacobi equations: numerical methods, Numerical methods for construction of value functions in optimal control problems on an infinite horizon, Joint time-state generalized semiconcavity of the value function of a jump diffusion optimal control problem, Characterizations of Young measures generated by gradients, Approximation and regular perturbation of optimal control problems via Hamilton-Jacobi theory, Robust Feedback Control of Nonlinear PDEs by Numerical Approximation of High-Dimensional Hamilton--Jacobi--Isaacs Equations, Meta-modeling game for deriving theory-consistent, microstructure-based traction-separation laws via deep reinforcement learning, Numerical solutions to the Bellman equation of optimal control, Error estimates for the approximation of the effective Hamiltonian, Semiconcave solutions of partial differential inclusions, Approximation of optimal feedback control: a dynamic programming approach, Error estimates for numerical approximation of Hamilton-Jacobi equations related to hybrid control systems, Fully discrete schemes for monotone optimal control problems, Approximation of Hamilton-Jacobi equations with Caputo time-fractional derivative, An approximation scheme for the optimal control of diffusion processes, Stability properties of the value function in an infinite horizon optimal control problem, Discrete time schemes for optimal control problems with monotone controls, Approximate solutions to the time-invariant Hamilton-Jacobi-Bellman equation, A numerical approach to the infinite horizon problem of deterministic control theory, Reconstruction of independent sub-domains for a class of Hamilton–Jacobi equations and application to parallel computing, On the relation between discounted and average optimal value functions, Semiconcavity and sensitivity analysis in mean-field optimal control and applications, LP Formulations of Discrete Time Long-Run Average Optimal Control Problems: The NonErgodic Case, Discrete approximation of the viscous HJ equation, Relaxation methods in control theory, Estimate for the accuracy of a backward procedure for the Hamilton-Jacobi equation in an infinite-horizon optimal control problem, Viscous solutions of the Hamilton-Jacobi-Bellman equation on time scales, Representation of solutions of Hamilton-Jacobi equations, Finite state N-agent and mean field control problems, Numerical treatment of a class of optimal control problems arising in economics, Discrete feedback stabilization of semilinear control systems, Semigroup approach for the approximation of a control problem with unbounded dynamics, Value iteration convergence of \(\varepsilon\)-monotone schemes for stationary Hamilton-Jacobi equations, A limit theorem for Markov decision processes
Cites Work
- On a system of first-order quasi-variational inequalities connected with the optimal switching problem
- On a discrete approximation of the Hamilton-Jacobi equation of dynamic programming
- Optimal control theory
- Viscosity Solutions of Hamilton-Jacobi Equations
- Optimal Switching for Ordinary Differential Equations
- On the dynamic programming inequalities associated with the deterministic optimal stopping problem in discrete and continuous time
- Generalized solutions of the Hamilton-Jacobi equations of eikonal type. I. Formulation of the problems; existence, uniqueness and stability theorems; some properties of the solutions
- Discrete Approximations to Continuous Optimal Control Problems
- The continuous dependence of generalized solutions of non‐linear partial differential equations upon initial data
- An explicit procedure for discretizing continuous, optimal control problems