Approximate solutions of the Bellman equation of deterministic control theory
DOI: 10.1007/BF01442176 · zbMATH Open: 0553.49024 · OpenAlex: W2088807940 · MaRDI QID: Q802134 · FDO: Q802134
Italo Capuzzo Dolcetta, Hitoshi Ishii
Publication date: 1984
Published in: Applied Mathematics and Optimization
Full work available at URL: https://doi.org/10.1007/bf01442176
Recommendations
- scientific article; zbMATH DE number 4002910
- On a discrete approximation of the Hamilton-Jacobi equation of dynamic programming
- scientific article; zbMATH DE number 1276052
- Rates of Convergence for Approximation Schemes in Optimal Control
- On the convergence of an approximation scheme for the viscosity solutions of the Bellman equation arising in a stochastic optimal control problem
Keywords: convergence rate; viscosity solution; infinite horizon discounted optimal control problem; time discretized approximation
MSC classification: Existence theories for optimal control problems involving partial differential equations (49J20); Dynamic programming in optimal control and differential games (49L20); Discrete approximations in optimal control (49M25); Generalized solutions to partial differential equations (35D99)
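For context, the keywords refer to the time-discretized approximation of the infinite horizon discounted Bellman equation, u_h(x) = min_{a in A} [ (1 - λh) u_h(x + h f(x,a)) + h ℓ(x,a) ], whose convergence (with rates) to the viscosity solution as h → 0 is the subject of this paper. The snippet below is a minimal illustrative sketch of solving such a discretized equation by value iteration on a 1-D grid; the dynamics f, running cost ℓ, discount rate λ, control set A, and grid sizes are placeholder choices for illustration only, not taken from this record.

```python
# Minimal sketch (assumed setup, not code from the paper): value iteration
# for the time-discretized Bellman equation
#     u_h(x) = min_{a in A} [ (1 - lam*h) u_h(x + h f(x,a)) + h l(x,a) ],
# solved on a 1-D state grid with linear interpolation.
import numpy as np

lam, h = 1.0, 0.05                        # discount rate, time step (placeholders)
xs = np.linspace(-2.0, 2.0, 201)          # state grid
A = np.linspace(-1.0, 1.0, 21)            # discretized control set

def f(x, a):                              # placeholder dynamics
    return a

def l(x, a):                              # placeholder running cost
    return x**2 + 0.1 * a**2

X, Aa = np.meshgrid(xs, A, indexing="ij")           # all (state, control) pairs
Xnext = np.clip(X + h * f(X, Aa), xs[0], xs[-1])    # one Euler step, clipped to the grid

u = np.zeros_like(xs)
for _ in range(5000):                     # fixed-point (value) iteration
    Unext = np.interp(Xnext, xs, u)       # interpolate u_h at the next states
    u_new = ((1.0 - lam * h) * Unext + h * l(X, Aa)).min(axis=1)
    if np.max(np.abs(u_new - u)) < 1e-8:  # the map is a contraction of factor (1 - lam*h)
        u = u_new
        break
    u = u_new
```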
Cites Work
- On a discrete approximation of the Hamilton-Jacobi equation of dynamic programming
- Viscosity Solutions of Hamilton-Jacobi Equations
- Optimal Switching for Ordinary Differential Equations
- GENERALIZED SOLUTIONS OF THE HAMILTON-JACOBI EQUATIONS OF EIKONAL TYPE. I. FORMULATION OF THE PROBLEMS; EXISTENCE, UNIQUENESS AND STABILITY THEOREMS; SOME PROPERTIES OF THE SOLUTIONS
- The continuous dependence of generalized solutions of non‐linear partial differential equations upon initial data
- Optimal control theory
- On a system of first-order quasi-variational inequalities connected with the optimal switching problem
- Discrete Approximations to Continuous Optimal Control Problems
- On the dynamic programming inequalities associated with the deterministic optimal stopping problem in discrete and continuous time
- An explicit procedure for discretizing continuous, optimal control problems
Cited In (64)
- On the equivalence of the integral and differential Bellman equations in impulse control problems
- Deep learning in computational mechanics: a review
- Classification of discrete weak KAM solutions on linearly repetitive quasi-periodic sets
- Markov chain approximation for Hamilton-Jacobi-Bellman equation with absorbing boundary
- HJB-RBF based approach for the control of PDEs
- Numerical treatment of a class of optimal control problems arising in economics
- Meta-modeling game for deriving theory-consistent, microstructure-based traction-separation laws via deep reinforcement learning
- Value iteration convergence of \(\varepsilon\)-monotone schemes for stationary Hamilton-Jacobi equations
- A limit theorem for Markov decision processes
- Approximation and regular perturbation of optimal control problems via Hamilton-Jacobi theory
- Finite state N-agent and mean field control problems
- Joint time-state generalized semiconcavity of the value function of a jump diffusion optimal control problem
- Approximation of Hamilton-Jacobi equations with Caputo time-fractional derivative
- Numerical methods for construction of value functions in optimal control problems on an infinite horizon
- Semiconcavity and sensitivity analysis in mean-field optimal control and applications
- A numerical approach to the infinite horizon problem of deterministic control theory
- Error estimates for a finite difference scheme associated with Hamilton-Jacobi equations on a junction
- Error estimates for the approximation of the effective Hamiltonian
- Hamilton–Jacobi–Bellman Equations
- On the relation between discounted and average optimal value functions
- On a discrete approximation of the Hamilton-Jacobi equation of dynamic programming
- Discrete dynamic programming and viscosity solutions of the Bellman equation
- Approximate solutions to the time-invariant Hamilton-Jacobi-Bellman equation
- Relaxation methods in control theory
- Approximation of control problems involving ordinary and impulsive controls
- A splitting algorithm for Hamilton-Jacobi-Bellman equations
- Robust Feedback Control of Nonlinear PDEs by Numerical Approximation of High-Dimensional Hamilton--Jacobi--Isaacs Equations
- On the role of computation in economic theory
- Fully discrete schemes for monotone optimal control problems
- Discrete time schemes for optimal control problems with monotone controls
- HOMOGENIZATION OF HAMILTON–JACOBI EQUATIONS: NUMERICAL METHODS
- Error estimates for numerical approximation of Hamilton-Jacobi equations related to hybrid control systems
- Degenerate First-Order Quasi-variational Inequalities: An Approach to Approximate the Value Function
- Numerical solutions to the Bellman equation of optimal control
- Viscous solutions of the Hamilton-Jacobi-Bellman equation on time scales
- Representation of solutions of Hamilton-Jacobi equations
- Deterministic impulse control problems: two discrete approximations of the quasi-variational inequality
- Galerkin approximations of the generalized Hamilton-Jacobi-Bellman equation
- Characterizations of Young measures generated by gradients
- On the time discretization of stochastic optimal control problems: The dynamic programming approach
- Semiconcave solutions of partial differential inclusions
- Discrete feedback stabilization of semilinear control systems
- Error estimates for approximation schemes of effective Hamiltonians arising in stochastic homogenization of Hamilton-Jacobi equations
- Stability properties of the value function in an infinite horizon optimal control problem
- Continuous and impulse controls differential game in finite horizon with Nash-equilibrium and application
- A theoretical measure technique for determining 3D symmetric nearly optimal shapes with a given center of mass
- Discrete approximation of the viscous HJ equation
- Hamilton-Jacobi Equations With Singular Boundary Conditions on a free Boundary and Applications to Differential Games
- Discontinuous solutions of deterministic optimal stopping time problems
- On Deterministic Control Problems: An Approximation Procedure for the Optimal Cost I. The Stationary Problem
- Error Estimates for a Tree Structure Algorithm Solving Finite Horizon Control Problems
- Fast computational procedure for solving multi-item single-machine lot scheduling optimization problems
- Generalized solutions of partial differential equations of the first order. The invariance of graphs relative to differential inclusions
- Reconstruction of independent sub-domains for a class of Hamilton-Jacobi equations and application to parallel computing
- An approximation scheme for the optimal control of diffusion processes
- LP Formulations of Discrete Time Long-Run Average Optimal Control Problems: The Non-Ergodic Case
- Semigroup approach for the approximation of a control problem with unbounded dynamics
- Approximation of optimal feedback control: a dynamic programming approach
- Estimate for the accuracy of a backward procedure for the Hamilton-Jacobi equation in an infinite-horizon optimal control problem
- A differential game of unlimited duration