Hamilton-Jacobi-Bellman equations for optimal control processes with convex state constraints
DOI: 10.1016/j.sysconle.2017.09.004 · zbMATH Open: 1377.49002 · OpenAlex: W2765447375 · MaRDI QID: Q1690969 · FDO: Q1690969
Authors: Cristopher Hermosilla, Hasnaa Zidani, Richard B. Vinter
Publication date: 12 January 2018
Published in: Systems & Control Letters
Full work available at URL: https://doi.org/10.1016/j.sysconle.2017.09.004
Recommendations
- scientific article; zbMATH DE number 1303272
- Hamilton-Jacobi-Bellman equation under state constraints
- scientific article; zbMATH DE number 3904473
- scientific article; zbMATH DE number 1254169
- Optimal control and viscosity solutions of Hamilton-Jacobi-Bellman equations
- The Hamiltonian–Jacobi–Bellman Equation for Time-Optimal Control
- On Hamilton-Jacobi-Bellman equations with convex gradient constraints
- Hamilton-Jacobi-Bellman equations for the optimal control of a state equation with memory
- The Bellman equation for constrained deterministic optimal control problems
MSC classifications:
- Control problems involving ordinary differential equations (34H05)
- Existence theories for optimal control problems involving ordinary differential equations (49J15)
- Viscosity solutions to Hamilton-Jacobi equations in optimal control and differential games (49L25)
- Linear systems in control theory (93C05)
Cites Work
- Convex Analysis
- Existence of neighboring feasible trajectories: applications to dynamic programming for state-constrained optimal control problems
- Infinite horizon problems on stratifiable state-constraints sets
- Hamilton-Jacobi Equations with State Constraints
- Optimal Control with State-Space Constraint I
- Title not available
- A New Formulation of State Constraint Problems for First-Order PDEs
- Normality and nondegeneracy for optimal control problems with state constraints
- \(L^{\infty }\) estimates on trajectories confined to a closed subset
- Optimal times for constrained nonlinear control problems without local controllability
- Optimal Control with State-Space Constraint. II
- Optimal control
- Optimal control and viscosity solutions of Hamilton-Jacobi-Bellman equations
- Deterministic state-constrained optimal control problems without controllability assumptions
- State-constrained optimal control problems of impulsive differential equations
- A general Hamilton-Jacobi framework for non-linear state-constrained control problems
- Discontinuous solutions of Hamilton-Jacobi-Bellman equation under state constraints
- The state constrained bilateral minimal time function
- Hamilton-Jacobi characterization of the state constrained value
- Legendre transform and applications to finite and infinite optimization
- Regularity of solution maps of differential inclusions under state constraints
- Multivalued dynamics on a closed domain with absorbing boundary. Applications to optimal control problems with integral constraints
- On Nonlinear Optimal Control Problems with State Constraints
- The Mayer and minimum time problems with stratified state constraints
Cited In (27)
- Stability of solutions to Hamilton-Jacobi equations under state constraints
- A general comparison principle for Hamilton-Jacobi-Bellman equations on stratified domains
- Hamilton-Jacobi characterization of the state constrained value
- Optimal Control with State-Space Constraint. II
- Hamilton-Jacobi-Bellman equations for the optimal control of a state equation with memory
- Backward reachability approach to state-constrained stochastic optimal control problem for jump-diffusion models
- Stochastic optimal control in infinite dimensions with state constraints
- Semicontinuous solutions of Hamilton-Jacobi-Bellman equations with degenerate state constraints
- Hamilton-Jacobi-Bellman approach for optimal control problems of sweeping processes
- A direct approach to infinite dimensional Hamilton-Jacobi equations and applications to convex control with state constraints
- Deterministic state-constrained optimal control problems without controllability assumptions
- The Hamilton-Jacobi-Bellman equation with a gradient constraint
- Optimal control of nonlinear systems with integer‐valued control inputs and stochastic constraints
- Relationship between the maximum principle and dynamic programming for minimax problems
- Optimality conditions for linear-convex optimal control problems with mixed constraints
- Generalized Hamilton-Jacobi-Bellman equations in optimal control problems with phase constraints. I
- Generalized Hamilton-Jacobi-Bellman equations in optimal control problems with phase constraints. II
- A Max-Plus-Based Algorithm for a Hamilton--Jacobi--Bellman Equation of Nonlinear Filtering
- Optimistic planning algorithms for state-constrained optimal control problems
- A general Hamilton-Jacobi framework for non-linear state-constrained control problems
- Title not available
- Application of maximum principle to optimization of production and storage costs
- Relationship between maximum principle and dynamic programming in presence of intermediate and final state constraints
- A Hamilton-Jacobi-Bellman approach for the numerical computation of probabilistic state constrained reachable sets
- On the fragility of the basis on the Hamilton-Jacobi-Bellman equation in economic dynamics
- Semicontinuous solutions of Hamilton-Jacobi-Bellman equations with state constraints
- Optimal trajectory tracking solution: fractional order viewpoint