Hamilton–Jacobi Theory for Optimal Control Problems with Data Measurable in Time
Publication:5753110
DOI: 10.1137/0328073
zbMath: 0721.49029
OpenAlex: W2175840671
MaRDI QID: Q5753110
Peter R. Wolenski, Richard B. Vinter
Publication date: 1990
Published in: SIAM Journal on Control and Optimization
Full work available at URL: https://doi.org/10.1137/0328073
MSC classification: Dynamic programming in optimal control and differential games (49L20); Nonsmooth analysis (49J52); Hamilton-Jacobi theories (49L99)
Related Items (15)
Coextremals and the value function for control problems with data measurable in time
Lipschitz continuity of the value function in optimal control
Hamilton–Jacobi theory for hereditary control problems
On the value function for nonautonomous optimal control problems with infinite horizon
Viability analysis of the first-order mean field games
Relationship between the maximum principle and dynamic programming for minimax problems
Minimax and viscosity solutions in optimization problems for hereditary systems
Solutions to the Hamilton-Jacobi equation for Bolza problems with discontinuous time dependent data
Construction of optimal feedback controls
Invariance properties of time measurable differential inclusions and dynamic programming
The Semigroup Property of Value Functions in Lagrange Problems
Nonconvex Duality and Semicontinuous Proximal Solutions of HJB Equation in Optimal Control
A probabilistic approach to Dirac concentration in nonlocal models of adaptation with several resources
Dynamic programming for free-time problems with endpoint constraints
Uniqueness of solutions to the Hamilton-Jacobi equation: A system theoretic proof