Linearization techniques for \(\mathbb{L}^{p}\)-control problems and dynamic programming principles in classical and \(\mathbb{L}^{p}\)-control problems
DOI: 10.1051/COCV/2011183 · zbMATH Open: 1262.49030 · OpenAlex: W2273213599 · MaRDI QID: Q3143591
Authors: Dan Goreac, Oana Silvia Serea
Publication date: 3 December 2012
Published in: ESAIM: Control, Optimisation and Calculus of Variations
Full work available at URL: https://doi.org/10.1051/cocv/2011183
Recommendations
- Discontinuous control problems with state constraints: linear formulations and dynamic programming principles
- A note on linearization methods and dynamic programming principles for stochastic discontinuous control problems
- An LP approach to dynamic programming principles for stochastic control problems with state constraints
- The linear programming approach to deterministic optimal control problems
- Linear Programming Approach to Deterministic Long Run Average Problems of Optimal Control
Keywords: essential supremum; dynamic programming principle; occupational measures; \(L^{p}\)-approximations; HJ equations
MSC classifications:
- Ordinary differential inclusions (34A60)
- Methods involving semicontinuity and convergence; relaxation (49J45)
- Dynamic programming in optimal control and differential games (49L20)
- Viscosity solutions to Hamilton-Jacobi equations in optimal control and differential games (49L25)
- Control/observation systems governed by ordinary differential equations (93C15)
Cited In (19)
- An LP approach to dynamic programming principles for stochastic control problems with state constraints
- A note on linearization methods and dynamic programming principles for stochastic discontinuous control problems
- Averaging and linear programming in some singularly perturbed problems of optimal control
- Existence of asymptotic values for nonexpansive stochastic control systems
- Linear programming estimates for Cesàro and Abel limits of optimal values in optimal control problems
- Min-max control problems via occupational measures
- Linear programming formulations of deterministic infinite horizon optimal control problems in discrete time
- Linear programming formulation of long-run average optimal control problem
- Reflected dynamics: viscosity analysis for \(\mathbb{L}^\infty\) cost, relaxation and abstract dynamic programming
- Linearization techniques for controlled piecewise deterministic Markov processes; application to Zubov's method
- Optimality conditions for \(\mathbb{L}^p\) problems with reflected dynamics
- Some applications of linear programming formulations in stochastic control
- Linear programming based optimality conditions and approximate solution of a deterministic infinite horizon discounted optimal control problem in discrete time
- Discontinuous control problems with state constraints: linear formulations and dynamic programming principles
- On average control generating families for singularly perturbed optimal control problems with long run average optimality criteria
- LP-related representations of Cesàro and Abel limits of optimal value functions
- LP Formulations of Discrete Time Long-Run Average Optimal Control Problems: The NonErgodic Case
- Ky-Fan and Sion theorems for the lexicographic order and applications to vectorial games and min-max control problems
- Infinite horizon stochastic optimal control problems with running maximum cost