Continuity of the value function for deterministic optimal impulse control with terminal state constraint
From MaRDI portal
Publication:5084587
Abstract: A deterministic optimal impulse control problem with a terminal state constraint is considered. Due to the terminal state constraint, the value function may in general be discontinuous. The main contribution of this paper is an intrinsic condition under which the value function is continuous. By a Bellman dynamic programming method, the corresponding Hamilton-Jacobi-Bellman type quasi-variational inequality (QVI, for short) is then derived, for which the value function is a viscosity solution. Whether the value function can be characterized as the unique viscosity solution to this QVI is carefully addressed, and the question is left open as a challenge.
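For orientation, an HJB-type quasi-variational inequality of the kind mentioned in the abstract typically takes the following schematic form. The notation below (dynamics \(f\), running cost \(g\), terminal cost \(h\), impulse set \(K\), impulse cost \(\ell\), and intervention operator \(\mathcal{M}\)) is the standard one from the impulse-control literature and is an assumption for illustration, not taken from the paper itself:

```latex
\min\Bigl\{\,-\partial_t V(t,x)-\nabla_x V(t,x)\cdot f(t,x)-g(t,x),\;
            V(t,x)-\mathcal{M}V(t,x)\Bigr\}=0,
\qquad
\mathcal{M}V(t,x)=\inf_{\xi\in K}\bigl[V(t,x+\xi)+\ell(\xi)\bigr],
```

with a terminal condition of the form \(V(T,x)=h(x)\) imposed on the admissible target set. The first branch says that, absent an intervention, the value function satisfies the usual Hamilton-Jacobi-Bellman equation along the flow of \(f\); the second says that an impulse is never strictly better than the current value. The terminal state constraint restricts which trajectories are admissible, which is what can break the continuity of \(V\).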
Recommendations
- Deterministic minimax impulse control in finite horizon: the viscosity solution approach
- Deterministic Impulse Control Problems
- Deterministic impulse control problems: two discrete approximations of the quasi-variational inequality
- Finite horizon stochastic optimal switching and impulse controls with a viscosity solution approach
- An impulsive control problem with state constraint
Cites work
- scientific article; zbMATH DE number 3126094
- scientific article; zbMATH DE number 3167340
- scientific article; zbMATH DE number 62572
- scientific article; zbMATH DE number 1325009
- scientific article; zbMATH DE number 3419849
- A general verification result for stochastic impulse control problems
- A tutorial on the deterministic impulse control maximum principle: necessary and sufficient optimality conditions
- Controlled Markov processes and viscosity solutions
- Degenerate first-order quasi-variational inequalities: an approach to approximate the value function
- Deterministic Impulse Control Problems
- Deterministic minimax impulse control
- Deterministic minimax impulse control in finite horizon: the viscosity solution approach
- Differential Games
- Finite horizon stochastic optimal switching and impulse controls with a viscosity solution approach
- Impulsive control in management: Prospects and applications
- Maximum principle for optimal control problems involving impulse controls with nonsmooth data
- Maximum principle for stochastic recursive optimal control problems involving impulse controls
- Necessary conditions of optimal impulse controls for distributed parameter systems
- On some impulse control problems with constraint
- On the maximum principle for deterministic impulse control problems
- On value preserving and growth optimal portfolios
- Optimal control and viscosity solutions of Hamilton-Jacobi-Bellman equations
- Optimal control of continuous systems with impulse controls
- Optimal impulse and continuous control: Method of nonlinear quasi-variational inequalities
- Optimal impulse control of a SIR epidemic
- Optimal impulse control problems for degenerate diffusions with jumps
- Optimal stochastic control, stochastic target problems, and backward SDE.
- Small-time local controllability and continuity of the optimal time function for linear systems
- Stochastic Target Problems, Dynamic Programming, and Viscosity Solutions
- Stochastic impulse control with regime-switching dynamics
- Systems governed by ordinary differential equations with continuous, switching and impulse controls
- Viscosity Solutions of Hamilton-Jacobi Equations
- Viscosity solutions associated with impulse control problems for piecewise-deterministic processes
- Zero-sum differential games involving impulse controls