A Connection Between the Maximum Principle and Dynamic Programming for Constrained Control Problems
From MaRDI portal
Publication: Q5700548
Recommendations
- On the relationship between the maximum principle and dynamic programming under state constraints
- A note on the value function for constrained control problems
- The Relationship between the Maximum Principle and Dynamic Programming
- Publication Q4936264 (no title available)
- Relationship between maximum principle and dynamic programming in presence of intermediate and final state constraints
Cited in (32 documents)
- Necessary conditions for infinite horizon optimal control problems with state constraints
- Normality of the maximum principle for nonconvex constrained Bolza problems
- Second-order sufficient optimality conditions for an optimal control problem with mixed constraints
- Infinite Horizon Optimal Control of Non-Convex Problems Under State Constraints
- Improved sensitivity relations in state constrained optimal control
- Normality and nondegeneracy for optimal control problems with state constraints
- Maximum under continuous-discrete-time dynamic with target and viability constraints
- Further results on subgradients of the value function to a parametric optimal control problem
- An integral-type constraint qualification to guarantee nondegeneracy of the maximum principle for optimal control problems with state constraints
- Feasible perturbations of control systems with pure state constraints and applications to second-order optimality conditions
- Differential stability of a class of convex optimal control problems
- Necessary optimality conditions for infinite dimensional state constrained control problems
- Subgradients of the value function to a parametric optimal control problem
- Subgradients of the value function in a parametric convex optimal control problem
- Normal forms of necessary conditions for dynamic optimization problems with pathwise inequality constraints
- Scientific article (zbMATH DE number 2221684; no title available)
- Initialization of the shooting method via the Hamilton-Jacobi-Bellman approach
- Second-order necessary optimality conditions for an optimal control problem
- Necessary optimality conditions for differential-difference inclusions with state constraints
- Dynamic programming principle of control systems on manifolds and its relations to maximum principle
- Necessary optimality conditions for local minimizers of stochastic optimal control problems with state constraints
- Second-order necessary optimality conditions for an optimal control problem with nonlinear state equations
- On relations of the adjoint state to the value function for optimal control problems with state constraints
- Certain hypotheses in optimal control theory and the relationship of the maximum principle with the dynamic programming method
- Optimality conditions for reflecting boundary control problems
- On the relationship between the maximum principle and dynamic programming under state constraints
- Strong Local Minimizers in Optimal Control Problems with State Constraints: Second-Order Necessary Conditions
- A note on the value function for constrained control problems
- Optimality conditions (in Pontryagin form)
- Relationships between the maximum principle and dynamic programming for infinite dimensional stochastic control systems
- New results on the relationship between dynamic programming and the maximum principle
- Scientific article (zbMATH DE number 1393069; no title available)