Numerical solutions to the Bellman equation of optimal control
From MaRDI portal
Publication: 2251553
DOI: 10.1007/s10957-013-0403-8
zbMath: 1301.49064
OpenAlex: W2071509239
MaRDI QID: Q2251553
Cesar O. Aguilar, Arthur J. Krener
Publication date: 14 July 2014
Published in: Journal of Optimization Theory and Applications
Full work available at URL: https://doi.org/10.1007/s10957-013-0403-8
Keywords: Hamilton-Jacobi-Bellman equation; dynamic programming; numerical methods; discrete-time control systems; nonlinear optimal regulation
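The keywords point at dynamic programming for discrete-time control systems. As an illustration of the underlying fixed-point problem (not the paper's own algorithm), here is a minimal value-iteration sketch on a hypothetical discretized state/control grid, where the Bellman equation reads V(x) = min_u [ c(x,u) + γ V(f(x,u)) ]:

```python
import numpy as np

# Illustrative sketch only: value iteration for a small, randomly generated
# finite-state, discrete-time optimal regulation problem. The grid sizes,
# costs, and dynamics below are hypothetical placeholders.
n_states = 5
n_controls = 3

rng = np.random.default_rng(0)
cost = rng.random((n_states, n_controls))                 # stage cost c(x, u)
nxt = rng.integers(0, n_states, (n_states, n_controls))   # dynamics f(x, u)
cost[0, :] = 0.0   # make state 0 a cost-free target...
nxt[0, :] = 0      # ...and absorbing

gamma = 0.9        # discount factor, makes the Bellman operator a contraction
V = np.zeros(n_states)
for _ in range(500):
    # One application of the Bellman operator: Q(x,u) = c(x,u) + gamma*V(f(x,u))
    Q = cost + gamma * V[nxt]
    V_new = Q.min(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-12:
        V = V_new
        break
    V = V_new

policy = Q.argmin(axis=1)  # greedy feedback law u*(x) from the converged Q
print(V)
print(policy)
```

Because the operator is a γ-contraction, the iteration converges to the unique fixed point; the greedy `policy` then realizes the optimal feedback on this toy grid.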
Related Items
- Asymptotic stabilization of nonlinear systems with convex-polytope input constraints by continuous input
- Feedback stabilization of the three-dimensional Navier-Stokes equations using generalized Lyapunov equations
- On the numerical solution of optimal control problems via Bell polynomials basis
- Numerical study of polynomial feedback laws for a bilinear control problem
- Some numerical tests for an alternative approach to optimal feedback control
- Taylor expansions of the value function associated with a bilinear optimal control problem
- Feedback stabilization of the two-dimensional Navier-Stokes equations by value function approximation
- When does stabilizability imply the existence of infinite horizon optimal control in nonlinear systems?
Cites Work
- Approximate solutions of the Bellman equation of deterministic control theory
- Quadratic regulatory theory for analytic non-linear systems with additive controls
- Nonlinear oscillations, dynamical systems, and bifurcations of vector fields
- Optimal infinite-horizon feedback laws for a general class of constrained discrete-time systems: Stability and moving-horizon approximations
- Design of nonlinear automatic flight control systems
- Discrete time high-order schemes for viscosity solutions of Hamilton-Jacobi-Bellman equations
- Series solution of a class of nonlinear optimal regulators
- Galerkin approximations of the generalized Hamilton-Jacobi-Bellman equation
- Introduction to the mathematical theory of control processes. Vol. II: Nonlinear processes
- Optimal sensor scheduling for hidden Markov model state estimation
- Viscosity Solutions of Hamilton-Jacobi Equations
- Icosahedral Discretization of the Two-Sphere
- On the optimal stabilization of nonlinear systems
- A Schur method for solving algebraic Riccati equations
- Control designs for the nonlinear benchmark problem via the state-dependent Riccati equation method
- Patchy Vector Fields and Asymptotic Stabilization
- Optimization-Based Stabilization of Sampled-Data Nonlinear Systems via Their Approximate Discrete-Time Models
- An Iterative Algorithm for Solving Hamilton--Jacobi Type Equations
- A Patchy Dynamic Programming Scheme for a Class of Hamilton--Jacobi--Bellman Equations
- Analytical Approximation Methods for the Stabilizing Solution of the Hamilton–Jacobi Equation
- Construction of Suboptimal Control Sequences
- Optimal Regulation of Nonlinear Dynamical Systems
- Feedback control methodologies for nonlinear systems
- Sufficient conditions for stabilization of sampled-data nonlinear systems via discrete-time approximations