Stochastic optimal control via Bellman's principle. (Q1421446)
From MaRDI portal
scientific article
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | Stochastic optimal control via Bellman's principle. | scientific article | |
Statements
Stochastic optimal control via Bellman's principle. (English)
26 January 2004
Consider a stochastic nonlinear controlled continuous-time system whose state \(x(t)\) evolves according to \[ dx(t)=m(x(t),u(t))\,dt+ \sigma(x(t),u(t))\,dB(t),\quad t\in[0,T], \] where \(B(t)\) is an \(m\)-dimensional standard Brownian motion, \(u(t)\in \mathbb R^m\) is the control at time \(t\), and the functions \(m(x,u)\) and \(\sigma(x,u)\) are, in general, nonlinear. The cost functional is of the form \[ J(u,x_0,t_0,T)={\mathbb E}\left[ \psi(x(T),T)+\int_{t_0}^T L(x(t),u(t))\,dt\right], \] where \(\psi(x(T),T)\) is the terminal cost and \(L(x(t),u(t))\) is the Lagrangian. The authors present a method, based on Bellman's principle of optimality, for computing optimal controls of such stochastic nonlinear systems. Numerical examples demonstrate good performance.
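The setup above can be illustrated by a small numerical sketch. This is not the authors' algorithm but a generic brute-force dynamic-programming approximation of the same problem class: time is discretized, the SDE is stepped with an Euler-Maruyama scheme, the Brownian increment is replaced by a small discrete sample, and the Bellman backup is minimized over a control grid. The specific drift \(m(x,u)=-x+u\), diffusion \(\sigma(x,u)=0.5\), Lagrangian \(L(x,u)=x^2+u^2\), and terminal cost \(\psi(x)=x^2\) are hypothetical choices made only for the example.

```python
import numpy as np

def solve_bellman(T=1.0, n_t=20, n_x=41, n_u=21, n_w=5):
    """Backward Bellman recursion for a scalar stochastic control problem.

    Hypothetical model (not from the reviewed paper):
      drift  m(x, u) = -x + u,  diffusion sigma(x, u) = 0.5,
      running cost L(x, u) = x^2 + u^2, terminal cost psi(x) = x^2.
    """
    x_grid = np.linspace(-2.0, 2.0, n_x)
    u_grid = np.linspace(-2.0, 2.0, n_u)
    dt = T / n_t
    # Crude discrete stand-in for the Brownian increment dB: n_w equally
    # weighted points, rescaled so their sample variance equals dt.
    pts = np.linspace(-2.0, 2.0, n_w)
    w = np.sqrt(dt) * pts / pts.std()
    V = x_grid**2                         # V(T, x) = psi(x)
    policy = np.zeros((n_t, n_x))
    for k in range(n_t - 1, -1, -1):      # backward in time
        V_new = np.empty_like(V)
        for i, x in enumerate(x_grid):
            best = np.inf
            for u in u_grid:
                # Euler-Maruyama step of dx = m dt + sigma dB; np.interp
                # clamps excursions outside the grid to the edge values.
                x_next = x + (-x + u) * dt + 0.5 * w
                cont = np.interp(x_next, x_grid, V).mean()   # E[V(t+dt, x')]
                cost = (x**2 + u**2) * dt + cont             # Bellman backup
                if cost < best:
                    best, policy[k, i] = cost, u
            V_new[i] = best
        V = V_new
    return x_grid, V, policy
```

Refining the time, state, control, and noise grids drives this approximation toward the continuous-time value function; the same backward recursion is what Bellman's principle of optimality justifies.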
stochastic system
nonlinear system
optimal control
Bellman's principle