Stochastic optimal control via Bellman's principle. (Q1421446)

From MaRDI portal
 
Property / cites work (each of Normal rank):

* Optimal Bounded Response Control for a Second-Order System Under a White-Noise Excitation
* Solution of fixed final state optimal control problems via simple cell mapping
* Stochastic optimal control of nonlinear systems via short-time Gaussian approximation and cell mapping
* Q3995322
* Q2703816
* The Fokker-Planck equation. Methods of solution and applications
* Cumulant-neglect closure method for asymmetric non-linear systems driven by Gaussian white noise
* Q4255599

Latest revision as of 13:18, 6 June 2024

Language: English
Label: Stochastic optimal control via Bellman's principle.
Description: scientific article

    Statements

    Stochastic optimal control via Bellman's principle. (English)
    26 January 2004
    Consider a stochastic nonlinear controlled continuous-time system whose dynamics \(x(t)\) are given by the equation \[ dx(t)=m(x(t),u(t))\,dt+ \sigma(x(t),u(t))\,dB(t),\quad t\in[t_0,T], \] where \(B(t)\) is an \(m\)-dimensional standard Brownian motion, \(u(t)\in \mathbb R^m\) is the control at time \(t\), and the drift \(m(x,u)\) and diffusion \(\sigma(x,u)\) are in general nonlinear. The cost function is of the form \[ J(u,x_0,t_0,T)={\mathbb E}\left[ \psi(x(T),T)+\int_{t_0}^T L(x(t),u(t))\,dt\right], \] where \(\psi(x(T),T)\) is the terminal cost and \(L(x(t),u(t))\) is the Lagrangian. The authors present a method, based on Bellman's principle of optimality, for finding optimal controls of such stochastic nonlinear systems. Numerical examples demonstrate the good performance of the method.
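    The backward dynamic-programming recursion underlying Bellman's principle can be sketched for a time- and state-discretized version of such a problem. The drift, costs, grids, and Gauss-Hermite quadrature below are illustrative assumptions for a scalar system, not the authors' scheme:

```python
import numpy as np

# Minimal sketch of Bellman's principle for a scalar controlled SDE
#   dx = m(x,u) dt + sigma dB(t),
# discretized by Euler-Maruyama in time and by grids in state and control.
# All model functions here are illustrative choices, not from the paper.

def solve_bellman(x_grid, u_grid, m, sigma, L, psi, T, n_steps, n_noise=5):
    """Backward dynamic programming: returns V[t_index, x_index] and a
    greedy policy on the same grid."""
    dt = T / n_steps
    # Gauss-Hermite (probabilists') nodes/weights approximate the
    # expectation over the standard Gaussian increment.
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_noise)
    weights = weights / weights.sum()

    V = np.empty((n_steps + 1, x_grid.size))
    policy = np.empty((n_steps, x_grid.size))
    V[-1] = psi(x_grid)                      # terminal cost psi(x(T))
    for k in range(n_steps - 1, -1, -1):
        for i, x in enumerate(x_grid):
            best = np.inf
            for u in u_grid:
                # Next states under each quadrature noise realization;
                # np.interp clamps states that leave the grid.
                x_next = x + m(x, u) * dt + sigma * np.sqrt(dt) * nodes
                EV = weights @ np.interp(x_next, x_grid, V[k + 1])
                cost = L(x, u) * dt + EV     # Bellman backup
                if cost < best:
                    best, policy[k, i] = cost, u
            V[k, i] = best
    return V, policy

# Example: drive x toward 0 with quadratic running and terminal costs.
x_grid = np.linspace(-2.0, 2.0, 41)
u_grid = np.linspace(-1.0, 1.0, 11)
V, policy = solve_bellman(
    x_grid, u_grid,
    m=lambda x, u: u,                 # controlled drift (assumed form)
    sigma=0.1,
    L=lambda x, u: x**2 + u**2,       # Lagrangian
    psi=lambda x: x**2,               # terminal cost
    T=1.0, n_steps=20,
)
```

    The greedy policy pushes the state toward the origin (negative control for positive states), and the value function is smallest near x = 0, as expected for this quadratic cost.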
    stochastic system
    nonlinear system
    optimal control
    Bellman's principle

    Identifiers