Stochastic controls with terminal contingent conditions (Q1307260)

Language: English
Label: Stochastic controls with terminal contingent conditions
Description: scientific article

    Statements

    Stochastic controls with terminal contingent conditions (English)
    23 May 2000
    For the optimal control problem \[ J(u(\cdot))={\mathbf E}\,g(y(0))+{\mathbf E}\int_0^T\varphi(t, y(t), z(t), u(t))\, dt\to\min, \] where \(dy(t)=f(t, y(t), z(t), u(t))\, dt+z(t)\, dw(t),\) \(t<T\), with the terminal contingent condition \(y(T)=\xi\) and control vector \(u(t)\), a necessary optimality condition (maximum principle) is derived, and conditions under which it becomes sufficient are examined. In the linear-quadratic case the optimal control is obtained as a feedback of the solution to a forward-backward stochastic differential equation. The nonlinear optimal control problem with additional integral constraints is studied with the method proposed by \textit{N. Dokuchaev} [Theor. Probab. Appl. 41, No. 4, 761-768 (1996; Zbl 0913.60038)].
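    The maximum principle itself is not reproduced in the review above. Purely as an illustrative sketch (not the paper's formulation), first-order conditions for a control problem of this backward type are often written with a Hamiltonian \(H\) and an adjoint process \(p\) governed by a forward stochastic equation; the symbols \(H\) and \(p\), the sign conventions, and the assumptions below are illustrative only:
    \[ H(t,y,z,u,p)=\langle p, f(t,y,z,u)\rangle+\varphi(t,y,z,u), \]
    \[ dp(t)=-H_y\bigl(t,y(t),z(t),u(t),p(t)\bigr)\,dt-H_z\bigl(t,y(t),z(t),u(t),p(t)\bigr)\,dw(t), \qquad p(0)=-g_y\bigl(y(0)\bigr), \]
    and, under convexity-type assumptions on the control set and cost, the optimal control minimizes the Hamiltonian pointwise: \[ H\bigl(t,y(t),z(t),u(t),p(t)\bigr)=\min_{v}H\bigl(t,y(t),z(t),v,p(t)\bigr)\quad\text{for a.e. } t,\ \text{a.s.} \]
    In the linear-quadratic case such conditions couple a linear forward equation for \(p\) with the backward dynamics of \((y(t),z(t))\), consistent with the forward-backward stochastic differential equation mentioned in the review; the precise statement, signs, and regularity assumptions are those of the paper.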
    backward stochastic differential equation
    adjoint equation
    maximum principle
    linear-quadratic control
    Lagrangian
    duality gap