Optimal trajectories of infinite-horizon deterministic control systems (Q1263077)

    Statements

    Optimal trajectories of infinite-horizon deterministic control systems (English)
    1989
    The paper is concerned with the infinite-horizon deterministic control problem of minimizing \(\int_{0}^{T}L(z,\dot z)\,dt\) as \(T\to \infty\), where \(L(z,\cdot)\) is convex in \(\dot z\) for fixed \(z\) but not necessarily jointly convex in \((z,\dot z)\). The existence of a solution to the infinite-horizon Bellman equation is established and used to define a differential inclusion, which in certain cases reduces to an ordinary differential equation. Several cases are discussed in which solutions of this differential inclusion (equation) provide optimal solutions, in the overtaking optimality sense, to the optimization problem. A quantity of special interest is the minimal long-run average-cost growth rate. It is computed explicitly and shown to equal \(\min_{x}L(x,0)\) in two cases: the scalar case \(n=1\), and the case where the integrand has the separated form \(\ell(x)+g(\dot x)\). The solution of the infinite-horizon H.J.B. equation is obtained by considering a finite-horizon problem and applying known results for finite-horizon control problems. It is shown how optimal trajectories can be computed as solutions of a differential inclusion (equation) once a solution of the H.J.B. equation is known.
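    As a rough numerical illustration of the average-cost result in the separated case (a minimal sketch with assumed data, not taken from the paper): choose \(\ell(x)=(x-1)^2\) and \(g(v)=v^2\), initial state \(x(0)=0\), discretize the finite-horizon problem on a uniform grid, and minimize the resulting sum with SciPy. The average cost over \([0,T]\) should then approach \(\min_x L(x,0)=0\) as \(T\) grows, and the trajectory should settle near the minimizer \(x=1\).

# Minimal numerical sketch (illustrative assumptions, not from the paper):
# take the separated integrand L(x, xdot) = l(x) + g(xdot) with l(x) = (x-1)^2
# and g(v) = v^2, x(0) = 0, and minimize a left-endpoint Riemann sum of
# int_0^T [ l(x) + g(xdot) ] dt over the discretized trajectory.
import numpy as np
from scipy.optimize import minimize

def l(x):
    # running state cost (assumed example); its minimizer is x = 1
    return (x - 1.0) ** 2

def g(v):
    # convex velocity cost (assumed example)
    return v ** 2

def discretized_cost(x_free, T, N, x0=0.0):
    # x_free holds the free grid values x_1, ..., x_N; x_0 is fixed at x0
    dt = T / N
    x = np.concatenate(([x0], x_free))
    v = np.diff(x) / dt                      # forward-difference velocities
    return float(np.sum((l(x[:-1]) + g(v)) * dt))

for T in (2.0, 5.0, 10.0, 20.0):
    N = int(50 * T)                          # 50 grid points per unit time
    res = minimize(discretized_cost, np.zeros(N), args=(T, N), method="L-BFGS-B")
    print(f"T = {T:5.1f}   average cost = {res.fun / T:.4f}   x(T) = {res.x[-1]:.3f}")

# The average cost decreases toward min_x L(x, 0) = 0 and x(T) stays near 1,
# i.e. the trajectory spends most of its time near the minimizer of L(., 0).

    This only illustrates the turnpike-type behaviour behind the average-cost formula; the paper's actual construction of optimal trajectories goes through the solution of the H.J.B. equation and the associated differential inclusion.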
    infinite-horizon
    Bellman equation
    overtaking optimality
    H.J.B. equation