Reduced complexity dynamic programming based on policy iteration (Q1206904)

From MaRDI portal
Language: English
Label: Reduced complexity dynamic programming based on policy iteration
Description: scientific article

    Statements

    Reduced complexity dynamic programming based on policy iteration (English)
    1 April 1993
    Optimal nonlinear control problems are considered and solved with Bellman's dynamic programming approach. The computational complexity of the usual (backward) dynamic programming formalism grows exponentially with the dimension of the state. An iterative procedure for solving the dynamic programming equations, based on policy iteration, is proposed: instead of backward dynamic programming, a forward method is introduced whose computational cost is independent of the state dimension but grows exponentially with the horizon length. A suboptimal method of reduced complexity is then defined. The approach is illustrated by numerical examples: optimization over a continuous variable, the traveling salesman problem, and an \(N\)-Queens problem.
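    The policy-iteration scheme the review refers to can be sketched on a toy finite Markov decision process. This is a minimal sketch of classical policy iteration only, not the paper's reduced-complexity forward method; the states, transition table, rewards, and discount factor below are invented for illustration.

```python
# Classical policy iteration on a hypothetical 3-state deterministic MDP.
# (Illustrative only; not the reduced-complexity scheme of the reviewed paper.)

n_states = 3
actions = [0, 1]
gamma = 0.9  # discount factor (assumed)

# transition[s][a] = successor state; reward[s][a] = immediate reward (assumed)
transition = {0: {0: 0, 1: 1}, 1: {0: 0, 1: 2}, 2: {0: 2, 1: 2}}
reward = {0: {0: 0.0, 1: 1.0}, 1: {0: 0.0, 1: 2.0}, 2: {0: 0.0, 1: 0.0}}

def policy_iteration():
    policy = {s: 0 for s in range(n_states)}
    while True:
        # Policy evaluation: iterate V(s) <- r(s, pi(s)) + gamma * V(s')
        # until (numerically) converged to the value of the current policy.
        V = {s: 0.0 for s in range(n_states)}
        for _ in range(1000):
            V = {s: reward[s][policy[s]] + gamma * V[transition[s][policy[s]]]
                 for s in range(n_states)}
        # Policy improvement: greedy one-step lookahead with respect to V.
        new_policy = {
            s: max(actions, key=lambda a: reward[s][a] + gamma * V[transition[s][a]])
            for s in range(n_states)
        }
        if new_policy == policy:      # stable policy => optimal
            return policy, V
        policy = new_policy

policy, V = policy_iteration()
```

    Each round alternates exact evaluation of the current policy with a greedy improvement step; the loop terminates once the policy is stable, which for a finite MDP happens after finitely many rounds.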
    optimal control
    dynamic programming
    forward method