Reduced complexity dynamic programming based on policy iteration (Q1206904)

Language: English
Label: Reduced complexity dynamic programming based on policy iteration
Description: scientific article

    Statements

    Reduced complexity dynamic programming based on policy iteration (English)
    1 April 1993
    Optimal nonlinear control problems are considered and solved using Bellman's dynamic programming approach. The computational complexity of the usual dynamic programming formalism grows exponentially with the dimension of the state. An iterative procedure for solving the dynamic programming equations is proposed: instead of backward dynamic programming, a forward method is introduced. The required computation is independent of the state dimension but grows exponentially with the horizon length, so a suboptimal method of reduced complexity is also defined. The method is illustrated by numerical examples (optimization over a continuous variable, the traveling salesman problem, and an \(N\)-Queens problem). An illustrative forward-enumeration sketch is given below the keyword statements.
    optimal control
    dynamic programming
    forward method
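
As an illustration of the general idea described in the abstract above, the following Python sketch enumerates control sequences forward in time over a finite horizon, so the work grows with the number of stages rather than with the state dimension, and adds a beam-style pruning step as a generic stand-in for a suboptimal, reduced-complexity variant. This is a minimal sketch under assumed placeholder dynamics and costs, not the algorithm of the reviewed paper; the function names, the scalar example, and the pruning rule are all hypothetical.

```python
"""Illustrative sketch only: forward, horizon-indexed enumeration of control
sequences, plus beam-style pruning as a generic reduced-complexity variant.
The dynamics, stage cost, and pruning rule are placeholders, not taken from
the reviewed paper."""

import heapq


def forward_dp(x0, horizon, controls, step, stage_cost, beam=None):
    """Enumerate trajectories forward in time.

    Each partial trajectory is stored as (accumulated cost, state, control
    sequence).  Without pruning (beam=None) the number of partials at stage k
    is |controls|**k: exponential in the horizon, independent of the state
    dimension.  With a finite `beam`, only the `beam` cheapest partials are
    kept at every stage, giving a suboptimal method of reduced complexity.
    """
    partials = [(0.0, x0, ())]          # (cost so far, current state, controls used)
    for _ in range(horizon):
        expanded = []
        for cost, x, seq in partials:
            for u in controls:
                expanded.append((cost + stage_cost(x, u), step(x, u), seq + (u,)))
        if beam is not None:            # reduced-complexity (suboptimal) variant
            expanded = heapq.nsmallest(beam, expanded, key=lambda t: t[0])
        partials = expanded
    return min(partials, key=lambda t: t[0])   # cheapest complete trajectory


if __name__ == "__main__":
    # Placeholder scalar example: drive x toward 0 with bounded increments.
    step = lambda x, u: x + u
    stage_cost = lambda x, u: x * x + 0.1 * u * u
    exact = forward_dp(5.0, horizon=8, controls=(-1.0, 0.0, 1.0),
                       step=step, stage_cost=stage_cost)
    pruned = forward_dp(5.0, horizon=8, controls=(-1.0, 0.0, 1.0),
                        step=step, stage_cost=stage_cost, beam=10)
    print("exhaustive forward cost:", round(exact[0], 3))
    print("beam-pruned cost:       ", round(pruned[0], 3))
```

Keeping only a fixed number of cheapest partial trajectories at each stage bounds the per-stage work at the price of possibly missing the true optimum, which is the kind of accuracy-for-complexity trade-off the abstract refers to.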