Routing problems and Markovian decision processes (Q761359)

From MaRDI portal
scientific article

    Statements

    Routing problems and Markovian decision processes (English)
    1985
    The problem of finding a route through a network has been solved by dynamic programming [see \textit{R. Bellman}, ''Introduction to the mathematical theory of control processes, Vol. II'' (1971; Zbl 0214.145) and \textit{R. Bellman} and \textit{S. E. Dreyfus}, ''Applied dynamic programming'' (1962; Zbl 0106.349)]. A stochastic view of the routing problem, aimed at finding the route of maximum reliability, has been studied by \textit{M. Roosta} [J. Math. Anal. Appl. 88, 341-347 (1982; Zbl 0491.90087)]. A different version of the problem is now introduced, in which at each city there is a transition probability \(p_{ij}\) of going from city \(i\) to city \(j\). The expected time of going from city \(i\) to city \(N\) (the final destination) is then calculated. If, however, at each city there is a choice of policies (one might, for instance, decide to travel from city \(i\) to city \(j\) by a different method of transport), then a Markovian decision process is formulated. The method extends easily to finding the maximum reliability of travelling from city \(i\) to city \(N\) under an optimal policy.
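    The expected-time calculation with a choice of policies described above can be illustrated by a short numerical sketch. The Python code below is not taken from the article; the function name, the toy data, and the array layout are illustrative assumptions. For each city \(i\) and each available mode of transport \(a\), P[a][i][j] plays the role of the transition probability \(p_{ij}\) under that mode and t[a][i] the time of one step; the minimal expected time to reach the destination city and an optimal stationary policy are then computed by value iteration.

```python
import numpy as np

def expected_times(P, t, dest, max_iter=1000, tol=1e-10):
    """V[i] = minimal expected time from city i to city `dest`
    under the best stationary policy, computed by value iteration."""
    n_modes, n_cities, _ = P.shape
    V = np.zeros(n_cities)
    for _ in range(max_iter):
        Q = t + P @ V            # expected time per (mode, city): one step plus remaining time
        V_new = Q.min(axis=0)    # choose the best transport mode at each city
        V_new[dest] = 0.0        # the destination is absorbing and costs nothing
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    policy = (t + P @ V).argmin(axis=0)
    return V, policy

# Toy instance (hypothetical data): 3 cities, city 2 is the destination,
# and 2 transport modes available at every city.
P = np.array([
    [[0.0, 0.7, 0.3],    # mode 0: transition probabilities p_ij
     [0.0, 0.2, 0.8],
     [0.0, 0.0, 1.0]],
    [[0.0, 0.4, 0.6],    # mode 1
     [0.0, 0.5, 0.5],
     [0.0, 0.0, 1.0]],
])
t = np.array([[2.0, 1.0, 0.0],   # time of one step under mode 0
              [1.0, 2.0, 0.0]])  # time of one step under mode 1

V, policy = expected_times(P, t, dest=2)
print("expected times:", V)          # minimal expected time from each city
print("optimal mode per city:", policy)
```

    The maximum-reliability variant mentioned at the end of the review can be handled analogously: replace the step times by arc reliabilities and maximize the product of probabilities of reaching city \(N\) instead of minimizing the expected time.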
    stochastic routing
    stochastic direction choice
    network
    maximum reliability
    optimal policy

    Identifiers