Finite state approximations for denumerable state infinite horizon discounted Markov decision processes with unbounded rewards (Q790055)
From MaRDI portal
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | Finite state approximations for denumerable state infinite horizon discounted Markov decision processes with unbounded rewards | scientific article | |
Statements
Finite state approximations for denumerable state infinite horizon discounted Markov decision processes with unbounded rewards (English)
1982
In many Markov decision problems with denumerable states, the reward vector is unbounded. This paper deals with the approach to unbounded problems of \textit{J. M. Harrison} [Ann. Math. Statist. 43, 636-644 (1972; Zbl 0262.90064)], \textit{J. Wessels} [J. Math. Anal. Appl. 58, 326-335 (1977; Zbl 0354.90087)] and \textit{S. A. Lippman} [Manage. Sci., Theory 21, 1225-1233 (1975; Zbl 0309.90017)]. The results of these authors are used to convert the unbounded problems into bounded problems, to which the results of our earlier papers [e.g., J. Math. Anal. Appl. 74, 292-295 (1980; Zbl 0428.90082); ibid. 72, 512-523 (1979; Zbl 0431.90080)] then apply. The approximation errors depend on the appropriate contraction ratio for each case considered.
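As a rough illustration of why the error bounds scale with the contraction ratio, here is a minimal sketch of the weighted-supremum-norm argument in the spirit of Wessels' approach; the weight function \(w\), the bound \(c\), and the ratio \(\rho\) are generic placeholders rather than the paper's own notation. Assume \(w:S\to[1,\infty)\) satisfies \(|r(i,a)|\le c\,w(i)\) and \(\beta\sum_j p(j\mid i,a)\,w(j)\le\rho\,w(i)\) for all states \(i\) and actions \(a\), with \(\rho<1\). Then the optimality operator
\[
(Tv)(i)=\sup_a\Big\{r(i,a)+\beta\sum_j p(j\mid i,a)\,v(j)\Big\}
\]
is a \(\rho\)-contraction in the weighted supremum norm \(\|v\|_w=\sup_i |v(i)|/w(i)\), so any approximate value function \(v_N\) (for instance, one obtained from a finite state truncation) satisfies
\[
\|v^*-v_N\|_w\le\frac{\|Tv_N-v_N\|_w}{1-\rho},
\]
which makes the dependence of the approximation error on the contraction ratio \(\rho\) explicit.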
unbounded reward vector
Markov decision problems
denumerable states
Zbl 0262.90064
Zbl 0354.90087
Zbl 0309.90017
Zbl 0428.90082
Zbl 0431.90080
approximation errors