scientific article; zbMATH DE number 3208653
Publication: 5335719
zbMATH: 0128.12804
MaRDI QID: Q5335719
Publication date: 1964
Title: unavailable (zbMATH Open Web Interface contents unavailable due to conflicting licenses)
Related Items (24)
- On the existence of relative values for undiscounted Markovian decision processes with a scalar gain rate
- On undiscounted semi-Markov decision processes with absorbing states
- Block-scaling of value-iteration for discounted Markov renewal programming
- Replacement process decomposition for discounted Markov renewal programming
- Solving Markovian decision processes by successive elimination of variables
- Existence of a solution to the Markov renewal programming problem
- A Brouwer fixed-point mapping approach to communicating Markov decision processes
- Generalized Markovian decision processes
- Application of Markov renewal theory and semi-Markov decision processes in maintenance modeling and optimization of multi-unit systems
- Denumerable semi-Markov decision chains with small interest rates
- A Policy Improvement Algorithm for Solving a Mixture Class of Perfect Information and AR-AT Semi-Markov Games
- Semi-Markov decision processes with limiting ratio average rewards
- On the existence of relative values for undiscounted multichain Markov decision processes
- Bounds on the fixed point of a monotone contraction operator
- Solutions of semi-Markov control models with recursive discount rates and approximation by $\epsilon$-optimal policies
- Optimal stochastic control
- On the optimal long run control of Markov renewal processes
- Scalar timing and semi-Markov chains in free-operant avoidance
- Controlled semi-Markov models under long-run average rewards
- Optimal control of stationary Markov processes
- Semi-Markov decision processes with vector pay-offs
- Computation of optimal policies in discounted semi-Markov decision chains
- Generalized polynomial approximations in Markovian decision processes
- The variational calculus and approximation in policy space for Markovian decision processes