On dynamic programming: Compactness of the space of policies
Cites work
- Scientific article; zbMATH DE number 3216771 (no title available)
- Scientific article; zbMATH DE number 3227138 (no title available)
- Scientific article; zbMATH DE number 3243923 (no title available)
- Scientific article; zbMATH DE number 3245885 (no title available)
- Scientific article; zbMATH DE number 3274494 (no title available)
- Scientific article; zbMATH DE number 3298490 (no title available)
- Scientific article; zbMATH DE number 3320878 (no title available)
- Scientific article; zbMATH DE number 3349650 (no title available)
- Scientific article; zbMATH DE number 3061365 (no title available)
- Scientific article; zbMATH DE number 3076961 (no title available)
- An Extension of Wald's Theory of Statistical Decision Functions
- Bayesian dynamic programming
- Compactness and sequential compactness in spaces of measures
- Compactness in spaces of measures
- Discounted Dynamic Programming
- Instationäre dynamische Optimierung bei schwachen Voraussetzungen über die Gewinnfunktionen [Nonstationary dynamic optimization under weak assumptions on the reward functions]
- Markovian Decision Processes with Compact Action Spaces
- Negative Dynamic Programming
- On continuous dynamic programming with discrete time-parameter
Cited in (51)
- A universal dynamic program and refined existence results for decentralized stochastic control
- Large deviations principle for discrete-time mean-field games
- Maximizing the probability of visiting a set infinitely often for a countable state space Markov decision process
- An equilibrium existence result for games with incomplete information and indeterminate outcomes
- Semi-uniform Feller stochastic kernels
- Essential stability of the alpha cores of finite games with incomplete information
- Optimal learning with costly adjustment
- Optimality, equilibrium, and curb sets in decision problems without commitment
- A convex programming approach for discrete-time Markov decision processes under the expected total reward criterion
- Semicontinuous nonstationary stochastic games. II
- On compactness of the space of policies in stochastic dynamic programming
- The martingale problem method revisited
- Semicontinuous nonstationary stochastic games
- Zero-sum games involving teams against teams: existence of equilibria, and comparison and regularity in information
- Multiple objective nonatomic Markov decision processes with total reward criteria
- On the expected total reward with unbounded returns for Markov decision processes
- Perfect equilibria in games of incomplete information
- Absorbing Markov decision processes
- Constrained Markov decision processes with non-constant discount factor
- Constrained Markovian decision processes: The dynamic programming approach
- Convex analytic method revisited: further optimality results and performance of deterministic policies in average cost stochastic control
- On the existence of Nash equilibrium in Bayesian games
- Existence of optimal policy for time non-homogeneous discounted Markovian decision programming
- Multiobjective stopping problem for discrete-time Markov processes: convex analytic approach
- Strategic measures in optimal control problems for stochastic sequences
- Extreme Occupation Measures in Markov Decision Processes with an Absorbing State
- Conditions for optimality in dynamic programming and for the limit of n-stage optimal policies to be optimal
- Constrained and unconstrained optimal discounted control of piecewise deterministic Markov processes
- The expected total cost criterion for Markov decision processes under constraints: a convex analytic approach
- On Borkar and Young relaxed control topologies and continuous dependence of invariant measures on control policy
- Compactness of the space of non-randomized policies in countable-state sequential decision processes
- Nowak's Theorem on Probability Measures Induced by Strategies Revisited
- Bayesian learning and convergence to rational expectations
- Conditions for the solvability of the linear programming formulation for constrained discounted Markov decision processes
- Constrained discounted Markov decision processes with Borel state spaces
- Optimal control of piecewise deterministic Markov processes
- On maximizing the average time at a goal
- Comparison of information structures for zero-sum games and a partial converse to Blackwell ordering in standard Borel spaces
- Sufficiency of deterministic policies for atomless discounted and uniformly absorbing MDPs with multiple criteria
- Markov decision processes with incomplete information and semiuniform Feller transition probabilities
- Stationary Markov Nash equilibria for nonzero-sum constrained ARAT Markov games
- Constrained discounted stochastic games
- Equivalent conditions for weak continuity of nonlinear filters
- Equilibria in infinite games of incomplete information
- Continuity Properties of Value Functions in Information Structures for Zero-Sum and General Games and Stochastic Teams
- Markov decision processes under ambiguity
- Constrained Markov decision processes with expected total reward criteria
- Nash equilibria for total expected reward absorbing Markov games: the constrained and unconstrained cases
- Self-fulfilling expectations in stochastic processes of temporary equilibria
- Geometry of information structures, strategic measures and associated stochastic control topologies
- Maximizing the probability of visiting a set infinitely often for a Markov decision process with Borel state and action spaces
MaRDI item Q1221981