Approachability in Stackelberg stochastic games with vector costs
DOI: 10.1007/s13235-016-0198-y
zbMath: 1391.91029
arXiv: 1411.0728
OpenAlex: W2471478425
MaRDI QID: Q1707454
Dileep Kalathil, Vivek S. Borkar, Rahul Jain
Publication date: 3 April 2018
Published in: Dynamic Games and Applications
Full work available at URL: https://arxiv.org/abs/1411.0728
Mathematics Subject Classification:
- Stochastic approximation (62L20)
- Stochastic games, stochastic differential games (91A15)
- Markov and semi-Markov decision processes (90C40)
Related Items
- Learning in games with cumulative prospect theoretic preferences
- Q-learning for Markov decision processes with a satisfiability criterion
Cites Work
- An analog of the minimax theorem for vector payoffs
- Approachability, regret and calibration: implications and equivalences
- Approachable sets of vector payoffs in stochastic games
- An actor-critic algorithm for constrained Markov decision processes
- Learning Algorithms for Markov Decision Processes with Average Cost
- Online Markov Decision Processes
- Markov Decision Processes with Arbitrary Reward Processes
- Survey of Measurable Selection Theorems
- The Lorenz attractor exists
- Asynchronous Stochastic Approximations
- The O.D.E. Method for Convergence of Stochastic Approximation and Reinforcement Learning
- On Boundedness of Q-Learning Iterates for Stochastic Shortest Path Problems
- Guaranteed performance regions in Markovian systems with competing decision makers
- Stochastic Approximations and Differential Inclusions
- Stochastic Approximations and Differential Inclusions, Part II: Applications
- The Empirical Bayes Envelope and Regret Minimization in Competitive Markov Decision Processes
- Optimal control and viscosity solutions of Hamilton-Jacobi-Bellman equations