Adapting attackers and defenders patrolling strategies: a reinforcement learning approach for Stackelberg security games
DOI: 10.1016/j.jcss.2017.12.004 · zbMath: 1394.91079 · OpenAlex: W2782802862 · Wikidata: Q115041718 · MaRDI QID: Q1747487
Kristal K. Trejo, Julio B. Clempner, Alexander S. Poznyak
Publication date: 8 May 2018
Published in: Journal of Computer and System Sciences
Full work available at URL: https://doi.org/10.1016/j.jcss.2017.12.004
Keywords: reinforcement learning; Stackelberg games; security games; multiple players; behavioral games; strong Stackelberg/Nash equilibrium
MSC classifications: Noncooperative games (91A10); Hierarchical games (including Stackelberg games) (91A65); Learning and adaptive systems in artificial intelligence (68T05); Applications of game theory (91A80); Rationality and learning in game theory (91A26)
Related Items (2)
Uses Software
Cites Work
- Computing the Stackelberg/Nash equilibria using the extraproximal method: convergence analysis and implementation details for Markov chains games
- Reinforcement learning agents
- Using the extraproximal method for computing the shortest-path mixed Lyapunov equilibrium in Stackelberg security games
- Computing the strong \(L_p\)-Nash equilibrium for Markov chains games: convergence and uniqueness
- An optimal strong equilibrium solution for cooperative multi-leader-follower Stackelberg Markov chains games
- A continuous-time Markov Stackelberg security game approach for reasoning about real patrol strategies
- Game-Theoretic Patrolling with Dynamic Execution Uncertainty and a Case Study on a Real Transit System
- Recent advances in hierarchical reinforcement learning