DeepStack: expert-level artificial intelligence in heads-up no-limit poker
From MaRDI portal
Publication:4645965
Abstract: Artificial intelligence has seen several breakthroughs in recent years, with games often serving as milestones. A common feature of these games is that players have perfect information. Poker, by contrast, is the quintessential game of imperfect information and a longstanding challenge problem in artificial intelligence. We introduce DeepStack, an algorithm for imperfect-information settings. It combines recursive reasoning to handle information asymmetry, decomposition to focus computation on the relevant decision, and a form of intuition that is automatically learned from self-play using deep learning. In a study involving 44,000 hands of poker, DeepStack defeated, with statistical significance, professional poker players at heads-up no-limit Texas hold'em. The approach is theoretically sound and is shown to produce strategies that are more difficult to exploit than those of prior approaches.
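DeepStack's recursive reasoning builds on counterfactual regret minimization (CFR). As a minimal, illustrative sketch (not the paper's implementation), the strategy-update rule at CFR's core is regret matching: play each action in proportion to its accumulated positive regret. The action names and regret values below are hypothetical.

```python
import numpy as np

def regret_matching(cumulative_regret):
    """Map cumulative counterfactual regrets to a strategy:
    each action's probability is proportional to its positive regret."""
    positive = np.maximum(cumulative_regret, 0.0)
    total = positive.sum()
    if total > 0:
        return positive / total
    # No positive regret anywhere: fall back to the uniform strategy.
    n = len(cumulative_regret)
    return np.full(n, 1.0 / n)

# Toy usage: three actions (fold, call, raise) with illustrative regrets.
regrets = np.array([-1.0, 3.0, 1.0])
strategy = regret_matching(regrets)
# → [0.0, 0.75, 0.25]
```

In full CFR, this update runs at every information set over many self-play iterations, and the average strategy converges toward a Nash equilibrium in two-player zero-sum games; DeepStack additionally replaces deep subtrees with learned value estimates ("intuition") so the re-solving remains tractable.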
Cited in (34)
- Superhuman AI for heads-up no-limit poker: Libratus beats top professionals
- HSVI can solve zero-sum partially observable stochastic games
- Computing human-understandable strategies: deducing fundamental rules of poker strategy
- Mathematical consistency and long-term behaviour of a dynamical system with a self-organising vector field
- CECMLP: new cipher-based evaluating collaborative multi-layer perceptron scheme in federated learning
- Deep reinforcement learning with emergent communication for coalitional negotiation games
- Limited lookahead in imperfect-information games
- DCENet: a dynamic correlation evolve network for short-term traffic prediction
- The challenge of poker
- Simple uncoupled no-regret learning dynamics for extensive-form correlated equilibrium
- Computing large market equilibria using abstractions
- Approximating maxmin strategies in imperfect recall games using A-loss recall property
- Analysis of Hannan consistent selection for Monte Carlo tree search in simultaneous move games
- Identifying behaviorally robust strategies for normal form games under varying forms of uncertainty
- A multivariate Riesz basis of ReLU neural networks
- Successful Nash equilibrium agent for a three-player imperfect-information game
- Distinguishing luck from skill through statistical simulation: a case study
- Multi-agent reinforcement learning: a selective overview of theories and algorithms
- Superhuman AI for multiplayer poker
- A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play
- Value functions for depth-limited solving in zero-sum imperfect-information games
- Generosity, selfishness and exploitation as optimal greedy strategies for resource sharing
- scientific article (zbMATH DE number 1784984; no title available)
- Committing to correlated strategies with multiple leaders
- Counterfactuals as modal conditionals, and their probability
- World-class interpretable poker
- Evaluating strategic structures in multi-agent inverse reinforcement learning
- Rethinking formal models of partially observable multiagent decision making
- Automated construction of bounded-loss imperfect-recall abstractions in extensive-form games
- Solving zero-sum one-sided partially observable stochastic games
- Robust and resource-efficient identification of two hidden layer neural networks
- Automatically designing counterfactual regret minimization algorithms for solving imperfect-information games
- The Hanabi challenge: a new frontier for AI research
- Faster algorithms for extensive-form game solving via improved smoothing functions