Absolutely expedient algorithms for learning Nash equilibria (Q1322380)

From MaRDI portal
scientific article
English

    Statements

    Absolutely expedient algorithms for learning Nash equilibria (English)
    1994
    Consider an \(n\)-person non-zero-sum game \(\Gamma\) with finite pure strategy spaces \(S_i\) for the players \((1\leq i\leq n)\), where for each pure strategy vector \(\bar s=(s_1,\dots,s_n)\) in \(S_1\times S_2\times\cdots\times S_n\) the payoff to the \(i\)th player, \(f_i(\bar s)\), is a realization \(x_i\) of a random variable with probability distribution \(F(\cdot|\bar s,i)\) on \([0,1]\). All the distributions \(F(\cdot|\bar s,i)\), for all \(\bar s\) and \(i\), are assumed unknown to the players, and player \(i\) observes only the realization \(x_i\), \(1\leq i\leq n\). Now consider this game of incomplete information played repeatedly, infinitely many times. The author shows that there is a ``learning algorithm'' that allows all the players to update their starting strategies in such a way that the strategy profile converges to a Nash equilibrium of \(\Gamma\). For this purpose a team concept associated with learning automata models is employed.
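    The review does not reproduce the algorithm itself; it only notes that it belongs to the class of absolutely expedient learning-automata schemes. As an illustrative sketch (not the paper's exact method), each player can be modeled as an automaton holding a mixed strategy over its pure strategies and applying a linear reward-inaction update after observing its payoff realization; the two-player coordination game used as the environment below is a hypothetical example.

```python
import random

def lri_update(p, action, payoff, lam=0.05):
    """Linear reward-inaction (L_{R-I}) update: shift probability mass
    toward the played action in proportion to the payoff in [0, 1].
    A zero payoff leaves the strategy unchanged (inaction)."""
    return [pj + lam * payoff * (1.0 - pj) if j == action
            else pj - lam * payoff * pj
            for j, pj in enumerate(p)]

def play_round(rng, p1, p2):
    """One repetition of a hypothetical 2x2 coordination game: each
    player samples an action from its mixed strategy; both receive
    payoff 1 if the actions match, 0 otherwise."""
    a1 = rng.choices(range(2), weights=p1)[0]
    a2 = rng.choices(range(2), weights=p2)[0]
    r = 1.0 if a1 == a2 else 0.0
    return lri_update(p1, a1, r), lri_update(p2, a2, r)

rng = random.Random(0)
p1 = [0.5, 0.5]  # player 1's mixed strategy over its two pure strategies
p2 = [0.5, 0.5]  # player 2's mixed strategy
for _ in range(2000):
    p1, p2 = play_round(rng, p1, p2)
# With a small step size, the strategy pair typically locks in to a
# pure Nash equilibrium of the coordination game (both players come
# to favour the same action).
```

    Note that each player updates using only its own payoff realization, never the other players' strategies or payoffs, which is the decentralized, incomplete-information setting described above.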
    decentralized learning
    Nash equilibrium
    team concept
