Learning in general games with Nature's moves (Q1714610)
From MaRDI portal
scientific article; zbMATH DE number 7010639
Statements
Learning in general games with Nature's moves (English)
1 February 2019
Summary: This paper investigates simultaneous learning about both nature and other players' actions in repeated games and identifies a set of sufficient conditions under which Harsanyi's doctrine holds. Players have utility functions over infinite histories that are continuous in the sup-norm topology. Nature's draw after any history may depend on any past actions. Provided that (1) every player maximizes her expected payoff against her own beliefs, (2) every player updates her beliefs in a Bayesian manner, (3) prior beliefs about both nature and other players' strategies have a grain of truth, and (4) beliefs about nature are independent of actions chosen during the game, we construct a Nash equilibrium that is realization-equivalent to the actual plays and in which Harsanyi's doctrine holds. These assumptions are shown to be tight.
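The Bayesian-updating and grain-of-truth conditions in the summary can be illustrated with a minimal sketch (not from the paper; the candidate models, the Bernoulli move by Nature, and all names are illustrative assumptions): a player holds a prior over a finite set of models of Nature that assigns positive weight to the true model, and updates by Bayes' rule after each observed move.

```python
import random

random.seed(0)

# Candidate models of Nature: each gives the probability that Nature plays
# action "H" after any history (illustrative setup, not from the paper).
models = {"p=0.2": 0.2, "p=0.5": 0.5, "p=0.8": 0.8}
true_model = "p=0.8"

# Grain-of-truth prior: strictly positive weight on the true model.
beliefs = {m: 1.0 / len(models) for m in models}

for _ in range(200):
    # Nature draws its action according to the true model.
    obs_H = random.random() < models[true_model]
    # Bayesian update: weight each model by the likelihood of the observation.
    for m, p in models.items():
        beliefs[m] *= p if obs_H else (1.0 - p)
    total = sum(beliefs.values())
    beliefs = {m: w / total for m, w in beliefs.items()}

print(max(beliefs, key=beliefs.get))
```

Because the prior puts positive weight on the true model, the posterior concentrates on it as observations accumulate, which is the mechanism behind the merging of beliefs that the paper's equilibrium construction relies on.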