Time to absorption in discounted reinforcement models.
From MaRDI portal
Publication:2574614
DOI: 10.1016/j.spa.2003.08.003
zbMath: 1075.60090
arXiv: math/0404107
OpenAlex: W2037587274
MaRDI QID: Q2574614
Publication date: 29 November 2005
Published in: Stochastic Processes and their Applications
Full work available at URL: https://arxiv.org/abs/math/0404107
Keywords: network, stochastic approximation, potential well, social network, exponential time, urn model, trap, quasi-stationary, three-player game, Friedman urn, meta-stable
Related Items
- A generalized Pólya urn and limit laws for the number of outputs in a family of random circuits
- Network formation by reinforcement learning: the long and medium run
- An infinite stochastic model of social network formation
- Models of coalition or alliance formation
- Learning to signal: Analysis of a micro-level reinforcement model
Cites Work
- Nonconvergence to unstable points in urn models and stochastic approximations
- Learning mixed equilibria
- Attracting edge property for a class of reinforced random walks
- Network formation by reinforcement learning: the long and medium run
- Learning in extensive-form games: Experimental data and simple dynamic models in the intermediate term
- Markov processes and learning models
- Asymptotics of a matrix valued Markov chain arising in sociology.
- Learning, Local Interaction, and Coordination
- The weakness of strong ties: Collective action failure in a highly cohesive group*
- Dynamics of Morse-Smale urn processes
- A Dynamical System Approach to Stochastic Approximations
- Collective dynamics of ‘small-world’ networks
- Bernard Friedman's Urn
- A simple urn model
- A Stochastic Approximation Method
- Reinforced random walk