Time to absorption in discounted reinforcement models. (Q2574614)

Property / DOI: 10.1016/j.spa.2003.08.003
Property / MaRDI profile type: Publication
Property / OpenAlex ID: W2037587274
Property / arXiv ID: math/0404107
Property / cites work: Q3838974
Property / cites work: Q3218140
Property / cites work: A Dynamical System Approach to Stochastic Approximations
Property / cites work: Dynamics of Morse-Smale urn processes
Property / cites work: Asymptotics of a matrix valued Markov chain arising in sociology.
Property / cites work: Q3992965
Property / cites work: Q5848594
Property / cites work: Reinforced random walk
Property / cites work: Q3134548
Property / cites work: Learning, Local Interaction, and Coordination
Property / cites work: The weakness of strong ties: Collective action failure in a highly cohesive group
Property / cites work: A simple urn model
Property / cites work: Bernard Friedman's Urn
Property / cites work: Learning mixed equilibria
Property / cites work: Q5590550
Property / cites work: Q3926044
Property / cites work: Attracting edge property for a class of reinforced random walks
Property / cites work: Q3040961
Property / cites work: Markov processes and learning models
Property / cites work: Nonconvergence to unstable points in urn models and stochastic approximations
Property / cites work: Network formation by reinforcement learning: the long and medium run
Property / cites work: A Stochastic Approximation Method
Property / cites work: Learning in extensive-form games: Experimental data and simple dynamic models in the intermediate term
Property / cites work: Collective dynamics of ‘small-world’ networks
Property / Recommended article: The gambler's ruin problem for a Markov chain related to the Bessel process (similarity score 0.7375341)
Property / Recommended article: Q3595512 (similarity score 0.73277074)
Property / Recommended article: Fast convergence in evolutionary equilibrium selection (similarity score 0.7317069)
Property / Recommended article: Stationarity of a stochastic population flow model (similarity score 0.72877234)
Property / Recommended article: Network formation by reinforcement learning: the long and medium run (similarity score 0.7281571)
Property / Recommended article: Large deviations and equilibrium selection in large populations (similarity score 0.72802246)
Property / Recommended article: On reevaluation rate in discrete time Hogg-Huberman model (similarity score 0.7254012)
Property / Recommended article: Q3357999 (similarity score 0.7245063)
Property / Recommended article: Q5400025 (similarity score 0.7238293)
Property / Recommended article: Three-Player Absorbing Games (similarity score 0.72177196)

scientific article

Language: English
Label: Time to absorption in discounted reinforcement models.
Description: scientific article

    Statements

    Time to absorption in discounted reinforcement models. (English)
    29 November 2005
    A reinforcement model is studied with the goal of establishing the time to absorption as the discount parameter \(x\) tends to zero. Consider a population of at least four members in which triples are formed to ``play together''. These triples change over time, and the authors proved [Math.\ Soc.\ Sci.\ 48, 315--327 (2004; Zbl 1091.91060)] that the process is eventually trapped in a degenerate state in which the population is divided into subgroups of sizes 3--5 whose members play only within their own subgroup. The main results of the paper (Theorems 2.2 and 3.1) concern a model in which the influence of the history is discounted at rate \(1-x\). Theorem 2.2 shows that, with positive probability, each player continues to play with every other player beyond time \(\exp\{c_N x^{-1}\}\), where \(N\) is the population size. Consider, more specifically, a one-dimensional case with state space \([0,1]\) and compact subintervals \(I_x\) increasing to \((0,1)\) as \(x\) tends to zero. Theorem 3.1 then shows that the expected time for the process to first leave \(I_x\) grows like \(\exp\{Cx^{-1}\}\) as \(x\) tends to zero.
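    The following is a minimal simulation sketch of the exit-time phenomenon in Theorem 3.1, not the authors' model: it uses an illustrative one-dimensional discounted update \(p \leftarrow (1-x)p + x\xi\) on \([0,1]\), with an assumed reinforcement rule \(g(p) = 0.25 + 0.5p\) (a stable interior equilibrium at \(p^* = 1/2\)) and a fixed interval \([0.25, 0.75]\) standing in for \(I_x\); the estimated mean exit time should grow roughly like \(\exp\{Cx^{-1}\}\) as \(x\) decreases.

    import numpy as np

    rng = np.random.default_rng(0)

    def mean_exit_time(x, a=0.25, b=0.75, n_runs=100, max_steps=1_000_000):
        # Discounted update p <- (1 - x) p + x * xi with xi ~ Bernoulli(g(p)),
        # where g(p) = 0.25 + 0.5 p is an illustrative reinforcement rule
        # (not the paper's model) with a stable equilibrium at p* = 0.5.
        times = []
        for _ in range(n_runs):
            p, t = 0.5, 0
            while a <= p <= b and t < max_steps:
                xi = rng.random() < 0.25 + 0.5 * p
                p = (1.0 - x) * p + x * xi
                t += 1
            times.append(t)  # runs hitting max_steps are recorded as censored
        return float(np.mean(times))

    # Mean exit times should grow roughly like exp(C / x) as x -> 0.
    for x in (0.2, 0.1, 0.05):
        print(f"x = {x:4.2f}  mean exit time ~ {mean_exit_time(x):10.1f}")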
    network
    social network
    urn model
    Friedman urn
    stochastic approximation
    meta-stable
    trap
    three-player game
    potential well
    exponential time
    quasi-stationary

    Identifiers