Markov chains for collaboration
From MaRDI portal
Publication:3103407
DOI: 10.4169/MATH.MAG.84.1.003
zbMATH Open: 1227.60091
arXiv: 1401.3022
OpenAlex: W3098354576
MaRDI QID: Q3103407
Authors: Robert Mena, Will Murray
Publication date: 7 December 2011
Published in: Mathematics Magazine
Abstract: Consider a system of n players in which each initially starts on a different team. At each time step, we select an individual winner and an individual loser at random, and the loser joins the winner's team. The resulting Markov chain and stochastic matrix clearly have one absorbing state, in which all players are on the same team, but the combinatorics along the way are surprisingly elegant. The expected number of time steps until each team is eliminated is a ratio of binomial coefficients. When a team is eliminated, the probabilities that the players are configured in the various partitions of n players into t teams are given by multinomial coefficients. The expected time to absorption is (n-1)^2 steps. The results depend on elementary combinatorics, linear algebra, and the theory of Markov chains.
Full work available at URL: https://arxiv.org/abs/1401.3022
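The chain described in the abstract is easy to simulate. Below is a minimal sketch in Python (the function name and structure are mine, not from the paper): each step draws an ordered pair of distinct players as winner and loser, the loser moves to the winner's team, and steps where the two are already teammates still count toward the elapsed time. Averaging the absorption time over many runs should approach the paper's (n-1)^2 result.

```python
import random

def simulate_collaboration(n, rng=None):
    """Run one realization of the team-merging chain with n players,
    each starting on their own team. Returns the number of time steps
    until all players are on one team (the absorbing state)."""
    rng = rng or random.Random()
    team = list(range(n))  # team[i] = current team label of player i
    steps = 0
    while len(set(team)) > 1:
        winner, loser = rng.sample(range(n), 2)
        # The loser joins the winner's team; if they already share a
        # team, the state is unchanged but the step still counts.
        team[loser] = team[winner]
        steps += 1
    return steps

# Monte Carlo check of the expected absorption time (n-1)^2:
trials = 20000
mean_time = sum(simulate_collaboration(3, random.Random(i))
                for i in range(trials)) / trials
```

For n = 3 the exact expected absorption time is (3-1)^2 = 4, and the Monte Carlo mean above should land close to that value.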
Recommendations
- A real-world Markov chain arising in recreational volleyball
- Markov chains to compute expected game lengths of "chutes and ladders"
- Markov chains competing for transitions: application to large-scale distributed systems
- A matrix-analytic approach to the N-player ruin problem
- Renewal-type behavior of absorption times in Markov chains
Mathematics Subject Classification:
- Markov chains (discrete-time Markov processes on discrete state spaces) (60J10)
- Probability theory (educational aspects) (97K50)
Cited In (2)
This page was built for publication: Markov chains for collaboration