Ergodic Theorems for discrete Markov chains

From MaRDI portal
Publication:6289499

arXiv: 1707.08827 | MaRDI QID: Q6289499 | FDO: Q6289499


Authors: Nikolaos Halidias


Publication date: 27 July 2017

Abstract: Let $X_n$ be a discrete time Markov chain with state space $S$ (countably infinite, in general) and initial probability distribution $\mu(0) = (P(X_0 = i_1), P(X_0 = i_2), \cdots)$. What is the probability of choosing at random some $k \in \mathbb{N}$ with $k \leq n$ such that $X_k = j$, where $j \in S$? This probability is the average $\frac{1}{n}\sum_{k=1}^{n} \mu_j(k)$, where $\mu_j(k) = P(X_k = j)$. In this note we study the limit of this average without assuming that the chain is irreducible, using elementary mathematical tools. Finally, we study the limit of the average $\frac{1}{n}\sum_{k=1}^{n} g(X_k)$, where $g$ is a given function, for a Markov chain that is not necessarily irreducible.
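As an illustration of the averages discussed in the abstract (this sketch is not from the paper; the chain, states, and function names are chosen for the example), the following Python snippet simulates a reducible Markov chain and computes the empirical average $\frac{1}{n}\sum_{k=1}^{n} \mathbf{1}_{\{X_k = j\}}$. Because the chain is not irreducible, the limiting average is itself random, depending on which communicating class the chain enters.

```python
import random

def simulate_chain(P, x0, n, seed=None):
    """Simulate n steps of a discrete Markov chain.

    P: dict mapping each state to a dict {next_state: transition probability}.
    Returns the path (X_1, ..., X_n).
    """
    rng = random.Random(seed)
    x = x0
    path = []
    for _ in range(n):
        states = list(P[x])
        weights = [P[x][s] for s in states]
        x = rng.choices(states, weights=weights)[0]
        path.append(x)
    return path

# A reducible (non-irreducible) example chain: from state 0 the chain jumps
# once into one of the absorbing states 1 or 2 and then stays there forever.
P = {
    0: {1: 0.5, 2: 0.5},
    1: {1: 1.0},
    2: {2: 1.0},
}

n = 10_000
path = simulate_chain(P, x0=0, n=n, seed=42)

# Empirical average (1/n) * sum_{k=1}^{n} 1_{X_k = 1}.  For this reducible
# chain the limit is random: it is 1 if the chain is absorbed in state 1
# and 0 if it is absorbed in state 2.
avg = sum(1 for x in path if x == 1) / n
print(avg)
```

Running the simulation with different seeds shows the two possible limits (0 and 1), in contrast with the irreducible case, where the ergodic theorem gives a single deterministic limit.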

