% vim: tw=50
% 17/11/2022 10AM

\begin{flashcard}[ergodic-theorem]
\begin{theorem*}[Ergodic theorem]
Let $P$ be irreducible with an invariant distribution $\pi$. Suppose $X_0 \sim \lambda$.
Then with probability 1 we have, for all $x \in I$,
\[ \cloze{\lim_{n \to \infty} \frac{\sum_{i = 0}^{n - 1} \mathbbm{1}(X_i = x)}{n} = \pi(x)} \]
\end{theorem*}
\end{flashcard}

\begin{proof}
Since $P$ is irreducible and has an invariant distribution, it is recurrent, so $T_x < \infty$ with probability 1.
By the strong Markov property,
\[ (X_{T_x + n})_{n \ge 0} \sim \Markov(\delta_x, P) \]
and this chain is independent of $X_0, \dots, X_{T_x}$.
Since the limit of
\[ \frac{\sum_{i = 0}^{n - 1} \mathbbm{1}(X_i = x)}{n} \]
is not affected by changing the initial distribution, it suffices to consider $\lambda = \delta_x$.
Write
\[ v_n(x) = \sum_{i = 0}^{n - 1} \mathbbm{1}(X_i = x) = \text{number of visits to $x$ by time $n - 1$}. \]
Define the successive return times to $x$:
\[ T_x^{(0)} = 0, \qquad T_x^{(k + 1)} = \inf\{t \ge T_x^{(k)} + 1 : X_t = x\}. \]
These are stopping times. Define
\[ S_x^{(k)} = \begin{cases} T_x^{(k)} - T_x^{(k - 1)} & \text{if $T_x^{(k - 1)} < \infty$} \\ 0 & \text{otherwise} \end{cases} \]
By the strong Markov property, the $(S_x^{(k)})_{k \ge 1}$ are independent and identically distributed, with expectation
\[ \EE[S_x^{(1)}] = \EE_x[T_x] = \frac{1}{\pi(x)} \]
(note $T_x = T_x^{(1)}$).
Since $X_0 = x$, the visits to $x$ among the times $0, \dots, n - 1$ occur at $T_x^{(0)}, \dots, T_x^{(v_n(x) - 1)}$, so
\[ T_x^{(v_n(x) - 1)} \le n - 1 \tag{1} \]
\[ T_x^{(v_n(x))} \ge n \tag{2} \]
Since $T_x^{(k)} = S_x^{(1)} + \cdots + S_x^{(k)}$, (1) $\iff S_x^{(1)} + \cdots + S_x^{(v_n(x) - 1)} \le n - 1$ and (2) $\iff S_x^{(1)} + \cdots + S_x^{(v_n(x))} \ge n$.
So
\[ \boxed{S_x^{(1)} + \cdots + S_x^{(v_n(x) - 1)} \le n \le S_x^{(1)} + \cdots + S_x^{(v_n(x))}} \tag{$*$} \]
Since the $(S_x^{(k)})$ are IID with $\EE[S_x^{(1)}] < \infty$, the Strong Law of Large Numbers gives
\[ \frac{S_x^{(1)} + \cdots + S_x^{(k)}}{k} \to \EE[S_x^{(1)}] \]
as $k \to \infty$ with probability 1.
By recurrence, $v_n(x) \to \infty$ as $n \to \infty$, so dividing ($*$) through by $v_n(x)$ shows that both the LHS and the RHS converge to
\[ \EE[S_x^{(1)}] = \frac{1}{\pi(x)}, \]
and hence
\[ \lim_{n \to \infty} \frac{n}{v_n(x)} = \frac{1}{\pi(x)}, \]
so
\[ \lim_{n \to \infty} \frac{v_n(x)}{n} = \pi(x). \qedhere \]
\end{proof}

\subsubsection*{Continuous time Markov Chains (non-examinable)}
We defined a Markov chain via the idea that ``the past and future are independent given the present''.
We have only considered discrete time Markov chains, but we could generalise:
\begin{itemize}
\item $(X_t)_{t \ge 0}$, $t \in \RR^+$.
\item $S_x = \text{holding time at the state $x$}$ (starting from $x$):
\begin{align*}
\PP(S_x > t + s \mid S_x > s) &= \PP(X_u = x, \forall u \in [0, t + s] \mid X_u = x, \forall u \in [0, s]) \\
&= \PP(X_u = x, \forall u \in [s, t + s] \mid X_u = x, \forall u \in [0, s]) \\
&= \PP(X_u = x, \forall u \in [s, t + s] \mid X_s = x) &&\text{(Markov property)} \\
&= \PP_x(X_u = x, \forall u \in [0, t]) &&\text{(time-homogeneity)} \\
&= \PP(S_x > t)
\end{align*}
So $\PP(S_x > t + s \mid S_x > s) = \PP(S_x > t)$ for all $s, t \ge 0$, i.e.\ $S_x$ has the memoryless property.
Recall from IA Probability that the memoryless property for a positive random variable $S$ is equivalent to $S$ having an exponential distribution with some parameter.
\end{itemize}
So the simplest example of a continuous time Markov chain is the Poisson process:
take $S_1, S_2, \dots$ IID with $S_i \sim \operatorname{Exp}(\lambda)$, set
\[ J_0 = 0, \qquad J_i = \sum_{j = 1}^i S_j, \]
and define $X_t = i$ if $J_i \le t < J_{i + 1}$.
\begin{center}
\includegraphics[width=0.6\linewidth] {images/b9265b606aa611ed.png}
\end{center}
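
As a quick illustration of this construction (not part of the lecture), here is a minimal simulation sketch in Python; the rate $\lambda = 2$, the time horizon and the use of \texttt{numpy} are arbitrary choices made for the example.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

lam = 2.0       # rate of the Exp(lambda) holding times (arbitrary choice)
horizon = 10.0  # simulate on the time interval [0, horizon]

# Draw IID holding times S_1, S_2, ... ~ Exp(lam) until the jump times
# J_i = S_1 + ... + S_i exceed the horizon (with J_0 = 0).
jump_times = [0.0]
while jump_times[-1] < horizon:
    jump_times.append(jump_times[-1] + rng.exponential(1.0 / lam))
jump_times = np.array(jump_times)

def X(t):
    """X_t = i whenever J_i <= t < J_{i+1}."""
    return int(np.searchsorted(jump_times, t, side="right") - 1)

print([X(t) for t in np.linspace(0.0, horizon, 11)])
\end{verbatim}
The path produced this way is the step function in the picture above: constant between jump times and increasing by 1 at each $J_i$.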
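
The ergodic theorem itself can likewise be checked by simulation: run a long trajectory of a small irreducible chain and compare the empirical fractions $v_n(x)/n$ with the invariant distribution $\pi$. The transition matrix below is a made-up example, not one from the course.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# A small irreducible transition matrix (hypothetical example).
P = np.array([[0.5, 0.5, 0.0],
              [0.1, 0.6, 0.3],
              [0.2, 0.3, 0.5]])

# Invariant distribution: normalised left eigenvector of P for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi = pi / pi.sum()

# Simulate the chain and record the fraction of time spent in each state.
n = 200_000
x = 0
visits = np.zeros(3)
for _ in range(n):
    visits[x] += 1
    x = rng.choice(3, p=P[x])

print("empirical v_n(x)/n:", visits / n)   # should be close to pi
print("invariant pi      :", pi)
\end{verbatim}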