
Markov Chains

Essential Formulas & Mathematical References

Complete collection of formulas for Markov chain theory, transition probabilities, stationary distributions, and absorption problems

1. Markov Property & Basic Definitions
Fundamental mathematical definitions and the memoryless property

Markov Property

The memoryless property for discrete-time processes:

P\{X_{n+1}=j \mid X_0=i_0, X_1=i_1, \ldots, X_n=i\} = P\{X_{n+1}=j \mid X_n=i\}
One-Step Transition Probability

Time-homogeneous case:

p_{ij} = P\{X_{n+1}=j \mid X_n=i\}
Transition Matrix

Stochastic matrix properties:

\mathbf{P} = (p_{ij}), \quad p_{ij} \geq 0, \quad \sum_{j} p_{ij} = 1
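The row-stochastic conditions can be checked mechanically. A minimal Python sketch; the 2-state weather chain is a hypothetical example:

```python
# Check that a matrix is row-stochastic: every entry is nonnegative
# and every row sums to 1 (up to floating-point tolerance).

def is_stochastic(P, tol=1e-12):
    """True if each entry is >= 0 and each row sums to 1."""
    return all(
        all(p >= 0 for p in row) and abs(sum(row) - 1.0) < tol
        for row in P
    )

P = [[0.9, 0.1],   # sunny -> sunny, sunny -> rainy
     [0.5, 0.5]]   # rainy -> sunny, rainy -> rainy

print(is_stochastic(P))             # True
print(is_stochastic([[0.6, 0.3],
                     [0.5, 0.5]]))  # False: first row sums to 0.9
```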
2. Chapman-Kolmogorov Equations
Multi-step transition probability calculations

n-Step Transition Probability

Probability of going from state i to state j in exactly n steps:

p_{ij}^{(n)} = P\{X_{m+n}=j \mid X_m=i\}, \quad \forall m \geq 0

Chapman-Kolmogorov Equation

Fundamental recurrence relation:

p_{ij}^{(u+v)} = \sum_{k \in I} p_{ik}^{(u)} p_{kj}^{(v)}

Matrix form: \mathbf{P}^{(u+v)} = \mathbf{P}^{(u)} \mathbf{P}^{(v)}

Matrix Power Formula

n-step transition matrix as matrix power:

\mathbf{P}^{(n)} = \mathbf{P}^n = \underbrace{\mathbf{P} \times \mathbf{P} \times \cdots \times \mathbf{P}}_{n \text{ times}}
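Both results are easy to verify numerically. A minimal Python sketch with pure-list matrix multiplication; the 2-state chain is a hypothetical example:

```python
# n-step transition probabilities via repeated matrix multiplication,
# plus a numerical check of Chapman-Kolmogorov: P^(2+3) == P^2 * P^3.

def mat_mul(A, B):
    """Multiply two matrices stored as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_pow(P, n):
    """Compute P^n starting from the identity matrix."""
    result = [[float(i == j) for j in range(len(P))] for i in range(len(P))]
    for _ in range(n):
        result = mat_mul(result, P)
    return result

P = [[0.9, 0.1],
     [0.5, 0.5]]

lhs = mat_pow(P, 5)                          # P^(2+3)
rhs = mat_mul(mat_pow(P, 2), mat_pow(P, 3))  # P^2 * P^3
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
           for i in range(2) for j in range(2))
print(lhs[0][1])  # probability of state 0 -> state 1 in 5 steps
```

Each power of a stochastic matrix is again stochastic, so the rows of `lhs` still sum to 1.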
3. State Classification Formulas
Hitting probabilities, recurrence, and periodicity
First Passage Time

Time to first reach state j from i:

\tau_{ij} = \min\{n \geq 1 : X_n = j \mid X_0 = i\}
Hitting Probability

Probability of ever reaching j from i:

f_{ij} = P\{\tau_{ij} < \infty \mid X_0 = i\}

Recurrence Classification

Recurrent State:

f_{ii} = 1 \iff \sum_{n=1}^{\infty} p_{ii}^{(n)} = \infty

Transient State:

f_{ii} < 1 \iff \sum_{n=1}^{\infty} p_{ii}^{(n)} < \infty
Mean Return Time

Expected return time to state i:

\mu_i = E[\tau_{ii} \mid X_0 = i]
Period

Greatest common divisor of return times:

d(i) = \gcd\{n \geq 1 : p_{ii}^{(n)} > 0\}

Positive vs Null Recurrence

Positive Recurrent:

μᵢ < ∞ (finite mean return time)

Null Recurrent:

μᵢ = ∞ (infinite mean return time)
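The period definition translates directly into code: take the gcd of the step counts n at which return is possible. A minimal Python sketch, assuming it is enough to scan the first `max_n` powers of P (a truncation, not the full infinite set):

```python
from math import gcd

def period(P, i, max_n=20):
    """gcd of return times n <= max_n with p_ii^(n) > 0 (a truncated
    approximation of d(i); exact for small chains like these)."""
    d = 0
    Pn = [row[:] for row in P]                     # Pn = P^n, starting at n = 1
    for n in range(1, max_n + 1):
        if Pn[i][i] > 0:
            d = gcd(d, n)
        Pn = [[sum(Pn[a][k] * P[k][b] for k in range(len(P)))
               for b in range(len(P))] for a in range(len(P))]
    return d

# Hypothetical examples: a deterministic 2-cycle (period 2) and a
# chain with a self-loop (aperiodic, period 1).
cycle = [[0.0, 1.0],
         [1.0, 0.0]]
lazy = [[0.5, 0.5],
        [0.5, 0.5]]

print(period(cycle, 0))  # 2
print(period(lazy, 0))   # 1
```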

4. Stationary Distributions
Equilibrium behavior and long-term probabilities

Stationary Distribution Definition

Probability distribution π that satisfies:

\boldsymbol{\pi} = \boldsymbol{\pi} \mathbf{P}, \quad \sum_{j} \pi_j = 1, \quad \pi_j \geq 0

Balance Equations

Component-wise balance equations:

\pi_j = \sum_{i \in I} \pi_i p_{ij}, \quad \forall j \in I

"Total probability flow into state j equals the stationary probability of state j"

Ergodic Theorem

For irreducible, aperiodic, positive recurrent chains:

\pi_j = \frac{1}{\mu_j}, \quad \lim_{n \to \infty} p_{ij}^{(n)} = \pi_j

Existence & Uniqueness

Irreducible + Aperiodic + Positive Recurrent: Unique stationary distribution

Irreducible + Transient or Null Recurrent: No stationary distribution

Reducible: Multiple stationary distributions possible
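For a finite ergodic chain the stationary distribution can be found by power iteration: start from any distribution and repeatedly apply π ← πP until it stops changing. A minimal Python sketch on a hypothetical 2-state chain whose exact answer is π = (5/6, 1/6):

```python
# Stationary distribution by power iteration: pi <- pi P.

def stationary(P, iters=200):
    """Approximate the stationary distribution of a finite ergodic chain."""
    n = len(P)
    pi = [1.0 / n] * n                         # start from the uniform distribution
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

P = [[0.9, 0.1],
     [0.5, 0.5]]

pi = stationary(P)
print(pi)  # close to [5/6, 1/6]: solves pi = pi P with sum 1
```

For this chain the exact balance equation 0.1·π₀ = 0.5·π₁ with π₀ + π₁ = 1 gives π₀ = 5/6, so the iteration can be checked against a hand computation.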

5. Absorption Probabilities & Gambler's Ruin
Finite-time absorption and expected hitting times

Gambler's Ruin Setup

Player starts with i dollars, total capital m, win probability p per game.

• States: {0, 1, 2, ..., m} where 0 and m are absorbing

• Transitions: i → i+1 (prob. p), i → i-1 (prob. q = 1-p)

Fair Game (p = 0.5)

Ruin Probability:

h_i = \frac{m - i}{m}

Expected Duration:

E[T_i] = i(m - i)
Unfair Game (p ≠ 0.5)

Ruin Probability:

h_i = \frac{(q/p)^i - (q/p)^m}{1 - (q/p)^m}

Expected Duration:

E[T_i] = \frac{i}{q-p} - \frac{m}{q-p} \cdot \frac{(q/p)^i - 1}{(q/p)^m - 1}

General Absorption Probabilities

For absorbing Markov chains with transient states T and absorbing states A:

\mathbf{h} = (\mathbf{I} - \mathbf{Q})^{-1} \mathbf{R}

where \mathbf{Q} = transitions within T, \mathbf{R} = transitions from T to A

Expected Hitting Times

Mean time to absorption from transient states:

\mathbf{t} = (\mathbf{I} - \mathbf{Q})^{-1} \mathbf{1}

where \mathbf{1} is the vector of ones and (\mathbf{I} - \mathbf{Q})^{-1} is the fundamental matrix
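In practice one solves the linear systems (I − Q)h = r and (I − Q)t = 1 rather than inverting. A minimal Python sketch with a plain Gaussian solve, applied to the fair gambler's ruin with m = 3 (a hypothetical example whose answers are known from the closed forms above):

```python
# Absorption probabilities and expected hitting times via the
# fundamental-matrix equations, solved by Gaussian elimination.

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Fair gambler's ruin, m = 3: transient states {1, 2}, absorbing {0, 3}.
p = q = 0.5
Q = [[0.0, p],
     [q, 0.0]]
ImQ = [[(1.0 if i == j else 0.0) - Q[i][j] for j in range(2)] for i in range(2)]

h = solve(ImQ, [q, 0.0])     # ruin probabilities from states 1 and 2
t = solve(ImQ, [1.0, 1.0])   # expected times to absorption
print(h)  # [2/3, 1/3], matching (m-i)/m
print(t)  # [2.0, 2.0], matching i*(m-i)
```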

6. Special Results & Applications
Important theorems and application-specific formulas

PageRank Algorithm

Google's PageRank as stationary distribution:

\boldsymbol{\pi} = \frac{1-d}{N} \mathbf{1} + d \mathbf{M} \boldsymbol{\pi}

where d is the damping factor (≈ 0.85), \mathbf{M} is the column-stochastic link matrix, and N is the number of pages
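The fixed point can be found by simple iteration. A minimal Python sketch on a hypothetical 3-page link graph, assuming every page has at least one outgoing link (dangling pages need extra handling not shown here):

```python
# PageRank by iterating pi <- (1-d)/N * 1 + d * M * pi.
# M[i][j] = 1/outdegree(j) if page j links to page i, else 0,
# so each column of M sums to 1.

d, N = 0.85, 3

# Hypothetical links: page 0 -> {1, 2}, page 1 -> {2}, page 2 -> {0}.
M = [[0.0, 0.0, 1.0],
     [0.5, 0.0, 0.0],
     [0.5, 1.0, 0.0]]

pi = [1.0 / N] * N
for _ in range(100):
    pi = [(1 - d) / N + d * sum(M[i][j] * pi[j] for j in range(N))
          for i in range(N)]

print([round(x, 3) for x in pi])  # page 2, with two in-links, ranks highest
```

Because the update preserves the total probability, `sum(pi)` stays 1 throughout the iteration.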

Birth-Death Process

Stationary distribution for birth-death chains:

\pi_j = \pi_0 \prod_{k=1}^{j} \frac{\lambda_{k-1}}{\mu_k}, \quad j \geq 1

where λₖ = birth rate, μₖ = death rate at state k
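For a finite birth-death chain the product formula gives unnormalized weights, and π₀ is fixed by making them sum to 1. A minimal Python sketch; the rates below are hypothetical:

```python
# Stationary distribution of a finite birth-death chain with states
# {0, 1, 2, 3} from the product formula, then normalized.

lam = [0.6, 0.4, 0.2]        # birth rates lambda_0, lambda_1, lambda_2
mu = [None, 0.5, 0.5, 0.5]   # death rates mu_1..mu_3 (mu_0 unused)

weights = [1.0]              # unnormalized pi_0
for j in range(1, 4):
    weights.append(weights[-1] * lam[j - 1] / mu[j])

Z = sum(weights)             # normalizing constant
pi = [w / Z for w in weights]
print([round(x, 3) for x in pi])
```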

Detailed Balance

Sufficient condition for stationary distribution:

\pi_i p_{ij} = \pi_j p_{ji}, \quad \forall i, j \in I

"Flow from i to j = flow from j to i"
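Detailed balance is a pairwise condition and is cheap to test. A minimal Python sketch for a reversible chain (a lazy random walk on a 3-vertex path, a hypothetical example):

```python
# Check detailed balance pi_i p_ij = pi_j p_ji for every pair of states.

def detailed_balance(pi, P, tol=1e-12):
    """True if pi satisfies detailed balance with respect to P."""
    n = len(P)
    return all(abs(pi[i] * P[i][j] - pi[j] * P[j][i]) < tol
               for i in range(n) for j in range(n))

P = [[0.5, 0.5, 0.0],
     [0.25, 0.5, 0.25],
     [0.0, 0.5, 0.5]]
pi = [0.25, 0.5, 0.25]       # stationary distribution of this walk

print(detailed_balance(pi, P))  # True: the walk is reversible
```

Detailed balance implies the global balance equations (sum both sides over i), which is why it is a sufficient condition for stationarity.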

Convergence Rate

For finite irreducible aperiodic chains:

\left| p_{ij}^{(n)} - \pi_j \right| \leq C \rho^n

where ρ < 1 is the modulus of the second-largest eigenvalue of \mathbf{P} and C is a constant
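The geometric rate is visible numerically: successive gaps shrink by a factor approaching ρ. A minimal Python sketch on a hypothetical 2-state chain whose eigenvalues are 1 and 0.4 (so ρ = 0.4):

```python
# Empirical convergence rate: the gap |p_00^(n) - pi_0| shrinks by a
# factor of rho = 0.4 per step for this chain.

P = [[0.9, 0.1],
     [0.5, 0.5]]             # eigenvalues 1 and 0.4
pi0 = 5 / 6                  # stationary probability of state 0

row = [1.0, 0.0]             # start in state 0; row = distribution at time n
gaps = []
for _ in range(10):
    row = [row[0] * P[0][0] + row[1] * P[1][0],
           row[0] * P[0][1] + row[1] * P[1][1]]
    gaps.append(abs(row[0] - pi0))

ratio = gaps[-1] / gaps[-2]
print(round(ratio, 6))  # 0.4
```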

Quick Reference Summary
Essential formulas at a glance
Basic Properties
  • Markov Property: P{X(n+1)=j | X0,...,Xn} = P{X(n+1)=j | Xn}
  • Chapman-Kolmogorov: p(u+v)_ij = Σ_k p(u)_ik p(v)_kj
  • Matrix Powers: P(n) = P^n
State Classification
  • Recurrent: f_ii = 1 ⟺ Σ_n p(n)_ii = ∞
  • Positive Recurrent: μ_i < ∞
  • Period: d(i) = gcd{n : p(n)_ii > 0}
Stationary Distribution
  • Balance: π = πP, Σ_j π_j = 1
  • Ergodic: π_j = 1/μ_j
  • Detailed Balance: π_i p_ij = π_j p_ji
Gambler's Ruin
  • Fair: h_i = (m-i)/m, E[T_i] = i(m-i)
  • Unfair: h_i = [(q/p)^i - (q/p)^m]/[1 - (q/p)^m]
