Core Module · 8–12 Hours

Discrete Time Markov Chains

Modeling systems with memoryless transitions: from random walks to Google's PageRank

Markov Property & Definition - The Markov Property

A discrete-time stochastic process $\{X_n, n \geq 0\}$ has the Markov property if:

$P(X_{n+1} = j \mid X_n = i, X_{n-1} = i_{n-1}, \dots, X_0 = i_0) = P(X_{n+1} = j \mid X_n = i)$

The future depends only on the current state, not on the history. This is the "memoryless property".
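
To see the definition in action, here is a minimal simulation sketch in Python with NumPy; the two-state "weather" chain and its numbers are illustrative assumptions, not taken from this page. Each step samples the next state from the current state's row of $P$ alone, never from the earlier path.

```python
import numpy as np

# Hypothetical two-state chain (0 = sunny, 1 = rainy); numbers are made up.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

rng = np.random.default_rng(seed=0)

def simulate(P, x0, n_steps):
    """Simulate a sample path X_0, X_1, ..., X_{n_steps}."""
    path = [x0]
    for _ in range(n_steps):
        # Markov property: the next state is drawn using only the
        # current state's row of P, never the earlier history.
        path.append(int(rng.choice(len(P), p=P[path[-1]])))
    return path

print(simulate(P, x0=0, n_steps=10))
```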

Transition Matrix

The one-step transition probabilities form a matrix $P = (p_{ij})$ where:

$p_{ij} = P(X_{n+1} = j \mid X_n = i)$

Properties: $p_{ij} \geq 0$ and $\sum_j p_{ij} = 1$ for all $i$.
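
Both conditions are easy to verify numerically; a quick sanity-check sketch in Python with NumPy, using the matrix from Example 1 below:

```python
import numpy as np

P = np.array([[0.6, 0.4],
              [0.3, 0.7]])

# A valid transition matrix is row-stochastic:
assert (P >= 0).all()                      # p_ij >= 0
assert np.allclose(P.sum(axis=1), 1.0)     # each row sums to 1
```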

Chapman-Kolmogorov Equations - n-Step Transitions

The probability of going from $i$ to $j$ in $n+m$ steps:

$p_{ij}^{(n+m)} = \sum_{k \in S} p_{ik}^{(n)} p_{kj}^{(m)}$

In matrix form: $P^{(n)} = P^n$
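
In code, the n-step matrix is just a matrix power; a minimal sketch in Python with NumPy, again using the Example 1 matrix:

```python
import numpy as np

P = np.array([[0.6, 0.4],
              [0.3, 0.7]])

# Chapman-Kolmogorov in matrix form: P^(n) = P^n.
P2 = np.linalg.matrix_power(P, 2)
print(P2[0, 1])  # p_01^(2) = 0.6*0.4 + 0.4*0.7 = 0.52
```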

State Classification - Recurrent vs Transient

Recurrent: State $i$ is recurrent if $P(\text{return to } i) = 1$.

Transient: State $i$ is transient if $P(\text{return to } i) < 1$.

Periodicity

The period of state $i$ is the GCD of all $n \geq 1$ such that $p_{ii}^{(n)} > 0$. If the period is 1, the state is aperiodic.
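
The definition translates directly into code; a sketch in Python with NumPy, truncating the GCD computation at a hypothetical cutoff n_max (for a finite chain a modest cutoff already determines the answer):

```python
from math import gcd

import numpy as np

def period(P, i, n_max=50):
    """GCD of all n <= n_max with p_ii^(n) > 0 (0 means no return seen)."""
    d = 0
    Pn = np.eye(len(P))
    for n in range(1, n_max + 1):
        Pn = Pn @ P                 # Pn now holds the n-step matrix P^n
        if Pn[i, i] > 0:
            d = gcd(d, n)
    return d

# A deterministic flip-flop chain returns only at even times: period 2.
print(period(np.array([[0.0, 1.0], [1.0, 0.0]]), i=0))  # -> 2
```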

Stationary Distributions - Definition & Balance Equations

A probability distribution $\pi = (\pi_j)$ is stationary if:

$\pi = \pi P, \quad \sum_j \pi_j = 1, \quad \pi_j \geq 0$

Component-wise: $\pi_j = \sum_i \pi_i p_{ij}$ (balance equations).
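
Together with the normalization constraint, the balance equations form a linear system; a minimal solver sketch in Python with NumPy, illustrated on the Example 1 matrix:

```python
import numpy as np

P = np.array([[0.6, 0.4],
              [0.3, 0.7]])

# pi P = pi is equivalent to (P^T - I) pi^T = 0; stack the normalization
# row sum(pi) = 1 underneath and solve by least squares.
n = len(P)
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.concatenate([np.zeros(n), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)  # -> [0.4286, 0.5714], i.e. (3/7, 4/7)
```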

Fundamental Theorems - Ergodic Theorem

For an irreducible, aperiodic, positive recurrent Markov chain:

$\lim_{n \to \infty} p_{ij}^{(n)} = \pi_j$

The n-step transition probabilities converge to the unique stationary distribution, regardless of the starting state.
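
This convergence is easy to watch numerically; a short sketch in Python with NumPy, using the Example 1 matrix whose stationary distribution is $(3/7, 4/7)$:

```python
import numpy as np

P = np.array([[0.6, 0.4],
              [0.3, 0.7]])

# Both rows of P^n approach (3/7, 4/7): the starting state is forgotten.
for n in (1, 5, 20, 50):
    print(n, np.linalg.matrix_power(P, n))
```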

Worked Examples - Example 1: Two-State Chain

Problem:

Find the stationary distribution for $P = \begin{pmatrix} 0.6 & 0.4 \\ 0.3 & 0.7 \end{pmatrix}$.

Solution:

Solve $\pi = \pi P$ together with $\pi_0 + \pi_1 = 1$:

$\pi_0 = 0.6\pi_0 + 0.3\pi_1, \quad \pi_1 = 0.4\pi_0 + 0.7\pi_1$

Solving: the first equation gives $0.4\pi_0 = 0.3\pi_1$, so $\pi_0 / \pi_1 = 3/4$; combined with $\pi_0 + \pi_1 = 1$, this yields $\pi_0 = 3/7$, $\pi_1 = 4/7$.
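
A two-line numerical check of this answer in Python with NumPy:

```python
import numpy as np

P = np.array([[0.6, 0.4],
              [0.3, 0.7]])
pi = np.array([3/7, 4/7])

# The balance equations hold: pi P reproduces pi.
assert np.allclose(pi @ P, pi)
```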

Example 2: Random Walk on {0,1,2}

Problem:

A particle moves on states {0, 1, 2} with $p_{i,i+1} = 0.5$ and $p_{i,i-1} = 0.5$ at the interior state, and reflecting boundaries ($p_{0,1} = p_{2,1} = 1$). Classify the states.

Solution:

All states communicate, so the chain is irreducible, and since the state space is finite every state is recurrent. Note, however, that the chain alternates between {1} and {0, 2}, so any return to a state takes an even number of steps: each state has period 2, and the chain is periodic rather than aperiodic.
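
A quick numerical confirmation of the period in Python with NumPy; the loop mirrors the GCD definition from the Periodicity section:

```python
from math import gcd

import numpy as np

# Reflecting walk on {0, 1, 2}: the boundary states bounce back to 1.
P = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])

d, Pn = 0, np.eye(3)
for n in range(1, 21):
    Pn = Pn @ P
    if Pn[0, 0] > 0:            # a return to state 0 is possible in n steps
        d = gcd(d, n)
print(d)  # -> 2: state 0 can only be revisited at even times
```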

Example 3: Gambler's Ruin

Problem:

A gambler starts with $i$ dollars and at each play wins 1 with probability $p$ or loses 1 with probability $q = 1 - p$. States $0$ and $N$ are absorbing. Find the absorption probabilities.

Solution:

Let $u_i$ be the probability of reaching $N$ before $0$ when starting from $i$. Using first-step analysis:

$u_i = p u_{i+1} + q u_{i-1}, \quad u_0 = 0, \quad u_N = 1$

Solving this linear recurrence gives $u_i = \dfrac{1 - (q/p)^i}{1 - (q/p)^N}$ when $p \neq q$, and $u_i = i/N$ in the fair case $p = q = 1/2$.
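
The same first-step analysis can also be solved numerically as a tridiagonal linear system; a sketch in Python with NumPy (the function name and the values $N = 10$, $p = 0.5$ are illustrative choices):

```python
import numpy as np

def ruin_win_probs(N, p):
    """u_i = P(reach N before 0 | start at i), via first-step analysis."""
    q = 1 - p
    A = np.zeros((N + 1, N + 1))
    b = np.zeros(N + 1)
    A[0, 0], A[N, N], b[N] = 1.0, 1.0, 1.0      # u_0 = 0, u_N = 1
    for i in range(1, N):
        # u_i = p u_{i+1} + q u_{i-1}  =>  q u_{i-1} - u_i + p u_{i+1} = 0
        A[i, i - 1], A[i, i], A[i, i + 1] = q, -1.0, p
    return np.linalg.solve(A, b)

print(ruin_win_probs(N=10, p=0.5))  # fair game: u_i = i/N
```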

Practice Quiz

1. What is the Markov property?
2. Given transition matrix $P = \begin{pmatrix} 0.6 & 0.4 \\ 0.3 & 0.7 \end{pmatrix}$, find $p_{01}^{(2)}$ (the 2-step transition probability from state 0 to state 1).
3. What property must a transition matrix satisfy?
4. What is a stationary distribution $\pi$?
5. What is the difference between a recurrent and a transient state?
6. What does it mean for a Markov chain to be irreducible?
7. What is an absorbing state?
8. How do you calculate the n-step transition matrix $P^{(n)}$?
9. What is the period of a state?
10. What is the Ergodic Theorem for Markov chains?

Frequently Asked Questions

What exactly is the 'Memoryless Property'?

The memoryless property (Markov property) means that the future state depends only on the current state, not on the sequence of events that preceded it. Mathematically, $P(X_{n+1} = j \mid X_n = i, X_{n-1} = i_{n-1}, \dots) = P(X_{n+1} = j \mid X_n = i)$. This simplifies modeling, since the entire history can be ignored.

What is the difference between a stationary distribution and a limiting distribution?

A stationary distribution $\pi$ is a vector such that $\pi P = \pi$; if you start in this distribution, you stay in it. A limiting distribution is what the process converges to as $n \to \infty$, regardless of the starting state. For irreducible, aperiodic, positive recurrent chains, they are the same.

Can a Markov chain have multiple stationary distributions?

Yes, if the chain is reducible (has multiple closed communicating classes). For example, if you have two disconnected components, any linear combination of their stationary distributions is also stationary. However, an irreducible chain has at most one stationary distribution.
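
A small numerical illustration in Python with NumPy; the block-diagonal matrix is a made-up example with two closed communicating classes:

```python
import numpy as np

# Two disconnected components: {0, 1} and {2, 3} never communicate.
P = np.array([[0.6, 0.4, 0.0, 0.0],
              [0.3, 0.7, 0.0, 0.0],
              [0.0, 0.0, 0.5, 0.5],
              [0.0, 0.0, 0.2, 0.8]])

pi_a = np.array([3/7, 4/7, 0.0, 0.0])   # stationary on the first class
pi_b = np.array([0.0, 0.0, 2/7, 5/7])   # stationary on the second class
mix = 0.5 * pi_a + 0.5 * pi_b           # any convex combination also works

for pi in (pi_a, pi_b, mix):
    assert np.allclose(pi @ P, pi)
```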

What does 'ergodic' mean in this context?

An ergodic Markov chain is one that is irreducible, aperiodic, and positive recurrent. For such chains, the time average of the process equals the state space average (stationary distribution) with probability 1. It means the system explores all states proportionally to their stationary probabilities.
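
The time-average claim can be checked by simulation; a sketch in Python with NumPy, using the illustrative Example 1 chain (the seed and path length are arbitrary choices):

```python
import numpy as np

P = np.array([[0.6, 0.4],
              [0.3, 0.7]])
rng = np.random.default_rng(seed=0)

# Visit frequencies along one long path should approach pi = (3/7, 4/7).
x, counts = 0, np.zeros(2)
for _ in range(100_000):
    counts[x] += 1
    x = int(rng.choice(2, p=P[x]))
print(counts / counts.sum())  # -> roughly [0.4286, 0.5714]
```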

How do I calculate the n-step transition matrix?

If P is the 1-step transition matrix, the n-step transition matrix $P^{(n)}$ is simply the n-th matrix power $P^n$. This is a direct consequence of the Chapman-Kolmogorov equations.
