Modeling systems with memoryless transitions: from random walks to Google's PageRank
A discrete-time stochastic process {X_n} has the Markov property if:
P(X_{n+1} = j | X_n = i, X_{n-1} = i_{n-1}, ..., X_0 = i_0) = P(X_{n+1} = j | X_n = i).
The future depends only on the current state, not on the history. This is the "memoryless property".
The one-step transition probabilities form a matrix P = (P_ij), where:
P_ij = P(X_{n+1} = j | X_n = i).
Properties: P_ij ≥ 0 for all i, j, and Σ_j P_ij = 1 for all i (each row is a probability distribution).
The probability of going from i to j in n steps:
P_ij^(n) = P(X_{m+n} = j | X_m = i).
In matrix form: P^(n) = P^n, the n-th matrix power of P.
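The relation between the n-step matrix and matrix powers is easy to verify numerically. A minimal sketch in Python/NumPy, using a made-up two-state transition matrix:

```python
import numpy as np

# Hypothetical 2-state chain (matrix invented for illustration).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# n-step transition matrix: P^(n) = P^n.
P3 = np.linalg.matrix_power(P, 3)

# Each row of P^n is still a probability distribution.
print(P3)
print(P3.sum(axis=1))  # rows sum to 1
```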
Recurrent: State i is recurrent if the probability of ever returning to i, starting from i, equals 1 (f_ii = 1).
Transient: State i is transient if f_ii < 1, i.e. there is positive probability of never returning.
The period of state i is the GCD of all n ≥ 1 such that P_ii^(n) > 0. If the period is 1, the state is aperiodic.
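The period can be computed directly from this definition by scanning powers of P. A sketch using a deterministic 3-cycle (a chain invented here for illustration), whose states all have period 3:

```python
import numpy as np
from math import gcd
from functools import reduce

# Deterministic 3-cycle: 0 -> 1 -> 2 -> 0; every return to a state
# takes a multiple of 3 steps, so each state has period 3.
P = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [1., 0., 0.]])

def period(P, i, max_n=50):
    """GCD of all n (up to max_n) with P^n[i,i] > 0."""
    ns = [n for n in range(1, max_n + 1)
          if np.linalg.matrix_power(P, n)[i, i] > 0]
    return reduce(gcd, ns)

print(period(P, 0))  # 3
```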
A probability distribution π is stationary if:
π P = π.
Component-wise: π_j = Σ_i π_i P_ij for all j (balance equations), together with Σ_j π_j = 1.
For an irreducible, aperiodic, positive recurrent Markov chain:
lim_{n→∞} P_ij^(n) = π_j for every starting state i.
The n-step transition probabilities converge to the unique stationary distribution, regardless of the starting state.
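This convergence is visible numerically: for large n, every row of P^n becomes (approximately) the same vector π. A sketch with an assumed two-state chain:

```python
import numpy as np

# Hypothetical irreducible, aperiodic 2-state chain.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# For large n, all rows of P^n approach the stationary distribution,
# so the starting state no longer matters.
Pn = np.linalg.matrix_power(P, 100)
print(Pn)  # both rows are (approximately) equal
```

For this particular matrix the common row is π = (5/6, 1/6), its stationary distribution.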
Problem:
Find the stationary distribution of the two-state chain with transition matrix P = ((1-a, a), (b, 1-b)), where 0 < a, b < 1.
Solution:
Solve π P = π and π_0 + π_1 = 1:
The first balance equation, π_0(1-a) + π_1 b = π_0, reduces to a π_0 = b π_1.
Solving: π = (b/(a+b), a/(a+b)).
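A quick numerical check of the closed form π = (b/(a+b), a/(a+b)) for a two-state chain, with illustrative values for the switching probabilities a and b:

```python
import numpy as np

# Two-state chain; a and b are illustrative values, not from the text.
a, b = 0.3, 0.2
P = np.array([[1 - a, a],
              [b, 1 - b]])

# Solve pi = pi P together with pi.sum() == 1 by replacing one
# (redundant) balance equation with the normalization constraint.
A = np.vstack([(P.T - np.eye(2))[0], np.ones(2)])
pi = np.linalg.solve(A, np.array([0.0, 1.0]))
print(pi)  # should match (b/(a+b), a/(a+b))
```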
Problem:
A particle moves on states {0,1,2}, stepping right with probability p and left with probability q = 1-p; the boundaries are reflecting (from 0 it moves to 1, and from 2 it moves to 1, with probability 1). Classify the states.
Solution:
All states communicate, so the chain is irreducible. A finite irreducible chain is positive recurrent, so all states are recurrent. With purely reflecting boundaries, however, every return to a state takes an even number of steps, so each state has period 2: the chain is periodic, not aperiodic.
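The classification can be checked mechanically from the transition matrix. The sketch below (with an illustrative value of p) tests irreducibility and computes the period of state 0 from its definition:

```python
import numpy as np
from math import gcd
from functools import reduce

p = 0.6          # illustrative value
q = 1 - p

# Reflecting walk on {0, 1, 2}: the interior state moves right with
# probability p and left with probability q; each boundary bounces
# back to state 1 with probability 1.
P = np.array([[0., 1., 0.],
              [q,  0., p ],
              [0., 1., 0.]])

# Irreducible: some power of (I + P) is strictly positive.
reach = np.linalg.matrix_power(np.eye(3) + P, 3)
print((reach > 0).all())   # True -> all states communicate

# Period of state 0: gcd of all n with P^n[0,0] > 0.
ns = [n for n in range(1, 21) if np.linalg.matrix_power(P, n)[0, 0] > 0]
print(reduce(gcd, ns))     # 2 -> every return takes an even number of steps
```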
Problem:
A gambler starts with $i and wins $1 with probability p, loses $1 with probability q=1-p. States 0 and N are absorbing. Find absorption probabilities.
Solution:
Let h_i be the probability of reaching N before 0 starting from i. Using first-step analysis:
h_i = p h_{i+1} + q h_{i-1} for 0 < i < N, with boundary conditions h_0 = 0 and h_N = 1.
Solving this recurrence gives the absorption probabilities: h_i = (1 - (q/p)^i) / (1 - (q/p)^N) when p ≠ 1/2, and h_i = i/N when p = 1/2.
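The first-step equations form a small linear system that can be solved directly and compared against the closed form for p ≠ 1/2 (the parameter values below are illustrative):

```python
import numpy as np

# Gambler's ruin on {0, ..., N}; p and N are illustrative values.
p, N = 0.6, 5
q = 1 - p

# First-step analysis: h_i = p*h_{i+1} + q*h_{i-1}, h_0 = 0, h_N = 1.
# Linear system over the interior states i = 1, ..., N-1.
A = np.zeros((N - 1, N - 1))
b = np.zeros(N - 1)
for k, i in enumerate(range(1, N)):
    A[k, k] = 1.0
    if i - 1 >= 1:
        A[k, k - 1] = -q
    if i + 1 <= N - 1:
        A[k, k + 1] = -p
    if i + 1 == N:        # h_N = 1 moves to the right-hand side
        b[k] = p
h = np.linalg.solve(A, b)

# Closed form for p != 1/2: h_i = (1 - (q/p)^i) / (1 - (q/p)^N).
r = q / p
closed = [(1 - r**i) / (1 - r**N) for i in range(1, N)]
print(h)
print(np.allclose(h, closed))  # True
```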
The memoryless property (Markov property) means that the future state depends only on the current state, not on the sequence of events that preceded it. Mathematically, P(X_{n+1}=j | X_n=i, X_{n-1}=i_{n-1}, ...) = P(X_{n+1}=j | X_n=i). This greatly simplifies modeling: the current state is a sufficient summary of the entire history.
A stationary distribution π is a vector such that πP = π; if you start in this distribution, you stay in it. A limiting distribution is what the process converges to as n→∞, regardless of the starting state. For irreducible, aperiodic, positive recurrent chains, they are the same.
A chain can have more than one stationary distribution if it is reducible (has multiple closed communicating classes). For example, if the chain has two disconnected components, any convex combination of their individual stationary distributions is also stationary. An irreducible chain, by contrast, has at most one stationary distribution.
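A concrete sketch: a block-diagonal (reducible) chain invented for illustration, for which several distinct distributions all satisfy π P = π:

```python
import numpy as np

# Reducible chain with two closed communicating classes {0,1} and {2,3}.
P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.5, 0.5, 0.0, 0.0],
              [0.0, 0.0, 0.9, 0.1],
              [0.0, 0.0, 0.5, 0.5]])

pi_a = np.array([0.5, 0.5, 0.0, 0.0])   # stationary on the first class
pi_b = np.array([0.0, 0.0, 5/6, 1/6])   # stationary on the second class
mix = 0.3 * pi_a + 0.7 * pi_b           # any convex mixture is also stationary

for pi in (pi_a, pi_b, mix):
    print(np.allclose(pi @ P, pi))  # True each time
```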
An ergodic Markov chain is one that is irreducible, aperiodic, and positive recurrent. For such chains, the time average of the process equals the state space average (stationary distribution) with probability 1. It means the system explores all states proportionally to their stationary probabilities.
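The ergodic theorem can be illustrated by simulation: over a long run, the empirical visit frequencies approach the stationary distribution. A sketch with an assumed two-state chain:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ergodic 2-state chain; its stationary distribution
# is pi = (5/6, 1/6).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
pi = np.array([5/6, 1/6])

# Long-run fraction of time spent in each state approaches pi,
# regardless of the start state.
n_steps = 100_000
state = 0
counts = np.zeros(2)
for _ in range(n_steps):
    counts[state] += 1
    state = rng.choice(2, p=P[state])
freq = counts / n_steps
print(freq)  # close to [0.833..., 0.166...]
```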
If P is the 1-step transition matrix, the n-step transition matrix P^(n) is simply the matrix power P raised to n. This is a direct consequence of the Chapman-Kolmogorov equations.
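A numerical check of the Chapman-Kolmogorov identity P^(m+n) = P^(m) P^(n), with a made-up 3-state matrix:

```python
import numpy as np

# Hypothetical 3-state chain (matrix invented for illustration).
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

# Chapman-Kolmogorov: composing m steps then n steps gives m+n steps.
m, n = 2, 3
lhs = np.linalg.matrix_power(P, m + n)
rhs = np.linalg.matrix_power(P, m) @ np.linalg.matrix_power(P, n)
print(np.allclose(lhs, rhs))  # True
```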