MathIsimple

Stochastic Processes – Problem 40: Prove that $X_1, \ldots, X_n$ form a Markov chain

Question

Let $\beta > 0$ and $n \geq 1$. Let $X_1, \ldots, X_n$ be random variables taking values in $\{-1, 1\}$ such that

$$\mathbb{P}(X_i = \sigma_i \text{ for } i = 1, \dots, n) = \frac{1}{Z}\, e^{\beta \sum_{i=1}^{n-1} \sigma_i \sigma_{i+1}}.$$

The above holds for any sequence $(\sigma_i)_i$ taking values $\pm 1$; here $Z$ is a normalizing constant ensuring $\mathbb{P}$ has total mass 1 ($Z$ does not depend on $(\sigma_i)_i$).

(a) Prove that $X_1, \ldots, X_n$ form a Markov chain.

(b) Compute its transition probabilities.

(c) Compute $\mathbb{E}(X_1 X_n)$.

Step-by-step solution

Step 1. (a) Prove that $X_1, \ldots, X_n$ form a Markov chain.

We must verify the Markov property:

$$\mathbb{P}(X_{k+1} = \sigma_{k+1} \mid X_k = \sigma_k, \dots, X_1 = \sigma_1) = \mathbb{P}(X_{k+1} = \sigma_{k+1} \mid X_k = \sigma_k).$$

Summing the joint distribution over $\sigma_{k+1}, \dots, \sigma_n$ gives the marginal

$$\mathbb{P}(X_1 = \sigma_1, \dots, X_k = \sigma_k) = \frac{1}{Z_k}\, e^{\beta \sum_{i=1}^{k-1} \sigma_i \sigma_{i+1}},$$

where $Z_k = Z / (2\cosh\beta)^{n-k}$: each sum over $\sigma_{j+1} \in \{-1, 1\}$ contributes a factor $e^{\beta\sigma_j} + e^{-\beta\sigma_j} = 2\cosh\beta$, independent of $\sigma_j$. The conditional probability is therefore

$$\mathbb{P}(X_{k+1} = \sigma_{k+1} \mid X_1 = \sigma_1, \dots, X_k = \sigma_k) = \frac{\mathbb{P}(X_1 = \sigma_1, \dots, X_{k+1} = \sigma_{k+1})}{\mathbb{P}(X_1 = \sigma_1, \dots, X_k = \sigma_k)} = \frac{Z_k}{Z_{k+1}} \exp(\beta \sigma_k \sigma_{k+1}).$$

This depends only on $\sigma_k$ and $\sigma_{k+1}$, not on $\sigma_1, \dots, \sigma_{k-1}$, so the Markov property holds.
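The cancellation argument can be checked numerically by brute-force enumeration; a minimal sketch (the values of `beta` and `n` below are arbitrary illustrative choices, not from the problem):

```python
import itertools
import math

beta, n = 0.7, 5  # arbitrary illustrative parameters

# Unnormalized weight exp(beta * sum_i sigma_i sigma_{i+1}) of a configuration
def weight(sigma):
    return math.exp(beta * sum(sigma[i] * sigma[i + 1] for i in range(len(sigma) - 1)))

Z = sum(weight(s) for s in itertools.product([-1, 1], repeat=n))

def marginal(prefix):
    # P(X_1 = prefix[0], ..., X_k = prefix[k-1]): sum the joint over the tail
    tails = itertools.product([-1, 1], repeat=n - len(prefix))
    return sum(weight(prefix + t) for t in tails) / Z

# For each value of sigma_{k+1}, the conditional given the whole history
# should depend only on sigma_k: collect (sigma_k, conditional) pairs
k = 3
for s_next in (-1, 1):
    pairs = {(h[-1], round(marginal(h + (s_next,)) / marginal(h), 12))
             for h in itertools.product([-1, 1], repeat=k)}
    assert len(pairs) == 2  # exactly one conditional value per value of sigma_k
print("Markov property holds for all histories of length", k)
```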

(b) By (a), the transition probability has the form $P(\sigma_{k+1} \mid \sigma_k) = C(\sigma_k)\, e^{\beta \sigma_k \sigma_{k+1}}$. Normalizing over $\sigma_{k+1} \in \{-1, 1\}$ requires $C(\sigma_k)(e^{\beta} + e^{-\beta}) = 1$, so $C = \frac{1}{2\cosh\beta}$ independently of $\sigma_k$. Thus:

$$P(\sigma_{k+1} \mid \sigma_k) = \frac{e^{\beta \sigma_k \sigma_{k+1}}}{2 \cosh \beta}.$$
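A quick numerical sanity check of this formula (the value of `beta` is an arbitrary illustrative choice):

```python
import math

beta = 0.7  # arbitrary illustrative value

def transition(s, s_next):
    # P(X_{k+1} = s' | X_k = s) = exp(beta * s * s') / (2 cosh beta)
    return math.exp(beta * s * s_next) / (2 * math.cosh(beta))

for s in (-1, 1):
    # each row of the transition matrix sums to 1
    assert abs(transition(s, -1) + transition(s, 1) - 1.0) < 1e-12

# beta > 0 favors aligned neighbors: staying is more likely than flipping
assert transition(1, 1) > transition(1, -1)
```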

(c) The one-step conditional expectation is

$$\mathbb{E}(X_{k+1} \mid X_k) = (+1)\,\frac{e^{\beta X_k}}{2\cosh\beta} + (-1)\,\frac{e^{-\beta X_k}}{2\cosh\beta} = \frac{\sinh(\beta X_k)}{\cosh\beta} = X_k \tanh\beta,$$

using $\sinh(\beta X_k) = X_k \sinh\beta$ for $X_k = \pm 1$. Iterating with the tower property gives $\mathbb{E}(X_n \mid X_1) = X_1 (\tanh\beta)^{n-1}$. Since $X_1^2 = 1$:

$$\mathbb{E}(X_1 X_n) = \mathbb{E}\big[X_1\, \mathbb{E}(X_n \mid X_1)\big] = (\tanh \beta)^{n-1}.$$
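The closed form can be verified against direct enumeration of all $2^n$ configurations; a minimal sketch (arbitrary illustrative `beta` and `n`):

```python
import itertools
import math

beta, n = 0.3, 6  # arbitrary illustrative parameters

configs = list(itertools.product([-1, 1], repeat=n))
weights = [math.exp(beta * sum(s[i] * s[i + 1] for i in range(n - 1)))
           for s in configs]
Z = sum(weights)

# E(X_1 X_n) by direct enumeration over all 2^n configurations
corr = sum(w * s[0] * s[-1] for w, s in zip(weights, configs)) / Z

assert abs(corr - math.tanh(beta) ** (n - 1)) < 1e-12
print(f"E(X_1 X_n) = {corr:.6f} = tanh(beta)^(n-1)")
```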

Final answer

(a) QED. (b) $P(\sigma_{k+1} \mid \sigma_k) = \dfrac{e^{\beta \sigma_k \sigma_{k+1}}}{2 \cosh \beta}$. (c) $\mathbb{E}(X_1 X_n) = (\tanh \beta)^{n-1}$.

Marking scheme

The following is the rubric for this discrete probability / Ising model problem.


1. Checkpoints (max 7 pts total)

(a) Prove the Markov chain property (2 pts)

  • Using the conditional probability definition, write $\frac{P(X_1, \dots, X_{k+1})}{P(X_1, \dots, X_k)}$ and show the factors for $i = 1, \dots, k-1$ cancel: 1 pt
  • Show that the conditional probability is independent of $X_1, \dots, X_{k-1}$: 1 pt

(b) Compute transition probabilities (2 pts)

  • Identify the unnormalized form proportional to $e^{\beta \sigma_k \sigma_{k+1}}$: 1 pt
  • Compute the normalizing constant $C = \frac{1}{2\cosh \beta}$ and write the final formula: 1 pt

(c) Compute $\mathbb{E}(X_1 X_n)$ (3 pts)

Score exactly one chain:

  • Chain A: Matrix eigenvalue method
    • Find the nontrivial eigenvalue $\lambda_2 = \tanh \beta$: 1 pt
    • Derive the $n$-step transition property: 1 pt
    • Obtain the final result $(\tanh \beta)^{n-1}$: 1 pt
  • Chain B: Recursive conditional expectation
    • Compute $\mathbb{E}(X_{k+1} \mid X_k) = X_k \tanh \beta$: 1 pt
    • Iterate to get $\mathbb{E}(X_n \mid X_1) = X_1 (\tanh \beta)^{n-1}$: 1 pt
    • Use $\mathbb{E}(X_1 X_n) = \mathbb{E}[X_1\, \mathbb{E}(X_n \mid X_1)]$ and $X_1^2 = 1$: 1 pt
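For graders, Chain A's eigenvalue claims can be sanity-checked numerically; a minimal sketch using numpy (the value of `beta` is an arbitrary illustrative choice, not part of the rubric):

```python
import numpy as np

beta = 0.7  # arbitrary illustrative value

# Transition matrix with states ordered (-1, +1)
a = np.exp(beta) / (2 * np.cosh(beta))   # stay probability
b = np.exp(-beta) / (2 * np.cosh(beta))  # flip probability
P = np.array([[a, b], [b, a]])

lams = np.linalg.eigvalsh(P)  # ascending eigenvalues of the symmetric matrix
assert abs(lams[1] - 1.0) < 1e-12            # lambda_1 = 1
assert abs(lams[0] - np.tanh(beta)) < 1e-12  # lambda_2 = tanh(beta)

# n-step check: E(X_n | X_1 = +1) = Pn[+1,+1] - Pn[+1,-1] = (tanh beta)^(n-1)
n = 8
Pn = np.linalg.matrix_power(P, n - 1)
assert abs((Pn[1, 1] - Pn[1, 0]) - np.tanh(beta) ** (n - 1)) < 1e-12
```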

Total (max 7)


2. Zero-credit items

  • (a) Merely citing a theorem name without derivation from the given joint distribution.
  • (c) Guessing the result without derivation.
  • (c) Merely listing the expectation definition without simplification.

3. Deductions

  • Exponent error (-1): Confusing $n$ with $n-1$ in the exponent of the final result.
  • Variable confusion (-1): Confusing $X_i$ (random variables) with $\sigma_i$ (realizations).
  • Constant not simplified (no deduction): Using the $e^{\beta}, e^{-\beta}$ form instead of hyperbolic functions.
  • Sign error (score cap): If the eigenvalue is $\coth \beta$ or divergent, cap at 1/3.