MathIsimple

Stochastic Processes – Problem 3: Prove

Question

Under probability measure $\mathbb{P}$, let $X_t$ be a Markov chain on state space $\Omega$ with transition kernel $p(x,y)$ and invariant distribution $\pi$. Define a subset-valued Markov chain $S_t$ by: given $S_t = S \subset \Omega$, sample $U \sim \mathrm{Uniform}[0,1]$ independently and set
\[
S_{t+1}=\left\{y\in\Omega:\frac{\sum_{x\in S}\pi(x)p(x,y)}{\pi(y)}\ge U\right\}.
\]
Let $P$ denote the law of $S_t$, and let $P^t(x,y):=\mathbb{P}(X_t=y\mid X_0=x)$. Prove that
\[
P^t(x,y)=\frac{\pi(y)}{\pi(x)}\,P(y\in S_t\mid S_0=\{x\}).
\]
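The subset-valued update can be sketched numerically. The following is a minimal illustration, assuming a small hypothetical 3-state chain; the transition matrix `P` below is illustrative, not part of the problem, and `pi` is computed as its left Perron eigenvector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ergodic 3-state transition matrix (rows sum to 1).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

# Invariant distribution: left eigenvector of P for eigenvalue 1, normalized.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()

def evolve(S, rng):
    """One step of the subset chain: S_{t+1} = {y : a_S(y) >= U}."""
    a = np.array([sum(pi[z] * P[z, y] for z in S) / pi[y]
                  for y in range(len(pi))])
    U = rng.uniform()
    return [y for y in range(len(pi)) if a[y] >= U]

S = [0]                      # S_0 = {x} with x = 0
for _ in range(5):
    S = evolve(S, rng)
```

Note that the whole state space and the empty set are absorbing for this subset chain, since $a_\Omega(y)=1$ and $a_\emptyset(y)=0$ for every $y$.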

Step-by-step solution

Step 1. For any fixed subset $S \subset \Omega$, define
\[
a_S(y):=\frac{\sum_{z\in S}\pi(z)p(z,y)}{\pi(y)}.
\]
Clearly $a_S(y)\ge 0$, and because $\pi$ is invariant, $\sum_{z\in S}\pi(z)p(z,y)\le\sum_{z\in\Omega}\pi(z)p(z,y)=\pi(y)$; hence $0 \le a_S(y) \le 1$.

Step 2. From the update rule with $U \sim \mathrm{Uniform}[0,1]$,
\[
P(y\in S_{t+1}\mid S_t=S)=P(U\le a_S(y))=a_S(y)=\frac{\sum_{z\in S}\pi(z)p(z,y)}{\pi(y)}.
\]
Taking expectation over $S_t$ and using linearity,
\[
P(y\in S_{t+1}\mid S_0=\{x\})=\frac{1}{\pi(y)}\sum_{z\in\Omega}\pi(z)p(z,y)\,P(z\in S_t\mid S_0=\{x\}).
\]
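The one-step law can be checked by Monte Carlo. This is a sketch on a hypothetical 3-state chain (the matrix `P` and the subset `S` below are illustrative choices, not from the problem): the empirical frequency of $y\in S_{t+1}$ over many independent draws of $U$ should match $a_S(y)$ up to sampling error.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 3-state chain; pi is its left Perron eigenvector.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()

S = [0, 2]                                     # a fixed subset S
a = np.array([sum(pi[z] * P[z, y] for z in S) / pi[y] for y in range(3)])

# Membership frequency of each y in S_{t+1} over many draws of U:
# y is included exactly when a_S(y) >= U, so the frequency estimates a_S(y).
n = 200_000
U = rng.uniform(size=n)
freq = np.array([(a[y] >= U).mean() for y in range(3)])
```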

Step 3. Define
\[
H_t(x,y):=\frac{\pi(y)}{\pi(x)}\,P(y\in S_t\mid S_0=\{x\}).
\]
Then the relation above becomes
\[
H_{t+1}(x,y)=\sum_{z\in\Omega}H_t(x,z)\,p(z,y).
\]
This is exactly the Chapman-Kolmogorov recursion.

Step 4. Initial condition: when $t=0$, $S_0=\{x\}$, so
\[
H_0(x,y)=\frac{\pi(y)}{\pi(x)}\,\mathbf 1_{\{y=x\}}=\mathbf 1_{\{y=x\}}=P^0(x,y).
\]
Hence, by induction on $t$, $H_t(x,y)=P^t(x,y)$ for all $t\ge 0$, proving the claim.
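The full identity can be verified exactly (up to floating point) on a small chain: iterate the Step 2 recursion for the membership probabilities $m_t(y)=P(y\in S_t\mid S_0=\{x\})$, form $H_t(x,\cdot)=\pi(\cdot)m_t(\cdot)/\pi(x)$, and compare against the matrix power $P^t$. The 3-state matrix `P` below is a hypothetical example, not part of the problem.

```python
import numpy as np
from numpy.linalg import matrix_power

# Hypothetical ergodic 3-state chain; pi is its invariant distribution.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()

x = 0
m = np.zeros(3); m[x] = 1.0        # m_0(y) = P(y in S_0 | S_0 = {x})

Hs = []
for t in range(1, 6):
    m = (pi * m) @ P / pi          # Step 2: m_{t+1}(y) = (1/pi(y)) sum_z pi(z) p(z,y) m_t(z)
    Hs.append(pi * m / pi[x])      # H_t(x, .) = pi(.)/pi(x) * m_t(.)
```

Each `Hs[t-1]` agrees with row `x` of $P^t$, matching the conclusion of the induction.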

Final answer

QED.

Marking scheme

Checkpoints (max 7 points)

Global grading policy

  • Award credit for a logically complete argument, not for isolated formulas.
  • If multiple valid derivations are provided, score the best coherent path.
  • Deduct only for genuine logical gaps (missing conditioning, unjustified limits, or incorrect use of invariance).

Part A: One-step update law (2 points)

  • Correctly define

\[

a_S(y)=\frac{\sum_{z\in S}\pi(z)p(z,y)}{\pi(y)}

\]

for fixed $S\subset\Omega$, and justify $0\le a_S(y)\le 1$ from stationarity of $\pi$. [1 pt]

  • Derive the conditional update probability from the uniform-threshold rule:

\[

P(y\in S_{t+1}\mid S_t=S)=a_S(y).

\]

[1 pt]

Part B: Recursion for the membership probability (2 points)

  • Condition on $S_t$, then average over $S_t$ to obtain

\[

P(y\in S_{t+1}\mid S_0=\{x\})

=\frac{1}{\pi(y)}\sum_{z\in\Omega} \pi(z)\,p(z,y)\,P(z\in S_t\mid S_0=\{x\}).

\]

[1 pt]

  • Introduce

\[

H_t(x,y):=\frac{\pi(y)}{\pi(x)}P(y\in S_t\mid S_0=\{x\})

\]

and derive the linear recursion

\[

H_{t+1}(x,y)=\sum_{z\in\Omega}H_t(x,z)p(z,y).

\]

[1 pt]

Part C: Identification with the $t$-step transition kernel (3 points)

  • Verify the initial condition:

\[

H_0(x,y)=\mathbf 1_{\{x=y\}}=P^0(x,y).

\]

[1 pt]

  • Use induction (or uniqueness of the forward equation solution in discrete time) to conclude

\[

H_t(x,y)=P^t(x,y),\qquad t\ge0.

\]

[1 pt]

  • Rearranging gives the target identity:

\[

P(y\in S_t\mid S_0=\{x\})=\frac{\pi(x)}{\pi(y)}P^t(x,y).

\]

[1 pt]


Common deductions

  • Missing normalization by $\pi(y)$ in $a_S(y)$: deduct up to 1 point.
  • Writing a recursion without conditioning/averaging correctly: deduct 1 point.
  • Claiming the final identity without checking base step or induction: deduct 1 point.
  • Algebraic slips that do not change the method: small deduction (0.25–0.5 point).