
Stochastic Processes Practice 2

Advanced topics: continuous-time MC, compound Poisson, SDEs, branching processes, and limit theorems

8 Problems
Suggested: 2 hours

Instructions

  • Try to solve each problem before viewing the solution
  • Click "Show Solution" to reveal the answer and detailed explanation
  • Focus on understanding the problem-solving methodology
1. Continuous-Time Markov Chain: Transition Probabilities
Problem

A continuous-time Markov chain on {0, 1} has generator matrix:

Q = \begin{pmatrix} -2 & 2 \\ 3 & -3 \end{pmatrix}

(1) Find the transition probability matrix P(t) = e^(Qt).

(2) Find the stationary distribution.

(3) If X(0) = 0, find P(X(1) = 1).

Answer Summary

Work from the generator or holding-time structure and solve the forward or backward equations for transition probabilities.
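For a two-state chain the matrix exponential has a well-known closed form, so the answers can be checked numerically. A minimal standard-library sketch (the rate names `a`, `b` are illustrative labels for the off-diagonal entries of Q):

```python
import math

a, b = 2.0, 3.0          # rates from Q: 0 -> 1 at rate a, 1 -> 0 at rate b
lam = a + b              # relaxation rate (the nonzero eigenvalue of -Q)

def p01(t):
    """P(X(t) = 1 | X(0) = 0): closed form for a two-state chain."""
    return (a / lam) * (1.0 - math.exp(-lam * t))

pi1 = a / lam            # stationary probability of state 1: pi = (b, a)/(a+b)
answer = p01(1.0)        # P(X(1) = 1 | X(0) = 0) = 0.4*(1 - e^{-5}) ≈ 0.3973
```

As t → ∞, p01(t) → pi1 = 0.4, illustrating convergence to the stationary distribution.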

2. Compound Poisson Process
Problem

Claims arrive at an insurance company according to a Poisson process with rate λ = 10 per day. Each claim amount is uniformly distributed on [0, 1000] dollars, independent of arrival times and other claims.

(1) What is the expected total claim amount in one day?

(2) Find the variance of the total claim amount in one day.

(3) What is the probability that total claims exceed 6000 dollars in one day?

Answer Summary

Decompose the model into a Poisson count and i.i.d. jump sizes so moments and distributions can be built in two layers.
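The two-layer structure gives E[S] = λE[Y] and Var(S) = λE[Y²]; for part (3) a normal approximation is one reasonable route (an assumption of this sketch, not the only method; the variable names are mine):

```python
import math

lam = 10.0               # claims per day
EY  = 500.0              # mean of Uniform(0, 1000)
EY2 = 1000.0**2 / 3.0    # second moment of Uniform(0, 1000)

ES   = lam * EY          # E[S] = lambda * E[Y] = 5000
VarS = lam * EY2         # Var(S) = lambda * E[Y^2] for compound Poisson

# Normal approximation: P(S > 6000) ≈ 1 - Phi((6000 - ES)/sd)
z = (6000.0 - ES) / math.sqrt(VarS)
p_exceed = 0.5 * math.erfc(z / math.sqrt(2.0))
```

With λ = 10 the normal approximation is rough; a Monte Carlo simulation of the compound sum would give a sharper estimate.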

3. Ito Calculus and Stochastic Differential Equations
Problem

Consider the SDE: dX(t) = μX(t)dt + σX(t)dB(t) with X(0) = x₀.

(1) Use Ito's lemma to find the SDE for Y(t) = ln(X(t)).

(2) Solve for X(t) (geometric Brownian motion).

(3) Find E[X(t)] and Var(X(t)).

Answer Summary

Apply Ito's formula carefully, keeping both the drift and quadratic-variation terms that ordinary calculus would miss.
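The exact solution X(t) = x₀ exp((μ − σ²/2)t + σB(t)) can be sampled directly, which gives a Monte Carlo check of the moment formulas (parameter values below are arbitrary choices for illustration):

```python
import math
import random

random.seed(0)
mu, sigma, x0, t = 0.1, 0.2, 1.0, 1.0
n = 200_000

# Sample X(t) from the exact GBM solution: B(t) ~ Normal(0, t)
samples = [x0 * math.exp((mu - 0.5 * sigma**2) * t
                         + sigma * math.sqrt(t) * random.gauss(0.0, 1.0))
           for _ in range(n)]
mean_mc = sum(samples) / n

mean_exact = x0 * math.exp(mu * t)                               # E[X(t)]
var_exact  = x0**2 * math.exp(2 * mu * t) * (math.exp(sigma**2 * t) - 1.0)
```

Note E[X(t)] = x₀e^{μt} even though the exponent drifts at rate μ − σ²/2: the Ito correction is exactly offset when averaging over paths.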

4. Gambler's Ruin
Problem

A gambler starts with $5. Each round, they win $1 with probability p = 0.48 or lose $1 with probability q = 0.52.

They play until reaching $0 (ruin) or $10 (target).

(1) Find the probability of ruin starting from $5.

(2) Find the expected number of rounds until the game ends.

Answer Summary

Use the standard ruin recursion or closed-form formula, then distinguish clearly between ruin probability and expected duration.
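Both answers have closed forms when p ≠ q; a short sketch evaluating them for this problem's numbers (variable names are mine):

```python
p, q = 0.48, 0.52
i, N = 5, 10
r = q / p                              # r > 1: game is unfavorable

# P(reach N before 0 | start at i) for p != q
win  = (1 - r**i) / (1 - r**N)
ruin = 1.0 - win                       # part (1): probability of ruin

# Expected duration (standard closed form for p != q)
ET = i / (q - p) - (N / (q - p)) * win # part (2)
```

Even a small edge (q − p = 0.04) pushes the ruin probability well above the fair-game value of 0.5, while the expected duration stays close to the fair-game value i(N − i) = 25.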

5. Branching Process
Problem

In a Galton-Watson branching process, each individual produces k offspring with probability pₖ:

  • p₀ = 0.2
  • p₁ = 0.3
  • p₂ = 0.5

(1) Find the mean offspring m.

(2) Will the population eventually go extinct?

(3) If extinction occurs, find the extinction probability starting with Z₀ = 1.

Answer Summary

Compute the offspring mean first, then solve the generating-function fixed-point equation for extinction probability.
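The extinction probability is the smallest fixed point of the offspring generating function φ(s) on [0, 1], and iterating φ from 0 converges to it. A minimal sketch for this offspring distribution:

```python
p = [0.2, 0.3, 0.5]                        # p0, p1, p2
m = sum(k * pk for k, pk in enumerate(p))  # mean offspring: 1.3 > 1

def phi(s):
    """Offspring probability generating function."""
    return sum(pk * s**k for k, pk in enumerate(p))

# Extinction probability = smallest root of phi(s) = s in [0, 1];
# iterate q_{n+1} = phi(q_n) starting from q_0 = 0
q = 0.0
for _ in range(200):
    q = phi(q)
```

Since m > 1 the chain is supercritical, so extinction is not certain; solving 0.5q² + 0.3q + 0.2 = q by hand gives the roots 0.4 and 1, and the iteration picks out the smaller one.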

6. Stationary Process and Ergodicity
Problem

Let {Xₙ, n ≥ 0} be a stationary stochastic process with mean μ and autocovariance function γ(h) = Cov(Xₙ, Xₙ₊ₕ).

(1) Show that γ(h) = γ(-h).

(2) If γ(h) = σ²ρ^|h| for |ρ| < 1, find the variance of the sample mean X̄ₙ = (1/n)(X₁ + ⋯ + Xₙ).

(3) Under what conditions does X̄ₙ → μ as n → ∞?

Answer Summary

Separate the definitions of stationarity and ergodicity, then test whether time averages and ensemble averages align.
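For part (2), Var(X̄ₙ) = n⁻² Σᵢ Σⱼ γ(i − j), which collapses to a one-sided sum over lags. A numerical sketch checking the two forms agree (the specific values of σ², ρ, n are arbitrary illustrations):

```python
sigma2, rho, n = 1.0, 0.5, 50

def gamma(h):
    """Autocovariance gamma(h) = sigma^2 * rho^|h|."""
    return sigma2 * rho**abs(h)

# Direct double sum: Var(Xbar_n) = (1/n^2) * sum_{i,j} gamma(i - j)
var_direct = sum(gamma(i - j) for i in range(n) for j in range(n)) / n**2

# Equivalent lag form: (1/n) * [gamma(0) + 2*sum_{h=1}^{n-1} (1 - h/n)*gamma(h)]
var_lag = (gamma(0) + 2 * sum((1 - h / n) * gamma(h)
                              for h in range(1, n))) / n
```

Because γ(h) is summable here, Var(X̄ₙ) → 0 and the mean-square law of large numbers for part (3) holds.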

7. Stopped Brownian Motion
Problem

Let B(t) be standard Brownian motion. Define the stopping time T = inf{t: B(t) = 2}.

(1) Show that E[B(T)] = 2.

(2) Find E[T].

(3) Is E[B(T)²] = E[T]?

Answer Summary

Use the definition of the stopping time together with careful optional-stopping conditions so you do not confuse path identities with expected values.
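The reflection principle gives the distribution of T in closed form, P(T ≤ t) = 2P(B(t) ≥ 2), and evaluating it shows why the problem is subtle: T is finite almost surely, yet its tail decays like t^(−1/2), too slowly for E[T] to be finite. A standard-library sketch:

```python
import math

a = 2.0                                    # barrier level

def Phi(x):
    """Standard normal CDF via erfc."""
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

def P_T_le(t):
    """Reflection principle: P(T <= t) = 2 * P(B(t) >= a)."""
    return 2.0 * (1.0 - Phi(a / math.sqrt(t)))

# P(T <= t) -> 1, so B(T) = 2 a.s. and E[B(T)] = 2 ...
p_big_t = P_T_le(1e6)

# ... but the tail P(T > t) shrinks only by ~1/sqrt(t): multiplying t by 100
# divides the tail by roughly 10, so the integral E[T] diverges, and the
# Wald identity E[B(T)^2] = E[T] fails (E[B(T)^2] = 4, E[T] = infinity).
tail_1e4 = 1.0 - P_T_le(1e4)
tail_1e6 = 1.0 - P_T_le(1e6)
```

This is the standard cautionary example for optional stopping: the stopped value is deterministic, but the stopping time has infinite mean.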

8. Limit Theorems for Markov Chains
Problem

Consider an irreducible, aperiodic Markov chain with stationary distribution π.

(1) State the ergodic theorem for Markov chains.

(2) If f is a function on the state space, what does the time average (1/n)∑ₖ₌₀ⁿ⁻¹ f(Xₖ) converge to?

(3) Describe the Central Limit Theorem for Markov chains.

Answer Summary

Check the chain assumptions behind convergence, then connect transition powers or ergodic averages to the limiting distribution.
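The ergodic theorem says the time average of f converges to the stationary expectation ∑ₓ π(x)f(x). A simulation sketch on a small hypothetical two-state chain (the transition matrix below is my own example, not from the problem):

```python
import random

random.seed(1)
# Irreducible, aperiodic two-state chain (hypothetical example)
P = [[0.7, 0.3],
     [0.4, 0.6]]
pi1 = 3.0 / 7.0          # stationary probability of state 1 (solves pi P = pi)

# Time average of f(x) = 1{x = 1} along one long trajectory
def f(x):
    return float(x == 1)

x, total, n = 0, 0.0, 200_000
for _ in range(n):
    total += f(x)
    x = 0 if random.random() < P[x][0] else 1
avg = total / n          # ergodic theorem: converges to pi1 = 3/7
```

The same average computed from any starting state converges to the same limit, which is the point of irreducibility plus aperiodicity; the Markov-chain CLT then describes the n^(−1/2) fluctuations around it.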
