Question
Suppose the random variables $X_n$ converge in distribution to $X$, and the random variables $Y_n$ converge in distribution to a positive constant $c$. Prove that the random variables $X_n Y_n$ converge in distribution to $cX$.
Step-by-step solution
Step 1. By hypothesis, $Y_n$ converges in distribution to the constant $c$, i.e., $Y_n \xrightarrow{d} c$. By a standard result in probability theory, whenever a sequence of random variables converges in distribution to a constant, it also converges in probability to that constant. Hence $Y_n \xrightarrow{P} c$.

Step 2. Consider the sequence of bivariate random vectors $(X_n, Y_n)$. We have $X_n \xrightarrow{d} X$ and $Y_n \xrightarrow{P} c$. By the preliminary lemma underlying Slutsky's theorem (or by the joint convergence theorem for random vectors), when one component converges in distribution to a random variable and the other converges in probability to a constant, the pair converges in distribution to the vector formed by their respective limits, i.e., $(X_n, Y_n) \xrightarrow{d} (X, c)$.

Step 3. Define the function $g(x, y) = xy$, which is continuous everywhere on $\mathbb{R}^2$. By the Continuous Mapping Theorem, if a sequence of random vectors satisfies $U_n \xrightarrow{d} U$ and $g$ is continuous almost everywhere on the support of $U$, then $g(U_n) \xrightarrow{d} g(U)$.

Step 4. Set $U_n = (X_n, Y_n)$ and $U = (X, c)$. Applying the above theorem yields $g(U_n) \xrightarrow{d} g(U)$. Substituting the expression for $g$ gives $X_n Y_n \xrightarrow{d} cX$.
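For completeness, the standard lemma invoked in Step 1 (convergence in distribution to a constant implies convergence in probability) can be verified directly from the definition; one possible sketch in LaTeX:

```latex
% Lemma: if Y_n -> c in distribution, where c is a constant, then Y_n -> c in probability.
% The limit CDF is F(y) = 1{y >= c}, which is continuous everywhere except at y = c.
\begin{proof}[Sketch]
Fix $\varepsilon > 0$. Both $c - \varepsilon$ and $c + \varepsilon$ are continuity
points of $F(y) = \mathbf{1}\{y \ge c\}$, so
\[
P(|Y_n - c| > \varepsilon)
\;\le\; P(Y_n \le c - \varepsilon) + P(Y_n > c + \varepsilon)
\;=\; F_{Y_n}(c - \varepsilon) + 1 - F_{Y_n}(c + \varepsilon)
\;\longrightarrow\; 0 + 1 - 1 \;=\; 0,
\]
using $F_{Y_n} \to F$ at continuity points. Hence $Y_n \xrightarrow{P} c$.
\end{proof}
```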
Final answer
Therefore $X_n Y_n \xrightarrow{d} cX$. QED.
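The statement can also be checked numerically. The sketch below is an illustration, not a proof: the specific distributions for $X_n$ and $Y_n$ are assumptions chosen for the demo, and the two-sample Kolmogorov–Smirnov statistic is a hypothetical helper written here from scratch.

```python
import numpy as np

# Monte Carlo illustration (not a proof): if X_n -> X in distribution and
# Y_n -> c > 0 in distribution (c a constant), then X_n * Y_n -> c * X
# in distribution. Demo choices (assumptions, not from the problem):
#   X_n ~ Normal(0, 1 + 1/n)     -> X ~ Normal(0, 1)   (in distribution)
#   Y_n = c + Uniform(-1, 1)/n   -> c                  (in probability)
# so the limit c * X is Normal(0, c^2).

def ks_distance(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: max gap between empirical CDFs."""
    grid = np.sort(np.concatenate([sample_a, sample_b]))
    cdf_a = np.searchsorted(np.sort(sample_a), grid, side="right") / len(sample_a)
    cdf_b = np.searchsorted(np.sort(sample_b), grid, side="right") / len(sample_b)
    return np.max(np.abs(cdf_a - cdf_b))

rng = np.random.default_rng(0)
c, n, draws = 2.0, 1000, 200_000

x_n = rng.normal(0.0, np.sqrt(1.0 + 1.0 / n), size=draws)  # sample from X_n
y_n = c + rng.uniform(-1.0, 1.0, size=draws) / n           # sample from Y_n
limit = c * rng.normal(0.0, 1.0, size=draws)               # sample from c * X

d = ks_distance(x_n * y_n, limit)
print(f"KS distance between X_n*Y_n and c*X: {d:.4f}")  # typically well below 0.02
```

With large `n` and many draws, the empirical distribution of `x_n * y_n` is close to that of `c * X`, as the theorem predicts.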
Marking scheme
The following is the detailed marking scheme for this problem (maximum 7 points).
1. Checkpoints (max 7 pts total)
Select exactly one of the following three chains that matches the student's approach; do not combine points across chains.
Chain A: Continuous Mapping Theorem Path (Official Solution)
- Conversion to convergence in probability [2 pts]: Explicitly state or prove that since $Y_n$ converges in distribution to the constant $c$, it follows that $Y_n$ converges in probability to $c$ ($Y_n \xrightarrow{P} c$).
- *Note: If the key condition that the limit is a constant is not mentioned, resulting in a logical gap, no credit is awarded for this item.*
- Joint distributional convergence [2 pts]: Using $X_n \xrightarrow{d} X$ and $Y_n \xrightarrow{P} c$, conclude that the bivariate random vector $(X_n, Y_n)$ converges in distribution to $(X, c)$, i.e., $(X_n, Y_n) \xrightarrow{d} (X, c)$.
- Application of the Continuous Mapping Theorem (CMT) [3 pts]:
- Construct the function $g(x, y) = xy$ and note its continuity (on $\mathbb{R}^2$ or on the support of the limit) [1 pt].
- Apply the CMT to obtain $g(X_n, Y_n) \xrightarrow{d} g(X, c)$, i.e., $X_n Y_n \xrightarrow{d} cX$ [2 pts].
- *Note: If the student merely states the conclusion without mentioning continuity or the mapping theorem, no credit is awarded for this step.*
Chain B: Direct Application of Slutsky's Theorem
- Verification of conditions [3 pts]: Explicitly state that Slutsky's theorem requires one of the variables to converge in probability to a constant, and derive or assert $Y_n \xrightarrow{P} c$ from the hypothesis $Y_n \xrightarrow{d} c$.
- *Key point: The student must demonstrate awareness of the distinction between convergence in distribution and convergence in probability; the two cannot be treated as equivalent by default.*
- Application of the theorem [4 pts]: Correctly cite Slutsky's theorem (product form) to directly conclude $X_n Y_n \xrightarrow{d} cX$.
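For reference, the standard statement of Slutsky's theorem that Chain B relies on (the product form is the relevant part here):

```latex
% Slutsky's theorem. If X_n -> X in distribution and Y_n -> c in probability,
% where c is a constant, then:
\[
X_n + Y_n \xrightarrow{d} X + c, \qquad
X_n Y_n \xrightarrow{d} cX, \qquad
X_n / Y_n \xrightarrow{d} X/c \quad (c \neq 0).
\]
```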
Chain C: Characteristic Functions or First-Principles Approach
- Conversion to convergence in probability [2 pts]: Establish $Y_n \xrightarrow{P} c$.
- Analytical proof [5 pts]: Use characteristic functions to decompose and estimate, or employ probability metric inequalities to rigorously prove convergence of the product.
- *If the argument contains a serious logical flaw (e.g., incorrectly interchanging limits), award 0 points for this part.*
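One possible shape for the characteristic-function decomposition mentioned in Chain C (a sketch of the usual splitting, not the only admissible route):

```latex
% Write phi_Z(t) = E[e^{itZ}] and split the error into two terms:
\[
\bigl|\varphi_{X_n Y_n}(t) - \varphi_{cX}(t)\bigr|
\le \underbrace{\bigl|E\,e^{itX_nY_n} - E\,e^{itcX_n}\bigr|}_{(\mathrm{I})}
 + \underbrace{\bigl|E\,e^{itcX_n} - E\,e^{itcX}\bigr|}_{(\mathrm{II})}.
\]
% (II) -> 0 because x -> e^{itcx} is bounded and continuous and X_n -> X in distribution.
% (I)  <= E[ min(2, |t| |X_n| |Y_n - c|) ]  (using |e^{ia} - e^{ib}| <= min(2, |a - b|)),
%      which -> 0 by splitting on {|X_n| <= M} (tightness of X_n)
%      and {|Y_n - c| <= delta} (Y_n -> c in probability).
```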
Total (max 7)
2. Zero-credit items
- Merely copying the hypotheses (e.g., "$X_n \xrightarrow{d} X$, $Y_n \xrightarrow{d} c$").
- Asserting the conclusion based solely on deterministic calculus limit rules ("the limit of a product equals the product of the limits") without invoking any probabilistic limit theorem (such as Slutsky's theorem or the CMT).
- Claiming that $Y_n \to c$ holds almost surely without proof (the problem only gives convergence in distribution; this implication is incorrect).
3. Deductions
- Incorrect independence assumption: If the proof assumes that $X_n$ and $Y_n$ are independent (a condition not given in the problem), and this assumption is central to the argument (e.g., directly factoring the joint probability as $P(X_n \le x,\, Y_n \le y) = P(X_n \le x)\,P(Y_n \le y)$): cap the score at 3/7 (only the first step receives credit).
- Logical gap: In Chain A/B, if the student directly uses $Y_n \xrightarrow{d} c$ as $Y_n \xrightarrow{P} c$ without mentioning that the limit is a constant: deduct 1 point.
- Notational error: Confusing random variables (uppercase, e.g., $X_n$) with deterministic values (lowercase, e.g., $c$) in a way that affects semantic clarity: deduct 1 point.