Question
The random variable $X_n$ converges in distribution to $X$, and $Y_n$ converges in distribution to a positive constant $c$. Prove that the random variable $X_nY_n$ converges in distribution to $cX$.
Step-by-step solution
Step 1. By hypothesis, $Y_n$ converges in distribution to the constant $c$, i.e., $Y_n \xrightarrow{d} c$. By a well-known property in probability theory, when a sequence of random variables converges in distribution to a constant, it also converges in probability to that constant. Hence $Y_n \xrightarrow{P} c$.

Step 2. Consider the sequence of two-dimensional random vectors $(X_n, Y_n)$. We have $X_n \xrightarrow{d} X$ and $Y_n \xrightarrow{P} c$. By the preliminary lemma of Slutsky's theorem (the convergence theorem for multidimensional random vectors), when one component converges in distribution to a random variable and the other converges in probability to a constant, the pair converges jointly in distribution to the vector formed by the respective limits, i.e., $(X_n, Y_n) \xrightarrow{d} (X, c)$.

Step 3. Define the bivariate function $g(x, y) = xy$, which is continuous everywhere on $\mathbb{R}^2$. By the Continuous Mapping Theorem, if a sequence of random vectors satisfies $Z_n \xrightarrow{d} Z$ and the function $g$ is almost surely continuous on the support of $Z$, then $g(Z_n) \xrightarrow{d} g(Z)$.

Step 4. Let $Z_n = (X_n, Y_n)$ and $Z = (X, c)$. Applying the above theorem, we have $g(X_n, Y_n) \xrightarrow{d} g(X, c)$. Substituting the function expression yields $X_nY_n \xrightarrow{d} cX$.
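Step 1 relies on the fact that convergence in distribution to a constant implies convergence in probability. For completeness, a short derivation (writing $F_{Y_n}$ for the CDF of $Y_n$): for any $\varepsilon > 0$,

```latex
\[
P(|Y_n - c| \ge \varepsilon)
  \le P(Y_n \le c - \varepsilon) + P\!\left(Y_n > c + \tfrac{\varepsilon}{2}\right)
  = F_{Y_n}(c - \varepsilon) + 1 - F_{Y_n}\!\left(c + \tfrac{\varepsilon}{2}\right)
  \longrightarrow 0 + 1 - 1 = 0,
\]
```

since $c - \varepsilon$ and $c + \varepsilon/2$ are continuity points of the limiting CDF $F(y) = \mathbf{1}\{y \ge c\}$, whose only discontinuity is at $c$ itself. This is exactly where the "constant" hypothesis is used: for a non-degenerate limit, convergence in distribution does not control $P(|Y_n - Y| \ge \varepsilon)$.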
Final answer
Therefore $X_nY_n \xrightarrow{d} cX$. QED.
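As an informal sanity check, the conclusion can be illustrated by simulation. The sketch below is not part of the proof; the choices (Uniform samples, $X_n$ a CLT statistic with limit $N(0, 1/12)$, $Y_n$ a sample mean converging in probability to $c = 2$, and $n = 400$) are illustrative assumptions. The limiting variance of $X_nY_n$ should then be $c^2 \cdot \tfrac{1}{12} = \tfrac{1}{3}$.

```python
import random
import statistics

random.seed(0)

def simulate(n, trials=20000):
    """Draw `trials` realizations of the product X_n * Y_n, where
    X_n = sqrt(n) * (mean of n Uniform(0,1) draws - 1/2)  -> N(0, 1/12) by the CLT,
    Y_n = mean of n Uniform(1,3) draws                    -> 2 in probability by the LLN.
    By Slutsky's theorem, X_n * Y_n -> 2X in distribution, with variance 4/12 = 1/3."""
    products = []
    for _ in range(trials):
        xs = [random.random() for _ in range(n)]
        ys = [1 + 2 * random.random() for _ in range(n)]
        x_n = (n ** 0.5) * (statistics.fmean(xs) - 0.5)
        y_n = statistics.fmean(ys)
        products.append(x_n * y_n)
    return products

samples = simulate(400)
var = statistics.pvariance(samples)
# The empirical variance should be near the theoretical limit 1/3.
print(round(var, 3))
```

The empirical variance lands close to $1/3$, consistent with the limit $cX = 2X$.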
Marking scheme
The following is the detailed grading rubric for this problem (maximum 7 points).
1. Checkpoints (max 7 pts total)
Select exactly one of the following three paths that fully matches the student's approach; do not combine points across paths.
Chain A: Continuous Mapping Theorem Path (Official Solution)
- Convergence in probability conversion [2 pts]: Explicitly stating or proving that since $Y_n$ converges in distribution to the constant $c$, it follows that $Y_n$ converges in probability to $c$ ($Y_n \xrightarrow{P} c$).
- *Note: If the key condition "constant" is not mentioned, causing a logical gap, no credit for this item.*
- Joint distribution convergence [2 pts]: Using $X_n \xrightarrow{d} X$ and $Y_n \xrightarrow{P} c$ to conclude that the two-dimensional random vector $(X_n, Y_n)$ converges in distribution to $(X, c)$, i.e., $(X_n, Y_n) \xrightarrow{d} (X, c)$.
- Continuous Mapping Theorem (CMT) application [3 pts]:
- Constructing the function $g(x, y) = xy$ and noting its continuity (on $\mathbb{R}^2$ or on the support of the limit) [1 pt].
- Applying the Continuous Mapping Theorem to obtain $g(X_n, Y_n) \xrightarrow{d} g(X, c)$, i.e., $X_nY_n \xrightarrow{d} cX$ [2 pts].
- *Note: If only the conclusion is stated without mentioning continuity or the mapping theorem, no credit for this step.*
Chain B: Direct Citation of Slutsky's Theorem Path
- Condition verification [3 pts]: Explicitly stating that Slutsky's theorem requires one variable to converge in probability to a constant, and deriving or asserting $Y_n \xrightarrow{P} c$ from the hypothesis $Y_n \xrightarrow{d} c$.
- *Key point: The student must demonstrate awareness of the distinction between "convergence in distribution" and "convergence in probability"; they cannot treat the two as equivalent by default.*
- Theorem application [4 pts]: Accurately citing Slutsky's theorem (product form) to directly conclude $X_nY_n \xrightarrow{d} cX$.
Chain C: Characteristic Functions or First-Principles Approach
- Convergence in probability conversion [2 pts]: Obtaining $Y_n \xrightarrow{P} c$.
- Analytical proof [5 pts]: Using characteristic functions to decompose and estimate, or rigorously proving the convergence of the product via probability metric inequalities.
- *If the argument contains serious logical flaws (e.g., incorrectly interchanging limits), this part receives 0 points.*
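For graders assessing Chain C, a sketch of the characteristic-function estimate the rubric has in mind (writing $\varphi_Z(t) = E[e^{itZ}]$; the specific decomposition below is one standard route, not the only acceptable one):

```latex
\[
\left| \varphi_{X_nY_n}(t) - \varphi_{cX_n}(t) \right|
  = \left| E\!\left[ e^{itcX_n}\!\left( e^{itX_n(Y_n - c)} - 1 \right) \right] \right|
  \le E\left| e^{itX_n(Y_n - c)} - 1 \right| \xrightarrow[n\to\infty]{} 0,
\]
```

where the last step uses tightness of $(X_n)$ (which follows from $X_n \xrightarrow{d} X$) together with $Y_n \xrightarrow{P} c$. Since also $\varphi_{cX_n}(t) = \varphi_{X_n}(ct) \to \varphi_X(ct) = \varphi_{cX}(t)$, Lévy's continuity theorem gives $X_nY_n \xrightarrow{d} cX$. A student answer that interchanges the limit and the expectation without such a tightness argument exhibits the "serious logical flaw" noted above.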
Total (max 7)
2. Zero-credit items
- Merely copying the problem conditions (e.g., "$X_n \xrightarrow{d} X$, $Y_n \xrightarrow{d} c$").
- Asserting the conclusion based solely on deterministic calculus limit rules ("the limit of a product equals the product of the limits") without citing any probabilistic limit theorem (such as Slutsky or CMT).
- Claiming $Y_n \to c$ is "almost sure convergence" without proof (the problem only gives convergence in distribution; this implication is incorrect).
3. Deductions
- Incorrectly assuming independence: If the proof assumes $X_n$ and $Y_n$ are mutually independent (a condition not given in the problem), and this assumption is central to the proof (e.g., directly factoring the joint distribution as $P(X_n \le x, Y_n \le y) = P(X_n \le x)\,P(Y_n \le y)$): score capped at 3/7 (only the first step receives credit).
- Logical gap: In Chain A/B, if $Y_n \xrightarrow{d} c$ is directly used as $Y_n \xrightarrow{P} c$ without any mention of "because the limit is a constant": deduct 1 point.
- Notation error: Confusing random variables (uppercase $X$, $Y$) with specific values (lowercase $x$, $y$) in a way that affects semantic understanding: deduct 1 point.