Question
The random variables $X_1, X_2, \ldots$ converge in distribution to $X$, and the random variables $Y_1, Y_2, \ldots$ converge in distribution to a positive constant $c$. Prove that the random variables $X_n Y_n$ converge in distribution to $cX$.
Step-by-step solution
Step 1. By hypothesis, $Y_n$ converges in distribution to the constant $c$, i.e., $Y_n \xrightarrow{d} c$. By a standard result in probability theory, when a sequence of random variables converges in distribution to a constant, it also converges in probability to that constant. Hence $Y_n \xrightarrow{p} c$.

Step 2. Consider the sequence of bivariate random vectors $(X_n, Y_n)$. We have $X_n \xrightarrow{d} X$ and $Y_n \xrightarrow{p} c$. By the prerequisite lemma of Slutsky's theorem (equivalently, the convergence theorem for multidimensional random vectors), when one component converges in distribution to a random variable and the other converges in probability to a constant, the pair converges jointly in distribution to the vector formed by the respective limits, i.e., $(X_n, Y_n) \xrightarrow{d} (X, c)$.

Step 3. Define the bivariate function $g(x, y) = xy$, which is continuous everywhere on $\mathbb{R}^2$. By the Continuous Mapping Theorem, if a sequence of random vectors satisfies $(X_n, Y_n) \xrightarrow{d} (X, c)$ and the function $g$ is continuous almost everywhere on the support of the limit, then $g(X_n, Y_n) \xrightarrow{d} g(X, c)$.

Step 4. Applying the above theorem with $g(x, y) = xy$ gives $X_n Y_n = g(X_n, Y_n) \xrightarrow{d} g(X, c)$. Substituting the function expression yields $X_n Y_n \xrightarrow{d} cX$.
Final answer
Therefore $X_n Y_n \xrightarrow{d} cX$. QED.
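As a sanity check (not part of the formal proof), the conclusion can be illustrated numerically. The particular sequences below are hypothetical choices satisfying the hypotheses: $X_n \sim N(0, 1 + 1/n)$ converges in distribution to $X \sim N(0,1)$, and $Y_n = c + Z/\sqrt{n}$ converges in distribution (hence in probability) to the constant $c$. The empirical distribution of the product $X_n Y_n$ should then be close to that of $cX \sim N(0, c^2)$:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)
n, m = 10_000, 200_000  # sequence index n, Monte Carlo sample size m

# X_n ~ N(0, 1 + 1/n): converges in distribution to X ~ N(0, 1)
x_n = rng.normal(0.0, sqrt(1 + 1 / n), size=m)

# Y_n = c + Z/sqrt(n): converges in distribution (hence probability) to c
c = 2.0
y_n = c + rng.normal(0.0, 1.0, size=m) / sqrt(n)

prod = x_n * y_n  # should be approximately distributed as cX ~ N(0, c^2)

# Kolmogorov-Smirnov distance between the empirical CDF of X_n * Y_n
# and the CDF of the claimed limit N(0, c^2)
s = np.sort(prod)
ecdf = np.arange(1, m + 1) / m
target = np.array([0.5 * (1 + erf(v / (c * sqrt(2)))) for v in s])
ks = np.max(np.abs(ecdf - target))
print(f"KS distance to N(0, {c**2:g}): {ks:.4f}")  # small for large n, m
```

With $n$ and $m$ this large, the KS distance is on the order of $1/\sqrt{m}$, consistent with the product already being essentially $N(0, c^2)$-distributed.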
Marking scheme
The following is the detailed marking scheme for this problem (total: 7 points).
1. Checkpoints (max 7 pts total)
Choose one of the following three paths that fully matches the student's approach; do not accumulate points across paths.
Chain A: Continuous Mapping Theorem Path (Official Solution)
- Convergence in probability conversion [2 pts]: Explicitly state or prove: since $Y_n$ converges in distribution to a constant $c$, it follows that $Y_n$ converges in probability to $c$ ($Y_n \xrightarrow{p} c$).
- *Note: If the key condition "constant" is not mentioned, causing a logical gap, no credit is awarded for this item.*
- Joint distributional convergence [2 pts]: Using $X_n \xrightarrow{d} X$ and $Y_n \xrightarrow{p} c$, conclude that the bivariate random vector $(X_n, Y_n)$ converges in distribution to $(X, c)$, i.e., $(X_n, Y_n) \xrightarrow{d} (X, c)$.
- Continuous Mapping Theorem (CMT) application [3 pts]:
- Construct the function $g(x, y) = xy$ and state its continuity (on $\mathbb{R}^2$ or on the support of the limit) [1 pt].
- Apply the Continuous Mapping Theorem to conclude $g(X_n, Y_n) \xrightarrow{d} g(X, c)$, i.e., $X_n Y_n \xrightarrow{d} cX$ [2 pts].
- *Note: Simply writing the conclusion without mentioning continuity or the mapping theorem earns no credit for this step.*
Chain B: Direct Slutsky's Theorem Path
- Condition verification [3 pts]: Explicitly state that Slutsky's theorem requires one variable to converge in probability to a constant, and derive or declare $Y_n \xrightarrow{p} c$ from the hypothesis $Y_n \xrightarrow{d} c$.
- *Key point: The student must demonstrate awareness of the distinction between "convergence in distribution" and "convergence in probability"; they cannot assume the two are equivalent by default.*
- Theorem application [4 pts]: Accurately cite Slutsky's theorem (product form) to directly conclude $X_n Y_n \xrightarrow{d} cX$.
Chain C: Characteristic Function or First-Principles Approach
- Convergence in probability conversion [2 pts]: Obtain $Y_n \xrightarrow{p} c$.
- Analytic proof [5 pts]: Use characteristic functions to decompose and estimate, or rigorously prove convergence of the product using probability metric inequalities.
- *If the argument contains serious logical flaws (e.g., incorrectly interchanging limits), this part receives 0 points.*
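For reference, the characteristic-function decomposition mentioned in Chain C typically proceeds along the following lines (a standard sketch of the argument, not the official solution):

```latex
\begin{align*}
\left|\varphi_{X_n Y_n}(t) - \varphi_{cX}(t)\right|
  \;\le\; \underbrace{\left|\mathbb{E}\,e^{itX_nY_n} - \mathbb{E}\,e^{itcX_n}\right|}_{\text{(i)}}
  \;+\; \underbrace{\left|\mathbb{E}\,e^{itcX_n} - \mathbb{E}\,e^{itcX}\right|}_{\text{(ii)}}.
\end{align*}
% (ii) -> 0 because X_n -> X in distribution (Levy continuity, evaluated at ct).
% (i)  -> 0 by splitting on the event {|Y_n - c| > delta}: on its complement,
%   |e^{itX_nY_n} - e^{itcX_n}| <= |t| |X_n| |Y_n - c| <= |t| |X_n| delta,
%   which is small in expectation since (X_n) is tight, while
%   P(|Y_n - c| > delta) -> 0 by Y_n ->p c.
```

A rigorous write-up along these lines earns full Chain C credit; the graded pitfall is interchanging the two limits ($n \to \infty$ and $\delta \to 0$) without justification.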
Total (max 7)
2. Zero-credit items
- Merely copying the problem conditions (e.g., "$X_n \xrightarrow{d} X$, $Y_n \xrightarrow{d} c$").
- Asserting the conclusion based solely on deterministic calculus limit rules ("the limit of a product equals the product of the limits") without citing any probabilistic limit theorem (such as Slutsky or CMT).
- Claiming $Y_n \to c$ is "almost sure convergence" without proof (the problem only gives convergence in distribution; this is an incorrect implication).
3. Deductions
- Incorrectly assuming independence: If the proof assumes $X_n$ and $Y_n$ are mutually independent (the problem does not state this), and this assumption is central to the proof (e.g., directly factoring the joint probability $P(X_n \le x,\, Y_n \le y) = P(X_n \le x)\,P(Y_n \le y)$): score capped at 3/7 (only the first step receives credit).
- Logical gap: In Chain A/B, directly using $Y_n \xrightarrow{d} c$ as $Y_n \xrightarrow{p} c$ without mentioning "because the limit is a constant": deduct 1 point.
- Notation error: Confusing random variables (uppercase $X$) with specific values (lowercase $x$) in a way that affects semantic understanding: deduct 1 point.