Question
Let $a_n = \sum_{k=0}^{\lfloor \alpha n \rfloor} \binom{n}{k} p^k (1-p)^{n-k}$, where $0 < \alpha < p < 1$ are fixed constants. Using a probabilistic method, evaluate $\lim_{n \to \infty} n\, a_n$.
Step-by-step solution
Let $S_n \sim \mathrm{Bin}(n, p)$ denote the number of successes in $n$ independent trials with success probability $p$. Then $a_n = P(S_n \le \alpha n)$. Since $\alpha < p$, the threshold $\alpha n$ lies below the expectation $E[S_n] = np$ for large $n$, so $a_n$ is small.
Step 1. Standardization: Let $Z_n = \dfrac{S_n - np}{\sqrt{np(1-p)}}$. The event $\{S_n \le \alpha n\}$ is equivalent to $\{Z_n \le x_n\}$, where $x_n = \dfrac{(\alpha - p)n}{\sqrt{np(1-p)}}$.
Since $\alpha < p$, for large $n$ this quantity is negative and of order $\sqrt{n}$.
Step 2. More precisely: setting $c = \dfrac{p - \alpha}{\sqrt{p(1-p)}} > 0$, we obtain $x_n = -c\sqrt{n}$. Hence $x_n$ tends to $-\infty$ at rate $\sqrt{n}$.
Step 3. For the standard normal distribution, as $x \to \infty$, the Mills ratio estimate gives $P(Z > x) \sim \dfrac{\varphi(x)}{x}$, where $\varphi(x) = \dfrac{1}{\sqrt{2\pi}} e^{-x^2/2}$. More precisely, as $x \to \infty$: $P(Z \le -x) = P(Z > x) \le \dfrac{1}{x\sqrt{2\pi}}\, e^{-x^2/2}$. Here $x = c\sqrt{n}$, so $P(Z_n \le -c\sqrt{n}) \approx \dfrac{1}{c\sqrt{2\pi n}}\, e^{-c^2 n/2}$. Thus $a_n$ decays exponentially in $n$.
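The Mills ratio estimate used in Step 3 can be sanity-checked numerically. A minimal sketch using only the standard library (the test points $x = 2, 4, 8$ are arbitrary choices): the ratio of the exact tail to the estimate should approach 1 from below as $x$ grows.

```python
import math

def normal_tail(x: float) -> float:
    """Exact upper tail P(Z > x) of the standard normal, via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def mills_estimate(x: float) -> float:
    """Mills ratio approximation phi(x)/x = exp(-x^2/2) / (x * sqrt(2*pi))."""
    return math.exp(-x * x / 2) / (x * math.sqrt(2 * math.pi))

for x in [2, 4, 8]:
    ratio = normal_tail(x) / mills_estimate(x)
    print(f"x={x}: exact/estimate = {ratio:.4f}")
```

The printed ratios increase toward 1 (roughly 0.84, 0.95, 0.99), confirming that $\varphi(x)/x$ is an asymptotically sharp upper bound on the tail.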
Step 4. Therefore $n\, a_n \approx \dfrac{\sqrt{n}}{c\sqrt{2\pi}}\, e^{-c^2 n/2}$.
As $n \to \infty$, the exponential decay $e^{-c^2 n/2}$ dominates the growth of the prefactor, so $n\, a_n \to 0$. Therefore $\lim_{n \to \infty} n\, a_n = 0$. QED.
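The conclusion can be made concrete with an exact numerical check. The sketch below evaluates $n \cdot P(S_n \le \alpha n)$ via the exact binomial CDF; the choice $p = 1/2$, $\alpha = 1/4$ is purely illustrative, since the argument works for any $0 < \alpha < p$.

```python
import math

def binom_cdf(n: int, p: float, m: int) -> float:
    """Exact P(S_n <= m) for S_n ~ Bin(n, p)."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(m + 1))

# Illustrative parameters (any 0 < alpha < p works the same way).
p, alpha = 0.5, 0.25
for n in [40, 80, 160, 320]:
    value = n * binom_cdf(n, p, math.floor(alpha * n))
    print(f"n={n:4d}: n * P(S_n <= alpha*n) = {value:.3e}")
```

The printed values shrink rapidly toward 0: the exponential decay of the tail probability overwhelms the linear factor $n$, exactly as the limit argument predicts.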
Final answer
0
Marking scheme
The following is the undergraduate-level marking scheme for this probability limit problem (total: 7 points).
1. Key Checkpoints (max 7 pts total)
Group 1: Probabilistic Model Formulation [cumulative]
- Recognize that the summation equals the cumulative probability of a binomial distribution (or an equivalent random variable representation). (1 pt)
- *Note: If no random variable is introduced but subsequent calculations correctly follow the normal approximation framework, credit may be awarded retroactively.*
Group 2: Core Analysis and Decay Estimate (score exactly one chain)
*For different solution paths, score according to one of the following; if methods are mixed, take the highest-scoring path.*
- Path A: Normal Approximation and Asymptotic Analysis (standard solution)
- Standardization parameters: Write down or use the correct mean $np$ and variance $np(1-p)$, and attempt the standardization $Z_n = (S_n - np)/\sqrt{np(1-p)}$. (1 pt)
- Boundary asymptotic behavior: Analyze the standardized upper bound $x_n$ and explicitly identify that its order as $n \to \infty$ is $-c\sqrt{n}$ (i.e., it tends to negative infinity proportionally to $\sqrt{n}$). (2 pts)
- *If only the algebraic expression for $x_n$ is written, without simplification or without identifying the $\sqrt{n}$ order, no credit is awarded for this item.*
- Tail probability estimate: Invoke the Mills ratio or the normal tail asymptotic formula ($P(Z > x) \sim \varphi(x)/x$ as $x \to \infty$) and explicitly establish that the probability term exhibits exponential decay (e.g., $a_n \le C e^{-c^2 n/2}$ or $a_n = e^{-\Theta(n)}$). (2 pts)
- Path B: Large Deviations / Concentration Inequalities (Hoeffding/Chernoff Bound)
- Inequality setup: Correctly set up the inequality parameters, identifying the deviation $t = (p - \alpha)n$ (or noting that the distance between the expected value $np$ and the threshold $\alpha n$ is linear in $n$). (2 pts)
- Exponential upper bound: Apply the inequality to obtain an exponential upper bound of the form $P(S_n \le \alpha n) \le e^{-cn}$ for some constant $c > 0$. (3 pts)
Group 3: Limit Conclusion [cumulative]
- Resolving the indeterminate form: Combine the linear growth of the prefactor with the exponential decay of the probability term to conclude that the limit is 0. (1 pt)
- *Requirement: The argument must reflect the reasoning that exponential decay dominates polynomial/linear growth. If the conclusion is drawn solely from the probability tending to 0 without comparing rates, no credit is awarded for this item.*
Total (max 7) check: 1 + 5 + 1 = 7.
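Path B can also be verified numerically. The sketch below checks the one-sided Hoeffding bound $P(S_n \le (p - t)n) \le e^{-2t^2 n}$ against the exact binomial CDF; the parameters $p = 1/2$, $\alpha = 1/4$ (hence $t = 1/4$) are illustrative only.

```python
import math

def binom_cdf(n: int, p: float, m: int) -> float:
    """Exact P(S_n <= m) for S_n ~ Bin(n, p)."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(m + 1))

# Illustrative parameters: threshold alpha*n with alpha = 1/4, so the
# deviation t = p - alpha = 1/4 is linear in n, as the rubric notes.
p, alpha = 0.5, 0.25
t = p - alpha
for n in [50, 100, 200]:
    exact = binom_cdf(n, p, math.floor(alpha * n))
    hoeffding = math.exp(-2 * t * t * n)
    assert exact <= hoeffding  # the exponential bound holds
    print(f"n={n:3d}: exact={exact:.3e} <= Hoeffding={hoeffding:.3e}")
```

Since $e^{-2t^2 n}$ decays exponentially while the prefactor grows only linearly, $n \cdot e^{-2t^2 n} \to 0$, which is exactly the comparison of rates that Group 3 requires.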
2. Zero-Credit Items
- Merely copying the problem statement or listing the binomial expansion formula without any substantive computation.
- Stating only the answer "0" with no supporting work.
- Misapplying Chebyshev's Inequality to prove the limit is 0:
- Chebyshev's inequality yields only an upper bound of order $1/n$ on the probability; multiplying by the prefactor $n$ gives a constant, which is insufficient to prove the limit is 0. If this path claims to have completed the proof, it constitutes a logical error, and only the model formulation points (if any) are awarded.
- Computing with particular numerical values of $n$ rather than evaluating the limit as $n \to \infty$.
3. Deductions
- Logic gap cap (capped at 2/7):
- If the student uses the Central Limit Theorem (CLT) only to conclude that the probability tends to 0, without analyzing the rate of convergence (i.e., without explaining why the indeterminate form resolves to 0 rather than some other value), the total score shall not exceed 2 points (model formulation and parameter points only).
- Computational/notational error (-1):
- Coefficient errors in the mean or variance computation (e.g., omitting the factor $1-p$ in the variance) that do not affect the core qualitative conclusion of exponential decay.
- Inequality direction error (-1):
- In Path B, writing the inequality in the wrong direction (e.g., bounding $P(S_n \ge \alpha n)$ instead of $P(S_n \le \alpha n)$) while the subsequent reasoning assumes the correct direction.
- Maximum deduction principle: Errors within the same logical chain are not penalized more than once; the total score after deductions shall not fall below 0.