Question
In a certain game, killing a particular monster has a fixed "drop rate" for producing items 1 and 2. Suppose that after each kill, item 1 drops with probability $0.2$, item 2 drops with probability $0.1$, and no item drops with probability $0.7$. Let $X$ denote the minimum number of kills required to collect both items 1 and 2. Find the expectation and variance of $X$.
Step-by-step solution
Each kill of the monster has three mutually exclusive and exhaustive outcomes: drop of item 1 (probability $0.2$), drop of item 2 (probability $0.1$), and no drop (probability $0.7$), with each kill being independent. Define $X_1$ as the number of kills until the first drop of either item 1 or item 2, and $X_2$ as the number of additional kills needed after obtaining one item to obtain the other. Clearly $X = X_1 + X_2$, and $X_1$ and $X_2$ are independent, so $E(X) = E(X_1) + E(X_2)$ and $\mathrm{Var}(X) = \mathrm{Var}(X_1) + \mathrm{Var}(X_2)$.

Analysis of the distribution of $X_1$: $X_1$ is the number of kills until the first "success," where "success" means dropping item 1 or item 2, with success probability $p = 0.2 + 0.1 = 0.3$. $X_1$ follows a geometric distribution with parameter $p$ (defined as $P(X_1 = k) = (1-p)^{k-1}p$, $k = 1, 2, \dots$), with expectation $1/p$ and variance $(1-p)/p^2$. Substituting $p = 0.3$: $E(X_1) = 1/0.3 = 10/3$, $\mathrm{Var}(X_1) = 0.7/0.09 = 70/9$.

Analysis of the expectation of $X_2$: We use the law of total expectation. Let $T$ denote the type of the first item obtained ($T = 1$ for item 1, $T = 2$ for item 2). Then $P(T = 1) = 0.2/0.3 = 2/3$, $P(T = 2) = 0.1/0.3 = 1/3$. When $T = 1$, we need to obtain item 2 for the first time, with success probability $0.1$ per trial, giving geometric expectation $1/0.1 = 10$; when $T = 2$, we need to obtain item 1 for the first time, with success probability $0.2$ per trial, giving geometric expectation $1/0.2 = 5$. Thus $E(X_2 \mid T = 1) = 10$, $E(X_2 \mid T = 2) = 5$. By the law of total expectation: $E(X_2) = \tfrac{2}{3} \cdot 10 + \tfrac{1}{3} \cdot 5 = 25/3$.

Analysis of the variance of $X_2$: We use the law of total variance $\mathrm{Var}(X_2) = E[\mathrm{Var}(X_2 \mid T)] + \mathrm{Var}(E(X_2 \mid T))$. First, the expectation of the conditional variance: the variance of a geometric distribution is $(1-p)/p^2$, so $\mathrm{Var}(X_2 \mid T = 1) = 0.9/0.01 = 90$, $\mathrm{Var}(X_2 \mid T = 2) = 0.8/0.04 = 20$. Substituting: $E[\mathrm{Var}(X_2 \mid T)] = \tfrac{2}{3} \cdot 90 + \tfrac{1}{3} \cdot 20 = 200/3$. Next, the variance of the conditional expectation: $E(X_2 \mid T)$ takes values 10 and 5 with probabilities $2/3$ and $1/3$, respectively. So $E[E(X_2 \mid T)^2] = \tfrac{2}{3} \cdot 100 + \tfrac{1}{3} \cdot 25 = 75$, and $\mathrm{Var}(E(X_2 \mid T)) = 75 - (25/3)^2 = 75 - 625/9 = 50/9$. Hence $\mathrm{Var}(X_2) = 200/3 + 50/9 = 650/9$.

Therefore $E(X) = 10/3 + 25/3 = 35/3$ and $\mathrm{Var}(X) = 70/9 + 650/9 = 720/9 = 80$.
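The two-phase decomposition above can be verified with exact rational arithmetic. The following is a sketch using Python's `fractions` module; the variable names (`w1`, `e1`, `v1`, etc.) are my own, not part of the official solution:

```python
from fractions import Fraction

# Drop rates from the problem statement.
p1, p2 = Fraction(2, 10), Fraction(1, 10)
p = p1 + p2                                  # P(any item drops) = 0.3

# Phase 1: X1 ~ Geometric(0.3).
E_X1 = 1 / p                                 # 10/3
V_X1 = (1 - p) / p**2                        # 70/9

# Phase 2: condition on which item dropped first.
w1, w2 = p1 / p, p2 / p                      # P(T=1) = 2/3, P(T=2) = 1/3
e1, e2 = 1 / p2, 1 / p1                      # E(X2|T=1) = 10, E(X2|T=2) = 5
v1, v2 = (1 - p2) / p2**2, (1 - p1) / p1**2  # Var(X2|T=1) = 90, Var(X2|T=2) = 20

E_X2 = w1 * e1 + w2 * e2                     # 25/3
# Law of total variance: E[Var(X2|T)] + Var(E(X2|T)).
V_X2 = (w1 * v1 + w2 * v2) + (w1 * e1**2 + w2 * e2**2 - E_X2**2)

E_X = E_X1 + E_X2
V_X = V_X1 + V_X2
print(E_X, V_X)                              # prints: 35/3 80
```

Using `Fraction` avoids the floating-point rounding that would otherwise obscure the exact values $650/9$ and $720/9 = 80$.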
Final answer
Expectation: $E(X) = 35/3$, Variance: $\mathrm{Var}(X) = 80$
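As a sanity check on the final answer, a quick Monte Carlo simulation of the kill process (my own sketch, not part of the official solution) should land near $E(X) = 35/3 \approx 11.67$ and $\mathrm{Var}(X) = 80$:

```python
import random

def kills_to_collect_both(rng: random.Random) -> int:
    """Simulate kills until both item 1 (p=0.2) and item 2 (p=0.1) have dropped."""
    have1 = have2 = False
    kills = 0
    while not (have1 and have2):
        kills += 1
        u = rng.random()
        if u < 0.2:
            have1 = True      # item 1 drops
        elif u < 0.3:
            have2 = True      # item 2 drops (next 0.1 of the unit interval)
        # otherwise: no drop (probability 0.7)
    return kills

rng = random.Random(12345)
n = 200_000
samples = [kills_to_collect_both(rng) for _ in range(n)]
mean = sum(samples) / n
var = sum((x - mean) ** 2 for x in samples) / (n - 1)
print(f"mean ~ {mean:.3f} (exact 35/3 = 11.667), var ~ {var:.2f} (exact 80)")
```

With $n = 200{,}000$ trials the sample mean has standard error $\sqrt{80/n} \approx 0.02$, so agreement to about two decimal places is expected.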
Marking scheme
The following is the grading rubric based on the official solution.
1. Checkpoints (max 7 pts total)
Select any one logical chain for grading | take the highest score among chains; do not add points across chains.
Chain A: Random Variable Decomposition Method ($X = X_1 + X_2$, Official Solution)
- Phase 1: $X_1$ (first item obtained)
- Correctly identifying that $X_1$ follows a geometric distribution with parameter $0.3$, and obtaining $E(X_1) = 10/3$. [1 pt]
- Correctly computing the variance $\mathrm{Var}(X_1) = 70/9$. [1 pt]
- Phase 2: Expectation of $X_2$ (obtaining the remaining item)
- Correctly writing the conditional probabilities/weights for entering the second phase: probability $2/3$ ($= 0.2/0.3$) of first obtaining item 1, probability $1/3$ ($= 0.1/0.3$) of first obtaining item 2. [1 pt]
- Using the law of total expectation to compute $E(X_2) = \tfrac{2}{3} \cdot 10 + \tfrac{1}{3} \cdot 5 = 25/3$, and hence the total expectation $E(X) = 10/3 + 25/3 = 35/3$. [1 pt]
- Phase 2: Variance of $X_2$ (core difficulty)
- Correctly computing the conditional variances (90 and 20, respectively) or the conditional second moments (190 and 45, respectively). [1 pt]
- Correctly obtaining $\mathrm{Var}(X_2) = 200/3 + 50/9 = 650/9$.
- *Grading criterion: Must use the law of total variance (including the $\mathrm{Var}(E(X_2 \mid T))$ term) or compute via $E(X_2^2) - (E(X_2))^2$. If only the weighted average of conditional variances is computed, no credit for this item.* [1 pt]
- Final result
- Using independence to obtain the correct answer $\mathrm{Var}(X) = \mathrm{Var}(X_1) + \mathrm{Var}(X_2) = 70/9 + 650/9 = 80$. [1 pt]
Chain B: Markov Chain / System of Linear Equations Method
- Expectation equations
- Correctly writing the system of linear equations for the expected number of steps from each state (e.g., $E_0 = 1 + 0.7E_0 + 0.2E_1 + 0.1E_2$, with $E_1 = 1 + 0.9E_1$ and $E_2 = 1 + 0.8E_2$ for the single-item states). [2 pts]
- Solving the system to obtain the correct total expectation $E(X) = E_0 = 35/3$. [1 pt]
- Variance / second moment equations
- Correctly writing the system of equations for the second moments ($E(X^2)$) or variances from each state. [2 pts]
- Solving for the key second moment values or intermediate variables. [1 pt]
- Final result
- Correctly computing $\mathrm{Var}(X) = E(X^2) - (E(X))^2 = 80$. [1 pt]
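Chain B's first-step equations can be solved mechanically. Below is a sketch in exact arithmetic; the state labels ($E_0$ for no items, $E_1$/$E_2$ for holding item 1/2) and the second-moment variables $M_i$ are my own notation, since the rubric leaves the exact system to the student:

```python
from fractions import Fraction as F

p1, p2, q = F(2, 10), F(1, 10), F(7, 10)   # item 1, item 2, no drop

# Expected kills from each state (first-step analysis):
E1 = 1 / p2                                # have item 1: E1 = 1 + (1-p2)*E1 -> 10
E2 = 1 / p1                                # have item 2: E2 = 1 + (1-p1)*E2 -> 5
E0 = (1 + p1 * E1 + p2 * E2) / (1 - q)     # E0 = 1 + q*E0 + p1*E1 + p2*E2 -> 35/3

# Second moments M_i = E(S_i^2), via S = 1 + S' and E[(1+S')^2] = 1 + 2E[S'] + E[S'^2]:
M1 = (1 + 2 * (1 - p2) * E1) / p2          # 190 (matches the rubric's second moment)
M2 = (1 + 2 * (1 - p1) * E2) / p1          # 45
M0 = (1 + 2 * (q * E0 + p1 * E1 + p2 * E2) + p1 * M1 + p2 * M2) / (1 - q)

var_X = M0 - E0**2
print(E0, var_X)                           # prints: 35/3 80
```

The intermediate values $M_1 = 190$ and $M_2 = 45$ are exactly the conditional second moments credited in Chain A, which is why the two chains must not be mixed when awarding points.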
1.1 Shared Prerequisites (Cross-Chain)
- [None] The two paths have substantially different logic; apply one of the above chains directly.
1.2 Total Score Verification
Total (max 7)
2. Zero-credit items
- Only listing the general geometric distribution formula (e.g., $E(X) = 1/p$, $\mathrm{Var}(X) = (1-p)/p^2$) without performing specific computations using the probabilities from this problem.
- Incorrectly adding expectations directly: $E(X) = 1/0.2 + 1/0.1 = 15$ (ignoring mutual exclusivity and the probability of no drop).
- Merely copying the probability values from the problem with no derivation.
3. Deductions
- Arithmetic error: Each obvious arithmetic error (not a logical error): deduct 1 pt.
- Law of total variance misuse (logical flaw): When computing $\mathrm{Var}(X_2)$, if the student only computes the weighted average of conditional variances ($\tfrac{2}{3} \cdot 90 + \tfrac{1}{3} \cdot 20 = 200/3$) and omits the "variance of the conditional expectation" term ($\mathrm{Var}(E(X_2 \mid T)) = 50/9$), yielding $\mathrm{Var}(X) = 70/9 + 200/3 = 670/9$ or a similar value: this is a major logical flaw; no credit for that step (already reflected in the Checkpoints; if there is score overflow, deduct 1 pt, but the minimum is 0).
- Missing independence justification: Directly adding the variances of $X_1$ and $X_2$ without mentioning independence (or the Markov property), but the computation is otherwise correct. Given the undergraduate level, no deduction.