Question
Suppose that in a game, killing a certain monster has a fixed "drop rate" for equipment 1 and 2. Assume that after killing the monster, equipment 1 drops with probability $0.2$, equipment 2 drops with probability $0.1$, and no equipment drops with probability $0.7$. Let $X$ denote the minimum number of kills needed to collect both equipment 1 and 2. Find the expectation and variance of $X$.
Step-by-step solution
Each kill has three mutually exclusive and exhaustive outcomes: drop equipment 1 (probability $0.2$), drop equipment 2 (probability $0.1$), or drop nothing (probability $0.7$), and the kills are mutually independent. Define $Y_1$ as the number of kills needed to first obtain either equipment 1 or equipment 2, and $Y_2$ as the number of additional kills needed after obtaining one piece of equipment to obtain the other. Clearly $X = Y_1 + Y_2$, and $Y_1$ and $Y_2$ are independent, so $E[X] = E[Y_1] + E[Y_2]$ and $\mathrm{Var}(X) = \mathrm{Var}(Y_1) + \mathrm{Var}(Y_2)$.
Analysis of the distribution of $Y_1$: $Y_1$ is the number of kills until the first "success," where "success" means dropping equipment 1 or equipment 2, with success probability $0.2 + 0.1 = 0.3$. $Y_1$ follows a geometric distribution with parameter $p = 0.3$ (defined as $P(Y_1 = k) = (1-p)^{k-1}p$, $k = 1, 2, \dots$), with expectation $E[Y_1] = 1/p$ and variance $\mathrm{Var}(Y_1) = (1-p)/p^2$. Substituting $p = 0.3$: $E[Y_1] = \frac{10}{3}$, $\mathrm{Var}(Y_1) = \frac{0.7}{0.09} = \frac{70}{9}$.
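These geometric-distribution values can be double-checked with exact arithmetic; the following Python sketch (an informal check, not part of the official solution) uses the `fractions` module:

```python
from fractions import Fraction

# Phase 1 success probability: equipment 1 or equipment 2 drops.
p = Fraction(2, 10) + Fraction(1, 10)  # 0.2 + 0.1 = 0.3

# Geometric distribution (number of trials up to and including the first success).
E_Y1 = 1 / p                 # E[Y1] = 1/p
Var_Y1 = (1 - p) / p ** 2    # Var(Y1) = (1-p)/p^2

print(E_Y1, Var_Y1)  # 10/3 70/9
```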
Analysis of $E[Y_2]$: Use the law of total expectation. Let $A$ denote the type of equipment obtained first ($A = 1$ for equipment 1, $A = 2$ for equipment 2). Then $P(A = 1) = \frac{0.2}{0.3} = \frac{2}{3}$, $P(A = 2) = \frac{0.1}{0.3} = \frac{1}{3}$. When $A = 1$, we still need to obtain equipment 2, with per-trial success probability $0.1$, giving geometric expectation $\frac{1}{0.1} = 10$; when $A = 2$, we need equipment 1 with per-trial success probability $0.2$, giving expectation $\frac{1}{0.2} = 5$. Thus $E[Y_2 \mid A = 1] = 10$, $E[Y_2 \mid A = 2] = 5$, and by the law of total expectation: $E[Y_2] = \frac{2}{3} \cdot 10 + \frac{1}{3} \cdot 5 = \frac{25}{3}$.
Analysis of $\mathrm{Var}(Y_2)$: Use the law of total variance $\mathrm{Var}(Y_2) = E[\mathrm{Var}(Y_2 \mid A)] + \mathrm{Var}(E[Y_2 \mid A])$. First compute the expectation of the conditional variance: $\mathrm{Var}(Y_2 \mid A = 1) = \frac{0.9}{0.1^2} = 90$, $\mathrm{Var}(Y_2 \mid A = 2) = \frac{0.8}{0.2^2} = 20$, so $E[\mathrm{Var}(Y_2 \mid A)] = \frac{2}{3} \cdot 90 + \frac{1}{3} \cdot 20 = \frac{200}{3}$. Next compute the variance of $E[Y_2 \mid A]$: $E[Y_2 \mid A]$ takes values 10 and 5 with probabilities $\frac{2}{3}$ and $\frac{1}{3}$ respectively, so $E[E[Y_2 \mid A]] = \frac{25}{3}$, and $E[(E[Y_2 \mid A])^2] = \frac{2}{3} \cdot 100 + \frac{1}{3} \cdot 25 = 75$, giving $\mathrm{Var}(E[Y_2 \mid A]) = 75 - \left(\frac{25}{3}\right)^2 = 75 - \frac{625}{9} = \frac{50}{9}$. Therefore $\mathrm{Var}(Y_2) = \frac{200}{3} + \frac{50}{9} = \frac{650}{9}$.
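Both terms of the law of total variance can likewise be verified exactly; this Python sketch (an informal check, not part of the official solution) mirrors the computation above:

```python
from fractions import Fraction

F = Fraction
# Which equipment drops first: A = 1 with prob. 2/3, A = 2 with prob. 1/3.
w = {1: F(2, 3), 2: F(1, 3)}
# Per-trial success probability in phase 2 given A (still need the other item).
q = {1: F(1, 10), 2: F(2, 10)}

cond_E = {a: 1 / q[a] for a in w}                 # E[Y2 | A]: 10 and 5
cond_Var = {a: (1 - q[a]) / q[a] ** 2 for a in w}  # Var(Y2 | A): 90 and 20

E_cond_var = sum(w[a] * cond_Var[a] for a in w)    # E[Var(Y2|A)] = 200/3
E_Y2 = sum(w[a] * cond_E[a] for a in w)            # E[Y2] = 25/3
var_cond_E = sum(w[a] * cond_E[a] ** 2 for a in w) - E_Y2 ** 2  # 50/9

Var_Y2 = E_cond_var + var_cond_E
print(E_Y2, Var_Y2)  # 25/3 650/9
```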
$E[X] = E[Y_1] + E[Y_2] = \frac{10}{3} + \frac{25}{3} = \frac{35}{3}$, $\mathrm{Var}(X) = \mathrm{Var}(Y_1) + \mathrm{Var}(Y_2) = \frac{70}{9} + \frac{650}{9} = \frac{720}{9} = 80$.
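A quick Monte Carlo simulation agrees with $E[X] = 35/3 \approx 11.67$ and $\mathrm{Var}(X) = 80$. This is an informal sanity check only; the function name and sample size below are our own choices:

```python
import random

def kills_to_collect_both(p1=0.2, p2=0.1, rng=random.random):
    """Simulate kills until both equipment 1 and 2 have dropped."""
    have = set()
    kills = 0
    while len(have) < 2:
        kills += 1
        u = rng()
        if u < p1:
            have.add(1)
        elif u < p1 + p2:
            have.add(2)
        # else: no drop on this kill
    return kills

random.seed(0)
n = 200_000
samples = [kills_to_collect_both() for _ in range(n)]
mean = sum(samples) / n
var = sum((x - mean) ** 2 for x in samples) / n
print(mean, var)  # should be close to 35/3 ≈ 11.67 and 80
```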
Final answer
Expectation: $E[X] = \frac{35}{3}$, Variance: $\mathrm{Var}(X) = 80$
Marking scheme
The following is the rubric based on the official solution.
1. Checkpoints (max 7 pts)
Score a single logical chain only: take the maximum score among the chains; do not accumulate points across chains.
Chain A: Random Variable Decomposition ($X = Y_1 + Y_2$, Official Solution)
- First phase (first obtaining any equipment)
- Correctly identify that $Y_1$ follows a geometric distribution with parameter $0.3$ and obtain $E[Y_1] = \frac{10}{3}$. [1 pt]
- Correctly compute the variance $\mathrm{Var}(Y_1) = \frac{70}{9}$. [1 pt]
- Second phase (obtaining the remaining equipment) -- expectation
- Correctly write the conditional probabilities/weights for entering the second phase: probability $\frac{2}{3}$ of first obtaining equipment 1, probability $\frac{1}{3}$ of first obtaining equipment 2. [1 pt]
- Use the law of total expectation to compute $E[Y_2] = \frac{2}{3} \cdot 10 + \frac{1}{3} \cdot 5 = \frac{25}{3}$, and hence the total expectation $E[X] = \frac{35}{3}$. [1 pt]
- Second phase -- variance (core difficulty)
- Correctly compute the conditional variances (90 and 20 respectively) or conditional second moments (190 and 45 respectively). [1 pt]
- Correctly obtain $\mathrm{Var}(Y_2) = \frac{650}{9}$.
- *Scoring criterion: Must use the law of total variance (including the $\mathrm{Var}(E[Y_2 \mid A])$ term) or compute via $E[Y_2^2] - (E[Y_2])^2$. If only the weighted average of conditional variances is computed, this point is not awarded.* [1 pt]
- Final result
- Use independence to obtain the correct answer $\mathrm{Var}(X) = \frac{70}{9} + \frac{650}{9} = 80$. [1 pt]
Chain B: Markov Chain / System of Linear Equations
- Expectation equations
- Correctly set up the system of linear equations for the expected number of steps from each state (e.g., $E_0 = 1 + 0.7E_0 + 0.2E_1 + 0.1E_2$, with $E_1 = 10$ and $E_2 = 5$ for the single-item states). [2 pts]
- Solve the system to obtain the correct total expectation $E[X] = \frac{35}{3}$. [1 pt]
- Variance / second moment equations
- Correctly set up the system of equations for the second moments ($E[T^2]$) or variances from each state. [2 pts]
- Solve for the key second moment values or intermediate variables. [1 pt]
- Final result
- Correctly compute $\mathrm{Var}(X) = E[X^2] - (E[X])^2 = 80$. [1 pt]
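For reference, the Chain B system can be solved mechanically. This Python sketch (our own illustration of the Markov-chain route; variable names are hypothetical) uses the fact that from a single-item state the remaining wait is geometric:

```python
from fractions import Fraction

F = Fraction
p1, p2, p0 = F(2, 10), F(1, 10), F(7, 10)  # eq.1 drop, eq.2 drop, no drop

# Single-item states: the remaining wait T is geometric.
E1, E2 = 1 / p2, 1 / p1                    # expected steps from "have eq.1" / "have eq.2"
M1 = (1 - p2) / p2 ** 2 + E1 ** 2          # second moments E[T^2]: 190 and 45
M2 = (1 - p1) / p1 ** 2 + E2 ** 2

# Start state: E0 = 1 + p0*E0 + p1*E1 + p2*E2
E0 = (1 + p1 * E1 + p2 * E2) / (1 - p0)

# Second moment: T0 = 1 + T', so E[T0^2] = 1 + 2*E[T'] + E[T'^2]
ET_next = p0 * E0 + p1 * E1 + p2 * E2
M0 = (1 + 2 * ET_next + p1 * M1 + p2 * M2) / (1 - p0)

print(E0, M0 - E0 ** 2)  # 35/3 80
```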
2. Zero-credit items
- Merely listing the geometric distribution formula (e.g., $P(Y = k) = (1-p)^{k-1}p$) without performing specific calculations with the probabilities from this problem.
- Incorrectly adding expectations: $E[X] = \frac{1}{0.2} + \frac{1}{0.1} = 5 + 10 = 15$ (ignoring mutual exclusivity and the probability of no drop).
- Merely copying the probability values from the problem with no derivation.
3. Deductions
- Computational error: Each obvious arithmetic error (not a logical error), deduct 1 pt.
- Law of total variance misuse (logical flaw): When computing $\mathrm{Var}(Y_2)$, if the student only computes the weighted average of conditional variances ($E[\mathrm{Var}(Y_2 \mid A)] = \frac{200}{3}$) and omits the "variance of the conditional expectation" term ($\mathrm{Var}(E[Y_2 \mid A]) = \frac{50}{9}$), yielding $\mathrm{Var}(X) = \frac{670}{9}$ or similar. This is a major logical gap; the step earns no credit (already reflected in Checkpoints; if there is score overflow, deduct 1 pt, but the minimum is 0).
- Independence justification missing: Directly adding the variances of $Y_1$ and $Y_2$ without mentioning independence (or the Markov property), but with correct calculations. Given the undergraduate level, no deduction.