Understanding the mathematical foundation and classification of random variables
A random variable is a function that maps random outcomes to numerical values, making them amenable to mathematical analysis (e.g., a coin flip coded as X = 1 for heads, X = 0 for tails)
The cumulative distribution function (CDF), F(x) = P(X ≤ x), is the fundamental tool for describing probability distributions
CDF is non-decreasing: x₁ ≤ x₂ implies F(x₁) ≤ F(x₂)
Continuous from the right: F(x) = lim F(t) as t → x⁺
Bounds at infinity: F(x) → 0 as x → −∞ and F(x) → 1 as x → +∞
Discrete case: F(x) = Σ_{t ≤ x} P(X = t), the sum of probabilities up to x
Continuous case: F(x) = ∫ p(t)dt over (−∞, x], and p(x) = F′(x) wherever the derivative exists
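A minimal numeric check of the PDF–CDF relationship, sketched with scipy.stats (the Exp(λ = 2) example and evaluation point are illustrative):

```python
# Verify that integrating the PDF recovers the CDF, using Exp(λ = 2).
from scipy import stats
from scipy.integrate import quad

lam = 2.0
dist = stats.expon(scale=1 / lam)  # scipy parameterizes the exponential by scale = 1/λ

x = 1.5
integral, _ = quad(dist.pdf, 0, x)  # ∫ p(t)dt over [0, x]
print(integral, dist.cdf(x))        # both ≈ 1 − e^(−3) ≈ 0.9502
```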
Essential discrete probability distributions and their applications
p ∈ (0,1) (success probability)
Single trial success/failure experiment
n ∈ ℕ (trials), p ∈ (0,1) (success probability)
Number of successes in n independent Bernoulli trials
λ > 0 (rate parameter)
Counts of rare events (defects, arrivals, accidents)
p ∈ (0,1) (success probability)
Number of trials until first success
n (sample size), M (success states), N (population size)
Sampling without replacement (defective items, card draws)
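The five PMFs above can be evaluated directly; a sketch using scipy.stats (parameter values are illustrative, and note that scipy's hypergeom argument names differ from the notation used here):

```python
# Evaluate each discrete PMF at a sample point; parameter values are illustrative.
from scipy import stats

print(stats.bernoulli.pmf(1, p=0.3))            # P(X=1) = p = 0.3
print(stats.binom.pmf(2, n=5, p=0.4))           # C(5,2)·0.4²·0.6³ = 0.3456
print(stats.poisson.pmf(2, mu=1.5))             # 1.5²·e^(−1.5)/2! ≈ 0.2510
print(stats.geom.pmf(3, p=0.3))                 # (1−p)²·p = 0.147
# scipy's hypergeom uses M = population size, n = success states, N = sample size
print(stats.hypergeom.pmf(1, M=20, n=5, N=4))   # C(5,1)·C(15,3)/C(20,4) ≈ 0.4696
```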
Fundamental continuous probability distributions and their properties
Constant density over an interval [a, b] (random timing, rounding errors)
Natural phenomena (heights, measurement errors, test scores)
Lifetime modeling, waiting times between events
Sum of independent exponential variables (integer shape parameter), reliability modeling
Sum of squares of independent standard normal variables
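The last characterization is easy to check numerically; a simulation sketch (k and the sample count are illustrative) comparing the empirical moments with the χ²(k) values of mean k and variance 2k:

```python
# Sums of squares of k independent standard normals should match χ²(k),
# which has mean k and variance 2k.
import numpy as np

rng = np.random.default_rng(0)
k, n_samples = 5, 100_000
samples = (rng.standard_normal((n_samples, k)) ** 2).sum(axis=1)

print(samples.mean(), samples.var())  # ≈ 5 and ≈ 10 for χ²(5)
```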
Understanding joint distributions and independence of multiple random variables
Methods for determining the distribution of functions of random variables
Essential distributions for statistical inference
Step-by-step solutions to typical random variable problems
In 10 independent trials with success probability 0.3, find P(X = 4)
Binomial distribution applies to fixed number of independent trials with constant success probability
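The computation itself: P(X = 4) = C(10,4)·(0.3)⁴·(0.7)⁶ ≈ 0.2001. A sketch checking it both by hand and with scipy:

```python
# P(X = 4) for X ~ Binomial(n = 10, p = 0.3)
from math import comb
from scipy import stats

p_manual = comb(10, 4) * 0.3**4 * 0.7**6  # C(10,4)·0.3⁴·0.7⁶
print(p_manual)                           # ≈ 0.2001
print(stats.binom.pmf(4, n=10, p=0.3))    # same value via scipy
```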
If X ~ N(100, 16), find P(96 < X < 108)
Standardization allows us to use the standard normal table for any normal distribution
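Reading N(100, 16) as mean 100 and variance 16 (so σ = 4, the usual convention), standardizing gives P(96 < X < 108) = P(−1 < Z < 2) = Φ(2) − Φ(−1) ≈ 0.8186. A sketch of the same calculation:

```python
# P(96 < X < 108) for X ~ N(100, 16), reading 16 as the variance (σ = 4)
from scipy import stats

z_low, z_high = (96 - 100) / 4, (108 - 100) / 4        # Z = (X − μ)/σ
print(stats.norm.cdf(z_high) - stats.norm.cdf(z_low))  # Φ(2) − Φ(−1) ≈ 0.8186
```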
For X ~ Exp(λ), prove that P(X > s+t|X > s) = P(X > t)
Memoryless property means past waiting time doesn't affect future waiting time
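The proof takes one line once the survival function P(X > t) = e^(−λt) is in hand: P(X > s+t | X > s) = P(X > s+t, X > s)/P(X > s) = P(X > s+t)/P(X > s) = e^(−λ(s+t))/e^(−λs) = e^(−λt) = P(X > t), where the second equality uses the fact that {X > s+t} ⊆ {X > s}.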
PMF (Probability Mass Function) is for discrete random variables and gives exact probabilities: P(X = x). PDF (Probability Density Function) is for continuous random variables and gives probability density, where probabilities are found by integration over intervals: P(a < X < b) = ∫ₐᵇ p(x)dx. For continuous variables, P(X = x) = 0.
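A small sketch of the contrast (values illustrative): the PMF returns an actual probability, while the PDF returns a density that only yields probabilities after integration:

```python
# PMF vs PDF: a probability versus a density that must be integrated
from scipy import stats

print(stats.binom.pmf(3, n=6, p=0.5))          # P(X = 3) = 0.3125, a probability
print(stats.norm.pdf(0))                       # ≈ 0.3989, a density, NOT a probability
print(stats.norm.cdf(1) - stats.norm.cdf(-1))  # P(−1 < X < 1) ≈ 0.6827 by integration
```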
Match the problem context to distribution characteristics: Fixed trials with success/failure → Binomial; Time until first success → Geometric; Rare events over time/space → Poisson; Waiting time → Exponential; Measurement errors, natural phenomena → Normal. Look for key words and understand what each parameter represents.
Random variables X and Y are independent if knowing the value of one provides no information about the other. Mathematically: P(X=x, Y=y) = P(X=x)×P(Y=y) (discrete) or p(x,y) = pₓ(x)×pᵧ(y) (continuous). Independence implies uncorrelatedness (Cov(X,Y) = 0), but the converse is not generally true.
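The standard counterexample for that last point, sketched numerically: take X ~ N(0,1) and Y = X². Then Cov(X, Y) = E[X³] = 0, yet Y is a deterministic function of X:

```python
# Uncorrelated but dependent: X ~ N(0,1), Y = X²
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(1_000_000)
y = x ** 2

print(np.cov(x, y)[0, 1])  # ≈ 0, so uncorrelated; but knowing x fixes y exactly
```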
Sampling distributions describe the behavior of statistics computed from random samples. χ² distribution is used for variance testing and goodness-of-fit; t-distribution for small sample means and confidence intervals; F-distribution for comparing variances and ANOVA. They form the foundation of statistical inference.
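One concrete instance (a sketch, assuming i.i.d. normal samples): for a sample of size n from N(μ, σ²), the scaled sample variance (n−1)S²/σ² follows χ²(n−1), which the simulation below reflects in its moments:

```python
# (n−1)S²/σ² for normal samples should match χ²(n−1): mean n−1, variance 2(n−1)
import numpy as np

rng = np.random.default_rng(2)
n, sigma = 10, 3.0
samples = rng.normal(loc=0.0, scale=sigma, size=(100_000, n))
scaled_var = (n - 1) * samples.var(axis=1, ddof=1) / sigma**2

print(scaled_var.mean(), scaled_var.var())  # ≈ 9 and ≈ 18 for χ²(9)
```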
For discrete: P(Y=y) = Σ_{x:g(x)=y} P(X=x). For continuous monotonic transformations: find inverse x=h(y), then pᵧ(y) = pₓ(h(y))|h'(y)|. For non-monotonic or multivariate transformations, use Jacobian methods or CDF approach. Always check the domain of the transformed variable.
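As one worked instance of the monotonic formula (a sketch; the choice Y = e^X with X ~ N(0,1) is illustrative): the inverse is x = h(y) = ln y with |h′(y)| = 1/y, so pᵧ(y) = pₓ(ln y)/y for y > 0, which is exactly the standard lognormal density:

```python
# Change of variables for Y = e^X, X ~ N(0,1): p_Y(y) = p_X(ln y)/y, y > 0
import numpy as np
from scipy import stats

y = 2.0
p_formula = stats.norm.pdf(np.log(y)) / y
print(p_formula, stats.lognorm.pdf(y, s=1))  # both ≈ 0.1569
```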