Complete mathematical reference for probability theory foundations including axioms, conditional probability, Bayes' theorem, and independence
Fundamental axioms and basic properties that define probability measures
These three axioms (the Kolmogorov axioms) define probability as a mathematical concept; every other property of probability follows from them.
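In standard form, with S denoting the sample space (notation assumed): (1) P(A) ≥ 0 for every event A; (2) P(S) = 1; (3) for pairwise disjoint events A₁, A₂, …, P(A₁ ∪ A₂ ∪ …) = P(A₁) + P(A₂) + ⋯ (countable additivity).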
The probability of the impossible event (empty set) is zero. This follows from the axioms.
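In symbols: P(∅) = 0.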
The probability of the complement of event A equals one minus the probability of A.
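In symbols: P(Aᶜ) = 1 − P(A).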
If event A is contained in event B, then A cannot have higher probability than B.
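In symbols: if A ⊆ B, then P(A) ≤ P(B) (monotonicity).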
All probabilities lie between 0 and 1, inclusive.
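In symbols: 0 ≤ P(A) ≤ 1 for every event A.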
Formulas for computing probabilities of unions and differences of events
The probability of a union equals the sum of the individual probabilities minus the probability of the intersection, which corrects for double counting.
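In symbols: P(A ∪ B) = P(A) + P(B) − P(A ∩ B).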
For mutually exclusive events, the union probability is simply the sum of individual probabilities.
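In symbols: if A ∩ B = ∅, then P(A ∪ B) = P(A) + P(B).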
Inclusion-exclusion principle for three events. This generalizes to n events.
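In symbols: P(A ∪ B ∪ C) = P(A) + P(B) + P(C) − P(A ∩ B) − P(A ∩ C) − P(B ∩ C) + P(A ∩ B ∩ C).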
Probability of A but not B equals probability of A minus probability of both A and B.
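In symbols: P(A ∩ Bᶜ) = P(A) − P(A ∩ B).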
Probability of exactly one of A or B occurring (but not both).
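In symbols: P(exactly one of A, B) = P(A) + P(B) − 2·P(A ∩ B).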
Formulas involving conditional probability and related concepts
The conditional probability of B given A is the probability of their intersection divided by the probability of A.
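In symbols: P(B|A) = P(A ∩ B) / P(A), defined when P(A) > 0.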
The probability that both events occur equals the probability of one times the conditional probability of the other given the first.
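In symbols: P(A ∩ B) = P(A)·P(B|A) = P(B)·P(A|B).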
Generalization of the multiplication rule to multiple events using sequential conditioning.
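For events A₁, …, Aₙ (notation assumed): P(A₁ ∩ A₂ ∩ ⋯ ∩ Aₙ) = P(A₁)·P(A₂|A₁)·P(A₃|A₁ ∩ A₂) ⋯ P(Aₙ|A₁ ∩ ⋯ ∩ Aₙ₋₁).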
The complement rule holds for conditional probabilities just as it does for unconditional ones.
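In symbols: P(Bᶜ|A) = 1 − P(B|A).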
The addition rule applies to conditional probabilities that share the same conditioning event.
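In symbols: P(B ∪ C|A) = P(B|A) + P(C|A) − P(B ∩ C|A).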
Formulas for computing probabilities using partitions of the sample space
Any probability can be computed by conditioning on a partition of the sample space.
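In symbols, for a partition B₁, …, Bₙ of the sample space: P(A) = Σᵢ P(A|Bᵢ)·P(Bᵢ).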
Simplest form using event B and its complement as the partition.
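In symbols: P(A) = P(A|B)·P(B) + P(A|Bᶜ)·P(Bᶜ).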
When conditioning on a continuous random variable B with density f_B.
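In symbols: P(A) = ∫ P(A|B = b)·f_B(b) db, integrating over the range of B.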
The law of total probability can also be applied within a conditional framework, with every term conditioned on an additional event.
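In symbols, conditioning everything on an event C: P(A|C) = Σᵢ P(A|Bᵢ ∩ C)·P(Bᵢ|C).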
Formulas for updating probabilities based on new evidence
Updates the prior probability P(B) to the posterior probability P(B|A) using the likelihood P(A|B).
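In symbols: P(B|A) = P(A|B)·P(B) / P(A), with P(A) > 0.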
Full form when events {B₁, B₂, ..., Bₙ} form a partition of the sample space.
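In symbols: P(Bⱼ|A) = P(A|Bⱼ)·P(Bⱼ) / Σᵢ P(A|Bᵢ)·P(Bᵢ).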
Bayes' theorem expressed in odds form: posterior odds = likelihood ratio × prior odds.
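In symbols: P(B₁|A) / P(B₂|A) = [P(A|B₁) / P(A|B₂)] · [P(B₁) / P(B₂)].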
Updating probability sequentially as new evidence becomes available.
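One way to write this, with A₁ and A₂ labeling successive pieces of evidence (labels assumed): P(B|A₁ ∩ A₂) = P(A₂|B ∩ A₁)·P(B|A₁) / P(A₂|A₁), so the posterior after A₁ plays the role of the prior when A₂ arrives.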
Ratio of likelihoods comparing two hypotheses B₁ and B₂ given evidence A.
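In symbols (writing LR for the likelihood ratio): LR = P(A|B₁) / P(A|B₂).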
Formulas defining and working with independent events
Independence means the occurrence of one event doesn't affect the probability of the other.
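In symbols: A and B are independent when P(A ∩ B) = P(A)·P(B).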
Alternative characterization: A and B are independent if conditioning on A doesn't change P(B).
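In symbols: P(B|A) = P(B), whenever P(A) > 0.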
Independence is preserved under complementation.
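Concretely: if A and B are independent, then so are A and Bᶜ, Aᶜ and B, and Aᶜ and Bᶜ.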
Mutual independence requires the product rule to hold for every subcollection of the events, not just for pairs.
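For three events, the smallest case where the distinction matters: A, B, C are mutually independent when P(A ∩ B) = P(A)P(B), P(A ∩ C) = P(A)P(C), P(B ∩ C) = P(B)P(C), and P(A ∩ B ∩ C) = P(A)P(B)P(C).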
Pairwise independence is weaker than mutual independence: it only requires the probability of each two-event intersection to factor.
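In symbols: P(Aᵢ ∩ Aⱼ) = P(Aᵢ)·P(Aⱼ) for every pair i ≠ j.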
Formula for union probability when events are independent.
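In symbols, for independent A and B: P(A ∪ B) = P(A) + P(B) − P(A)·P(B) = 1 − (1 − P(A))·(1 − P(B)).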
See how probability theory prerequisites are applied in various fields
Example: Accounting for disease prevalence, test sensitivity, and specificity
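A sketch with assumed numbers (1% prevalence, 95% sensitivity, 90% specificity): P(disease | positive) = (0.95 × 0.01) / (0.95 × 0.01 + 0.10 × 0.99) = 0.0095 / 0.1085 ≈ 0.088, so even a positive test leaves the posterior probability below 9%.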
Example: Using law of total probability across production facilities
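A sketch with assumed numbers: if three facilities produce 50%, 30%, and 20% of output with defect rates 1%, 2%, and 3%, then P(defective) = 0.50×0.01 + 0.30×0.02 + 0.20×0.03 = 0.005 + 0.006 + 0.006 = 0.017.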
Example: When transmission paths fail independently
Example: When component failures are independent events
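A sketch with assumed numbers (the same reasoning applies to redundant transmission paths): if two components each fail independently with probability 0.05, both fail with probability 0.05² = 0.0025, so a system that needs only one of them survives with probability 1 − 0.0025 = 0.9975.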
Essential formulas you'll use most frequently in probability theory
These probability theory prerequisites are essential for understanding stochastic processes. Practice applying them to build strong foundations.
Apply these formulas with interactive exercises and step-by-step solutions.
Practice calculations with interactive probability calculators and tools.