Complete collection of statistical formulas for descriptive statistics, probability, hypothesis testing, and statistical inference
Essential statistical formulas for quick lookup and reference
Mean (Average)
x̄ = Σx / n
Sample Variance
s² = Σ(x - x̄)² / (n-1)
Standard Deviation
s = √(s²)
Population Variance
σ² = Σ(x - μ)² / N
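The four descriptive formulas above map directly onto Python's standard-library `statistics` module; the data values here are made up for illustration. Note that `variance`/`stdev` use the sample (n−1) denominator, while `pvariance` uses the population (N) denominator.

```python
import statistics

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]  # assumed sample

mean = statistics.mean(data)         # x̄ = Σx / n
s2 = statistics.variance(data)       # s² with (n-1) denominator
s = statistics.stdev(data)           # s = √(s²)
sigma2 = statistics.pvariance(data)  # σ² with N denominator

print(mean, s2, round(s, 4), sigma2)  # → 5.0 4.571... 2.1381 4.0
```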
Basic Probability
P(A) = favorable outcomes / total outcomes
Complement Rule
P(A') = 1 - P(A)
Addition Rule
P(A ∪ B) = P(A) + P(B) - P(A ∩ B)
Multiplication Rule
P(A ∩ B) = P(A) × P(B|A)
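A minimal sketch of the four probability rules above, using an assumed fair-die example (A = "even", B = "greater than 3") and exact `Fraction` arithmetic:

```python
from fractions import Fraction

outcomes = set(range(1, 7))                # fair six-sided die (assumed example)
A = {x for x in outcomes if x % 2 == 0}    # {2, 4, 6}
B = {x for x in outcomes if x > 3}         # {4, 5, 6}

def P(E):
    # Basic probability: favorable outcomes / total outcomes
    return Fraction(len(E), len(outcomes))

# Complement rule: P(A') = 1 - P(A)
assert P(outcomes - A) == 1 - P(A)
# Addition rule: P(A ∪ B) = P(A) + P(B) - P(A ∩ B)
assert P(A | B) == P(A) + P(B) - P(A & B)
# Multiplication rule: P(A ∩ B) = P(A) × P(B|A), with P(B|A) = |A∩B| / |A|
assert P(A & B) == P(A) * Fraction(len(A & B), len(A))

print(P(A | B))  # → 2/3
```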
Z-Score
z = (x - μ) / σ
Standard Normal PDF
φ(z) = (1/√(2π)) × e^(-z²/2)
Normal Distribution PDF
f(x) = (1/(σ√(2π))) × e^(-(x-μ)²/(2σ²))
Central Limit Theorem
x̄ ~ N(μ, σ²/n)
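The z-score and standard normal PDF above can be checked against the standard library's `statistics.NormalDist`; the IQ-style parameters (μ = 100, σ = 15) are assumed for illustration.

```python
import math
from statistics import NormalDist

mu, sigma = 100.0, 15.0   # assumed population parameters
x = 130.0
z = (x - mu) / sigma      # z-score: z = (x - μ) / σ

# Standard normal PDF: φ(z) = (1/√(2π)) × e^(-z²/2)
phi = math.exp(-z * z / 2) / math.sqrt(2 * math.pi)
assert abs(phi - NormalDist().pdf(z)) < 1e-12  # matches the library PDF

print(z, round(phi, 6))  # → 2.0 0.053991
```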
Mean (σ known)
x̄ ± z(α/2) × (σ/√n)
Mean (σ unknown)
x̄ ± t(α/2) × (s/√n)
Proportion
p̂ ± z(α/2) × √(p̂(1-p̂)/n)
Margin of Error
E = critical value × standard error
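A sketch of the σ-known interval above, with assumed summary numbers (x̄ = 52.3, σ = 4, n = 25). `NormalDist().inv_cdf` supplies the z critical value; the σ-unknown case would need a t quantile, which the standard library does not provide.

```python
import math
from statistics import NormalDist

xbar, sigma, n = 52.3, 4.0, 25  # assumed sample summary
alpha = 0.05

z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # z(α/2) ≈ 1.96
E = z_crit * sigma / math.sqrt(n)             # margin of error = critical value × SE
ci = (xbar - E, xbar + E)

print(round(ci[0], 3), round(ci[1], 3))  # 95% CI for μ
```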
Essential mathematical formulas for probability theory foundations and classical probability models
Complete mathematical reference for probability distribution families and their properties
Binomial: P(X=k) = C(n,k) p^k (1-p)^(n-k)
Poisson: P(X=k) = (λ^k e^(-λ)) / k!
Normal: f(x) = (1/(σ√(2π))) exp(-(x-μ)²/(2σ²))
Gamma: f(x) = (λ^α/Γ(α)) x^(α-1) e^(-λx)
Chi-square: f(x) = (1/(2^(n/2)Γ(n/2))) x^(n/2-1) e^(-x/2)
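The binomial, Poisson, and normal densities above translate line-for-line into Python with only the `math` module (the evaluation points are arbitrary):

```python
import math

def binom_pmf(k, n, p):
    # P(X=k) = C(n,k) p^k (1-p)^(n-k)
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    # P(X=k) = (λ^k e^(-λ)) / k!
    return lam**k * math.exp(-lam) / math.factorial(k)

def normal_pdf(x, mu, sigma):
    # f(x) = (1/(σ√(2π))) exp(-(x-μ)²/(2σ²))
    return math.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

print(binom_pmf(3, 10, 0.5))     # → 0.1171875  (= 120/1024)
print(round(poisson_pmf(2, 3.0), 6))
print(round(normal_pdf(0.0, 0.0, 1.0), 6))  # → 0.398942 (= 1/√(2π))
```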
Fundamental formulas for statistical inference and mathematical statistics theory
Confidence Interval: estimate ± margin of error
Test Statistic: (estimate - parameter) / standard error
P-value: P(observing result | H₀ is true)
Type I Error: α = P(reject H₀ | H₀ true)
Type II Error: β = P(fail to reject H₀ | H₁ true)
Comprehensive formulas for estimation methods, efficiency theory, and statistical inference
Fisher Information: I(θ) = E[(∂log p(X;θ)/∂θ)²]
Cramér-Rao Bound: Var[ĝ] ≥ [g'(θ)]²/(nI(θ))
MLE: θ̂ = arg max L(θ;x₁,...,xₙ)
MSE Decomposition: MSE[θ̂] = Var[θ̂] + Bias²[θ̂]
Method of Moments: θ̂ⱼ = hⱼ(a_{n,1},...,a_{n,k})
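The MSE decomposition above can be verified by simulation. This sketch (all numbers assumed) uses the deliberately biased estimator θ̂ = 0.9x̄ for the mean of N(μ, 1); the empirical identity MSE = Var + Bias² then holds exactly for the sample moments.

```python
import random
import statistics

random.seed(0)
mu, n, reps = 2.0, 20, 20000   # assumed true mean, sample size, replications

# Biased estimator θ̂ = 0.9 x̄ (shrinks toward 0), chosen so Bias ≠ 0
ests = []
for _ in range(reps):
    xs = [random.gauss(mu, 1.0) for _ in range(n)]
    ests.append(0.9 * statistics.fmean(xs))

mean_est = statistics.fmean(ests)
var_est = statistics.pvariance(ests)            # Var[θ̂]
bias2 = (mean_est - mu) ** 2                    # Bias²[θ̂]
mse = statistics.fmean([(e - mu) ** 2 for e in ests])

# MSE[θ̂] = Var[θ̂] + Bias²[θ̂] holds as an algebraic identity for the
# empirical moments, up to floating-point rounding.
print(round(mse, 4), round(var_est + bias2, 4))
```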
Essential formulas for sufficient statistics, complete statistics, and optimal estimation theory
Factorization Theorem: p(x̃;θ) = g(T(x̃);θ) × h(x̃)
Complete Statistic: E_θ[φ(T)] = 0 ∀θ ⟹ P_θ(φ(T) = 0) = 1 ∀θ
Rao-Blackwell: Var(E[φ|T]) ≤ Var(φ)
Lehmann-Scheffé: E[φ|S] is the unique UMVUE if S is complete and sufficient
Basu's Theorem: T ⊥ V if T is complete and sufficient and V is ancillary
Essential formulas for confidence interval construction and interval estimation theory
Coverage Probability: P_θ{θ ∈ [θ̂_L, θ̂_U]}
Normal Mean (σ known): x̄ ± u_{α/2} × σ/√n
Normal Mean (σ unknown): x̄ ± t_{α/2}(n-1) × s/√n
Normal Variance: [(n-1)s²/χ²_{α/2}(n-1), (n-1)s²/χ²_{1-α/2}(n-1)]
Pivotal Quantity: G(X̃,θ) with known distribution
Complete formulas for hypothesis testing, test statistics, decision rules, and GLRT methods
Type I Error: α(θ) = P_θ(X̃ ∈ D), θ ∈ Θ₀
Type II Error: β(θ) = P_θ(X̃ ∈ D̄), θ ∈ Θ₁
Power Function: g(θ) = P_θ(X̃ ∈ D) = 1 - β(θ)
U-Test: U = (X̄ - μ₀)/(σ/√n) ~ N(0,1)
T-Test: T = (X̄ - μ₀)/(S/√n) ~ t(n-1)
Chi-square Test: χ² = (n-1)S²/σ₀² ~ χ²(n-1)
GLRT: λ(x̃) = sup_Θ L(θ)/sup_Θ₀ L(θ)
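A sketch of the U-test above with assumed numbers (H₀: μ = 50, σ = 10 known, n = 36, x̄ = 53.2), computing the two-sided p-value from the standard normal CDF:

```python
import math
from statistics import NormalDist

mu0, sigma, n, xbar = 50.0, 10.0, 36, 53.2  # assumed hypothesis and data summary

U = (xbar - mu0) / (sigma / math.sqrt(n))     # U ~ N(0,1) under H₀
p_value = 2 * (1 - NormalDist().cdf(abs(U)))  # two-sided p-value

print(round(U, 3), round(p_value, 4))  # reject at α = 0.10, not at α = 0.05
```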
Complete formulas for distribution-free statistical tests including sign tests, rank-based methods, and goodness-of-fit procedures
Sign Test: N⁺ = Σ I(Xᵢ > t₀) ~ B(n, 1/2)
Wilcoxon Rank Sum: W = Σ Rᵢ, E[W] = n(m+n+1)/2
Chi-square Test: χ² = Σ (Oᵢ - Eᵢ)²/Eᵢ
K-S Test: Dₙ = sup_x |Fₙ(x) - F₀(x)|
Independence: E_ij = (n_i· × n_·j) / n
Run Test: E[R] = 2n₁n₂/(n₁ + n₂) + 1
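The sign test is simple enough to run exactly with the binomial tail formula; the data and hypothesized median t₀ below are assumed for illustration.

```python
import math

# Sign test: under H₀ (median = t₀), N⁺ ~ B(n, 1/2)
t0 = 10.0  # hypothesized median (assumed)
data = [12.1, 9.5, 13.0, 11.2, 10.8, 14.3, 9.9, 12.7, 11.5, 13.8]

n = len(data)
n_plus = sum(x > t0 for x in data)  # number of observations above t₀

# One-sided p-value: P(N⁺ ≥ n_plus) under B(n, 1/2)
p_value = sum(math.comb(n, k) for k in range(n_plus, n + 1)) / 2**n

print(n_plus, round(p_value, 4))  # → 8 0.0547
```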
Complete formulas for Bayesian statistical inference including Bayes' theorem, conjugate priors, and posterior analysis
Bayes' Theorem: π(θ|x̃) = p(x̃|θ)π(θ) / p(x̃)
Beta-Binomial: Beta(a,b) + Binomial(n,x) → Beta(a+x, b+n-x)
Gamma-Poisson: Gamma(α,β) + Poisson(Σx) → Gamma(α+Σx, β+m)
Normal-Normal: τₙ⁻² = τ⁻² + nσ⁻², μₙ = (τ⁻²μ₀ + nσ⁻²x̄)τₙ²
Credible Interval: P(θ ∈ [θ_L, θ_U] | data) = 1-α
Posterior Predictive: p(z|data) = ∫ p(z|θ)π(θ|data)dθ
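The Beta-Binomial update above is pure arithmetic; this sketch (prior and data assumed) also shows how the posterior mean sits between the prior mean and the MLE x/n:

```python
# Beta-Binomial conjugate update: Beta(a, b) + Binomial(n, x) → Beta(a+x, b+n-x)
a, b = 2.0, 2.0   # assumed Beta prior
n, x = 20, 14     # assumed data: 14 successes in 20 trials

a_post, b_post = a + x, b + n - x      # posterior parameters

prior_mean = a / (a + b)               # 0.5
mle = x / n                            # 0.7
posterior_mean = a_post / (a_post + b_post)  # shrinks MLE toward the prior

print(prior_mean, mle, round(posterior_mean, 4))  # → 0.5 0.7 0.6667
```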
Binomial: P(X=k) = C(n,k) × p^k × (1-p)^(n-k)
Poisson: P(X=k) = (λ^k × e^(-λ)) / k!
Exponential: f(x) = λe^(-λx), x ≥ 0
Chi-square: χ² = Σ((O-E)²/E)
Linear Regression: ŷ = a + bx
Slope: b = Σ((x-x̄)(y-ȳ)) / Σ(x-x̄)²
Correlation: r = Σ((x-x̄)(y-ȳ)) / √(Σ(x-x̄)²Σ(y-ȳ)²)
R-squared: R² = SSR / SST
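A sketch of the least-squares formulas above on assumed data; for simple linear regression R² equals r², so one pass over the deviations gives everything:

```python
import math

# Assumed data for illustration
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.0, 9.8]

xbar = sum(xs) / len(xs)
ybar = sum(ys) / len(ys)
sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
sxx = sum((x - xbar) ** 2 for x in xs)
syy = sum((y - ybar) ** 2 for y in ys)

b = sxy / sxx                   # slope: Σ((x-x̄)(y-ȳ)) / Σ(x-x̄)²
a = ybar - b * xbar             # intercept, so ŷ = a + bx passes through (x̄, ȳ)
r = sxy / math.sqrt(sxx * syy)  # correlation

print(round(b, 2), round(a, 2), round(r, 4), round(r * r, 4))  # R² = r² here
```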
Master statistical analysis with these formula application strategies and best practices
Identify the type of data (categorical, numerical), sample size, and distribution before selecting formulas.
Verify that your data meets the assumptions required for each statistical test or formula.
Always interpret statistical results in the context of your research question and practical significance.