MathIsimple

Hypothesis Testing Formula Reference

Complete collection of formulas for statistical hypothesis testing: test statistics, critical values, decision rules, and practical applications in statistical inference.


Quick Formula Reference

Essential hypothesis testing formulas for quick lookup

Fundamental Concepts
Core definitions and error probabilities in hypothesis testing

Type I Error (α)

\alpha(\theta) = P_{\theta}(\tilde{X} \in D \mid \theta \in \Theta_0)

Probability of rejecting H₀ when it is true (false positive)

Type II Error (β)

\beta(\theta) = P_{\theta}(\tilde{X} \in \overline{D} \mid \theta \in \Theta_1)

Probability of accepting H₀ when H₁ is true (false negative)

Power Function

g(\theta) = P_{\theta}(\tilde{X} \in D) = 1 - \beta(\theta)

Probability of correctly rejecting H₀ when it is false

Significance Level

\sup_{\theta \in \Theta_0} \alpha(\theta) \leq \alpha

Maximum Type I error probability under Neyman-Pearson principle

Test Statistics
Key test statistics for different scenarios

U-Test Statistic (σ² known)

U = \frac{\bar{X} - \mu_0}{\sigma/\sqrt{n}} \sim N(0,1)

Standard normal test for population mean with known variance

T-Test Statistic (σ² unknown)

T = \frac{\bar{X} - \mu_0}{S/\sqrt{n}} \sim t(n-1)

t-distribution test for population mean with unknown variance

Chi-Square Test Statistic

\chi^2 = \frac{(n-1)S^2}{\sigma_0^2} \sim \chi^2(n-1)

Chi-square test for population variance

Two-Sample T-Test

T = \frac{\bar{X} - \bar{Y}}{S_w\sqrt{1/m + 1/n}} \sim t(m+n-2)

Compare means of two independent normal populations

Decision Rules
Critical values and rejection regions for common tests

Two-Sided U-Test

\text{Reject } H_0 \text{ if } |U| > u_{\alpha/2}

H₀: μ = μ₀ vs H₁: μ ≠ μ₀

Right-Sided U-Test

\text{Reject } H_0 \text{ if } U > u_\alpha

H₀: μ ≤ μ₀ vs H₁: μ > μ₀

Two-Sided T-Test

|T| > t_{\alpha/2}(n-1)

Critical region for t-test with unknown variance

P-value Decision Rule

\text{Reject } H_0 \text{ if P-value} < \alpha

General decision rule using probability values

Single Normal Population Tests

Complete formulas for testing normal population parameters

U-Test for Population Mean (σ² Known)

Hypotheses

H_0: \mu = \mu_0 \text{ vs } H_1: \mu \neq \mu_0 \text{ (two-sided)}
H_0: \mu \leq \mu_0 \text{ vs } H_1: \mu > \mu_0 \text{ (right-sided)}
H_0: \mu \geq \mu_0 \text{ vs } H_1: \mu < \mu_0 \text{ (left-sided)}

Assumptions

X_1, X_2, \ldots, X_n \sim N(\mu, \sigma^2)
\sigma^2 \text{ is known}
Random sample

Test Statistic

U = \frac{\bar{X} - \mu_0}{\sigma/\sqrt{n}} \sim N(0,1) \text{ under } H_0

Rejection Regions

\text{Two-sided: } |U| > u_{\alpha/2}
\text{Right-sided: } U > u_\alpha
\text{Left-sided: } U < -u_\alpha

P-value Formulas

\text{Two-sided: } P = 2[1 - \Phi(|u|)]
\text{Right-sided: } P = 1 - \Phi(u)
\text{Left-sided: } P = \Phi(u)

where Φ(·) is the standard normal CDF
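As a sketch of how the statistic, rejection region, and p-value fit together, here is a minimal Python helper (the function name and the sample numbers are illustrative, not from the source; assumes SciPy is installed):

```python
from math import sqrt
from scipy.stats import norm

def u_test(xbar, mu0, sigma, n, alternative="two-sided"):
    """One-sample U-test (z-test) for a mean with known variance.

    Returns the U statistic and its p-value."""
    u = (xbar - mu0) / (sigma / sqrt(n))
    if alternative == "two-sided":
        p = 2 * (1 - norm.cdf(abs(u)))     # P = 2[1 - Phi(|u|)]
    elif alternative == "greater":         # right-sided
        p = 1 - norm.cdf(u)
    else:                                  # "less", left-sided
        p = norm.cdf(u)
    return u, p

# Hypothetical example: xbar = 52, mu0 = 50, sigma = 5, n = 25 gives U = 2.0
u, p = u_test(52, 50, 5, 25)
```

Note that the rejection rule |U| > u_{α/2} and the rule P-value < α always give the same decision, since both describe the same tail event.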

T-Test for Population Mean (σ² Unknown)

Hypotheses

H_0: \mu = \mu_0 \text{ vs } H_1: \mu \neq \mu_0
H_0: \mu \leq \mu_0 \text{ vs } H_1: \mu > \mu_0
H_0: \mu \geq \mu_0 \text{ vs } H_1: \mu < \mu_0

Assumptions

X_1, X_2, \ldots, X_n \sim N(\mu, \sigma^2)
\sigma^2 \text{ is unknown}
Random sample

Test Statistic

T = \frac{\bar{X} - \mu_0}{S/\sqrt{n}} \sim t(n-1) \text{ under } H_0
Where:
S^2 = \frac{1}{n-1}\sum_{i=1}^n (X_i - \bar{X})^2

Rejection Regions

\text{Two-sided: } |T| > t_{\alpha/2}(n-1)
\text{Right-sided: } T > t_\alpha(n-1)
\text{Left-sided: } T < -t_\alpha(n-1)
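The same test is available directly in SciPy; a short sketch with made-up data, checking the library result against the formula above:

```python
import numpy as np
from scipy import stats

# Hypothetical sample; test H0: mu = 5.0 (two-sided)
x = np.array([5.1, 4.9, 5.4, 5.0, 5.3, 4.8, 5.2, 5.1])
t_stat, p_val = stats.ttest_1samp(x, popmean=5.0)

# Manual computation matches: T = (xbar - mu0) / (S / sqrt(n))
n = len(x)
manual_t = (x.mean() - 5.0) / (x.std(ddof=1) / np.sqrt(n))

# Two-sided decision at alpha = 0.05 using t_{alpha/2}(n-1)
crit = stats.t.ppf(1 - 0.05 / 2, df=n - 1)
reject = abs(t_stat) > crit
```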

Chi-Square Test for Population Variance

Hypotheses

H_0: \sigma^2 = \sigma_0^2 \text{ vs } H_1: \sigma^2 \neq \sigma_0^2
H_0: \sigma^2 \leq \sigma_0^2 \text{ vs } H_1: \sigma^2 > \sigma_0^2
H_0: \sigma^2 \geq \sigma_0^2 \text{ vs } H_1: \sigma^2 < \sigma_0^2

Assumptions

X_1, X_2, \ldots, X_n \sim N(\mu, \sigma^2)
\mu \text{ is unknown}
Random sample

Test Statistic

\chi^2 = \frac{(n-1)S^2}{\sigma_0^2} \sim \chi^2(n-1) \text{ under } H_0

Rejection Regions

\text{Two-sided: } \chi^2 < \chi^2_{1-\alpha/2}(n-1) \text{ or } \chi^2 > \chi^2_{\alpha/2}(n-1)
\text{Right-sided: } \chi^2 > \chi^2_\alpha(n-1)
\text{Left-sided: } \chi^2 < \chi^2_{1-\alpha}(n-1)
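A minimal sketch of the two-sided variance test (helper name and the numbers are illustrative; note that SciPy's `ppf(q)` is the lower q-quantile, so it maps onto this document's upper-tail notation as commented):

```python
from scipy.stats import chi2

def chi2_var_test(s2, sigma0_sq, n, alpha=0.05):
    """Two-sided chi-square test of H0: sigma^2 = sigma0^2."""
    stat = (n - 1) * s2 / sigma0_sq
    lo = chi2.ppf(alpha / 2, df=n - 1)       # chi^2_{1-alpha/2}(n-1) in the notation above
    hi = chi2.ppf(1 - alpha / 2, df=n - 1)   # chi^2_{alpha/2}(n-1) in the notation above
    return stat, (stat < lo or stat > hi)

# Hypothetical example: s^2 = 2.5, sigma0^2 = 1.0, n = 20 gives stat = 47.5
stat, reject = chi2_var_test(2.5, 1.0, 20)
```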

Two Sample Comparison Tests

Formulas for comparing parameters between two populations

Two-Sample U-Test (σ₁², σ₂² Known)

Hypotheses

H_0: \mu_X = \mu_Y \text{ vs } H_1: \mu_X \neq \mu_Y
H_0: \mu_X \leq \mu_Y \text{ vs } H_1: \mu_X > \mu_Y
H_0: \mu_X \geq \mu_Y \text{ vs } H_1: \mu_X < \mu_Y

Assumptions

X_1, \ldots, X_m \sim N(\mu_X, \sigma_X^2), \quad Y_1, \ldots, Y_n \sim N(\mu_Y, \sigma_Y^2)
\sigma_X^2, \sigma_Y^2 \text{ are known}
Independent samples

Test Statistic

U = \frac{\bar{X} - \bar{Y}}{\sqrt{\sigma_X^2/m + \sigma_Y^2/n}} \sim N(0,1) \text{ under } H_0

Rejection Regions

\text{Two-sided: } |U| > u_{\alpha/2}
\text{Right-sided: } U > u_\alpha
\text{Left-sided: } U < -u_\alpha

Two-Sample T-Test (Equal Variances)

Hypotheses

H_0: \mu_X = \mu_Y \text{ vs } H_1: \mu_X \neq \mu_Y

Assumptions

X_1, \ldots, X_m \sim N(\mu_X, \sigma^2), \quad Y_1, \ldots, Y_n \sim N(\mu_Y, \sigma^2)
\sigma^2 \text{ is unknown but equal for both populations}
Independent samples

Test Statistic

T = \frac{\bar{X} - \bar{Y}}{S_w\sqrt{1/m + 1/n}} \sim t(m+n-2) \text{ under } H_0
Pooled Variance:
S_w^2 = \frac{(m-1)S_X^2 + (n-1)S_Y^2}{m+n-2}

Rejection Regions

\text{Two-sided: } |T| > t_{\alpha/2}(m+n-2)
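SciPy's `ttest_ind` with `equal_var=True` implements exactly this pooled-variance test; a sketch with hypothetical data, verified against the S_w formula:

```python
import numpy as np
from scipy import stats

# Hypothetical samples from two normal populations, assumed equal variance
x = np.array([10.2, 9.8, 10.5, 10.1, 9.9])
y = np.array([9.5, 9.7, 9.4, 9.8, 9.6, 9.5])

t_stat, p_val = stats.ttest_ind(x, y, equal_var=True)  # pooled-variance t-test

# Pooled variance by hand, matching S_w^2 above
m, n = len(x), len(y)
sw2 = ((m - 1) * x.var(ddof=1) + (n - 1) * y.var(ddof=1)) / (m + n - 2)
manual_t = (x.mean() - y.mean()) / np.sqrt(sw2 * (1 / m + 1 / n))
```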

F-Test for Variance Equality

Hypotheses

H_0: \sigma_X^2 = \sigma_Y^2 \text{ vs } H_1: \sigma_X^2 \neq \sigma_Y^2
H_0: \sigma_X^2 \leq \sigma_Y^2 \text{ vs } H_1: \sigma_X^2 > \sigma_Y^2

Assumptions

X_1, \ldots, X_m \sim N(\mu_X, \sigma_X^2), \quad Y_1, \ldots, Y_n \sim N(\mu_Y, \sigma_Y^2)
Independent samples

Test Statistic

F = \frac{S_X^2}{S_Y^2} \sim F(m-1, n-1) \text{ under } H_0

Rejection Regions

\text{Two-sided: } F < F_{1-\alpha/2}(m-1,n-1) \text{ or } F > F_{\alpha/2}(m-1,n-1)
\text{Right-sided: } F > F_\alpha(m-1,n-1)
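A minimal sketch of the two-sided F-test (the helper name and data are made up; again `ppf(1 - alpha/2)` corresponds to the upper-tail point F_{α/2}(m-1, n-1) in this document's notation):

```python
import numpy as np
from scipy.stats import f

def f_var_test(x, y, alpha=0.05):
    """Two-sided F-test of H0: var_X = var_Y for two normal samples."""
    m, n = len(x), len(y)
    F = np.var(x, ddof=1) / np.var(y, ddof=1)
    lo = f.ppf(alpha / 2, m - 1, n - 1)       # F_{1-alpha/2}(m-1, n-1)
    hi = f.ppf(1 - alpha / 2, m - 1, n - 1)   # F_{alpha/2}(m-1, n-1)
    return F, (F < lo or F > hi)

# Hypothetical data with very different spreads
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])          # sample variance 2.5
y = np.array([2.0, 2.1, 1.9, 2.0, 2.05, 1.95])   # sample variance 0.005
F, reject = f_var_test(x, y)
```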

Generalized Likelihood Ratio Test (GLRT)

General framework for constructing hypothesis tests

Generalized Likelihood Ratio Test

Likelihood Ratio Definition

\lambda(\tilde{x}) = \frac{\sup_{\theta \in \Theta} L(\theta; \tilde{x})}{\sup_{\theta \in \Theta_0} L(\theta; \tilde{x})} = \frac{L(\hat{\theta}; \tilde{x})}{L(\hat{\theta}_0; \tilde{x})}

Test Rule:

\text{Reject } H_0 \text{ if } \lambda(\tilde{X}) > C

Critical Value:

\text{Choose } C \text{ such that } \sup_{\theta \in \Theta_0} P_\theta(\lambda(\tilde{X}) > C) \leq \alpha

Large Sample Result (Wilks' Theorem)

2\log\lambda(\tilde{X}) \xrightarrow{d} \chi^2(r) \text{ as } n \to \infty

where r = \dim(\Theta) - \dim(\Theta_0)

Examples

Normal Mean (σ² unknown)
Hypotheses: H_0: \mu = \mu_0 \text{ vs } H_1: \mu \neq \mu_0
Likelihood Ratio:
\lambda = \left(1 + \frac{t^2}{n-1}\right)^{n/2}
Equivalent Test: reject when |t| > t_{\alpha/2}(n-1), i.e., the two-sided t-test

Normal Variance
Hypotheses: H_0: \sigma^2 = \sigma_0^2 \text{ vs } H_1: \sigma^2 \neq \sigma_0^2
Likelihood Ratio (with \hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^n (X_i - \bar{X})^2):
\lambda = \left(\frac{\sigma_0^2}{\hat{\sigma}^2}\right)^{n/2} \exp\left(\frac{n}{2}\left(\frac{\hat{\sigma}^2}{\sigma_0^2} - 1\right)\right)
Equivalent Test: equivalent to the chi-square test for the variance
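A quick numerical check of the normal-mean example: computing λ directly from the two constrained maximum-likelihood fits reproduces the closed form (1 + t²/(n−1))^{n/2}. The data below are simulated, not from the source:

```python
import numpy as np

# Hypothetical sample; H0: mu = mu0 with sigma^2 unknown
rng = np.random.default_rng(0)
x = rng.normal(loc=0.3, scale=1.0, size=20)
n, mu0 = len(x), 0.0

# Likelihood ratio directly from the MLEs: lambda = (sigma0_hat^2 / sigma_hat^2)^(n/2)
sig2_hat  = np.mean((x - x.mean()) ** 2)   # unrestricted MLE of sigma^2
sig2_hat0 = np.mean((x - mu0) ** 2)        # MLE of sigma^2 under H0
lam_direct = (sig2_hat0 / sig2_hat) ** (n / 2)

# Closed form in terms of the t statistic
t = (x.mean() - mu0) / (x.std(ddof=1) / np.sqrt(n))
lam_formula = (1 + t ** 2 / (n - 1)) ** (n / 2)
```

The two values agree exactly (up to floating point), which is why rejecting for large λ is the same as rejecting for large |t|.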

Single Parameter Exponential Family Tests

Tests for exponential family distributions

Binomial Distribution B(n,p)

Hypotheses:

H_0: p = p_0 \text{ vs } H_1: p \neq p_0

Test Statistic:

U = \sum_{i=1}^n X_i \sim B(n, p_0) \text{ under } H_0

Normal Approximation:

Z = \frac{U - np_0}{\sqrt{np_0(1-p_0)}} \sim N(0,1) \text{ (large n)}

Rejection Region:

U < C_1 \text{ or } U > C_2
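Both the exact binomial test and the normal approximation are easy to compare in SciPy (assumes SciPy ≥ 1.7 for `binomtest`; the counts are hypothetical):

```python
import math
from scipy.stats import binomtest, norm

# Hypothetical data: 62 successes in n = 100 trials; H0: p = 0.5
n, k, p0 = 100, 62, 0.5

# Exact test based on U ~ B(n, p0) under H0
exact_p = binomtest(k, n, p0, alternative="two-sided").pvalue

# Normal approximation from the Z formula above
z = (k - n * p0) / math.sqrt(n * p0 * (1 - p0))
approx_p = 2 * (1 - norm.cdf(abs(z)))
```

For moderate n the two p-values are close; the exact test is preferred for small samples or p₀ near 0 or 1.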

Poisson Distribution P(λ)

Hypotheses:

H_0: \lambda = \lambda_0 \text{ vs } H_1: \lambda \neq \lambda_0

Test Statistic:

U = \sum_{i=1}^n X_i \sim \text{Poisson}(n\lambda_0) \text{ under } H_0

Normal Approximation:

Z = \frac{U - n\lambda_0}{\sqrt{n\lambda_0}} \sim N(0,1) \text{ (large n)}

Rejection Region:

U < C_1 \text{ or } U > C_2

Exponential Distribution Exp(θ⁻¹)

Hypotheses:

H_0: \theta \geq \theta_0 \text{ vs } H_1: \theta < \theta_0

Test Statistic:

\frac{2n\bar{X}}{\theta_0} \sim \chi^2(2n) \text{ under } H_0

Rejection Region:

\frac{2n\bar{X}}{\theta_0} < \chi^2_{1-\alpha}(2n)

Use when testing whether the parameter is below a threshold
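A short sketch of this left-sided exponential test (the helper name and numbers are illustrative; `chi2.ppf(alpha, 2n)` is the lower α quantile, i.e., χ²_{1−α}(2n) in this document's notation):

```python
from scipy.stats import chi2

def exp_left_test(xbar, n, theta0, alpha=0.05):
    """Left-sided test of H0: theta >= theta0 vs H1: theta < theta0
    for an exponential distribution with mean theta.
    Uses 2*n*xbar / theta0 ~ chi2(2n) at the boundary theta = theta0."""
    stat = 2 * n * xbar / theta0
    crit = chi2.ppf(alpha, df=2 * n)   # chi^2_{1-alpha}(2n)
    return stat, stat < crit

# Hypothetical example: xbar = 0.6, n = 15, theta0 = 1.0 gives stat = 18.0
stat, reject = exp_left_test(0.6, 15, 1.0)
```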

Confidence Intervals & Hypothesis Testing Duality

Mathematical relationship between interval estimation and hypothesis testing

Duality Relationships

Test → Interval

C(\tilde{x}) = \{\theta_0 : \tilde{x} \in A(\theta_0)\}

Confidence set contains all parameter values that would not be rejected

Interval → Test

A(\theta_0) = \{\tilde{x} : \theta_0 \in C(\tilde{x})\}

Accept H₀: θ = θ₀ if θ₀ lies within confidence interval

Examples

Normal Mean (σ unknown)
Confidence Interval:
\left[\bar{x} - t_{\alpha/2}(n-1)\frac{s}{\sqrt{n}},\ \bar{x} + t_{\alpha/2}(n-1)\frac{s}{\sqrt{n}}\right]
Hypothesis Test:
\text{Reject } H_0: \mu = \mu_0 \text{ if } \mu_0 \notin \text{CI}
Equivalence: a 95% confidence interval corresponds to an α = 0.05 test

Normal Variance
Confidence Interval:
\left[\frac{(n-1)s^2}{\chi^2_{\alpha/2}(n-1)}, \frac{(n-1)s^2}{\chi^2_{1-\alpha/2}(n-1)}\right]
Hypothesis Test:
\text{Reject } H_0: \sigma^2 = \sigma_0^2 \text{ if } \sigma_0^2 \notin \text{CI}
Equivalence: the same α level applies to both procedures
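The duality is easy to verify numerically for the mean: a value μ₀ is rejected by the two-sided t-test at level α exactly when it falls outside the 1−α confidence interval. A sketch with hypothetical data:

```python
import numpy as np
from scipy import stats

# Hypothetical sample; build the 95% t-interval for the mean
x = np.array([4.8, 5.2, 5.0, 5.4, 4.9, 5.1, 5.3])
n, alpha = len(x), 0.05
xbar, s = x.mean(), x.std(ddof=1)
half = stats.t.ppf(1 - alpha / 2, n - 1) * s / np.sqrt(n)
ci = (xbar - half, xbar + half)

def reject(mu0):
    """Two-sided t-test decision for H0: mu = mu0 at level alpha."""
    t = (xbar - mu0) / (s / np.sqrt(n))
    return abs(t) > stats.t.ppf(1 - alpha / 2, n - 1)

inside = not reject(xbar)         # center of the CI: never rejected
outside = reject(ci[1] + 0.01)    # just past the upper limit: always rejected
```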

Practical Applications

Essential formulas for sample size planning and multiple testing corrections

Sample Size Determination
Formulas for determining adequate sample sizes

One-Sample Mean (Known σ)

n = \left(\frac{(u_{\alpha/2} + u_\beta)\sigma}{\delta}\right)^2

\delta = |\mu_1 - \mu_0| \text{ (effect size)}, \quad \beta \text{ (Type II error)}

Two-Sample Mean Comparison

n = \frac{2(u_{\alpha/2} + u_\beta)^2\sigma^2}{(\mu_1 - \mu_2)^2}

Equal sample sizes per group

One-Sample Proportion

n = \frac{(u_{\alpha/2} + u_\beta)^2 p_0(1-p_0)}{(p_1 - p_0)^2}

p_0 \text{ (null proportion)}, \quad p_1 \text{ (alternative proportion)}
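The one-sample mean formula can be sketched directly (the function name is illustrative; u_{α/2} and u_β are upper-tail normal quantiles, so `norm.ppf(1 - q)` is used, and the result is rounded up to the next whole observation):

```python
from math import ceil
from scipy.stats import norm

def n_one_sample_mean(delta, sigma, alpha=0.05, beta=0.20):
    """Sample size for a two-sided one-sample z-test:
    n = ((u_{alpha/2} + u_beta) * sigma / delta)^2, rounded up."""
    u_a = norm.ppf(1 - alpha / 2)   # u_{alpha/2}
    u_b = norm.ppf(1 - beta)        # u_{beta}
    return ceil(((u_a + u_b) * sigma / delta) ** 2)

# Detect a shift of delta = 0.5*sigma with 80% power at alpha = 0.05
n = n_one_sample_mean(delta=0.5, sigma=1.0)
```

With these standard inputs the formula gives about 31.4, so n = 32 observations are needed.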

Multiple Testing Corrections
Adjustments for multiple hypothesis tests

Bonferroni Correction

\alpha_{\text{adj}} = \frac{\alpha}{m}

m = number of tests

Holm-Bonferroni Method

\alpha_i = \frac{\alpha}{m-i+1}

\text{Applied to ordered p-values } p_1 \leq p_2 \leq \cdots \leq p_m

False Discovery Rate (FDR)

\text{FDR} = E\left[\frac{V}{R}\right]

V = number of false discoveries, R = total number of discoveries
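The Bonferroni and Holm rules above can be sketched in a few lines (helper names and the p-values are made up for illustration):

```python
import numpy as np

def bonferroni(pvals, alpha=0.05):
    """Reject test i if p_i < alpha / m."""
    p = np.asarray(pvals)
    return p < alpha / len(p)

def holm(pvals, alpha=0.05):
    """Holm step-down: compare the i-th smallest p-value to
    alpha / (m - i + 1), stopping at the first failure."""
    p = np.asarray(pvals)
    m = len(p)
    reject = np.zeros(m, dtype=bool)
    for rank, idx in enumerate(np.argsort(p)):   # rank = i - 1
        if p[idx] < alpha / (m - rank):
            reject[idx] = True
        else:
            break                                # all larger p-values also fail
    return reject

pvals = [0.001, 0.01, 0.03, 0.04, 0.20]
bon = bonferroni(pvals)   # per-test threshold 0.05 / 5 = 0.01
hol = holm(pvals)
```

Holm rejects everything Bonferroni does and sometimes more (here it additionally rejects p = 0.01), while still controlling the family-wise error rate at α.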

📊 How to Use These Formulas Effectively

Master hypothesis testing with these formula application strategies and best practices

Choose the Right Test

Select test based on data type, sample size, and whether population parameters are known.

Check Assumptions

Verify normality, independence, and other assumptions before applying formulas.

Interpret Results

Always interpret statistical results in context and consider practical significance.
