Find exact critical values for Z, t, chi-square, and F distributions. Set confidence level, degrees of freedom, and tail type to get the cutoff for hypothesis tests.
Choose the distribution based on your test and whether σ is known
Enter 50–99.99 (e.g., 95 for α = 0.05)
Two-tailed splits α between both ends
| Confidence level | α (two-tailed) | Z critical | t critical (df = 10) | t critical (df = 30) |
|---|---|---|---|---|
| 90% | 0.10 | 1.645 | 1.812 | 1.697 |
| 95% | 0.05 | 1.960 | 2.228 | 2.042 |
| 99% | 0.01 | 2.576 | 3.169 | 2.750 |
| 99.9% | 0.001 | 3.291 | 4.587 | 3.646 |
As df → ∞, t critical values converge to Z values. At df = 120+, they're nearly identical.
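This convergence is easy to check numerically. A minimal sketch using SciPy's inverse CDF (`ppf`) to reproduce the two-tailed values in the table above; the `scipy` dependency is an assumption here, and any statistics library with an inverse-CDF function would do:

```python
from scipy import stats

# Two-tailed test at alpha = 0.05: alpha/2 = 0.025 goes in each tail,
# so the critical value is the 97.5th percentile (inverse CDF at 0.975).
alpha = 0.05
z_crit = stats.norm.ppf(1 - alpha / 2)           # 1.960
t_crit_10 = stats.t.ppf(1 - alpha / 2, df=10)    # 2.228
t_crit_120 = stats.t.ppf(1 - alpha / 2, df=120)  # ~1.980, already close to Z

print(round(z_crit, 3), round(t_crit_10, 3), round(t_crit_120, 3))
```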
A critical value is the boundary on a probability distribution that divides the "fail to reject" zone from the "reject" zone in hypothesis testing. When your test statistic crosses this line, you have statistically significant evidence against your null hypothesis at the chosen significance level.
Think of it as setting a bar before you collect data. If you choose α = 0.05 and your Z-test produces Z = 2.3, you check: does 2.3 exceed the critical value of 1.96? It does — so you reject H₀. The critical value framework keeps your decision rule objective and reproducible.
Two-tailed tests detect differences in either direction (μ ≠ μ₀). They split α equally between both tails — so at α = 0.05, each tail carries 0.025. The two-tailed Z critical value at 95% confidence is ±1.960.
One-tailed tests detect a specific directional change (μ > μ₀ or μ < μ₀). All α goes into one tail — so the one-tailed Z critical value at 95% confidence is 1.645 (lower bar, easier to reject). Use one-tailed only when you have a strong theoretical reason to expect the effect in one direction before collecting data.
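The difference between the two cutoffs comes down to where α sits in the inverse CDF; a short sketch (again assuming SciPy):

```python
from scipy import stats

alpha = 0.05
# Two-tailed: alpha is split between both tails, so cut at 1 - alpha/2.
two_tailed = stats.norm.ppf(1 - alpha / 2)  # ±1.960
# One-tailed: all of alpha sits in one tail, so cut at 1 - alpha.
one_tailed = stats.norm.ppf(1 - alpha)      # 1.645, a lower bar

print(round(two_tailed, 3), round(one_tailed, 3))
```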
Degrees of freedom (df) represent the number of independent pieces of information in your estimate. Estimating a mean from n observations uses up 1 df for the mean itself, leaving n − 1 for estimating variance. This is why t(df=1) has extremely heavy tails and t(df=∞) = Z.
Practical rules:

- One-sample t-test: df = n − 1
- Two-sample t-test: df ≈ n₁ + n₂ − 2
- Chi-square goodness of fit: df = categories − 1
- F-test (ANOVA): df₁ = groups − 1, df₂ = total observations − groups
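These df rules plug straight into the inverse-CDF calls. A sketch with illustrative sample sizes (the specific n, category, and group counts below are assumptions for the example):

```python
from scipy import stats

# One-sample t: n = 25 observations -> df = 24 (two-tailed, alpha = 0.05)
t_crit = stats.t.ppf(0.975, df=25 - 1)       # 2.064

# Chi-square goodness of fit: 6 categories -> df = 5 (upper-tail test)
chi2_crit = stats.chi2.ppf(0.95, df=6 - 1)   # 11.070

# One-way ANOVA F: 3 groups, 30 observations -> df1 = 2, df2 = 27
f_crit = stats.f.ppf(0.95, dfn=3 - 1, dfd=30 - 3)

print(round(t_crit, 3), round(chi2_crit, 3), round(f_crit, 3))
```

Note that chi-square and F tests are usually upper-tailed, so all of α goes in the right tail and the quantile is 1 − α rather than 1 − α/2.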