MathIsimple
Hypothesis Testing
5-7 Hours

Inference on Mean Vectors

Hotelling's T², two-sample tests, MANOVA, and multivariate hypothesis testing

Learning Objectives
Perform one-sample Hotelling's T² test
Compare two mean vectors
Conduct paired multivariate comparisons
Apply MANOVA for multiple groups
Interpret Wilks' Lambda and alternatives
Construct confidence regions

Hotelling's T² Test

One-Sample Test

Test H_0: \boldsymbol{\mu} = \boldsymbol{\mu}_0 vs H_1: \boldsymbol{\mu} \neq \boldsymbol{\mu}_0

T^2 = n(\bar{\mathbf{x}} - \boldsymbol{\mu}_0)^T\mathbf{S}^{-1}(\bar{\mathbf{x}} - \boldsymbol{\mu}_0)

Distribution under H₀

\frac{n-p}{(n-1)p}T^2 \sim F_{p, n-p}

Reject H₀ when

T^2 > \frac{(n-1)p}{n-p}F_{p,n-p,\alpha}
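As a concrete illustration, the one-sample test can be carried out with NumPy/SciPy. This is a minimal sketch; the function name and simulated data are illustrative, not from a particular library:

```python
import numpy as np
from scipy import stats

def one_sample_t2(X, mu0, alpha=0.05):
    """One-sample Hotelling's T^2 test of H0: mu = mu0.

    X: (n, p) data matrix; mu0: length-p hypothesized mean vector.
    Returns the T^2 statistic, the exact F statistic, and the p-value.
    """
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    xbar = X.mean(axis=0)
    S = np.atleast_2d(np.cov(X, rowvar=False))   # unbiased sample covariance
    d = xbar - np.asarray(mu0, dtype=float)
    t2 = n * d @ np.linalg.solve(S, d)           # n (xbar-mu0)' S^{-1} (xbar-mu0)
    f = (n - p) / ((n - 1) * p) * t2             # exact F transformation
    pval = stats.f.sf(f, p, n - p)
    return t2, f, pval
```

With p = 1 this reduces to the squared univariate t statistic, which is a convenient sanity check.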
Two-Sample Test

Test H_0: \boldsymbol{\mu}_1 = \boldsymbol{\mu}_2 (assuming equal covariance matrices)

T^2 = \frac{n_1 n_2}{n_1 + n_2}(\bar{\mathbf{x}}_1 - \bar{\mathbf{x}}_2)^T\mathbf{S}_{pooled}^{-1}(\bar{\mathbf{x}}_1 - \bar{\mathbf{x}}_2)

Pooled Covariance

\mathbf{S}_{pooled} = \frac{(n_1-1)\mathbf{S}_1 + (n_2-1)\mathbf{S}_2}{n_1 + n_2 - 2}

F-transformation

\frac{n_1 + n_2 - p - 1}{(n_1 + n_2 - 2)p}T^2 \sim F_{p, n_1+n_2-p-1}
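A matching sketch for the two-sample statistic, again assuming equal covariance matrices (the function name is illustrative):

```python
import numpy as np
from scipy import stats

def two_sample_t2(X1, X2):
    """Two-sample Hotelling's T^2 test assuming equal covariance matrices.

    Returns the T^2 statistic, the exact F statistic, and the p-value.
    """
    X1, X2 = np.asarray(X1, float), np.asarray(X2, float)
    n1, p = X1.shape
    n2 = X2.shape[0]
    d = X1.mean(axis=0) - X2.mean(axis=0)
    Sp = ((n1 - 1) * np.cov(X1, rowvar=False)
          + (n2 - 1) * np.cov(X2, rowvar=False)) / (n1 + n2 - 2)
    t2 = n1 * n2 / (n1 + n2) * d @ np.linalg.solve(np.atleast_2d(Sp), d)
    df2 = n1 + n2 - p - 1
    f = df2 / ((n1 + n2 - 2) * p) * t2           # exact F transformation
    return t2, f, stats.f.sf(f, p, df2)
```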
Paired T² Test

For matched pairs (\mathbf{x}_{i1}, \mathbf{x}_{i2}), compute differences \mathbf{d}_i = \mathbf{x}_{i1} - \mathbf{x}_{i2}

T^2 = n\bar{\mathbf{d}}^T\mathbf{S}_d^{-1}\bar{\mathbf{d}}

Distribution

\frac{n-p}{(n-1)p}T^2 \sim F_{p, n-p}
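Since the paired test is just a one-sample test applied to the differences, the sketch is nearly identical (illustrative names and simulated data):

```python
import numpy as np
from scipy import stats

def paired_t2(X1, X2):
    """Paired Hotelling's T^2: a one-sample test on the differences d_i."""
    D = np.asarray(X1, float) - np.asarray(X2, float)
    n, p = D.shape
    dbar = D.mean(axis=0)
    Sd = np.atleast_2d(np.cov(D, rowvar=False))  # covariance of differences
    t2 = n * dbar @ np.linalg.solve(Sd, dbar)
    f = (n - p) / ((n - 1) * p) * t2
    return t2, f, stats.f.sf(f, p, n - p)
```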

Confidence Regions

Confidence Ellipsoid for μ

A 100(1-\alpha)\% confidence region for \boldsymbol{\mu}:

n(\bar{\mathbf{x}} - \boldsymbol{\mu})^T\mathbf{S}^{-1}(\bar{\mathbf{x}} - \boldsymbol{\mu}) \leq \frac{(n-1)p}{n-p}F_{p,n-p,\alpha}

Shape

Ellipsoid centered at \bar{\mathbf{x}}

Axes

Determined by the eigenvectors of \mathbf{S}

Simultaneous Confidence Intervals

For any linear combination \mathbf{a}^T\boldsymbol{\mu}:

\mathbf{a}^T\bar{\mathbf{x}} \pm \sqrt{\frac{(n-1)p}{n-p}F_{p,n-p,\alpha}} \cdot \sqrt{\frac{\mathbf{a}^T\mathbf{S}\mathbf{a}}{n}}

Property

All such intervals hold simultaneously with confidence 1-\alpha

Bonferroni Intervals

For m pre-specified comparisons, use \alpha/m for each:

\bar{x}_j \pm t_{n-1,\alpha/(2m)} \cdot \sqrt{\frac{s_{jj}}{n}}

When to use

Bonferroni intervals are narrower than the simultaneous T² intervals when the number of comparisons m is small
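The trade-off between the two procedures shows up in their critical multipliers. A sketch comparing the half-widths for the p component means (the helper name and toy covariance are illustrative):

```python
import numpy as np
from scipy import stats

def interval_halfwidths(S, n, alpha=0.05, m=None):
    """Half-widths of simultaneous T^2 intervals and Bonferroni intervals
    for the p component means (m defaults to p comparisons)."""
    p = S.shape[0]
    m = p if m is None else m
    se = np.sqrt(np.diag(S) / n)                 # standard error of each mean
    c_t2 = np.sqrt((n - 1) * p / (n - p) * stats.f.ppf(1 - alpha, p, n - p))
    c_bon = stats.t.ppf(1 - alpha / (2 * m), n - 1)
    return c_t2 * se, c_bon * se

S = np.diag([2.0, 1.5, 1.0])                     # toy covariance matrix
w_t2, w_bon = interval_halfwidths(S, n=25)       # Bonferroni is narrower here
```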

MANOVA

One-Way MANOVA Setup

Test H_0: \boldsymbol{\mu}_1 = \boldsymbol{\mu}_2 = \cdots = \boldsymbol{\mu}_g for g groups

Within-group SS

\mathbf{W} = \sum_{i=1}^g \sum_{j=1}^{n_i} (\mathbf{x}_{ij} - \bar{\mathbf{x}}_i)(\mathbf{x}_{ij} - \bar{\mathbf{x}}_i)^T

Between-group SS

\mathbf{B} = \sum_{i=1}^g n_i(\bar{\mathbf{x}}_i - \bar{\mathbf{x}})(\bar{\mathbf{x}}_i - \bar{\mathbf{x}})^T

Total SS

\mathbf{T} = \mathbf{B} + \mathbf{W}
Test Statistics

Wilks' Lambda

\Lambda = \frac{|\mathbf{W}|}{|\mathbf{B} + \mathbf{W}|}

Likelihood ratio statistic. Smaller → reject H₀

Pillai's Trace

V = \text{tr}(\mathbf{B}(\mathbf{B}+\mathbf{W})^{-1})

Most robust to violations

Lawley-Hotelling Trace

U = \text{tr}(\mathbf{B}\mathbf{W}^{-1})

Generalization of F-statistic

Roy's Largest Root

\theta = \lambda_1

Largest eigenvalue of \mathbf{B}\mathbf{W}^{-1}
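All four statistics can be computed at once from the eigenvalues \lambda_i of \mathbf{W}^{-1}\mathbf{B}, since |\mathbf{W}|/|\mathbf{B}+\mathbf{W}| = \prod_i 1/(1+\lambda_i) and the traces are simple sums. A sketch (the helper name is illustrative):

```python
import numpy as np

def manova_stats(B, W):
    """The four MANOVA statistics from the between-group (B) and
    within-group (W) SSCP matrices, via the eigenvalues of W^{-1}B."""
    evals = np.linalg.eigvals(np.linalg.solve(W, B)).real
    wilks = np.prod(1.0 / (1.0 + evals))       # |W| / |B+W|
    pillai = np.sum(evals / (1.0 + evals))     # tr[B (B+W)^{-1}]
    lawley = np.sum(evals)                     # tr[B W^{-1}]
    roy = float(evals.max())                   # largest eigenvalue of W^{-1}B
    return wilks, pillai, lawley, roy
```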

Wilks' Lambda F-approximation
F = \frac{1-\Lambda^{1/t}}{\Lambda^{1/t}} \cdot \frac{df_2}{df_1} \approx F_{df_1, df_2}

where

t = \sqrt{\frac{p^2(g-1)^2-4}{p^2+(g-1)^2-5}}

Degrees of freedom

df_1 = p(g-1), \quad df_2 = \left[N - 1 - \tfrac{p+g}{2}\right]t - \tfrac{p(g-1)-2}{2} \quad (N = \text{total sample size})

Profile Analysis

Three Hypotheses

1. Parallelism (Equal Slopes)

H_0: \mathbf{C}\boldsymbol{\mu}_1 = \mathbf{C}\boldsymbol{\mu}_2 where \mathbf{C} is a contrast matrix

2. Equal Levels (Coincident Profiles)

H_0: \mathbf{1}^T\boldsymbol{\mu}_1 = \mathbf{1}^T\boldsymbol{\mu}_2 (test only if parallel)

3. Flatness

H_0: \mathbf{C}\boldsymbol{\mu} = \mathbf{0} (test only if coincident)

Contrast Matrix

For p variables, the contrast matrix \mathbf{C} has p-1 rows:

\mathbf{C} = \begin{pmatrix} -1 & 1 & 0 & \cdots & 0 \\ 0 & -1 & 1 & \cdots & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & -1 & 1 \end{pmatrix}

Interpretation

\mathbf{C}\boldsymbol{\mu} gives successive differences between adjacent means
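This matrix translates directly into code. A sketch constructing the successive-difference contrast and applying it to a mean vector (the helper name is illustrative):

```python
import numpy as np

def successive_diff_contrast(p):
    """(p-1) x p contrast matrix whose rows take successive differences."""
    C = np.zeros((p - 1, p))
    idx = np.arange(p - 1)
    C[idx, idx] = -1.0       # -1 on the diagonal
    C[idx, idx + 1] = 1.0    # +1 on the superdiagonal
    return C

mu = np.array([1.0, 3.0, 6.0, 10.0])
diffs = successive_diff_contrast(4) @ mu   # differences between adjacent means
```

Each row sums to zero, the defining property of a contrast.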

Power Analysis

Non-centrality Parameter

The power of the T² test depends on the non-centrality parameter:

\delta^2 = n(\boldsymbol{\mu} - \boldsymbol{\mu}_0)^T\boldsymbol{\Sigma}^{-1}(\boldsymbol{\mu} - \boldsymbol{\mu}_0)

Effect Size

Mahalanobis distance: D^2 = \delta^2/n

Sample Size

Larger n increases power for fixed effect size

Assumptions & Diagnostics

Key Assumptions

Multivariate Normality

Each group follows a multivariate normal distribution

Homogeneity of Covariance

\boldsymbol{\Sigma}_1 = \boldsymbol{\Sigma}_2 = \cdots = \boldsymbol{\Sigma}_g

Independence

Observations are independent within and between groups

Random Sampling

Random samples from populations

Box's M Test

Tests H_0: \boldsymbol{\Sigma}_1 = \boldsymbol{\Sigma}_2 = \cdots = \boldsymbol{\Sigma}_g

M = (N-g)\ln|\mathbf{S}_{pooled}| - \sum_{i=1}^g (n_i-1)\ln|\mathbf{S}_i|

Caution: Box's M is sensitive to non-normality. A significant result may indicate non-normality rather than unequal covariances.
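A sketch of the M statistic itself (the helper name is illustrative; the χ² scaling constant used for the formal test is omitted for brevity). By concavity of the log-determinant, M is always non-negative and equals zero exactly when all sample covariances coincide:

```python
import numpy as np

def box_m(samples):
    """Box's M statistic for equality of covariance matrices.

    samples: list of (n_i, p) data arrays, one per group.
    """
    g = len(samples)
    ns = np.array([s.shape[0] for s in samples])
    N = ns.sum()
    covs = [np.cov(s, rowvar=False) for s in samples]
    Sp = sum((n - 1) * S for n, S in zip(ns, covs)) / (N - g)  # pooled
    logdet = lambda A: np.linalg.slogdet(A)[1]                  # stable ln|A|
    return (N - g) * logdet(Sp) - sum(
        (n - 1) * logdet(S) for n, S in zip(ns, covs))
```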

Worked Example

One-Sample T² Test Example

Test whether the mean differs from a hypothesized value, with n=25, p=3:

Given

T^2 = 15.6, n=25, p=3

Convert to F

F = \frac{25-3}{(25-1)(3)}(15.6) = 4.77

Decision: Compare F to F_{3,22,0.05} = 3.05. Since 4.77 > 3.05, reject H₀ at α=0.05.
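The arithmetic above can be checked directly (SciPy sketch):

```python
from scipy import stats

# values from the worked example above
n, p, t2 = 25, 3, 15.6
f = (n - p) / ((n - 1) * p) * t2        # 22/72 * 15.6 ≈ 4.77
crit = stats.f.ppf(0.95, p, n - p)      # F_{3,22,0.05} ≈ 3.05
reject = f > crit                       # True: reject H0 at the 5% level
```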

Large Sample Methods

Asymptotic Results

For large samples, T² is approximately chi-squared:

T^2 \xrightarrow{d} \chi^2_p \quad \text{as } n \to \infty

Advantage

No normality assumption required (CLT)

When to use

n > 50 or when normality is questionable
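The quality of the approximation can be gauged by comparing critical values; a SciPy sketch with illustrative n and p:

```python
from scipy import stats

# Exact T^2 critical value vs the chi-square approximation (alpha = 0.05)
n, p, alpha = 200, 4, 0.05
t2_exact = (n - 1) * p / (n - p) * stats.f.ppf(1 - alpha, p, n - p)
t2_asym = stats.chi2.ppf(1 - alpha, p)
# the exact cutoff is slightly larger; the gap shrinks as n grows
```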

Linear Combinations

Testing Linear Combinations

Test H_0: \mathbf{C}\boldsymbol{\mu} = \mathbf{d} where \mathbf{C} is a contrast matrix:

T^2 = n(\mathbf{C}\bar{\mathbf{x}} - \mathbf{d})^T(\mathbf{C}\mathbf{S}\mathbf{C}^T)^{-1}(\mathbf{C}\bar{\mathbf{x}} - \mathbf{d})

Applications

Testing specific contrasts, comparing subsets of means, repeated measures analysis

MANOVA Test Statistics

Four Major Test Statistics

Wilks' Lambda (Λ)

\Lambda = \frac{|\mathbf{E}|}{|\mathbf{E} + \mathbf{H}|}

Most commonly used; ratio of error to total variance

Pillai's Trace (V)

V = \text{tr}[\mathbf{H}(\mathbf{H} + \mathbf{E})^{-1}]

Most robust to violations; sum of squared canonical correlations

Lawley-Hotelling Trace (U)

U = \text{tr}[\mathbf{H}\mathbf{E}^{-1}]

Powerful when groups differ on one dimension

Roy's Largest Root (θ)

\theta = \lambda_1 / (1 + \lambda_1)

Most powerful but most sensitive to violations (note: defined here on the scale \lambda_1/(1+\lambda_1); some texts and software report \lambda_1 itself)

Most powerful but most sensitive to violations

Choosing a Test Statistic

Default Choice

Use Pillai's Trace for robustness, especially with unequal n or assumption violations

When All Agree

If all four statistics lead to same conclusion, report Wilks' Lambda (most common)

Profile Analysis

Three Hypotheses in Profile Analysis

Parallelism

Are profiles parallel? Test if slopes are equal across groups

Levels

Are profiles at same level? Test overall group means

Flatness

Are profiles flat? Test if all variables have same mean

Testing order: First test parallelism. If parallel, test levels. If not parallel, examine interaction.

Assumptions and Diagnostics

Key Assumptions

Multivariate Normality

Test with Mardia's skewness/kurtosis or Q-Q plots of Mahalanobis distances

Homogeneity of Covariances

Box's M test (but sensitive to non-normality); use Pillai if violated

Independence

Random sampling; observations independent within and between groups

No Multicollinearity

Variables should not be perfectly correlated; check condition number

Robustness

To Non-Normality

Fairly robust with large n (CLT); symmetric distributions less problematic than skewed

To Unequal Covariances

Less robust; use Pillai's trace or transform data

Effect Size and Power

Effect Size Measures

Partial Eta-Squared

\eta^2_p = \frac{\text{SS}_{\text{effect}}}{\text{SS}_{\text{effect}} + \text{SS}_{\text{error}}}

Multivariate Eta-Squared

\eta^2 = 1 - \Lambda^{1/s}

Interpretation: Small ≈ 0.01, Medium ≈ 0.06, Large ≈ 0.14 (Cohen's guidelines)

Power Analysis

Power depends on:

Factors Increasing Power

  • Larger sample size
  • Larger effect size
  • Higher α level
  • Fewer variables

Factors Decreasing Power

  • More groups
  • Higher correlations among DVs
  • Heterogeneous variances
  • Non-normality

Follow-up Analyses

After Significant MANOVA

Univariate ANOVAs

Test each DV separately with Bonferroni correction

Discriminant Analysis

Identify which linear combinations of DVs distinguish groups

Stepdown Analysis

Sequential ANCOVAs controlling for prior DVs

Contrast Analysis

Test specific hypotheses about group differences

Repeated Measures MANOVA

Within-Subjects Design

When same subjects measured at multiple times/conditions:

Advantages

More power; controls individual differences; fewer subjects needed

Sphericity Assumption

Equal variances of differences between conditions (Mauchly's test)

Corrections for Sphericity Violations

Greenhouse-Geisser

Conservative correction; adjust df by ε

Huynh-Feldt

Less conservative; use when ε > 0.75

Software Implementation

Common Software

R

Hotelling::hotelling.test(), stats::manova()

Python

statsmodels.multivariate.manova

SPSS

Analyze → General Linear Model → MANOVA

SAS

PROC GLM with MANOVA statement

Practical Workflow

Analysis Steps
  1. Check assumptions: normality (Q-Q plots, Shapiro-Wilk), homogeneity (Box's M)
  2. Examine descriptive statistics and correlations among DVs
  3. Conduct MANOVA using appropriate test statistic
  4. If significant, perform follow-up analyses
  5. Check for multivariate outliers (Mahalanobis distance)
  6. Report effect sizes and confidence intervals
  7. Visualize group differences (boxplots, profile plots)
Reporting Results
  • Report test statistic (Wilks' Λ, Pillai's V, etc.) with F-approximation
  • Include degrees of freedom and p-value
  • Report effect size (partial η² or multivariate η²)
  • Describe follow-up tests and corrections used
  • Present means and SDs for each group on each DV
  • Include visualizations of group differences

Profile Analysis

Three Key Hypotheses

Profile analysis tests three questions about group profiles across repeated measures:

1. Parallelism

Do groups have similar patterns across variables?

H_0: \text{Group × Variable interaction} = 0

2. Levels

Do groups differ in overall mean? (Test only if parallel)

H_0: \boldsymbol{\mu}_1 = \boldsymbol{\mu}_2

3. Flatness

Are the variable means equal, pooled across groups?

H_0: \text{All variable means equal}
Example Application

Study: Compare treatment and control groups on cognitive tests over time

  1. Test parallelism: Do groups show same pattern of change?
  2. If parallel, test levels: Does treatment group score higher overall?
  3. Test flatness: Do all time points have equal means (pooled)?

Effect Size and Power

Multivariate Effect Sizes

Multivariate η²

\eta^2 = 1 - \Lambda^{1/s}

Proportion of variance explained

Partial η²

Effect size for each DV separately

Cohen's Guidelines

Small: 0.01, Medium: 0.06, Large: 0.14

Mahalanobis D²

D^2 = (\boldsymbol{\mu}_1 - \boldsymbol{\mu}_2)^T \boldsymbol{\Sigma}^{-1} (\boldsymbol{\mu}_1 - \boldsymbol{\mu}_2)
Sample Size Planning

Considerations for determining sample size:

  • Number of variables (p): Larger p requires larger n
  • Number of groups (g): More groups need more observations
  • Expected effect size: Smaller effects need larger samples
  • Desired power: Typically aim for 0.80 or higher
  • Rule of thumb: n ≥ 20 + number of DVs per group

Common Issues and Solutions

Unequal Sample Sizes

Issue: Unbalanced designs reduce power

Solution: Use Type III SS; check Box's M test for homogeneity

Missing Data

Issue: Listwise deletion reduces power

Solution: Use multiple imputation or maximum likelihood estimation

Outliers

Issue: Outliers inflate error variance

Solution: Use Mahalanobis distance; consider robust methods

Non-normality

Issue: Violations affect Type I error

Solution: Use permutation tests; transformations; larger sample

Connections to Other Methods

Conceptual Framework
  • Discriminant Analysis: MANOVA and DA are mathematically equivalent; DA focuses on prediction
  • Regression: MANOVA with dummy-coded groups = multivariate multiple regression
  • ANCOVA: Add covariates to MANOVA to control for continuous variables
  • Repeated Measures: Special case where DVs represent same measure over time
  • CCA: Can view MANOVA as special case of CCA with categorical predictors

Mathematical Theory of T² Statistic

Derivation of T² Distribution

Under the null hypothesis and multivariate normality:

\bar{\mathbf{x}} \sim N_p\left(\boldsymbol{\mu}_0, \tfrac{1}{n}\boldsymbol{\Sigma}\right), \qquad (n-1)\mathbf{S} \sim W_p(n-1, \boldsymbol{\Sigma})

The statistic can be written as:

T^2 = n(\bar{\mathbf{x}} - \boldsymbol{\mu}_0)^T\mathbf{S}^{-1}(\bar{\mathbf{x}} - \boldsymbol{\mu}_0)

Key Property

T² is invariant under affine transformations

Distribution

Exact finite-sample distribution known

Relationship to Mahalanobis Distance

T² is n times the squared Mahalanobis distance:

D^2 = (\bar{\mathbf{x}} - \boldsymbol{\mu}_0)^T\mathbf{S}^{-1}(\bar{\mathbf{x}} - \boldsymbol{\mu}_0), \qquad T^2 = n \cdot D^2

Interpretation: D² measures the standardized distance between sample mean and hypothesized mean, accounting for correlation structure.

Complete Numerical Example

Two-Sample T² Test Example

Problem: Compare mean vectors of two treatments with p=3 variables

Group 1: n₁=20

\bar{\mathbf{x}}_1 = \begin{pmatrix} 5.2 \\ 3.8 \\ 4.1 \end{pmatrix}, \quad \mathbf{S}_1 = \begin{pmatrix} 2.1 & 0.5 & 0.3 \\ 0.5 & 1.8 & 0.4 \\ 0.3 & 0.4 & 1.5 \end{pmatrix}

Group 2: n₂=25

\bar{\mathbf{x}}_2 = \begin{pmatrix} 4.5 \\ 4.2 \\ 3.8 \end{pmatrix}, \quad \mathbf{S}_2 = \begin{pmatrix} 1.9 & 0.6 & 0.2 \\ 0.6 & 2.0 & 0.5 \\ 0.2 & 0.5 & 1.6 \end{pmatrix}

Step 1: Compute pooled covariance matrix

\mathbf{S}_{pooled} = \frac{19\mathbf{S}_1 + 24\mathbf{S}_2}{43} = \begin{pmatrix} 1.99 & 0.56 & 0.24 \\ 0.56 & 1.91 & 0.46 \\ 0.24 & 0.46 & 1.56 \end{pmatrix}

Step 2: Calculate T² statistic

T^2 = \frac{20 \times 25}{45}(\bar{\mathbf{x}}_1 - \bar{\mathbf{x}}_2)^T\mathbf{S}_{pooled}^{-1}(\bar{\mathbf{x}}_1 - \bar{\mathbf{x}}_2) = 5.90

Step 3: Convert to F-statistic

F = \frac{45-3-1}{(45-2)(3)}(5.90) = \frac{41}{129}(5.90) = 1.88

Critical Value

F_{3,41,0.05} = 2.84

Decision

1.88 < 2.84 → Fail to reject H₀ at α=0.05
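The example can be recomputed from the summary statistics above (NumPy sketch; a quick numerical check of the arithmetic):

```python
import numpy as np
from scipy import stats

n1, n2, p = 20, 25, 3
x1 = np.array([5.2, 3.8, 4.1]); x2 = np.array([4.5, 4.2, 3.8])
S1 = np.array([[2.1, 0.5, 0.3], [0.5, 1.8, 0.4], [0.3, 0.4, 1.5]])
S2 = np.array([[1.9, 0.6, 0.2], [0.6, 2.0, 0.5], [0.2, 0.5, 1.6]])

Sp = ((n1 - 1) * S1 + (n2 - 1) * S2) / (n1 + n2 - 2)   # pooled covariance
d = x1 - x2
t2 = n1 * n2 / (n1 + n2) * d @ np.linalg.solve(Sp, d)  # ≈ 5.90
f = (n1 + n2 - p - 1) / ((n1 + n2 - 2) * p) * t2       # ≈ 1.88
crit = stats.f.ppf(0.95, p, n1 + n2 - p - 1)           # ≈ 2.83
```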

MANOVA Example with Three Groups

Scenario: Compare three teaching methods on two outcome variables

n₁=15, n₂=15, n₃=15, p=2

Within-group SS&CP matrix: |W| = 125.0

Total SS&CP matrix: |B+W| = 180.0

Calculate Wilks' Lambda:

\Lambda = \frac{|\mathbf{W}|}{|\mathbf{B}+\mathbf{W}|} = \frac{125.0}{180.0} = 0.694

F-approximation (Rao's method; here t = 2 and \Lambda^{1/2} = 5/6):

F = \frac{1-\Lambda^{1/2}}{\Lambda^{1/2}} \times \frac{82}{4} = 4.10, \quad df_1=4, \ df_2=82

Conclusion: F(4, 82) = 4.10, p < 0.01. Significant difference among teaching methods.
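The Λ-to-F conversion for this design can be reproduced with Rao's approximation (SciPy sketch; for p = 2 and g = 3, t = 2):

```python
import numpy as np
from scipy import stats

p, g, N = 2, 3, 45
lam = 125.0 / 180.0                                              # Wilks' Lambda
t = np.sqrt((p**2 * (g - 1)**2 - 4) / (p**2 + (g - 1)**2 - 5))   # Rao's t = 2
df1 = p * (g - 1)                                                # = 4
df2 = (N - 1 - (p + g) / 2) * t - (df1 - 2) / 2                  # = 82
F = (1 - lam**(1 / t)) / lam**(1 / t) * df2 / df1
pval = stats.f.sf(F, df1, df2)
```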

Simultaneous Inference Procedures

T² Method for All Contrasts

For any linear combination \mathbf{a}^T\boldsymbol{\mu}, a 100(1-\alpha)\% confidence interval:

\mathbf{a}^T\bar{\mathbf{x}} - c\sqrt{\frac{\mathbf{a}^T\mathbf{S}\mathbf{a}}{n}} \leq \mathbf{a}^T\boldsymbol{\mu} \leq \mathbf{a}^T\bar{\mathbf{x}} + c\sqrt{\frac{\mathbf{a}^T\mathbf{S}\mathbf{a}}{n}}

Critical Value

c^2 = \frac{(n-1)p}{n-p}F_{p,n-p,\alpha}

Coverage: All such intervals simultaneously contain the true values with probability 1-α

Bonferroni vs T² Method

T² Method

  • Covers all possible contrasts
  • Wider intervals
  • Use when many comparisons

Bonferroni Method

  • For m pre-specified contrasts
  • Narrower when m is small
  • Use α/m for each test

Robustness and Alternative Methods

When Assumptions Fail

Non-Normal Data

  • Use permutation tests
  • Bootstrap confidence regions
  • Increase sample size (CLT)

Unequal Covariances

  • Yao's modification
  • Welch-James approximation
  • Pillai's trace (most robust)

Small Samples

  • Exact permutation tests
  • Reduce number of variables
  • Use univariate follow-ups

Outliers Present

  • Robust Hotelling's T²
  • Minimum volume ellipsoid
  • Trimmed means
Permutation Tests

Distribution-free alternative to T² test:

  1. Compute observed T² statistic
  2. Randomly permute group labels many times (e.g., 10,000)
  3. Compute T² for each permutation
  4. p-value = proportion of permuted T² ≥ observed T²

Advantage: No distributional assumptions; exact Type I error control with sufficient permutations.
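The four steps above can be sketched as follows (illustrative helper names; the equal-covariance T² is used as the test statistic):

```python
import numpy as np

def t2_stat(X1, X2):
    """Two-sample Hotelling's T^2 (equal-covariance form)."""
    n1, n2 = len(X1), len(X2)
    d = X1.mean(axis=0) - X2.mean(axis=0)
    Sp = ((n1 - 1) * np.cov(X1, rowvar=False)
          + (n2 - 1) * np.cov(X2, rowvar=False)) / (n1 + n2 - 2)
    return n1 * n2 / (n1 + n2) * d @ np.linalg.solve(np.atleast_2d(Sp), d)

def permutation_t2(X1, X2, n_perm=2000, seed=0):
    """Permutation p-value: shuffle group labels, recompute T^2 each time."""
    rng = np.random.default_rng(seed)
    obs = t2_stat(X1, X2)                      # step 1: observed statistic
    pooled = np.vstack([X1, X2])
    n1 = len(X1)
    count = 0
    for _ in range(n_perm):                    # step 2: permute labels
        perm = rng.permutation(len(pooled))
        if t2_stat(pooled[perm[:n1]], pooled[perm[n1:]]) >= obs:
            count += 1                         # step 3: compare to observed
    return (count + 1) / (n_perm + 1)          # step 4: p-value (add-one form)
```

The add-one form guarantees the p-value is never exactly zero, which matches the convention of counting the observed statistic among the permutations.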

Advanced Topics

Multivariate Behrens-Fisher Problem

Comparing two groups with unequal covariance matrices:

Challenge

No exact solution; requires approximations

Approaches

Yao's test, Nel-van der Merwe test, bootstrap methods

High-Dimensional Settings

When p is large relative to n:

Issues

S may be singular; classical T² fails

Solutions

Regularized covariance estimation, Dempster's test, dimensionality reduction

Practice Quiz

1. Hotelling's T^2 statistic for testing H_0: \boldsymbol{\mu} = \boldsymbol{\mu}_0 is:
2. The relationship between T^2 and the F-distribution is:
3. For the two-sample T^2 test with equal covariances, the pooled covariance is:
4. MANOVA tests hypotheses about:
5. Wilks' Lambda (\Lambda) in MANOVA is:
6. A 95% confidence region for \boldsymbol{\mu} is an:
7. The paired T^2 test analyzes:
8. Profile analysis tests whether:
9. Pillai's trace is preferred over Wilks' Lambda when:
10. The assumption of equal covariance matrices can be tested using:

FAQ

When should I use MANOVA instead of multiple ANOVAs?

Use MANOVA when you have multiple correlated dependent variables. It controls overall Type I error and accounts for correlations between variables.

What assumptions does Hotelling's T² require?

Multivariate normality, random sampling, and (for two-sample) equal covariance matrices. It's robust to mild non-normality with large samples.

How do I interpret Wilks' Lambda?

Lambda ranges from 0 to 1. Values close to 0 indicate large group differences (reject H₀); values close to 1 suggest groups are similar.

What is Box's M test?

Tests homogeneity of covariance matrices across groups. Very sensitive to non-normality. If violated with unequal n, consider robust methods or separate analyses.

How many DVs can I include in MANOVA?

Keep DVs moderate (typically <10). More DVs require larger sample sizes and may reduce power. Consider theoretical justification for each DV.
