
Discriminant Analysis

Classify observations into groups using Fisher's LDA, QDA, and Bayesian classification methods

Learning Objectives
Apply Fisher's Linear Discriminant Analysis
Understand classification rules and boundaries
Compare LDA vs QDA
Use Bayesian classification with priors
Evaluate classifiers with cross-validation
Interpret ROC curves and AUC

Fisher's LDA

Fisher's Criterion

Find \mathbf{a} that maximizes:

\frac{\mathbf{a}^T\mathbf{B}\mathbf{a}}{\mathbf{a}^T\mathbf{W}\mathbf{a}}

Two-Group Solution

\mathbf{a} \propto \mathbf{S}_W^{-1}(\bar{\mathbf{x}}_1 - \bar{\mathbf{x}}_2)

Classification Rule

Assign to group with nearest centroid in discriminant space
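
As a concrete illustration of the two-group solution and the nearest-centroid rule, here is a minimal NumPy sketch; the simulated data and all variable names are illustrative assumptions, not part of a worked example in this module.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-group training data (rows = observations, columns = variables)
X1 = rng.multivariate_normal([3, 2], [[1.0, 0.3], [0.3, 1.0]], size=30)
X2 = rng.multivariate_normal([1, 4], [[1.0, 0.3], [0.3, 1.0]], size=30)

xbar1, xbar2 = X1.mean(axis=0), X2.mean(axis=0)
n1, n2 = len(X1), len(X2)

# Pooled within-group covariance S_W
S_W = ((n1 - 1) * np.cov(X1, rowvar=False)
       + (n2 - 1) * np.cov(X2, rowvar=False)) / (n1 + n2 - 2)

# Fisher direction: a proportional to S_W^{-1} (xbar1 - xbar2)
a = np.linalg.solve(S_W, xbar1 - xbar2)

# Assign a new point to the group whose projected centroid is nearer
x_new = np.array([2.2, 2.9])
z_new, z1, z2 = a @ x_new, a @ xbar1, a @ xbar2
group = 1 if abs(z_new - z1) < abs(z_new - z2) else 2
print(f"a = {np.round(a, 3)}, assigned to group {group}")
```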

LDA vs QDA

LDA

Assumes equal covariances → Linear boundaries. Fewer parameters, more stable with small samples.

QDA

Allows different covariances → Quadratic boundaries. More flexible but requires more data.
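
One practical way to compare the two is to cross-validate both on the same data. The sketch below uses scikit-learn on a synthetic dataset; the dataset and settings are only for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                            QuadraticDiscriminantAnalysis)
from sklearn.model_selection import cross_val_score

# Synthetic two-class data, purely illustrative
X, y = make_classification(n_samples=300, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("QDA", QuadraticDiscriminantAnalysis())]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: 5-fold CV accuracy = {acc:.3f}")
```

When the group covariances really are equal, LDA's smaller parameter count usually gives it the edge; when they differ markedly and the sample is large enough, QDA tends to win.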

Bayesian Classification

Bayesian Decision Rule

Classify to group k that maximizes posterior probability:

P(G = k | \mathbf{X} = \mathbf{x}) = \frac{f_k(\mathbf{x}) \pi_k}{\sum_{j=1}^g f_j(\mathbf{x}) \pi_j}

f_k(\mathbf{x})

Class-conditional density for group k

\pi_k

Prior probability of group k

Discriminant Scores

For LDA with normal distributions, classification simplifies to linear scores:

d_k(\mathbf{x}) = \mathbf{x}^T\boldsymbol{\Sigma}^{-1}\boldsymbol{\mu}_k - \frac{1}{2}\boldsymbol{\mu}_k^T\boldsymbol{\Sigma}^{-1}\boldsymbol{\mu}_k + \ln \pi_k

Classification

Assign \mathbf{x} to the group with the largest discriminant score d_k(\mathbf{x})
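
The linear score above can be evaluated directly once the class means, the common covariance, and the priors have been estimated. A minimal sketch, with all numbers illustrative:

```python
import numpy as np

# Illustrative estimates: group means, shared covariance, priors
means = {1: np.array([3.0, 2.0]), 2: np.array([1.0, 4.0])}
Sigma = np.array([[1.7, 0.38], [0.38, 1.6]])
priors = {1: 0.5, 2: 0.5}

Sigma_inv = np.linalg.inv(Sigma)
x = np.array([2.0, 3.0])

# d_k(x) = x' Sigma^{-1} mu_k - 0.5 * mu_k' Sigma^{-1} mu_k + ln(pi_k)
scores = {k: x @ Sigma_inv @ mu - 0.5 * mu @ Sigma_inv @ mu + np.log(priors[k])
          for k, mu in means.items()}
print(scores, "-> assign to group", max(scores, key=scores.get))
```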

Multi-Group LDA

Canonical Discriminant Functions

For g groups, find discriminant functions by solving the eigenvalue problem:

\mathbf{W}^{-1}\mathbf{B}\mathbf{a} = \lambda \mathbf{a}

Number of Functions

At most \min(g-1, p) discriminant functions

Eigenvalues

\lambda_i measures the discriminating power of function i

Proportion of trace: \lambda_i / \sum_j \lambda_j shows the proportion of between-group variance explained by function i

Discriminant Plot

Project observations onto first two discriminant functions for visualization:

Scores

z_{ij} = \mathbf{a}_j^T\mathbf{x}_i for observation i on function j

Centroids

Plot group means in discriminant space
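
The canonical functions and the plotting scores can be obtained by solving the generalized eigenproblem \mathbf{B}\mathbf{a} = \lambda\mathbf{W}\mathbf{a} directly. The sketch below is a from-scratch illustration; the helper name canonical_discriminants and the simulated three-group data are assumptions of the example, not a standard API.

```python
import numpy as np
from scipy.linalg import eigh

def canonical_discriminants(X, y):
    """Eigenvalues and directions of W^{-1} B (illustrative helper)."""
    X, y = np.asarray(X, float), np.asarray(y)
    grand_mean = X.mean(axis=0)
    labels = np.unique(y)
    p = X.shape[1]
    B = np.zeros((p, p))
    W = np.zeros((p, p))
    for k in labels:
        Xk = X[y == k]
        d = (Xk.mean(axis=0) - grand_mean)[:, None]
        B += len(Xk) * d @ d.T                                   # between-group scatter
        W += (Xk - Xk.mean(axis=0)).T @ (Xk - Xk.mean(axis=0))   # within-group scatter
    # B a = lambda W a  is equivalent to  W^{-1} B a = lambda a
    eigvals, eigvecs = eigh(B, W)
    order = np.argsort(eigvals)[::-1]
    s = min(len(labels) - 1, p)          # at most min(g-1, p) functions
    return eigvals[order][:s], eigvecs[:, order[:s]]

# Simulated three-group data; project onto LD1 and LD2 for a discriminant plot
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 1.0, size=(40, 3))
               for m in ([0, 0, 0], [2, 1, 0], [0, 2, 2])])
y = np.repeat([0, 1, 2], 40)
lam, A = canonical_discriminants(X, y)
Z = X @ A[:, :2]                         # scores z_ij = a_j' x_i
print("eigenvalues:", np.round(lam, 3),
      "| proportion of trace:", np.round(lam / lam.sum(), 3))
```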

QDA Details

Quadratic Discriminant Function

With unequal covariances, the discriminant function becomes quadratic:

d_k(\mathbf{x}) = -\frac{1}{2}\ln|\boldsymbol{\Sigma}_k| - \frac{1}{2}(\mathbf{x}-\boldsymbol{\mu}_k)^T\boldsymbol{\Sigma}_k^{-1}(\mathbf{x}-\boldsymbol{\mu}_k) + \ln \pi_k

Decision Boundary

Quadratic (conic sections: ellipses, hyperbolas, parabolas)

Parameters

More parameters to estimate: a separate \boldsymbol{\Sigma}_k for each group
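
For reference, the quadratic score can be evaluated directly once \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k, and \pi_k are estimated. A minimal sketch; the helper qda_score and all parameter values are illustrative assumptions.

```python
import numpy as np

def qda_score(x, mu, Sigma, prior):
    """Quadratic discriminant score d_k(x) for one group."""
    diff = x - mu
    _, logdet = np.linalg.slogdet(Sigma)
    return -0.5 * logdet - 0.5 * diff @ np.linalg.solve(Sigma, diff) + np.log(prior)

# Illustrative group parameters with unequal covariances
params = {
    1: (np.array([3.0, 2.0]), np.array([[2.0, 0.5], [0.5, 1.0]]), 0.5),
    2: (np.array([1.0, 4.0]), np.array([[1.5, 0.3], [0.3, 2.0]]), 0.5),
}
x = np.array([2.5, 3.0])
scores = {k: qda_score(x, *v) for k, v in params.items()}
print(scores, "-> assign to group", max(scores, key=scores.get))
```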

Model Evaluation

Error Estimation

Confusion Matrix

Table of predicted vs actual classes. Shows TP, FP, TN, FN for each group.

Cross-Validation

Leave-one-out or k-fold CV gives nearly unbiased error estimates.

ROC Curve: Plots TPR vs FPR. AUC (Area Under Curve) summarizes overall classification performance (0.5 = random, 1.0 = perfect).
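
In scikit-learn these quantities can be obtained roughly as follows; the synthetic data and fold counts are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import cross_val_predict

X, y = make_classification(n_samples=400, n_features=5, random_state=0)
lda = LinearDiscriminantAnalysis()

# Cross-validated predictions avoid the optimism of resubstitution error
y_pred = cross_val_predict(lda, X, y, cv=10)
print(confusion_matrix(y, y_pred))

# ROC/AUC needs scores or probabilities rather than hard labels
y_score = cross_val_predict(lda, X, y, cv=10, method="predict_proba")[:, 1]
print("AUC:", round(roc_auc_score(y, y_score), 3))
```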

Performance Metrics

Accuracy

\frac{TP + TN}{TP + TN + FP + FN}

Sensitivity (Recall)

\frac{TP}{TP + FN}

Specificity

\frac{TN}{TN + FP}

Precision

\frac{TP}{TP + FP}

Assumptions & Diagnostics

Key Assumptions

Multivariate Normality

Each group follows multivariate normal distribution

Homoscedasticity (LDA)

Equal covariance matrices across groups

Independence

Observations are independent

No Multicollinearity

Predictors should not be perfectly correlated

Robustness: LDA is fairly robust to mild violations of normality, especially with large samples. Use QDA when homoscedasticity is violated.

Worked Example

Two-Group Classification

Given two groups with means and pooled covariance:

Group Means

\bar{\mathbf{x}}_1 = (3, 2)^T, \bar{\mathbf{x}}_2 = (1, 4)^T

Discriminant Direction

\mathbf{a} \propto \mathbf{S}^{-1}(\bar{\mathbf{x}}_1 - \bar{\mathbf{x}}_2)

Classification rule: Project new observation onto discriminant axis. Assign to group with nearest projected centroid.

Variable Selection

Stepwise Selection

Forward Selection

Add variables that most improve discrimination (based on Wilks' Lambda)

Backward Elimination

Remove variables that least contribute to discrimination

Caution: Stepwise methods can overfit. Consider cross-validation for variable selection.

Discriminant Functions

Linear Discriminant Functions

For g groups, compute discriminant scores:

d_k(\mathbf{x}) = \mathbf{x}^T\mathbf{S}_{pooled}^{-1}\bar{\mathbf{x}}_k - \frac{1}{2}\bar{\mathbf{x}}_k^T\mathbf{S}_{pooled}^{-1}\bar{\mathbf{x}}_k + \ln(\pi_k)

Classification Rule

Assign to the group with the largest d_k(\mathbf{x})

Number of Functions

At most min(g-1, p) discriminant functions

Canonical Discriminant Functions

Find linear combinations that maximize between-group to within-group variance ratio:

\max_{\mathbf{a}} \frac{\mathbf{a}^T\mathbf{B}\mathbf{a}}{\mathbf{a}^T\mathbf{W}\mathbf{a}}

Solution: The eigenvectors of \mathbf{W}^{-1}\mathbf{B} give the discriminant directions.

Model Evaluation

Confusion Matrix

Summary of classification results:

Accuracy

\frac{\text{Correct}}{\text{Total}}

Error Rate

1 - \text{Accuracy}

Sensitivity

\frac{TP}{TP + FN}

Specificity

\frac{TN}{TN + FP}

Cross-Validation

Leave-One-Out (LOO)

Train on n-1, test on 1; repeat for each observation

K-Fold CV

Split into K parts; train on K-1, test on 1; rotate

Why CV: Resubstitution error (training accuracy) is optimistically biased. CV provides more realistic estimate.
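
A short sketch of that comparison, using scikit-learn and the iris data purely as a convenient example.

```python
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

X, y = load_iris(return_X_y=True)
lda = LinearDiscriminantAnalysis()

resub = lda.fit(X, y).score(X, y)                          # optimistic resubstitution accuracy
loo = cross_val_score(lda, X, y, cv=LeaveOneOut()).mean()  # leave-one-out estimate
print(f"resubstitution = {resub:.3f}, LOO CV = {loo:.3f}")
```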

Regularized and Extended Methods

Regularized Discriminant Analysis (RDA)

Interpolate between LDA and QDA:

\hat{\boldsymbol{\Sigma}}_k(\alpha) = \alpha\mathbf{S}_k + (1-\alpha)\mathbf{S}_{pooled}

α = 0

Equal to LDA (pooled covariance)

α = 1

Equal to QDA (separate covariances)

Shrinkage LDA

For high-dimensional data (p > n), shrink toward identity:

\hat{\boldsymbol{\Sigma}}(\gamma) = (1-\gamma)\mathbf{S} + \gamma\frac{\text{tr}(\mathbf{S})}{p}\mathbf{I}

Benefit: Ensures covariance matrix is positive definite and reduces estimation variance in high dimensions.
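
scikit-learn's LDA offers this shrinkage form directly: with solver='lsqr' (or 'eigen'), the shrinkage parameter is the γ above, either fixed or chosen automatically via the Ledoit-Wolf lemma with shrinkage='auto'. The simulated high-dimensional data below is illustrative.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Simulated data with p close to n per group (illustrative)
rng = np.random.default_rng(0)
n, p = 60, 40
X = np.vstack([rng.normal(0.0, 1.0, (n, p)), rng.normal(0.5, 1.0, (n, p))])
y = np.repeat([0, 1], n)

for name, shrink in [("no shrinkage", None), ("Ledoit-Wolf shrinkage", "auto")]:
    clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage=shrink)
    print(name, round(cross_val_score(clf, X, y, cv=5).mean(), 3))
```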

Applications

Common Use Cases

Medical Diagnosis

Disease classification from symptoms/biomarkers

Credit Scoring

Default vs non-default classification

Face Recognition

Fisherfaces method for identity verification

Species Identification

Taxonomy based on morphological measurements

LDA vs Other Classifiers

Method Comparison

LDA vs Logistic Regression

LDA assumes normality; logistic is more flexible but may need more data

LDA vs Naive Bayes

Naive Bayes assumes independence; LDA models correlations

LDA vs SVM

SVM finds optimal hyperplane; LDA uses class distributions

LDA vs kNN

kNN is non-parametric; LDA provides interpretable coefficients

LDA for Dimensionality Reduction

LDA as Feature Extraction

LDA can project data onto lower-dimensional space while preserving class separability:

Maximum Dimensions

At most min(g-1, p) discriminant dimensions for g groups and p variables

Comparison with PCA

PCA maximizes variance; LDA maximizes class separation

Use case: Preprocessing for visualization or when subsequent classifier benefits from reduced dimensionality.
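
A brief side-by-side of the two projections, using scikit-learn and the wine dataset purely for illustration.

```python
from sklearn.datasets import load_wine
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)        # 3 classes, 13 variables
X = StandardScaler().fit_transform(X)

Z_pca = PCA(n_components=2).fit_transform(X)                             # max variance
Z_lda = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, y)   # max class separation
print(Z_pca.shape, Z_lda.shape)          # both (n, 2); LDA allows at most min(g-1, p) components
```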

Practical Considerations

Data Requirements

Sample Size

Each group should have n > p; total n should be substantial

Outliers

LDA is sensitive to outliers; check Mahalanobis distances

Missing Data

Requires complete cases; consider imputation

Multicollinearity

High collinearity can cause numerical instability

Handling Assumption Violations

Non-Normality

Transform variables or use robust methods

Unequal Covariances

Use QDA or regularized methods

Software Implementation

Common Software

R

MASS::lda(), MASS::qda()

Python

sklearn.discriminant_analysis

SPSS

Analyze → Classify → Discriminant

SAS

PROC DISCRIM

Model Evaluation

Performance Metrics

Overall Accuracy

\text{Accuracy} = \frac{\text{Correct}}{\text{Total}}

Sensitivity

\text{Sensitivity} = \frac{TP}{TP + FN}

Mathematical Derivation

Fisher's Discriminant Criterion

For two groups, maximize separation between projected means relative to projected variance:

J(\mathbf{a}) = \frac{(\mathbf{a}^T\bar{\mathbf{x}}_1 - \mathbf{a}^T\bar{\mathbf{x}}_2)^2}{\mathbf{a}^T\mathbf{S}_W\mathbf{a}}

This can be rewritten as:

J(\mathbf{a}) = \frac{\mathbf{a}^T\mathbf{S}_B\mathbf{a}}{\mathbf{a}^T\mathbf{S}_W\mathbf{a}}

where \mathbf{S}_B = (\bar{\mathbf{x}}_1 - \bar{\mathbf{x}}_2)(\bar{\mathbf{x}}_1 - \bar{\mathbf{x}}_2)^T

Solution: Setting the derivative to zero yields \mathbf{S}_W\mathbf{a} \propto (\bar{\mathbf{x}}_1 - \bar{\mathbf{x}}_2), i.e. \mathbf{a} \propto \mathbf{S}_W^{-1}(\bar{\mathbf{x}}_1 - \bar{\mathbf{x}}_2)

Multi-Group Generalization

For g groups, define between-group scatter matrix:

\mathbf{B} = \sum_{k=1}^g n_k(\bar{\mathbf{x}}_k - \bar{\mathbf{x}})(\bar{\mathbf{x}}_k - \bar{\mathbf{x}})^T

Within-group scatter matrix:

\mathbf{W} = \sum_{k=1}^g \sum_{i \in G_k} (\mathbf{x}_i - \bar{\mathbf{x}}_k)(\mathbf{x}_i - \bar{\mathbf{x}}_k)^T

Eigenvalue Problem

\mathbf{W}^{-1}\mathbf{B}\mathbf{a} = \lambda \mathbf{a}

Number of Functions

min(g-1, p) non-zero eigenvalues

Complete Numerical Example

Two-Group LDA Example

Data: Two groups with p=2 variables

Group 1: n₁=3

\bar{\mathbf{x}}_1 = \begin{pmatrix} 4 \\ 2 \end{pmatrix}, \quad \mathbf{S}_1 = \begin{pmatrix} 2 & 0.5 \\ 0.5 & 1 \end{pmatrix}

Group 2: n₂=4

\bar{\mathbf{x}}_2 = \begin{pmatrix} 2 \\ 4 \end{pmatrix}, \quad \mathbf{S}_2 = \begin{pmatrix} 1.5 & 0.3 \\ 0.3 & 2 \end{pmatrix}

Step 1: Pooled covariance matrix

\mathbf{S}_W = \frac{2\mathbf{S}_1 + 3\mathbf{S}_2}{5} = \begin{pmatrix} 1.7 & 0.38 \\ 0.38 & 1.6 \end{pmatrix}

Step 2: Discriminant coefficients

\mathbf{a} \propto \mathbf{S}_W^{-1}(\bar{\mathbf{x}}_1 - \bar{\mathbf{x}}_2) = \mathbf{S}_W^{-1}\begin{pmatrix} 2 \\ -2 \end{pmatrix} \approx \begin{pmatrix} 1.54 \\ -1.62 \end{pmatrix}

Step 3: Classification cutoff

c = \frac{1}{2}(\mathbf{a}^T\bar{\mathbf{x}}_1 + \mathbf{a}^T\bar{\mathbf{x}}_2) \approx \frac{1}{2}(2.92 - 3.39) \approx -0.23

Rule: Classify to Group 1 if \mathbf{a}^T\mathbf{x} > -0.23, otherwise Group 2

Classifying New Observations

New observation: \mathbf{x}_{new} = (3, 3)^T

Discriminant score:

\mathbf{a}^T\mathbf{x}_{new} = 1.54(3) + (-1.62)(3) \approx -0.23

Decision: The score equals the cutoff: (3, 3)^T is exactly the midpoint of the two centroids, so it lies on the decision boundary and, with equal priors, either assignment is defensible. Any observation scoring above -0.23 is assigned to Group 1.
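
The steps above can be checked with a few lines of NumPy (array names illustrative):

```python
import numpy as np

xbar1, xbar2 = np.array([4.0, 2.0]), np.array([2.0, 4.0])
S1 = np.array([[2.0, 0.5], [0.5, 1.0]])
S2 = np.array([[1.5, 0.3], [0.3, 2.0]])
n1, n2 = 3, 4

S_W = ((n1 - 1) * S1 + (n2 - 1) * S2) / (n1 + n2 - 2)    # pooled covariance
a = np.linalg.solve(S_W, xbar1 - xbar2)                  # discriminant coefficients
c = 0.5 * (a @ xbar1 + a @ xbar2)                        # cutoff (equal priors)
x_new = np.array([3.0, 3.0])
print(np.round(a, 2), round(c, 2), round(a @ x_new, 2))  # about [1.54 -1.62], -0.23, -0.23
```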

Prior Probabilities and Costs

Specifying Prior Probabilities

Equal Priors

\pi_k = \frac{1}{g}

Use when no prior knowledge

Proportional

\pi_k = \frac{n_k}{n}

Based on sample sizes

Custom

Set based on domain knowledge

E.g., disease prevalence

Misclassification Costs

Incorporate asymmetric costs into classification:

\text{Expected Cost}(k \mid \mathbf{x}) = \sum_{j=1}^g c(k|j)\, P(j \mid \mathbf{x})

Classify \mathbf{x} to the group k that minimizes this expected cost.

Cost Matrix

c(k|j) = cost of classifying j as k

Example

False negative in cancer detection costs more than false positive
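
With scikit-learn, priors can be passed to the classifier directly, and a cost matrix can be applied to the posterior probabilities to obtain minimum-expected-cost decisions. The priors and costs below are made-up illustrations, not recommended values.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_breast_cancer(return_X_y=True)         # class 0 = malignant, 1 = benign

# Priors set from (hypothetical) domain knowledge instead of sample proportions
lda = LinearDiscriminantAnalysis(priors=[0.3, 0.7]).fit(X, y)
post = lda.predict_proba(X)                         # posterior P(j | x)

# cost[k, j] = cost of predicting class k when the true class is j (illustrative)
cost = np.array([[0.0, 1.0],     # predicting malignant: a false alarm costs 1
                 [10.0, 0.0]])   # predicting benign for a true malignant costs 10
expected_cost = post @ cost.T                       # E[cost of predicting k | x]
y_hat = expected_cost.argmin(axis=1)                # minimum-expected-cost decision
print("malignant calls, default rule:", (lda.predict(X) == 0).sum(),
      "| cost-sensitive rule:", (y_hat == 0).sum())
```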

High-Dimensional LDA

Challenges When p > n

Singular W

Within-group covariance not invertible when p > n

Overfitting

Model memorizes training data; poor generalization

Regularization Strategies
  • Diagonal LDA: Assume diagonal covariance (no correlations)
  • Shrunken Centroids: Regularize group means toward overall mean
  • Ridge LDA: Add \lambda\mathbf{I} to \mathbf{S}_W
  • Sparse LDA: Penalized formulation for feature selection
  • Dimensionality Reduction: Apply PCA before LDA
Nearest Shrunken Centroids

Shrink group centroids toward overall mean:

\bar{\mathbf{x}}_k' = \bar{\mathbf{x}} + \text{soft}(\bar{\mathbf{x}}_k - \bar{\mathbf{x}}, \Delta)

Threshold Δ

Controls amount of shrinkage; choose via CV

Feature Selection

Variables shrunk to zero are excluded
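
scikit-learn provides this classifier as NearestCentroid with a shrink_threshold parameter, which plays the role of Δ; the sketch below tunes it by cross-validation (dataset and grid are illustrative).

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import NearestCentroid
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Choose the shrinkage threshold by cross-validation
pipe = make_pipeline(StandardScaler(), NearestCentroid())
grid = GridSearchCV(pipe,
                    {"nearestcentroid__shrink_threshold": [None, 0.2, 0.5, 1.0, 2.0]},
                    cv=5)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```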

Interpreting Results

Discriminant Loadings vs Coefficients

Coefficients

Elements of \mathbf{a}; the weights in the linear combination

Loadings (Structure Coefficients)

Correlation between variable and discriminant scores

Recommendation: Use loadings for interpretation (less affected by multicollinearity)

Visualizing Discriminant Space

Plot observations and group centroids in discriminant space:

Scatterplot

LD1 vs LD2 with group colors; shows separation

Territorial Map

Show decision boundaries in original or discriminant space

LDA and MANOVA Connection

Mathematical Equivalence

LDA and MANOVA are two sides of the same coin:

MANOVA

Tests if group means differ significantly

LDA

Finds directions that best separate groups

Key insight: Both rest on the eigenvalues of \mathbf{W}^{-1}\mathbf{B}. MANOVA uses them for hypothesis testing; LDA uses the corresponding eigenvectors for classification.

Wilks' Lambda

Wilks' Lambda measures discriminating power:

\Lambda = \frac{|\mathbf{W}|}{|\mathbf{W} + \mathbf{B}|} = \prod_{i=1}^s \frac{1}{1 + \lambda_i}

Range

0 < Λ ≤ 1; smaller values indicate better separation

Testing

Transform to F-statistic for significance test
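
Both expressions for Λ can be verified numerically; the sketch below computes them for the iris data, chosen only as a convenient example.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
grand = X.mean(axis=0)
p = X.shape[1]
B, W = np.zeros((p, p)), np.zeros((p, p))
for k in np.unique(y):
    Xk = X[y == k]
    d = (Xk.mean(axis=0) - grand)[:, None]
    B += len(Xk) * d @ d.T                                   # between-group scatter
    W += (Xk - Xk.mean(axis=0)).T @ (Xk - Xk.mean(axis=0))   # within-group scatter

# Non-zero eigenvalues of W^{-1} B (at most g - 1 of them)
lam = np.sort(eigh(B, W, eigvals_only=True))[::-1][:len(np.unique(y)) - 1]
wilks_eig = np.prod(1.0 / (1.0 + lam))                   # product form
wilks_det = np.linalg.det(W) / np.linalg.det(W + B)      # determinant form
print(round(wilks_eig, 4), round(wilks_det, 4))          # the two agree
```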

Stepwise Discriminant Analysis

Variable Selection Criteria

Forward Selection

Start with no variables; add most discriminating one at each step

Backward Elimination

Start with all variables; remove least discriminating one at each step

Stepwise

Combine forward and backward; variables can enter and leave

Selection Criterion

Wilks' Lambda, F-to-enter/remove, or partial F-test

Partial F-Statistic

Test significance of adding/removing variable:

F = \frac{(\Lambda_{reduced} - \Lambda_{full})/q}{\Lambda_{full}/(n-g-p+q)}

Decision: Add variable if F > F_critical; remove if F < F_critical

Complete Workflow

Step-by-Step Guide
  1. Data Preparation: Check for missing values, outliers, multicollinearity
  2. Assumption Checking: Test multivariate normality (Shapiro-Wilk, Q-Q plots)
  3. Test Equality of Covariances: Box's M test (if significant, consider QDA)
  4. Fit Model: Choose LDA or QDA based on assumptions
  5. Variable Selection: Use stepwise methods if needed
  6. Evaluate: Confusion matrix, cross-validation error rate
  7. Interpret: Examine discriminant loadings and plots
  8. Validate: Test on independent holdout sample if available
Reporting Results
  • Report classification accuracy (overall and per-group)
  • Present confusion matrix showing predicted vs actual classes
  • Show discriminant function coefficients or loadings
  • Include canonical correlation or Wilks' Lambda for each function
  • Visualize with discriminant plot (first two functions)
  • Report cross-validated error rate as realistic performance estimate
  • Discuss practical significance of classification accuracy

Common Pitfalls

Mistakes to Avoid

Using Resubstitution Error

Always optimistically biased; use CV instead

Ignoring Assumptions

Check normality and equal covariances

Too Many Variables

Overfitting when p approaches n; use regularization

Imbalanced Classes

Adjust priors or use stratified sampling

Confusing Coefficients and Loadings

Use loadings for interpretation

Stepwise Selection Overfitting

Validate selected model on independent data

Modern Extensions

Kernel LDA

Nonlinear extension using kernel trick:

Idea

Map data to high-dimensional feature space where linear separation is possible

Kernels

RBF, polynomial, sigmoid

Penalized LDA

Add penalty to objective for regularization:

\max_{\mathbf{a}} \frac{\mathbf{a}^T\mathbf{B}\mathbf{a}}{\mathbf{a}^T\mathbf{W}\mathbf{a}} - \lambda \|\mathbf{a}\|_1

L1 Penalty (Lasso)

Induces sparsity; automatic feature selection

L2 Penalty (Ridge)

Shrinks coefficients; improves stability

Practice Quiz

  1. Fisher's LDA finds the linear combination that maximizes:
  2. For two-group LDA, the discriminant function coefficients are proportional to:
  3. LDA assumes:
  4. QDA (Quadratic Discriminant Analysis) is used when:
  5. The classification rule in Bayesian discriminant analysis uses:
  6. APER (Apparent Error Rate) is:
  7. Leave-one-out cross-validation for classification:
  8. The number of canonical discriminant functions for g groups is:
  9. Prior probabilities in discriminant analysis:
  10. ROC curves plot:

FAQ

When should I use LDA vs logistic regression?

LDA assumes normality and equal covariances; logistic regression is more flexible. LDA can be better with small samples and when assumptions hold.

How do I handle unequal group sizes?

Adjust prior probabilities to reflect true population proportions or use equal priors if classification costs are equal.

What if my data isn't normally distributed?

LDA is fairly robust to mild non-normality. For severe violations, consider logistic regression, random forests, or other non-parametric classifiers.
