The Laplace expansion theorem provides a powerful recursive formula for computing determinants by expanding along any row or column. This method is foundational for understanding the adjugate matrix, deriving Cramer's rule, and performing symbolic determinant calculations.
The Laplace expansion (also called cofactor expansion) expresses an n×n determinant as a weighted sum of (n-1)×(n-1) minors. This recursive formula is named after Pierre-Simon Laplace, who developed it in the 18th century.
For an n×n matrix A with entries $a_{ij}$, minors $M_{ij}$, and cofactors $A_{ij} = (-1)^{i+j} M_{ij}$:

For any fixed row $i$, the determinant equals:

$$\det(A) = \sum_{j=1}^{n} a_{ij} A_{ij} = \sum_{j=1}^{n} (-1)^{i+j} a_{ij} M_{ij}$$

This is the expansion along row $i$.

For any fixed column $j$, the determinant equals:

$$\det(A) = \sum_{i=1}^{n} a_{ij} A_{ij} = \sum_{i=1}^{n} (-1)^{i+j} a_{ij} M_{ij}$$

This is the expansion along column $j$.
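The expansion can be sketched directly in code. This is a minimal pure-Python implementation on a hypothetical 3×3 sample matrix; the theorem predicts that every choice of row yields the same value:

```python
# Recursive cofactor expansion along a chosen row (pure Python, no libraries).

def minor(A, i, j):
    """Submatrix with row i and column j deleted."""
    return [[A[r][c] for c in range(len(A)) if c != j]
            for r in range(len(A)) if r != i]

def det_along_row(A, i=0):
    """Expand det(A) along row i (0-indexed)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    # cofactor A_ij = (-1)^(i+j) * M_ij; sum a_ij * A_ij over columns j
    return sum((-1) ** (i + j) * A[i][j] * det_along_row(minor(A, i, j))
               for j in range(n))

A = [[2, 1, 3],
     [4, -1, 0],
     [1, 2, 5]]

# Every row gives the same determinant
print([det_along_row(A, i) for i in range(3)])  # -> [-3, -3, -3]
```

The same function expanded along any row returns the same value, which is exactly the content of the theorem.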
We derive the expansion formula from the axiomatic definition using multilinearity.

Step 1: Write column $j$ as a sum of standard basis vectors: $\mathbf{a}_j = \sum_{i=1}^{n} a_{ij}\,\mathbf{e}_i$.

Step 2: By multilinearity (linearity in each column): $\det(A) = \sum_{i=1}^{n} a_{ij}\,\det(A^{(i)})$, where $A^{(i)}$ is $A$ with column $j$ replaced by $\mathbf{e}_i$.

Step 3: The determinant $\det(A^{(i)})$ equals $(-1)^{i+j} M_{ij}$, obtained by moving the standard basis vector to position $(j,j)$ through adjacent row and column swaps (each swap flips the sign) and computing the resulting determinant.
Compute the determinant of the given 3×3 matrix by expanding along row 1.
Solution:
Compute each cofactor:
Result: det(A) = 2(22) + 1(-1) + 3(-4) = 44 - 1 - 12 = 31
The signs follow a checkerboard pattern:
| + | − | + | − |
| − | + | − | + |
| + | − | + | − |
| − | + | − | + |
Position (1,1) is always +. Signs alternate along rows and columns.
Compute the determinant of the given 3×3 matrix.
Observation: Column 2 is all zeros!
Expanding along column 2: det = 0·A₁₂ + 0·A₂₂ + 0·A₃₂ = 0
This confirms: a matrix with a zero column has det = 0.
For a triangular matrix, Laplace expansion confirms that det = product of diagonal entries.
For upper triangular: expand along column 1 (only $a_{11}$ survives), then recursively.

Compute the determinant of the given upper triangular matrix.

Using Laplace along column 1:

Only $a_{11}$ is nonzero, with cofactor sign $(-1)^{1+1} = +$.
Compute the determinant of the given 4×4 matrix.
Strategy: Row 3 has two zeros. Expand along it:
Only two 3×3 cofactors to compute instead of four!
When choosing between rows and columns for expansion, pick the one with the most zeros: each zero entry eliminates an entire cofactor computation.
The Laplace expansion gives a recurrence for the number of multiplications:

$$M(n) = n\,M(n-1) + n, \qquad M(1) = 0$$

This solves to approximately $(e-1)\,n! \approx 1.72\,n!$ multiplications.
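The recurrence can be tabulated and compared against the closed-form estimate; this is a small sample computation, not part of the original text:

```python
# Tabulate the multiplication-count recurrence M(n) = n*M(n-1) + n, M(1) = 0,
# and compare with the asymptotic estimate (e - 1) * n!.
import math

def mults(n):
    """Number of multiplications in naive cofactor expansion of an n x n det."""
    m = 0  # M(1) = 0
    for k in range(2, n + 1):
        m = k * m + k
    return m

for n in range(2, 9):
    print(n, mults(n), round((math.e - 1) * math.factorial(n)))
```

The exact counts (2, 9, 40, 205, ...) approach $(e-1)\,n!$ quickly as $n$ grows.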
The alien cofactor theorem is a crucial result that explains what happens when you "mix" entries from one row with cofactors from another. This leads directly to the adjugate matrix formula.
If you expand row $i$ using cofactors from a different row $k \neq i$, the result is zero:

$$\sum_{j=1}^{n} a_{ij} A_{kj} = 0 \qquad (i \neq k)$$

Similarly for columns: if you expand column $j$ using cofactors from a different column $k \neq j$:

$$\sum_{i=1}^{n} a_{ij} A_{ik} = 0 \qquad (j \neq k)$$
Idea: The sum $\sum_j a_{ij} A_{kj}$ is the Laplace expansion of a modified matrix where row $k$ has been replaced by row $i$.

Step 1: Construct $B$ by replacing row $k$ of $A$ with row $i$ of $A$.

Step 2: Matrix $B$ has two identical rows (rows $i$ and $k$), so $\det(B) = 0$.

Step 3: Expanding $B$ along row $k$ gives exactly $\sum_j a_{ij} A_{kj}$, since the cofactors $A_{kj}$ delete row $k$ and are therefore unchanged.

Conclusion: $\sum_{j=1}^{n} a_{ij} A_{kj} = \det(B) = 0$.
Combining proper and alien cofactor expansions:

$$\sum_{j=1}^{n} a_{ij} A_{kj} = \delta_{ik}\,\det(A)$$

where $\delta_{ik}$ is the Kronecker delta ($1$ if $i = k$, else $0$).
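The combined identity can be verified numerically; here is a short check on a hypothetical 3×3 sample matrix:

```python
# Verify sum_j a_ij * A_kj = delta_ik * det(A): the proper expansion (i == k)
# gives det(A), and alien cofactors (i != k) give 0.

def minor(A, i, j):
    return [[A[r][c] for c in range(len(A)) if c != j]
            for r in range(len(A)) if r != i]

def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(len(A)))

def cof(A, i, j):
    """Cofactor A_ij = (-1)^(i+j) * M_ij."""
    return (-1) ** (i + j) * det(minor(A, i, j))

A = [[2, 1, 3], [4, -1, 0], [1, 2, 5]]
d = det(A)
for i in range(3):
    for k in range(3):
        s = sum(A[i][j] * cof(A, k, j) for j in range(3))
        assert s == (d if i == k else 0)
print("identity verified; det(A) =", d)
```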
The adjugate of $A$, denoted $\operatorname{adj}(A)$, is the transpose of the cofactor matrix: $[\operatorname{adj}(A)]_{ij} = A_{ji}$.
Note the index swap: entry (i,j) of adj(A) is the cofactor from position (j,i).
The $(i,k)$-entry of $A \cdot \operatorname{adj}(A)$ is:

$$\sum_{j=1}^{n} a_{ij}\,[\operatorname{adj}(A)]_{jk} = \sum_{j=1}^{n} a_{ij} A_{kj}$$

This equals $\det(A)$ when $i = k$ and $0$ otherwise, which is exactly $\det(A)\,I$. Hence $A \cdot \operatorname{adj}(A) = \det(A)\,I$.
If $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$, then:

$$\operatorname{adj}(A) = \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$$

For $\det(A) = ad - bc \neq 0$:

$$A^{-1} = \frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$$

Thus $A \cdot \operatorname{adj}(A) = (ad - bc)\,I$, confirming the familiar 2×2 inverse formula.
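A quick numerical check of the 2×2 adjugate identity, using a hypothetical sample matrix:

```python
# Check the 2x2 identity A * adj(A) = det(A) * I, where
# adj([[a, b], [c, d]]) = [[d, -b], [-c, a]].

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def adj2(M):
    a, b = M[0]
    c, d = M[1]
    return [[d, -b], [-c, a]]

A = [[3, 2], [4, -1]]          # sample matrix, det(A) = -11
adjA = adj2(A)
# A * adj(A) should equal det(A) * I
prod = [[sum(A[i][k] * adjA[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
print(prod)  # [[-11, 0], [0, -11]]
```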
Compute adj(A) for the given 3×3 matrix A.
Step 1: Compute all 9 cofactors:
Step 2: Transpose the cofactor matrix:
For an n×n matrix $A$:

$$\det(\operatorname{adj}(A)) = \det(A)^{n-1}$$

From $A \cdot \operatorname{adj}(A) = \det(A)\,I$, take determinants:

$$\det(A)\,\det(\operatorname{adj}(A)) = \det(A)^{n}$$

If $\det(A) \neq 0$, divide by $\det(A)$ to get $\det(\operatorname{adj}(A)) = \det(A)^{n-1}$.

The formula holds for singular $A$ by continuity.
For invertible $A$:

$$A^{-1} = \frac{1}{\det(A)}\,\operatorname{adj}(A)$$
Cramer's rule provides an explicit formula for solving linear systems using determinants. While elegant theoretically, it is computationally expensive for large systems.
Consider the system $A\mathbf{x} = \mathbf{b}$ where $A$ is n×n. If $\det(A) \neq 0$, the unique solution is:

$$x_i = \frac{\det(A_i)}{\det(A)}, \qquad i = 1, \dots, n$$

where $A_i$ is the matrix $A$ with column $i$ replaced by the vector $\mathbf{b}$.
Method 1 (via adjugate): From $\mathbf{x} = A^{-1}\mathbf{b} = \frac{1}{\det(A)}\operatorname{adj}(A)\,\mathbf{b}$, we have:

The $i$-th component is:

$$x_i = \frac{1}{\det(A)} \sum_{j=1}^{n} [\operatorname{adj}(A)]_{ij}\, b_j = \frac{1}{\det(A)} \sum_{j=1}^{n} b_j\, A_{ji}$$

Key insight: $\sum_{j} b_j A_{ji}$ is exactly the cofactor expansion of $\det(A_i)$ along column $i$.
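Cramer's rule can be sketched directly from the column-replacement formula; the sample system here is hypothetical:

```python
# Cramer's rule x_i = det(A_i) / det(A), with A_i = A with column i replaced
# by b. Exact rational arithmetic via Fraction avoids round-off.
from fractions import Fraction

def minor(A, i, j):
    return [[A[r][c] for c in range(len(A)) if c != j]
            for r in range(len(A)) if r != i]

def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(len(A)))

def cramer(A, b):
    d = det(A)
    xs = []
    for i in range(len(A)):
        Ai = [row[:] for row in A]
        for r in range(len(A)):
            Ai[r][i] = b[r]          # replace column i with b
        xs.append(Fraction(det(Ai), d))
    return xs

# sample system: 3x + 2y = 7, 4x - y = 2
print(cramer([[3, 2], [4, -1]], [7, 2]))  # [Fraction(1, 1), Fraction(2, 1)]
```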
Solve the given 2×2 system using Cramer's rule.
Step 1: Compute det(A):
Step 2: Compute det(A₁) (replace column 1 with b):
Step 3: Compute det(A₂) (replace column 2 with b):
Result: x = -10/(-5) = 2, y = -5/(-5) = 1
Solve the given 3×3 system using Cramer's rule.
Coefficient matrix and RHS:
det(A) = -9 (compute by row reduction or Sarrus)
det(A₁) = -9, det(A₂) = -18, det(A₃) = -27
Solution: x = 1, y = 2, z = 3
Cramer's rule has a beautiful geometric interpretation: $x_i$ is the ratio of the signed volume of the parallelepiped spanned by the columns of $A_i$ (column $i$ replaced by $\mathbf{b}$) to the signed volume spanned by the columns of $A$.
Cramer's rule requires computing n+1 determinants, each of size n×n. This gives O(n · n!) complexity with naive expansion, or O(n⁴) with row reduction for determinants. Compare to O(n³) for Gaussian elimination directly on the system. Use Cramer's rule for theory or small systems only.
In a 4×4 system, find only x₃ without computing x₁, x₂, x₄.
Method: Compute det(A) and det(A₃) only. No need for det(A₁), det(A₂), det(A₄).
This is useful when only one component of the solution is needed.
For a consistent system Ax = b where A is m×n with m < n and rank(A) = m:
The solution is not unique, but Cramer-like formulas exist for the basic variables in terms of free variables.
Cramer's rule can be numerically unstable for ill-conditioned matrices. The ratio of determinants can magnify round-off errors. For numerical work, use LU decomposition or QR factorization instead.
The cofactor sign follows a checkerboard pattern. Position (1,1) is +, (1,2) is −, etc. Use $(-1)^{i+j}$ to compute the sign.
The minor $M_{ij}$ is the unsigned (n−1)×(n−1) determinant. The cofactor includes the sign: $A_{ij} = (-1)^{i+j} M_{ij}$.
For Cramer's rule, replace column $i$ with $\mathbf{b}$, not row $i$. The system is Ax = b, where the columns of A are weighted by the entries of x.
For $M_{ij}$, delete row $i$ and column $j$. A common error is mixing up which subscript corresponds to row vs column.
The adjugate is the transpose of the cofactor matrix: $[\operatorname{adj}(A)]_{ij} = A_{ji}$ (note the swapped indices).
Cramer's rule only works when det(A) ≠ 0. If det(A) = 0, the system has either no solution or infinitely many.
The Laplace expansion can be generalized to expand along multiple rows or columns simultaneously. This is particularly useful for block matrices.
For an n×n matrix $A$, choose $k$ rows $i_1 < i_2 < \cdots < i_k$. Then:

$$\det(A) = \sum_{j_1 < j_2 < \cdots < j_k} (-1)^{(i_1+\cdots+i_k)+(j_1+\cdots+j_k)}\; M^{i_1,\dots,i_k}_{j_1,\dots,j_k}\; \widetilde{M}^{\,i_1,\dots,i_k}_{j_1,\dots,j_k}$$

where $M^{i_1,\dots,i_k}_{j_1,\dots,j_k}$ is the k×k minor using rows $i_1,\dots,i_k$ and columns $j_1,\dots,j_k$, and the complementary minor $\widetilde{M}^{\,i_1,\dots,i_k}_{j_1,\dots,j_k}$ uses the remaining rows and columns.
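The generalized expansion can be checked against ordinary cofactor expansion; this sketch uses 0-based indices (the sign parity matches the 1-based formula) and a hypothetical 4×4 sample:

```python
# Generalized Laplace expansion along the first k rows (0 < k < n): sum over
# all k-column subsets J of sign * (k x k minor) * (complementary minor).
from itertools import combinations

def submatrix(A, rows, cols):
    return [[A[r][c] for c in cols] for r in rows]

def det(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j]
               * det(submatrix(A, range(1, n), [c for c in range(n) if c != j]))
               for j in range(n))

def laplace_k_rows(A, k):
    n = len(A)
    I = range(k)                            # expand along the first k rows
    total = 0
    for J in combinations(range(n), k):
        Jc = [c for c in range(n) if c not in J]
        sign = (-1) ** (sum(I) + sum(J))    # same parity as the 1-based sign
        total += sign * det(submatrix(A, I, J)) * det(submatrix(A, range(k, n), Jc))
    return total

A = [[1, 2, 0, 3],
     [4, 1, 1, 0],
     [2, 0, 3, 1],
     [0, 5, 1, 2]]
print(det(A), laplace_k_rows(A, 2))  # both give 134
```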
For a block diagonal matrix:

$$\det\begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix} = \det(A)\,\det(B)$$

Generalized Laplace expansion along the first k rows (where A is k×k) gives this directly: the only k-column subset with a nonzero k×k minor is the first k columns, leaving $\det(A)$ times the complementary minor $\det(B)$.
Similarly for block lower triangular matrices.
Expanding along k rows involves $\binom{n}{k}$ terms. For k = 1 (standard expansion), this is n terms. For k = n/2, this can be exponentially many.
Understanding the computational cost of cofactor expansion is important for choosing the right method.
The naive recursive cofactor expansion has complexity:

$$T(n) = n\,T(n-1) + O(n) \implies T(n) = O(n!)$$

This is exponentially worse than O(n³) for row reduction.
| Matrix Size | Cofactor O(n!) | Row Reduction O(n³) |
|---|---|---|
| 3×3 | ~6 ops | ~27 ops |
| 5×5 | ~120 ops | ~125 ops |
| 10×10 | ~3.6M ops | ~1,000 ops |
| 20×20 | ~2.4×10¹⁸ ops | ~8,000 ops |
Despite poor complexity, cofactor expansion is preferred for small matrices, for rows or columns with many zeros, and for symbolic entries.
For a 4×4 matrix whose first row has a single nonzero entry:

Expanding along row 1, only one cofactor is nonzero, giving a single 3×3 minor.

This is much faster than full 4×4 row reduction!
Compute the determinant of the given 4×4 matrix.
Observation: This is block diagonal!
When computing determinants via cofactor expansion, identical minors may appear multiple times. Storing computed minors (memoization) can reduce redundant computation.
This optimization is used in computer algebra systems but still doesn't achieve O(n³) complexity.
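The memoization idea can be sketched concisely: if we always expand along the first surviving row, each submatrix is identified by its set of surviving columns, so minors can be cached (O(2ⁿ) distinct subsets instead of O(n!) recursive calls). A minimal sketch on a hypothetical sample matrix:

```python
# Memoized cofactor expansion: cache minors keyed by the tuple of surviving
# column indices. Always expanding along the first remaining row means the
# surviving rows are determined by how many columns remain.
from functools import lru_cache

def det_memo(A):
    n = len(A)

    @lru_cache(maxsize=None)
    def d(cols):
        # `cols`: sorted tuple of surviving columns; surviving rows are the
        # last len(cols) rows of A.
        row = n - len(cols)
        if len(cols) == 1:
            return A[row][cols[0]]
        return sum((-1) ** t * A[row][c]
                   * d(tuple(x for x in cols if x != c))
                   for t, c in enumerate(cols))

    return d(tuple(range(n)))

A = [[2, 1, 3], [4, -1, 0], [1, 2, 5]]
print(det_memo(A))  # -3
```

The cache trades memory for time; this is essentially dynamic programming over column subsets.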
Beyond computing determinants, Laplace expansion has important theoretical and practical applications.
The characteristic polynomial of $A$ is $p(\lambda) = \det(\lambda I - A)$.

The coefficient of $\lambda^{n-1}$ is $-\operatorname{tr}(A)$ (negative trace).

The constant term is $p(0) = \det(-A) = (-1)^n \det(A)$.
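These coefficient facts can be spot-checked numerically; the 3×3 matrix below is a hypothetical sample, and the $\lambda^2$ coefficient is recovered from three sample evaluations of the cubic:

```python
# Check: for p(lam) = det(lam*I - A) with A 3x3, the lambda^2 coefficient is
# -trace(A) and the constant term is (-1)^3 * det(A).

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def char_poly_at(A, lam):
    """Evaluate p(lam) = det(lam*I - A)."""
    M = [[lam * (i == j) - A[i][j] for j in range(3)] for i in range(3)]
    return det3(M)

A = [[2, 1, 0], [0, 3, 1], [1, 0, 1]]   # sample matrix
p0, p1, pm1 = char_poly_at(A, 0), char_poly_at(A, 1), char_poly_at(A, -1)
c2 = (p1 + pm1 - 2 * p0) // 2           # lambda^2 coefficient of the cubic
trace = A[0][0] + A[1][1] + A[2][2]
print(c2 == -trace, p0 == (-1) ** 3 * det3(A))  # True True
```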
For the given 2×2 triangular matrix, find the eigenvalues.
Eigenvalues: λ = 3, λ = 2 (diagonal entries of this triangular matrix)
The cross product in ℝ³ can be computed as a "determinant":

$$\mathbf{u} \times \mathbf{v} = \det\begin{pmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \end{pmatrix}$$

Expand along row 1 to get the standard formula.
Compute $\mathbf{u} \times \mathbf{v}$ by expanding along row 1:

$$\mathbf{u} \times \mathbf{v} = (u_2 v_3 - u_3 v_2)\,\mathbf{i} - (u_1 v_3 - u_3 v_1)\,\mathbf{j} + (u_1 v_2 - u_2 v_1)\,\mathbf{k}$$

$$= (u_2 v_3 - u_3 v_2,\; u_3 v_1 - u_1 v_3,\; u_1 v_2 - u_2 v_1)$$
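The row-1 expansion translates directly into code:

```python
# Cross product via the symbolic 3x3 "determinant", expanded along row 1.

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],   # i-component: minor M_11
            u[2] * v[0] - u[0] * v[2],   # j-component: minus minor M_12
            u[0] * v[1] - u[1] * v[0])   # k-component: minor M_13

print(cross((1, 0, 0), (0, 1, 0)))  # (0, 0, 1), i.e. i x j = k
```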
Find the area of the triangle with vertices (0,0), (3,0), (1,2).

Edges from the origin: u = (3,0), v = (1,2). The area is half the absolute value of the 2×2 determinant: Area = ½|u₁v₂ − u₂v₁| = ½|3·2 − 0·1| = 3.
In multivariable calculus, the Jacobian determinant appears in change of variables:

$$\iint_{R} f(x, y)\,dx\,dy = \iint_{R'} f\big(x(u,v),\, y(u,v)\big)\,\left|\det J\right|\,du\,dv$$

where

$$J = \begin{pmatrix} \partial x/\partial u & \partial x/\partial v \\ \partial y/\partial u & \partial y/\partial v \end{pmatrix}$$
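As a concrete check, the polar-coordinate Jacobian determinant equals $r$; this sketch approximates the partial derivatives with central finite differences rather than computing them symbolically:

```python
# Numerically estimate the 2x2 Jacobian determinant of a map (u, v) -> (x, y)
# and check the polar-coordinates case, where det J = r.
import math

def jacobian_det(f, u, v, h=1e-6):
    """Central-difference estimate of det of the 2x2 Jacobian of f at (u, v)."""
    xu = (f(u + h, v)[0] - f(u - h, v)[0]) / (2 * h)
    xv = (f(u, v + h)[0] - f(u, v - h)[0]) / (2 * h)
    yu = (f(u + h, v)[1] - f(u - h, v)[1]) / (2 * h)
    yv = (f(u, v + h)[1] - f(u, v - h)[1]) / (2 * h)
    return xu * yv - xv * yu

def polar(r, t):
    return (r * math.cos(t), r * math.sin(t))

print(jacobian_det(polar, 2.0, 0.7))  # close to r = 2
```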
Expand along any row or column
Checkerboard pattern starting with +
$\sum_j a_{ij} A_{kj} = 0$ for i ≠ k
Leads to adjugate identity
Replace column i with b
Gives inverse via A⁻¹ = adj(A)/det(A)
O(n!) for cofactor vs O(n³) row reduction
Use for small/sparse matrices only
Problem 1
Compute $\det\begin{pmatrix} 2 & 0 & 3 \\ 1 & 4 & -1 \\ 0 & 2 & 5 \end{pmatrix}$ by expanding along row 1.
Problem 2
Find the cofactor $A_{23}$ for the given 3×3 matrix.
Problem 3
Use Cramer's rule to solve:

$$3x + 2y = 7, \qquad 4x - y = 2$$
Problem 4
Compute $\operatorname{adj}(A)$ for $A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$ and verify $A \cdot \operatorname{adj}(A) = \det(A)\,I$.
Solution 1
Expand along row 1:
det = 2·A₁₁ + 0·A₁₂ + 3·A₁₃
A₁₁ = det(4,-1; 2,5) = 20+2 = 22
A₁₃ = det(1,4; 0,2) = 2
det = 2(22) + 3(2) = 44 + 6 = 50
Solution 2
A₂₃ = (-1)^{2+3} M₂₃ = -M₂₃
M₂₃ = det(1,2; 7,8) = 8 - 14 = -6
A₂₃ = -(-6) = 6
Solution 3
det(A) = 3(-1) - 2(4) = -11
det(A₁) = 7(-1) - 2(2) = -11, so x = -11/(-11) = 1
det(A₂) = 3(2) - 7(4) = -22, so y = -22/(-11) = 2
Solution 4
Cofactors: A₁₁=4, A₁₂=-3, A₂₁=-2, A₂₂=1
adj(A) = (A₁₁, A₂₁; A₁₂, A₂₂) = (4,-2; -3,1)
det(A) = 4-6 = -2
A·adj(A) = (1,2; 3,4)(4,-2; -3,1) = (-2,0; 0,-2) = -2·I ✓
Problem 5
Verify the alien cofactor theorem: show that $a_{11}A_{21} + a_{12}A_{22} + a_{13}A_{23} = 0$ for the matrix in Problem 2.
Problem 6
Use Cramer's rule to find only z in the given 3×3 system.
Problem 7 (Challenge)
Prove: For a 3×3 matrix A, det(adj(A)) = det(A)².
Problem 8
Compute the determinant of the given 4×4 upper triangular matrix with diagonal entries 1, 5, 8, 10.
Answer: 1 × 5 × 8 × 10 = 400 (upper triangular)
Problem 9
If det(A) = 5 for a 3×3 matrix A, find det(adj(A)).
Answer: det(adj(A)) = det(A)^{n-1} = 5² = 25
Problem 10 (Challenge)
Use cofactor expansion to prove det(AB) = det(A)det(B) for 2×2 matrices.
Compute $\det\begin{pmatrix} 2 & 1 & 3 \\ 4 & -1 & 0 \\ 1 & 2 & 5 \end{pmatrix}$ by expanding along row 2.
Step 1: Identify row 2 entries and signs: the entries are 4, −1, 0, and the sign pattern for row 2 is −, +, −.
Step 2: Compute minors:
M₂₁ = det(1,3; 2,5) = 5 - 6 = -1
M₂₂ = det(2,3; 1,5) = 10 - 3 = 7
Step 3: Combine:
det = 4·(-1)·(-1) + (-1)·(+1)·7 + 0 = 4 - 7 = -3
Pierre-Simon Laplace (1749-1827): French mathematician and astronomer who developed the expansion formula in his work on celestial mechanics. The formula allowed systematic computation of determinants of any size. His "Théorie analytique des probabilités" (1812) contains key results.
Gabriel Cramer (1704-1752): Swiss mathematician who published Cramer's rule in his "Introduction à l'analyse des lignes courbes algébriques" (1750), providing the first explicit formula for solving systems of linear equations. The rule predates modern matrix notation by a century.
Gottfried Wilhelm Leibniz (1646-1716): German polymath who first studied determinants in 1693, using them to eliminate variables in systems of equations. He used the notation |a b c| for what we now call a determinant.
Historical Context: Determinants were studied before matrices! Seki Takakazu in Japan and Leibniz in Europe independently discovered determinants in the late 17th century. The matrix notation wasn't introduced until Cayley in 1858.
Etymology: "Cofactor" comes from Latin "co-" (together) + "factor" (maker), reflecting how cofactors "work together" with matrix entries. "Adjugate" derives from Latin "adjungere" (to join to), referring to how adj(A) is "joined" to A in the identity A·adj(A) = det(A)·I.
While Laplace expansion is computationally expensive (O(n!)), it remains important for symbolic computation, theoretical proofs, and matrices that are small or have many zeros.
Cofactor expansion provides elegant proofs for many matrix identities.
Using cofactors, we can prove $\det(AB) = \det(A)\det(B)$:

Consider the block matrix $\begin{pmatrix} A & 0 \\ -I & B \end{pmatrix}$ and compute its determinant two ways.
The Vandermonde determinant formula $\det V = \prod_{1 \le i < j \le n} (x_j - x_i)$ can be proved by cofactor expansion along the first row, using induction on n.
Viewing det(A) as a polynomial in the entries, cofactor expansion shows that det(A) is linear in each individual entry: the coefficient of $a_{ij}$ is the cofactor $A_{ij}$, which involves neither row $i$ nor column $j$.
The Laplace expansion is equivalent to the Leibniz formula:

$$\det(A) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^{n} a_{i,\sigma(i)}$$

Both give the same n! terms, but organized differently.
Using Laplace expansion, we can prove that the determinant of the transpose equals the determinant: $\det(A^T) = \det(A)$.
Row expansion of A corresponds to column expansion of A^T. Since both expansions give the same result, det(A) = det(A^T).
The cofactor $A_{ij}$ can be viewed as a partial derivative:

$$\frac{\partial \det(A)}{\partial a_{ij}} = A_{ij}$$

This follows from the fact that det(A) is linear in the entry $a_{ij}$, with coefficient $A_{ij}$.
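Because det(A) is linear in each entry, a finite difference with any step size recovers the cofactor exactly (up to round-off); here is a check on a hypothetical sample matrix:

```python
# Check d det(A) / d a_ij = cofactor A_ij. Since det is linear in each entry,
# a one-step finite difference is exact regardless of the step h.

def minor(A, i, j):
    return [[A[r][c] for c in range(len(A)) if c != j]
            for r in range(len(A)) if r != i]

def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(len(A)))

A = [[2, 1, 3], [4, -1, 0], [1, 2, 5]]
i, j, h = 1, 2, 1                      # any h works: det is linear in a_ij
Ah = [row[:] for row in A]
Ah[i][j] += h
derivative = (det(Ah) - det(A)) / h
cofactor = (-1) ** (i + j) * det(minor(A, i, j))
print(derivative, cofactor)  # -3.0 -3
```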
Now that you understand Laplace expansion, compare the main determinant methods:
| Method | Complexity | Best For | Limitations |
|---|---|---|---|
| Laplace Expansion | O(n!) | Small/sparse, symbolic | Exponential growth |
| Row Reduction | O(n³) | General numerical | May introduce fractions |
| LU Decomposition | O(n³) | Multiple systems | Pivoting needed |
| Cramer's Rule | O(n · n!) | Single variable, theory | Requires n+1 determinants |
This module covered Laplace expansion, the recursive method for computing determinants via cofactors. Key results include the alien cofactor theorem, the adjugate matrix identity, and Cramer's rule.
Minor M_{ij} is the (n-1)×(n-1) determinant obtained by deleting row i and column j. Cofactor A_{ij} = (-1)^{i+j} M_{ij} includes the checkerboard sign. Always use cofactors (not minors) in the expansion formula.
If you expand row i using cofactors from row k≠i, you're computing the determinant of a matrix where row i has been replaced by row k. This matrix has two identical rows (rows i and k), so its determinant is 0.
Cramer's rule is elegant for theoretical proofs and very small systems (2×2 or 3×3). For larger systems, row reduction (Gaussian elimination) is O(n³) vs O(n·n!) for Cramer's rule, making it vastly more efficient.
Yes! The Laplace expansion theorem guarantees all rows and columns give the same determinant. Choose a row/column with many zeros to minimize computation.
Position (1,1) is always +. Then alternate: + - + - ... along each row and column. Equivalently, the sign at (i,j) is (-1)^{i+j}, which is + when i+j is even, - when odd.
The adjugate adj(A) is the transpose of the cofactor matrix: [adj(A)]_{ij} = A_{ji}. It satisfies A·adj(A) = adj(A)·A = det(A)·I, giving the inverse formula A^{-1} = adj(A)/det(A) when det(A) ≠ 0.
An n×n determinant is expressed in terms of (n-1)×(n-1) minors. Each minor can be expanded further, recursively reducing to smaller determinants until reaching 1×1 or 2×2 base cases.
Instead of expanding along a single row or column, you can expand along multiple rows (or columns) simultaneously. The formula involves products of complementary minors with appropriate signs.
Cramer's rule x_i = det(A_i)/det(A) can be derived from x = A^{-1}b = adj(A)b/det(A). The i-th component involves the cofactors from column i of A, which equals det(A_i).
Yes! Since det(A) = det(A^T), expanding A along row i is equivalent to expanding A^T along column i. This is why row and column expansions give the same result.
Using the wrong sign (forgetting (-1)^{i+j}) will give an incorrect determinant. This is one of the most common errors. Always check: (1,1), (1,3), (2,2), (3,1), (3,3) are + signs.
Yes! Unlike row reduction which may introduce fractions, Laplace expansion preserves the polynomial structure of entries. This makes it preferred in computer algebra systems for symbolic determinants.