MathIsimple · LA-5.4 · 45 min read

Laplace Expansion

The Laplace expansion theorem provides a powerful recursive formula for computing determinants by expanding along any row or column. This method is foundational for understanding the adjugate matrix, deriving Cramer's rule, and performing symbolic determinant calculations.

Topics: Cofactor Expansion · Cramer's Rule · Adjugate Matrix · Alien Cofactors
Learning Objectives
  • State and prove the Laplace expansion theorem for rows and columns
  • Compute determinants by expanding along any row or column
  • Apply the checkerboard sign pattern for cofactors
  • Understand and prove the alien cofactor theorem
  • Derive the adjugate matrix from cofactors
  • Apply Cramer's rule to solve linear systems
  • Analyze computational complexity of cofactor expansion
  • Recognize when Laplace expansion is optimal vs row reduction
  • Understand generalized (multi-row) Laplace expansion
Prerequisites
  • Determinant definition and axiomatic properties (LA-5.1)
  • Minors and cofactors: M_{ij} and A_{ij} = (-1)^{i+j} M_{ij}
  • Determinant properties: multilinearity, alternation (LA-5.2)
  • Row reduction method for determinants (LA-5.3)
  • Matrix operations and elementary matrices (LA-4.4)

1. Laplace Expansion Theorem

The Laplace expansion (also called cofactor expansion) expresses an n×n determinant as a weighted sum of (n-1)×(n-1) minors. This recursive formula is named after Pierre-Simon Laplace, who developed it in the 18th century.

Definition 5.8: Minor and Cofactor

For an n×n matrix A:

  • Minor M_{ij}: the (n-1)×(n-1) determinant obtained by deleting row i and column j
  • Cofactor A_{ij} = (-1)^{i+j} M_{ij}: the signed minor
Theorem 5.12: Laplace Expansion (Row)

For any fixed row i, the determinant equals:

\det(A) = \sum_{j=1}^{n} a_{ij} A_{ij} = \sum_{j=1}^{n} (-1)^{i+j} a_{ij} M_{ij}

This is the expansion along row i.

Theorem 5.13: Laplace Expansion (Column)

For any fixed column j, the determinant equals:

\det(A) = \sum_{i=1}^{n} a_{ij} A_{ij} = \sum_{i=1}^{n} (-1)^{i+j} a_{ij} M_{ij}

This is the expansion along column j.

Proof:

We derive the formula from the axiomatic definition using multilinearity.

Step 1: Write column j as a sum of standard basis vectors:

\alpha_j = a_{1j}e_1 + a_{2j}e_2 + \cdots + a_{nj}e_n = \sum_{i=1}^{n} a_{ij}e_i

Step 2: By multilinearity (linearity in each column):

\det(A) = \det(\alpha_1, \ldots, \alpha_n) = \sum_{i=1}^{n} a_{ij} \det(\alpha_1, \ldots, e_i, \ldots, \alpha_n)

Step 3: The determinant \det(\alpha_1, \ldots, e_i, \ldots, \alpha_n) equals (-1)^{i+j} M_{ij}: move row i to the bottom by n-i adjacent row swaps and column j to the right by n-j adjacent column swaps, picking up the sign (-1)^{(n-i)+(n-j)} = (-1)^{i+j}. The last column is now e_n, so expanding along it leaves exactly the minor M_{ij}.
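The row expansion is easy to sanity-check in code. The sketch below implements cofactor expansion along the first row in plain Python (the names `det` and `minor` are our own, not from the text) and reproduces Example 5.6:

```python
def minor(A, i, j):
    """Submatrix of A with row i and column j deleted (A as list of lists)."""
    return [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]

def det(A):
    """Determinant via Laplace expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    # Sign for position (0, j) is (-1)^{0+j} = (-1)^j.
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(n))

A = [[2, 1, 3], [0, 4, -1], [1, 2, 5]]
print(det(A))  # 31, matching Example 5.6
```

The recursion bottoms out at 1×1; choosing row 0 every time is just a convention, since any row or column gives the same value.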

Example 5.6: 3×3 Expansion Along Row 1

Compute \det\begin{pmatrix} 2 & 1 & 3 \\ 0 & 4 & -1 \\ 1 & 2 & 5 \end{pmatrix} by expanding along row 1.

Solution:

\det(A) = a_{11}A_{11} + a_{12}A_{12} + a_{13}A_{13}

Compute each cofactor:

  • A_{11} = (+1) \det\begin{pmatrix} 4 & -1 \\ 2 & 5 \end{pmatrix} = 4(5) - (-1)(2) = 22
  • A_{12} = (-1) \det\begin{pmatrix} 0 & -1 \\ 1 & 5 \end{pmatrix} = -(0(5) - (-1)(1)) = -1
  • A_{13} = (+1) \det\begin{pmatrix} 0 & 4 \\ 1 & 2 \end{pmatrix} = 0(2) - 4(1) = -4

Result: det(A) = 2(22) + 1(-1) + 3(-4) = 44 - 1 - 12 = 31

Remark 5.6: Checkerboard Sign Pattern

The signs (-1)^{i+j} follow a checkerboard pattern:

\begin{pmatrix} + & - & + & - \\ - & + & - & + \\ + & - & + & - \\ - & + & - & + \end{pmatrix}

Position (1,1) is always +. Signs alternate along rows and columns.

Example 5.7: Strategic Column Expansion

Compute \det\begin{pmatrix} 3 & 0 & 2 \\ 1 & 0 & 4 \\ 5 & 0 & 1 \end{pmatrix}.

Observation: Column 2 is all zeros!

Expanding along column 2: det = 0·A₁₂ + 0·A₂₂ + 0·A₃₂ = 0

This confirms: a matrix with a zero column has det = 0.

Corollary 5.4: Triangular Matrix

For a triangular matrix, Laplace expansion confirms that det = product of diagonal entries.

For upper triangular: expand along column 1 (only a_{11} survives), then recursively.

Example 5.17: 4×4 Upper Triangular

Compute \det\begin{pmatrix} 2 & 5 & 1 & 3 \\ 0 & 3 & 7 & 2 \\ 0 & 0 & 4 & 8 \\ 0 & 0 & 0 & 1 \end{pmatrix}.

Using Laplace along column 1:

Only a_{11} = 2 is nonzero, with cofactor sign (+).

\det = 2 \cdot \det\begin{pmatrix} 3 & 7 & 2 \\ 0 & 4 & 8 \\ 0 & 0 & 1 \end{pmatrix} = 2 \cdot 3 \cdot 4 \cdot 1 = 24
Example 5.18: Expanding Along a Strategic Row

Compute \det\begin{pmatrix} 1 & 0 & 0 & 2 \\ 3 & 4 & 5 & 6 \\ 0 & 7 & 8 & 0 \\ 1 & 0 & 0 & 3 \end{pmatrix}.

Strategy: Row 3 has two zeros. Expand along it:

\det = 0 \cdot A_{31} + 7 \cdot A_{32} + 8 \cdot A_{33} + 0 \cdot A_{34}

Only two 3×3 cofactors to compute instead of four!

Remark 5.14: Row vs Column Choice

When choosing between rows and columns for expansion:

  • Count zeros in each row and column
  • Expand along the row/column with most zeros
  • A row/column with all zeros gives det = 0 immediately
  • Consider which minors will be easiest to compute
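The zero-counting heuristic above can be sketched as a small helper (illustrative only; `best_expansion_line` is a hypothetical name, and ties are broken in favor of rows, then lower indices):

```python
def best_expansion_line(A):
    """Return ('row' or 'col', index) of the line with the most zero entries."""
    n = len(A)
    row_zeros = [sum(1 for x in row if x == 0) for row in A]
    col_zeros = [sum(1 for row in A if row[j] == 0) for j in range(n)]
    if max(row_zeros) >= max(col_zeros):
        return 'row', row_zeros.index(max(row_zeros))
    return 'col', col_zeros.index(max(col_zeros))

# The matrix of Example 5.18: rows 1, 3, and 4 all have two zeros,
# so the helper returns the first of the tied lines.
A = [[1, 0, 0, 2], [3, 4, 5, 6], [0, 7, 8, 0], [1, 0, 0, 3]]
print(best_expansion_line(A))  # ('row', 0)
```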
Theorem 5.22: Recursive Formula Complexity

The Laplace expansion gives a recurrence for the number of multiplications:

M(n) = n \cdot M(n-1) + n

with M(1) = 0: each of the n entry-times-cofactor products costs one multiplication, plus the cost of the n minors. Dividing by n! and summing gives M(n) = n! \sum_{k=1}^{n-1} \frac{1}{k!}, approximately (e-1) \cdot n! multiplications.
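Assuming the convention that each entry-times-cofactor product costs one multiplication (sign handling not counted), the recurrence can be tabulated directly; the ratio M(n)/n! tends to e − 1 ≈ 1.718:

```python
import math

def mults(n):
    """M(n) = n*M(n-1) + n, with M(1) = 0."""
    return 0 if n == 1 else n * mults(n - 1) + n

for n in (2, 5, 10):
    print(n, mults(n), round(mults(n) / math.factorial(n), 3))
# The ratio M(n)/n! is the partial sum of 1/k!, approaching e - 1.
```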

2. Alien Cofactor Theorem

The alien cofactor theorem is a crucial result that explains what happens when you "mix" entries from one row with cofactors from another. This leads directly to the adjugate matrix formula.

Theorem 5.14: Alien Cofactor Theorem (Rows)

If you expand the entries of row i against the cofactors of a different row k ≠ i:

\sum_{j=1}^{n} a_{ij} A_{kj} = 0 \quad \text{for } i \neq k
Theorem 5.15: Alien Cofactor Theorem (Columns)

Similarly for columns: if you expand column j using cofactors from a different column k ≠ j:

\sum_{i=1}^{n} a_{ij} A_{ik} = 0 \quad \text{for } j \neq k
Proof:

Idea: The sum \sum_j a_{ij} A_{kj} is the Laplace expansion of a modified matrix B where row k has been replaced by row i.

Step 1: Construct B by replacing row k of A with a copy of row i of A.

Step 2: Matrix B has two identical rows (rows i and k), so \det(B) = 0.

Step 3: The cofactors B_{kj} equal A_{kj}, since row k is deleted when forming them; expanding B along row k therefore gives exactly \sum_j a_{ij} A_{kj}.

Conclusion: \sum_j a_{ij} A_{kj} = \det(B) = 0.
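A quick numerical check of proper and alien expansions together, on the matrix of Example 5.6 (the `cofactor` helper is illustrative; numpy is assumed available for the small determinants):

```python
import numpy as np

def cofactor(A, i, j):
    """A_ij = (-1)^{i+j} times the minor with row i, column j deleted."""
    M = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(M)

A = np.array([[2., 1., 3.], [0., 4., -1.], [1., 2., 5.]])
n = A.shape[0]
for i in range(n):
    for k in range(n):
        s = sum(A[i, j] * cofactor(A, k, j) for j in range(n))
        expected = np.linalg.det(A) if i == k else 0.0  # delta_ik * det(A)
        assert abs(s - expected) < 1e-9
print("sum_j a_ij A_kj = delta_ik det(A) verified")
```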

Remark 5.7: Unified Formula

Combining proper and alien cofactor expansions:

\sum_{j=1}^{n} a_{ij} A_{kj} = \begin{cases} \det(A) & \text{if } i = k \\ 0 & \text{if } i \neq k \end{cases} = \delta_{ik} \det(A)

where \delta_{ik} is the Kronecker delta.

Definition 5.9: Adjugate (Classical Adjoint) Matrix

The adjugate of A, denoted \text{adj}(A), is the transpose of the cofactor matrix:

[\text{adj}(A)]_{ij} = A_{ji}

Note the index swap: entry (i,j) of adj(A) is the cofactor from position (j,i).

Corollary 5.5: Fundamental Adjugate Identity
A \cdot \text{adj}(A) = \text{adj}(A) \cdot A = \det(A) \cdot I
Proof:

The (i,k)-entry of A \cdot \text{adj}(A) is:

\sum_{j=1}^{n} a_{ij} [\text{adj}(A)]_{jk} = \sum_{j=1}^{n} a_{ij} A_{kj} = \delta_{ik} \det(A)

This equals \det(A) when i = k and 0 otherwise, which is exactly \det(A) \cdot I.

Corollary 5.6: Inverse via Adjugate

If \det(A) \neq 0, then:

A^{-1} = \frac{1}{\det(A)} \text{adj}(A)
Example 5.8: 2×2 Adjugate

For A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}:

\text{adj}(A) = \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}

Thus A^{-1} = \frac{1}{ad-bc}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}, confirming the familiar 2×2 inverse formula.

Example 5.19: 3×3 Adjugate

Compute adj(A) for A = \begin{pmatrix} 1 & 2 & 3 \\ 0 & 1 & 4 \\ 5 & 6 & 0 \end{pmatrix}.

Step 1: Compute all 9 cofactors:

  • A₁₁ = +det(1,4; 6,0) = -24
  • A₁₂ = -det(0,4; 5,0) = +20
  • A₁₃ = +det(0,1; 5,6) = -5
  • A₂₁ = -det(2,3; 6,0) = +18
  • A₂₂ = +det(1,3; 5,0) = -15
  • A₂₃ = -det(1,2; 5,6) = +4
  • A₃₁ = +det(2,3; 1,4) = +5
  • A₃₂ = -det(1,3; 0,4) = -4
  • A₃₃ = +det(1,2; 0,1) = +1

Step 2: Transpose the cofactor matrix:

\text{adj}(A) = \begin{pmatrix} -24 & 18 & 5 \\ 20 & -15 & -4 \\ -5 & 4 & 1 \end{pmatrix}
Theorem 5.23: Determinant of Adjugate

For an n×n matrix A:

\det(\text{adj}(A)) = (\det(A))^{n-1}
Proof:

From A \cdot \text{adj}(A) = \det(A) \cdot I, take determinants:

\det(A) \cdot \det(\text{adj}(A)) = \det(A)^n

If \det(A) \neq 0, divide by \det(A) to get \det(\text{adj}(A)) = \det(A)^{n-1}.

The formula holds for singular A by continuity.
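These identities can be verified numerically on the matrix of Example 5.19. The sketch below builds adj(A) entrywise from cofactors (numpy assumed; `adjugate` is our own name):

```python
import numpy as np

def adjugate(A):
    """Transpose of the cofactor matrix of a square matrix A."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            M = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(M)
    return C.T  # note the transpose: [adj(A)]_{ij} = A_{ji}

A = np.array([[1., 2., 3.], [0., 1., 4.], [5., 6., 0.]])
adjA = adjugate(A)
d = np.linalg.det(A)
assert np.allclose(adjA, [[-24, 18, 5], [20, -15, -4], [-5, 4, 1]])  # Example 5.19
assert np.allclose(A @ adjA, d * np.eye(3))        # fundamental adjugate identity
assert np.allclose(np.linalg.det(adjA), d ** 2)    # det(adj A) = det(A)^{n-1}, n = 3
```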

Corollary 5.7: Adjugate of Adjugate

For invertible A:

\text{adj}(\text{adj}(A)) = (\det(A))^{n-2} A

3. Cramer's Rule

Cramer's rule provides an explicit formula for solving linear systems using determinants. While elegant theoretically, it is computationally expensive for large systems.

Theorem 5.16: Cramer's Rule

Consider the system Ax = b where A is n×n. If \det(A) \neq 0, the unique solution is:

x_i = \frac{\det(A_i)}{\det(A)} \quad \text{for } i = 1, 2, \ldots, n

where A_i is the matrix A with column i replaced by the vector b.

Proof:

Method 1 (via adjugate): From Ax = b, we have:

x = A^{-1}b = \frac{1}{\det(A)} \text{adj}(A) \cdot b

The i-th component is:

x_i = \frac{1}{\det(A)} \sum_{j=1}^{n} A_{ji} b_j

Key insight: \sum_j A_{ji} b_j is exactly the cofactor expansion of A_i along column i, since the cofactors of column i do not involve the entries of that column.

Example 5.9: 2×2 System

Solve \begin{cases} 2x + 3y = 7 \\ x - y = 1 \end{cases}

Step 1: Compute det(A):

\det(A) = \det\begin{pmatrix} 2 & 3 \\ 1 & -1 \end{pmatrix} = 2(-1) - 3(1) = -5

Step 2: Compute det(A₁) (replace column 1 with b):

\det(A_1) = \det\begin{pmatrix} 7 & 3 \\ 1 & -1 \end{pmatrix} = 7(-1) - 3(1) = -10

Step 3: Compute det(A₂) (replace column 2 with b):

\det(A_2) = \det\begin{pmatrix} 2 & 7 \\ 1 & 1 \end{pmatrix} = 2(1) - 7(1) = -5

Result: x = -10/(-5) = 2, y = -5/(-5) = 1
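The column-replacement recipe takes only a few lines of numpy (a sketch assuming det(A) ≠ 0; `cramer` is a name of our choosing), reproducing Example 5.9:

```python
import numpy as np

def cramer(A, b):
    """Solve Ax = b by Cramer's rule; assumes A is square with det(A) != 0."""
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                      # replace column i with b
        x[i] = np.linalg.det(Ai) / d
    return x

A = np.array([[2., 3.], [1., -1.]])
b = np.array([7., 1.])
print(cramer(A, b))  # [2. 1.]
```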

Example 5.10: 3×3 System

Solve \begin{cases} x + y + z = 6 \\ 2x - y + z = 3 \\ x + 2y - z = 2 \end{cases}

Coefficient matrix and RHS:

A = \begin{pmatrix} 1 & 1 & 1 \\ 2 & -1 & 1 \\ 1 & 2 & -1 \end{pmatrix}, \quad b = \begin{pmatrix} 6 \\ 3 \\ 2 \end{pmatrix}

det(A) = 7 (compute by row reduction or Sarrus)

det(A₁) = 7, det(A₂) = 14, det(A₃) = 21

Solution: x = 7/7 = 1, y = 14/7 = 2, z = 21/7 = 3

Remark 5.8: Geometric Interpretation

Cramer's rule has a beautiful geometric interpretation:

  • det(A) represents the (signed) volume of the parallelepiped spanned by columns of A
  • det(Aᵢ) represents the volume when column i is replaced by b
  • The ratio xᵢ = det(Aᵢ)/det(A) measures how much b "contributes" in direction i

Computational Complexity Warning

Cramer's rule requires computing n+1 determinants, each of size n×n. This gives O(n · n!) complexity with naive expansion, or O(n⁴) with row reduction for determinants. Compare to O(n³) for Gaussian elimination directly on the system. Use Cramer's rule for theory or small systems only.

Remark 5.9: When Cramer's Rule is Useful
  • Solving for a single variable xᵢ without finding all others
  • Theoretical derivations and proofs
  • Symbolic computation (preserves exact arithmetic)
  • Very small systems (2×2 or 3×3) by hand
Example 5.20: Partial Solution via Cramer

In a 4×4 system, find only x₃ without computing x₁, x₂, x₄.

Method: Compute det(A) and det(A₃) only. No need for det(A₁), det(A₂), det(A₄).

This is useful when only one component of the solution is needed.

Theorem 5.24: Cramer's Rule for Non-Square Systems

For a consistent system Ax = b where A is m×n with m < n and rank(A) = m:

The solution is not unique, but Cramer-like formulas exist for the basic variables in terms of free variables.

Remark 5.15: Numerical Stability

Cramer's rule can be numerically unstable for ill-conditioned matrices. The ratio of determinants can magnify round-off errors. For numerical work, use LU decomposition or QR factorization instead.

4. Common Mistakes

Forgetting the sign pattern (-1)^{i+j}

The cofactor sign follows a checkerboard pattern. Position (1,1) is +, (1,2) is −, etc. Use (-1)^{i+j} to compute the sign.

Confusing minor Mᵢⱼ with cofactor Aᵢⱼ

The minor is the unsigned (n-1)×(n-1) determinant. The cofactor includes the sign: A_{ij} = (-1)^{i+j} M_{ij}.

Wrong column replacement in Cramer's rule

For x_i, replace column i with b, not row i. The system is Ax = b, where the columns of A multiply the components of x.

Deleting wrong row/column for minor

For M_{ij}, delete row i and column j. A common error is mixing up which subscript corresponds to row vs column.

Forgetting the transpose in adjugate

The adjugate is the transpose of the cofactor matrix: [\text{adj}(A)]_{ij} = A_{ji} (note the swapped indices).

Using Cramer's rule when det(A) = 0

Cramer's rule only works when det(A) ≠ 0. If det(A) = 0, the system has either no solution or infinitely many.

5. Generalized Laplace Expansion

The Laplace expansion can be generalized to expand along multiple rows or columns simultaneously. This is particularly useful for block matrices.

Theorem 5.17: Generalized Laplace Expansion

For an n×n matrix A, choose k rows i_1 < i_2 < \cdots < i_k. Then:

\det(A) = \sum_{1 \leq j_1 < \cdots < j_k \leq n} (-1)^{\sum_r (i_r + j_r)} \, M_{i_1 \cdots i_k}^{j_1 \cdots j_k} \cdot \widetilde{M}_{i_1 \cdots i_k}^{j_1 \cdots j_k}

where M_{i_1 \cdots i_k}^{j_1 \cdots j_k} is the k×k minor using rows i_1, \ldots, i_k and columns j_1, \ldots, j_k, and the complementary minor \widetilde{M}_{i_1 \cdots i_k}^{j_1 \cdots j_k} uses the remaining rows and columns.

Example 5.11: Block Diagonal

For a block diagonal matrix:

\begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix}

Generalized Laplace expansion along the first k rows (where A is k×k) gives:

\det = \det(A) \cdot \det(B)
Theorem 5.18: Block Upper Triangular
\det\begin{pmatrix} A & B \\ 0 & D \end{pmatrix} = \det(A) \cdot \det(D)

Similarly for block lower triangular matrices.
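A randomized spot-check of the block upper triangular identity (a sketch; numpy assumed, with a fixed seed so the run is reproducible):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))     # top-left block
B = rng.standard_normal((2, 3))     # top-right block (arbitrary)
D = rng.standard_normal((3, 3))     # bottom-right block
M = np.block([[A, B], [np.zeros((3, 2)), D]])
# det of the block upper triangular matrix = det(A) * det(D)
assert np.isclose(np.linalg.det(M), np.linalg.det(A) * np.linalg.det(D))
print("block triangular identity verified")
```

Note that the off-diagonal block B does not affect the determinant at all.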

Remark 5.10: Complexity of Generalized Expansion

Expanding along k rows involves \binom{n}{k} terms. For k = 1 (standard expansion), this is n terms. For k = n/2, this can be exponentially many.

6. Computational Aspects

Understanding the computational cost of cofactor expansion is important for choosing the right method.

Theorem 5.19: Complexity of Cofactor Expansion

The naive recursive cofactor expansion has complexity:

T(n) = n \cdot T(n-1) + O(n) \implies T(n) = O(n!)

This is exponentially worse than O(n³) for row reduction.

Matrix Size | Cofactor O(n!)  | Row Reduction O(n³)
3×3         | ~6 ops          | ~27 ops
5×5         | ~120 ops        | ~125 ops
10×10       | ~3.6M ops       | ~1,000 ops
20×20       | ~2.4×10¹⁸ ops   | ~8,000 ops
Remark 5.11: When Cofactor Expansion Wins

Despite poor complexity, cofactor expansion is preferred for:

  • Sparse matrices: A row with mostly zeros needs only a few minor computations
  • Symbolic computation: Avoids fractions, preserves polynomial structure
  • Small matrices: For n ≤ 4, the overhead of row reduction may exceed savings
  • Theoretical derivations: Closed-form expressions via minors
Example 5.12: Sparse Matrix Advantage

For A = \begin{pmatrix} 5 & 0 & 0 & 0 \\ 2 & 3 & 0 & 1 \\ 0 & 1 & 4 & 0 \\ 0 & 0 & 2 & 6 \end{pmatrix}:

Expanding along row 1: only a_{11} = 5 is nonzero, giving one 3×3 minor.

This is much faster than full 4×4 row reduction!

Example 5.13: Complete 4×4 Cofactor Expansion

Compute \det\begin{pmatrix} 1 & 2 & 0 & 0 \\ 3 & 4 & 0 & 0 \\ 0 & 0 & 5 & 6 \\ 0 & 0 & 7 & 8 \end{pmatrix}.

Observation: This is block diagonal!

\det = \det\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \cdot \det\begin{pmatrix} 5 & 6 \\ 7 & 8 \end{pmatrix} = (4-6)(40-42) = (-2)(-2) = 4
Definition 5.10: Memoization for Recursive Expansion

When computing determinants via cofactor expansion, identical minors may appear multiple times. Storing computed minors (memoization) can reduce redundant computation.

This optimization is used in computer algebra systems but still doesn't achieve O(n³) complexity.
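One standard memoization keys each subproblem by the set of columns still available while rows are consumed top-down, reducing the n! minor evaluations to at most 2^n cached subproblems. A sketch of the idea (the matrix and the name `det_cols` are illustrative):

```python
from functools import lru_cache

A = ((2, 1, 3, 0), (0, 4, -1, 2), (1, 2, 5, 1), (3, 0, 0, 2))

@lru_cache(maxsize=None)
def det_cols(row, cols):
    """Determinant of the submatrix using rows row, row+1, ... and the given
    tuple of column indices. Cached, so repeated minors are computed once."""
    if len(cols) == 1:
        return A[row][cols[0]]
    total = 0
    for s, j in enumerate(cols):
        rest = cols[:s] + cols[s + 1:]
        # Sign is (-1)^s: j sits at position s among the surviving columns.
        total += (-1) ** s * A[row][j] * det_cols(row + 1, rest)
    return total

print(det_cols(0, tuple(range(4))))  # 95
```

This O(2^n) scheme is a real improvement over O(n!) but, as noted, still far from the O(n³) of row reduction.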

Applications of Laplace Expansion

Beyond computing determinants, Laplace expansion has important theoretical and practical applications.

Theorem 5.20: Characteristic Polynomial via Cofactors

The characteristic polynomial of A is p(\lambda) = \det(\lambda I - A).

The coefficient of \lambda^{n-1} is -\text{tr}(A) (negative trace).

The constant term is p(0) = \det(-A) = (-1)^n \det(A).

Example 5.14: Finding Eigenvalues

For A = \begin{pmatrix} 3 & 1 \\ 0 & 2 \end{pmatrix}, find eigenvalues.

\det(A - \lambda I) = \det\begin{pmatrix} 3-\lambda & 1 \\ 0 & 2-\lambda \end{pmatrix} = (3-\lambda)(2-\lambda)

Eigenvalues: λ = 3, λ = 2 (diagonal entries of this triangular matrix)

Remark 5.12: Cross Product via Cofactors

The cross product in ℝ³ can be computed as a "determinant":

\vec{u} \times \vec{v} = \det\begin{pmatrix} \vec{i} & \vec{j} & \vec{k} \\ u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \end{pmatrix}

Expand along row 1 to get the standard formula.

Example 5.15: Cross Product Calculation

Compute (1, 2, 3) \times (4, 5, 6):

\vec{u} \times \vec{v} = \det\begin{pmatrix} \vec{i} & \vec{j} & \vec{k} \\ 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix}

= \vec{i}(2 \cdot 6 - 3 \cdot 5) - \vec{j}(1 \cdot 6 - 3 \cdot 4) + \vec{k}(1 \cdot 5 - 2 \cdot 4)

= \vec{i}(-3) - \vec{j}(-6) + \vec{k}(-3) = (-3, 6, -3)
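The same row-1 expansion, transcribed directly into code and checked against `numpy.cross`:

```python
import numpy as np

u, v = np.array([1, 2, 3]), np.array([4, 5, 6])
w = np.array([u[1] * v[2] - u[2] * v[1],       # i-component: +M_11
              -(u[0] * v[2] - u[2] * v[0]),    # j-component: -M_12
              u[0] * v[1] - u[1] * v[0]])      # k-component: +M_13
assert np.array_equal(w, np.cross(u, v))
print(w)  # [-3  6 -3]
```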

Theorem 5.21: Area and Volume
  • Area of parallelogram with sides u, v: |\det(u, v)|
  • Volume of parallelepiped with edges u, v, w: |\det(u, v, w)|
  • n-dimensional volume: |\det(v_1, \ldots, v_n)|
Example 5.16: Triangle Area

Find the area of triangle with vertices (0,0), (3,0), (1,2).

Edges from origin: u = (3,0), v = (1,2)

\text{Area} = \frac{1}{2}\left|\det\begin{pmatrix} 3 & 1 \\ 0 & 2 \end{pmatrix}\right| = \frac{1}{2}|6| = 3
Remark 5.13: Jacobian Determinant

In multivariable calculus, the Jacobian determinant appears in change of variables:

\iint_R f(x,y) \, dx \, dy = \iint_S f(g(u,v)) \, |J| \, du \, dv

where J = \det\begin{pmatrix} \partial x/\partial u & \partial x/\partial v \\ \partial y/\partial u & \partial y/\partial v \end{pmatrix}

7. Key Takeaways

Laplace Expansion

\det(A) = \sum_{j} a_{ij} A_{ij}

Expand along any row or column

Cofactor Sign

A_{ij} = (-1)^{i+j} M_{ij}

Checkerboard pattern starting with +

Alien Cofactors

\sum_j a_{ij} A_{kj} = 0 for i ≠ k

Leads to adjugate identity

Cramer's Rule

x_i = \det(A_i)/\det(A)

Replace column i with b

Adjugate Formula

A \cdot \text{adj}(A) = \det(A) \cdot I

Gives inverse via A⁻¹ = adj(A)/det(A)

Complexity

O(n!) for cofactor vs O(n³) row reduction

Use for small/sparse matrices only

8. Practice Problems

Problem 1

Compute \det\begin{pmatrix} 2 & 0 & 3 \\ 1 & 4 & -1 \\ 0 & 2 & 5 \end{pmatrix} by expanding along row 1.

Problem 2

Find the cofactor A_{23} for A = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}.

Problem 3

Use Cramer's rule to solve: \begin{cases} 3x + 2y = 7 \\ 4x - y = 2 \end{cases}

Problem 4

Compute \text{adj}(A) for A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} and verify A \cdot \text{adj}(A) = \det(A) \cdot I.

Solutions

Solution 1

Expand along row 1:

det = 2·A₁₁ + 0·A₁₂ + 3·A₁₃

A₁₁ = det(4,-1; 2,5) = 20+2 = 22

A₁₃ = det(1,4; 0,2) = 2

det = 2(22) + 3(2) = 44 + 6 = 50

Solution 2

A₂₃ = (-1)^{2+3} M₂₃ = -M₂₃

M₂₃ = det(1,2; 7,8) = 8 - 14 = -6

A₂₃ = -(-6) = 6

Solution 3

det(A) = 3(-1) - 2(4) = -11

det(A₁) = 7(-1) - 2(2) = -11, so x = -11/(-11) = 1

det(A₂) = 3(2) - 7(4) = -22, so y = -22/(-11) = 2

Solution 4

Cofactors: A₁₁=4, A₁₂=-3, A₂₁=-2, A₂₂=1

adj(A) = (A₁₁, A₂₁; A₁₂, A₂₂) = (4,-2; -3,1)

det(A) = 4-6 = -2

A·adj(A) = (1,2; 3,4)(4,-2; -3,1) = (-2,0; 0,-2) = -2·I ✓

Additional Practice

Problem 5

Verify the alien cofactor theorem: show a_{11}A_{21} + a_{12}A_{22} + a_{13}A_{23} = 0 for the matrix in Problem 2.

Problem 6

Use Cramer's rule to find only z in: \begin{cases} x+y+z=6 \\ x-y+2z=5 \\ 2x+y-z=1 \end{cases}

Problem 7 (Challenge)

Prove: For a 3×3 matrix A, det(adj(A)) = det(A)².

Problem 8

Compute \det\begin{pmatrix} 1 & 2 & 3 & 4 \\ 0 & 5 & 6 & 7 \\ 0 & 0 & 8 & 9 \\ 0 & 0 & 0 & 10 \end{pmatrix}.

Answer: 1 × 5 × 8 × 10 = 400 (upper triangular)

Problem 9

If det(A) = 5 for a 3×3 matrix A, find det(adj(A)).

Answer: det(adj(A)) = det(A)^{n-1} = 5² = 25

Problem 10 (Challenge)

Use cofactor expansion to prove det(AB) = det(A)det(B) for 2×2 matrices.

Detailed Worked Example

Complete 3×3 Cofactor Expansion

Compute \det\begin{pmatrix} 2 & 1 & 3 \\ 4 & -1 & 0 \\ 1 & 2 & 5 \end{pmatrix} by expanding along row 2.

Step 1: Identify row 2 entries and signs:

  • a₂₁ = 4, sign = (-1)^{2+1} = -
  • a₂₂ = -1, sign = (-1)^{2+2} = +
  • a₂₃ = 0, sign = (-1)^{2+3} = - (but 0, so skipped)

Step 2: Compute minors:

M₂₁ = det(1,3; 2,5) = 5 - 6 = -1

M₂₂ = det(2,3; 1,5) = 10 - 3 = 7

Step 3: Combine:

det = 4·(-1)·(-1) + (-1)·(+1)·7 + 0 = 4 - 7 = -3

9. Quick Reference

Key Formulas

  • Minor: M_{ij} = (n-1)×(n-1) det after deleting row i, column j
  • Cofactor: A_{ij} = (-1)^{i+j} M_{ij}
  • Row Expansion: \det(A) = \sum_j a_{ij} A_{ij}
  • Column Expansion: \det(A) = \sum_i a_{ij} A_{ij}
  • Alien Cofactors: \sum_j a_{ij} A_{kj} = \delta_{ik} \det(A)
  • Adjugate: [\text{adj}(A)]_{ij} = A_{ji}
  • Adjugate Identity: A \cdot \text{adj}(A) = \det(A) \cdot I
  • Inverse: A^{-1} = \frac{1}{\det(A)} \text{adj}(A)
  • Cramer's Rule: x_i = \det(A_i) / \det(A)

2×2 Formulas

  • Determinant: \det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc
  • Adjugate: \text{adj}\begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}
  • Inverse: \begin{pmatrix} a & b \\ c & d \end{pmatrix}^{-1} = \frac{1}{ad-bc}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}

3×3 Cramer's Rule

For system \begin{cases} a_1x + b_1y + c_1z = d_1 \\ a_2x + b_2y + c_2z = d_2 \\ a_3x + b_3y + c_3z = d_3 \end{cases}:

  • x = \frac{\det(d_1,b_1,c_1;\, d_2,b_2,c_2;\, d_3,b_3,c_3)}{\det(A)}
  • y = \frac{\det(a_1,d_1,c_1;\, a_2,d_2,c_2;\, a_3,d_3,c_3)}{\det(A)}
  • z = \frac{\det(a_1,b_1,d_1;\, a_2,b_2,d_2;\, a_3,b_3,d_3)}{\det(A)}

Strategy Checklist

  • ☐ Count zeros in each row/column
  • ☐ Expand along row/column with most zeros
  • ☐ Apply checkerboard sign pattern correctly
  • ☐ Delete correct row AND column for each minor
  • ☐ For adjugate: transpose the cofactor matrix
  • ☐ For Cramer: replace column (not row) with b
  • ☐ Verify: A·adj(A) = det(A)·I

10. Historical Notes

Pierre-Simon Laplace (1749-1827): French mathematician and astronomer who developed the expansion formula in his work on celestial mechanics. The formula allowed systematic computation of determinants of any size. His "Théorie analytique des probabilités" (1812) contains key results.

Gabriel Cramer (1704-1752): Swiss mathematician who published Cramer's rule in his "Introduction à l'analyse des lignes courbes algébriques" (1750), providing the first explicit formula for solving systems of linear equations. The rule predates modern matrix notation by a century.

Gottfried Wilhelm Leibniz (1646-1716): German polymath who first studied determinants in 1693, using them to eliminate variables in systems of equations. He used the notation |a b c| for what we now call a determinant.

Historical Context: Determinants were studied before matrices! Seki Takakazu in Japan and Leibniz in Europe independently discovered determinants in the late 17th century. The matrix notation wasn't introduced until Cayley in 1858.

Etymology: "Cofactor" comes from Latin "co-" (together) + "factor" (maker), reflecting how cofactors "work together" with matrix entries. "Adjugate" derives from Latin "adjungere" (to join to), referring to how adj(A) is "joined" to A in the identity A·adj(A) = det(A)·I.

Modern Developments

While Laplace expansion is computationally expensive (O(n!)), it remains important for:

  • Symbolic computation in computer algebra systems (Mathematica, Maple, SymPy)
  • Theoretical proofs in linear algebra and matrix analysis
  • Understanding the structure of determinants and their properties
  • Education: building intuition about determinants

11. Proof Techniques Using Cofactors

Cofactor expansion provides elegant proofs for many matrix identities.

Theorem 5.25: Product Rule for Determinants

Using cofactors, we can prove \det(AB) = \det(A) \det(B):

Consider the block matrix \begin{pmatrix} A & 0 \\ -I & B \end{pmatrix} and compute its determinant two ways.

Example 5.21: Vandermonde Determinant Proof

The Vandermonde determinant V(x_1, \ldots, x_n) = \prod_{i<j}(x_j - x_i) can be proved by cofactor expansion along the first row, using induction on n.

Remark 5.16: Polynomial Identities

Viewing det(A) as a polynomial in entries, cofactor expansion shows:

  • det(A) is linear in each row (or column)
  • Each term in the expansion has degree n (one entry from each row)
  • The characteristic polynomial det(A - λI) has degree n in λ
Theorem 5.26: Leibniz Formula Connection

The Laplace expansion is equivalent to the Leibniz formula:

\det(A) = \sum_{\sigma \in S_n} \text{sgn}(\sigma) \prod_{i=1}^{n} a_{i,\sigma(i)}

Both give the same n! terms, but organized differently.
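The Leibniz formula can be transcribed directly with `itertools.permutations`; on the matrix of Example 5.6 it reproduces the cofactor-expansion result (the inversion-counting `sign` helper is our own):

```python
from itertools import permutations

def sign(p):
    """Sign of a permutation, via counting inversions."""
    s = 1
    for a in range(len(p)):
        for b in range(a + 1, len(p)):
            if p[a] > p[b]:
                s = -s
    return s

def det_leibniz(A):
    """Determinant as a sum over all n! permutations."""
    n = len(A)
    total = 0
    for p in permutations(range(n)):
        term = sign(p)
        for i in range(n):
            term *= A[i][p[i]]   # one entry from each row, column p[i]
        total += term
    return total

print(det_leibniz([[2, 1, 3], [0, 4, -1], [1, 2, 5]]))  # 31
```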

Example 5.22: Proving det(A^T) = det(A)

Using Laplace expansion, we can prove that the determinant of the transpose equals the determinant:

Row expansion of A corresponds to column expansion of A^T. Since both expansions give the same result, det(A) = det(A^T).

Remark 5.17: Cofactors and Derivatives

The cofactor A_{ij} can be viewed as a partial derivative:

A_{ij} = \frac{\partial \det(A)}{\partial a_{ij}}

This follows from the fact that \det(A) is linear in the entry a_{ij}.
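Because det(A) is linear in each entry, a finite difference in a single entry recovers the cofactor up to rounding. A quick numpy check on the matrix of Example 5.19 (indices 0-based):

```python
import numpy as np

A = np.array([[1., 2., 3.], [0., 1., 4.], [5., 6., 0.]])
i, j, h = 0, 1, 1e-6
Ah = A.copy()
Ah[i, j] += h
numeric = (np.linalg.det(Ah) - np.linalg.det(A)) / h   # d det / d a_ij
M = np.delete(np.delete(A, i, axis=0), j, axis=1)
cof = (-1) ** (i + j) * np.linalg.det(M)               # cofactor A_ij
assert abs(numeric - cof) < 1e-4
print(round(cof))  # the cofactor A_12 = 20 from Example 5.19
```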

Connections to Other Topics

  • Eigenvalues: det(A - λI) = 0 defines eigenvalues via Laplace expansion
  • Cayley-Hamilton: A satisfies its own characteristic polynomial
  • Matrix exponential: Uses determinant in the formula for inverse
  • SVD: Singular values relate to determinant via det(A) = ∏σᵢ

What's Next?

Now that you understand Laplace expansion, explore:

  • Adjugate Matrix: Deep dive into properties and applications of adj(A)
  • Eigenvalues: Use det(A - λI) = 0 to find eigenvalues (characteristic polynomial)
  • Applications: Cross products, area/volume, change of variables in integration

Skills Mastered

  • Laplace expansion along any row or column
  • Checkerboard sign pattern for cofactors
  • Alien cofactor theorem and adjugate identity
  • Cramer's rule for solving linear systems
  • Generalized expansion for block matrices

Algorithm Comparison

Method            | Complexity | Best For                | Limitations
Laplace Expansion | O(n!)      | Small/sparse, symbolic  | Exponential growth
Row Reduction     | O(n³)      | General numerical       | May introduce fractions
LU Decomposition  | O(n³)      | Multiple systems        | Pivoting needed
Cramer's Rule     | O(n · n!)  | Single variable, theory | Requires n+1 determinants
Remark 5.18: Choosing the Right Method
  • For n ≤ 3: Either method works; Laplace may be faster by hand
  • For n = 4: Consider sparsity; row reduction usually wins
  • For n ≥ 5: Always use row reduction or LU decomposition
  • For symbolic work: Laplace preserves polynomial structure

Chapter Summary

This module covered Laplace expansion, the recursive method for computing determinants via cofactors. Key results include the alien cofactor theorem, the adjugate matrix identity, and Cramer's rule.

Module at a glance: 8 core theorems · 12 quiz questions · 12 FAQs answered · 10 practice problems

Key Formulas to Remember

  • Laplace: det(A) = Σⱼ aᵢⱼ Aᵢⱼ (expand along row i)
  • Cofactor: Aᵢⱼ = (-1)^{i+j} Mᵢⱼ
  • Alien: Σⱼ aᵢⱼ Aₖⱼ = 0 for i ≠ k
  • Adjugate: A · adj(A) = det(A) · I
  • Cramer: xᵢ = det(Aᵢ) / det(A)

Study Tips

  • Memorize the checkerboard: Draw the sign pattern for 4×4 and practice until automatic.
  • Practice 2×2 and 3×3: These appear constantly in larger problems.
  • Verify with row reduction: For practice problems, compute det both ways to check.
  • Understand, don't just memorize: Know WHY alien cofactors sum to zero.
  • Connect concepts: Adjugate → Cramer → Inverse form a connected chain.

12. Application Summary

Theoretical Uses

  • Deriving determinant properties
  • Proving det(AB) = det(A)det(B)
  • Characteristic polynomial analysis
  • Cayley-Hamilton theorem proof

Computational Uses

  • Sparse matrix determinants
  • Symbolic computation
  • Small system solutions
  • Finding specific solution components

Geometric Uses

  • Cross product calculation
  • Area and volume formulas
  • Jacobian for coordinate changes
  • Orientation determination

Matrix Analysis

  • Adjugate matrix construction
  • Matrix inverse derivation
  • Eigenvalue computation
  • Singularity testing

Related Topics

Adjugate Matrix
Matrix Inverse
Eigenvalues
Characteristic Polynomial
Linear Systems
Block Matrices
Cross Product
Jacobian
Laplace Expansion Practice (12 questions)

  1. Laplace expansion along row i gives: (Easy)
  2. The alien cofactor sum \sum_j a_{ij} A_{kj} (with i ≠ k) equals: (Medium)
  3. Cramer's rule solves Ax = b when: (Easy)
  4. For x_i in Cramer's rule: (Medium)
  5. Column expansion and row expansion give: (Easy)
  6. The number of terms in Laplace expansion along one row is: (Easy)
  7. The sign of cofactor A_{23} is: (Easy)
  8. If A is 4×4, how many 3×3 minors are computed in row expansion? (Medium)
  9. The adjugate matrix adj(A) satisfies: (Medium)
  10. Computational complexity of Laplace expansion for n×n: (Hard)
  11. If row 1 = (0, 0, 5, 0), best expansion strategy: (Medium)
  12. The 2×2 formula ad - bc is: (Medium)

Frequently Asked Questions

What's the difference between minor and cofactor?

Minor M_{ij} is the (n-1)×(n-1) determinant obtained by deleting row i and column j. Cofactor A_{ij} = (-1)^{i+j} M_{ij} includes the checkerboard sign. Always use cofactors (not minors) in the expansion formula.

Why do alien cofactors sum to zero?

If you expand the entries of row i against cofactors from row k ≠ i, you're computing the determinant of a matrix whose row k has been replaced by a copy of row i. This matrix has two identical rows (rows i and k), so its determinant is 0.

When should I use Cramer's rule?

Cramer's rule is elegant for theoretical proofs and very small systems (2×2 or 3×3). For larger systems, row reduction (Gaussian elimination) is O(n³) vs O(n·n!) for Cramer's rule, making it vastly more efficient.

Can I expand along any row or column?

Yes! The Laplace expansion theorem guarantees all rows and columns give the same determinant. Choose a row/column with many zeros to minimize computation.

How do I remember the checkerboard sign pattern?

Position (1,1) is always +. Then alternate: + - + - ... along each row and column. Equivalently, the sign at (i,j) is (-1)^{i+j}, which is + when i+j is even, - when odd.

What is the adjugate (classical adjoint) matrix?

The adjugate adj(A) is the transpose of the cofactor matrix: [adj(A)]_{ij} = A_{ji}. It satisfies A·adj(A) = adj(A)·A = det(A)·I, giving the inverse formula A^{-1} = adj(A)/det(A) when det(A) ≠ 0.

Why is Laplace expansion called 'recursive'?

An n×n determinant is expressed in terms of (n-1)×(n-1) minors. Each minor can be expanded further, recursively reducing to smaller determinants until reaching 1×1 or 2×2 base cases.

What is generalized Laplace expansion?

Instead of expanding along a single row or column, you can expand along multiple rows (or columns) simultaneously. The formula involves products of complementary minors with appropriate signs.

How does Cramer's rule relate to the inverse matrix?

Cramer's rule x_i = det(A_i)/det(A) can be derived from x = A^{-1}b = adj(A)b/det(A). The i-th component involves the cofactors from column i of A, which equals det(A_i).

Is det(A) = det(A^T) related to row vs column expansion?

Yes! Since det(A) = det(A^T), expanding A along row i is equivalent to expanding A^T along column i. This is why row and column expansions give the same result.

What happens if I compute the wrong sign?

Using the wrong sign (forgetting (-1)^{i+j}) will give an incorrect determinant. This is one of the most common errors. Always check: (1,1), (1,3), (2,2), (3,1), (3,3) are + signs.

Can Laplace expansion be used for symbolic computation?

Yes! Unlike row reduction which may introduce fractions, Laplace expansion preserves the polynomial structure of entries. This makes it preferred in computer algebra systems for symbolic determinants.