MathIsimple
LA-5.3

Determinant Computation

Efficient techniques for computing determinants of matrices of any size. We compare row reduction, cofactor expansion, and special formulas to find the best method for each situation.

Estimated time: 2-3 hours · Core level · 10 learning objectives
Learning Objectives
  • Compute determinants using row reduction (Gaussian elimination) efficiently
  • Apply cofactor expansion strategically by choosing optimal rows/columns
  • Recognize and exploit special matrix patterns (triangular, block, sparse)
  • Calculate Vandermonde determinants using the product formula
  • Understand computational complexity trade-offs between methods
  • Combine multiple methods for maximum efficiency
  • Use the Sarrus rule for 3×3 matrices
  • Handle sign tracking correctly during row operations
  • Recognize when a determinant must be zero without computing
  • Apply shortcuts for matrices with special structure
Prerequisites
  • Determinant definition via axioms and permutations (LA-5.1)
  • Determinant properties: multiplicativity, transpose, inverse (LA-5.2)
  • Gaussian elimination and echelon forms
  • Elementary row and column operations
  • Elementary matrices and their determinants
  • Basic polynomial arithmetic

1. Row Reduction Method (Gaussian Elimination)

The most efficient general method for computing determinants is row reduction. The key insight is that triangular matrices have easily computed determinants, and we can transform any matrix to triangular form while tracking how each operation affects the determinant.

Theorem 5.15: Triangular Determinant

For upper or lower triangular matrices (including diagonal):

\det(A) = a_{11} \cdot a_{22} \cdot \ldots \cdot a_{nn} = \prod_{i=1}^{n} a_{ii}

The determinant is simply the product of diagonal entries.

Proof:

For an upper triangular matrix, expand along column 1. Only a_{11} is nonzero, giving:

\det(A) = a_{11} \cdot \det(A_{11})

where A_{11} is the (n-1)×(n-1) upper triangular submatrix obtained by deleting row 1 and column 1. By induction, \det(A_{11}) = a_{22} \cdots a_{nn}.

Remark 5.8: Row Reduction Strategy

To compute det(A):

  1. Use row operations to transform A to upper triangular form U
  2. Track each operation's effect on the determinant:

| Operation | Effect on det |
| --- | --- |
| Swap rows i and j | Multiply by -1 |
| Multiply row i by c \neq 0 | Multiply by c |
| Add k × row j to row i | No change |

  3. Compute \det(U) = u_{11} \cdot u_{22} \cdots u_{nn}
  4. Adjust for the tracked operations: \det(A) = (-1)^{\text{swaps}} \cdot \frac{\det(U)}{\text{scaling factors}}
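The steps above can be sketched in code. A minimal exact-arithmetic version (pure Python; the function name `det_row_reduction` is ours) that tracks swaps and eliminates below each pivot:

```python
from fractions import Fraction

def det_row_reduction(rows):
    """Determinant by Gaussian elimination, tracking row swaps.

    Exact arithmetic via Fraction, so no round-off. O(n^3) operations.
    """
    a = [[Fraction(x) for x in row] for row in rows]
    n = len(a)
    sign = 1
    for col in range(n):
        # Find a nonzero pivot at or below the diagonal.
        pivot = next((r for r in range(col, n) if a[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)          # no pivot in this column => singular
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            sign = -sign                # each swap negates the determinant
        for r in range(col + 1, n):     # eliminate entries below the pivot
            factor = a[r][col] / a[col][col]
            a[r] = [x - factor * y for x, y in zip(a[r], a[col])]
    prod = Fraction(1)
    for i in range(n):                  # product of the diagonal of U
        prod *= a[i][i]
    return sign * prod
```

Exact `Fraction` arithmetic sidesteps round-off, which is convenient for homework-sized matrices; numerical work would use floats with partial pivoting instead.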
Example 5.6: 3×3 Determinant by Row Reduction

Compute \det\begin{pmatrix} 2 & 1 & 3 \\ 4 & -1 & 2 \\ -2 & 2 & 1 \end{pmatrix}.

Step 1: R_2 \to R_2 - 2R_1 (det unchanged):

\begin{pmatrix} 2 & 1 & 3 \\ 0 & -3 & -4 \\ -2 & 2 & 1 \end{pmatrix}

Step 2: R_3 \to R_3 + R_1 (det unchanged):

\begin{pmatrix} 2 & 1 & 3 \\ 0 & -3 & -4 \\ 0 & 3 & 4 \end{pmatrix}

Step 3: R_3 \to R_3 + R_2 (det unchanged):

\begin{pmatrix} 2 & 1 & 3 \\ 0 & -3 & -4 \\ 0 & 0 & 0 \end{pmatrix}

Result: Upper triangular with a zero on the diagonal ⟹ \det(A) = 2 \cdot (-3) \cdot 0 = 0

Example 5.7: 4×4 Determinant with Row Swaps

Compute \det\begin{pmatrix} 0 & 1 & 2 & 1 \\ 2 & 1 & 0 & 1 \\ 1 & 0 & 1 & 2 \\ 1 & 1 & 1 & 1 \end{pmatrix}.

Step 1: R_1 \leftrightarrow R_2 (det × (-1), swap count = 1):

\begin{pmatrix} 2 & 1 & 0 & 1 \\ 0 & 1 & 2 & 1 \\ 1 & 0 & 1 & 2 \\ 1 & 1 & 1 & 1 \end{pmatrix}

Step 2: R_3 \to R_3 - \tfrac{1}{2}R_1, R_4 \to R_4 - \tfrac{1}{2}R_1 (det unchanged):

\begin{pmatrix} 2 & 1 & 0 & 1 \\ 0 & 1 & 2 & 1 \\ 0 & -\tfrac{1}{2} & 1 & \tfrac{3}{2} \\ 0 & \tfrac{1}{2} & 1 & \tfrac{1}{2} \end{pmatrix}

Step 3: R_3 \to R_3 + \tfrac{1}{2}R_2, R_4 \to R_4 - \tfrac{1}{2}R_2 (det unchanged):

\begin{pmatrix} 2 & 1 & 0 & 1 \\ 0 & 1 & 2 & 1 \\ 0 & 0 & 2 & 2 \\ 0 & 0 & 0 & 0 \end{pmatrix}

Final: Triangular with diagonal (2, 1, 2, 0) after one swap:

\det(A) = (-1)^1 \cdot (2 \cdot 1 \cdot 2 \cdot 0) = 0

Corollary 5.6: Zero Pivot Detection

If during row reduction a zero pivot appears with no nonzero entry below it in that column, then \det(A) = 0. This indicates the matrix is singular (rank < n).

Example 5.8: Complete 4×4 Row Reduction

Compute \det\begin{pmatrix} 1 & 2 & 3 & 4 \\ 2 & 3 & 4 & 1 \\ 3 & 4 & 1 & 2 \\ 4 & 1 & 2 & 3 \end{pmatrix}.

Step 1: Eliminate column 1 below the pivot:

R_2 \to R_2 - 2R_1, \quad R_3 \to R_3 - 3R_1, \quad R_4 \to R_4 - 4R_1

\begin{pmatrix} 1 & 2 & 3 & 4 \\ 0 & -1 & -2 & -7 \\ 0 & -2 & -8 & -10 \\ 0 & -7 & -10 & -13 \end{pmatrix}

Step 2: Eliminate column 2 below the pivot:

R_3 \to R_3 - 2R_2, \quad R_4 \to R_4 - 7R_2

\begin{pmatrix} 1 & 2 & 3 & 4 \\ 0 & -1 & -2 & -7 \\ 0 & 0 & -4 & 4 \\ 0 & 0 & 4 & 36 \end{pmatrix}

Step 3: Final elimination: R_4 \to R_4 + R_3

\begin{pmatrix} 1 & 2 & 3 & 4 \\ 0 & -1 & -2 & -7 \\ 0 & 0 & -4 & 4 \\ 0 & 0 & 0 & 40 \end{pmatrix}

Result: No swaps, so det = 1 × (-1) × (-4) × 40 = 160

Remark 5.13: Practical Tips for Row Reduction
  • Avoid fractions: If possible, swap rows to get a pivot of ±1
  • Factor out: If a row has a common factor, extract it (tracking the scaling)
  • Check early: Two identical or proportional rows means det = 0
  • Organize: Write swap count clearly at each step
Definition 5.7: LU Decomposition

If A can be factored as A = LU, where L is lower triangular with 1's on its diagonal and U is upper triangular, then:

\det(A) = \det(L) \cdot \det(U) = 1 \cdot \prod_{i} u_{ii} = \prod_{i} u_{ii}
Remark 5.14: LU with Partial Pivoting

In practice, row swaps are needed for numerical stability. If PA = LU where P is a permutation matrix:

\det(A) = \det(P)^{-1} \cdot \det(L) \cdot \det(U) = (-1)^{\text{swaps}} \cdot \prod_i u_{ii}
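A float version of the same idea with partial pivoting, as the remark describes: pick the largest-magnitude pivot in each column and count the swaps (a sketch in pure Python; the function name is ours):

```python
def det_partial_pivoting(rows):
    """Determinant via in-place LU with partial pivoting (floats).

    Picking the largest-magnitude pivot in each column improves stability;
    det(A) = (-1)^swaps * product of pivots.
    """
    a = [[float(x) for x in row] for row in rows]
    n = len(a)
    swaps = 0
    for col in range(n):
        # Partial pivoting: largest |entry| at or below the diagonal.
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        if a[pivot][col] == 0.0:
            return 0.0                  # singular matrix
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            swaps += 1
        for r in range(col + 1, n):
            factor = a[r][col] / a[col][col]
            a[r] = [x - factor * y for x, y in zip(a[r], a[col])]
    det = (-1.0) ** swaps
    for i in range(n):
        det *= a[i][i]
    return det
```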

2. Strategic Cofactor Expansion

Cofactor expansion (also called Laplace expansion) expresses an n×n determinant as a sum of n terms, each involving an (n-1)×(n-1) minor. Strategic choice of row or column can dramatically reduce computation.

Definition 5.6: Minor and Cofactor

For a matrix A = (a_{ij}):

  • Minor M_{ij}: the determinant of the (n-1)×(n-1) matrix obtained by deleting row i and column j
  • Cofactor A_{ij} = (-1)^{i+j} M_{ij}: the signed minor
Theorem 5.16: Cofactor Expansion Formula

For any row i or column j:

\det(A) = \sum_{j=1}^{n} a_{ij} A_{ij} = \sum_{i=1}^{n} a_{ij} A_{ij}

Expanding along row i: \det(A) = a_{i1}A_{i1} + a_{i2}A_{i2} + \cdots + a_{in}A_{in}

Remark 5.9: Sign Pattern (Checkerboard)

The cofactor signs follow a checkerboard pattern:

\begin{pmatrix} + & - & + & - & \cdots \\ - & + & - & + & \cdots \\ + & - & + & - & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}

Position (i,j) has sign (-1)^{i+j}. The top-left position (1,1) is always +.

Remark 5.10: Choosing Optimal Row/Column

Key principle: Each zero entry eliminates one term from the sum.

  • Count zeros in each row and column
  • Expand along the row/column with the most zeros
  • For an upper triangular matrix, column 1 has n-1 zeros (dually, row 1 for lower triangular)!
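The zero-hunting strategy is easy to mechanize. A sketch (recursive cofactor expansion along the row with the most zeros; the function name is ours, and it falls back to expanding rows only, even when a column would be sparser — the result is the same either way):

```python
def det_cofactor(m):
    """Determinant by cofactor expansion along the row with the most zeros.

    O(n!) in the worst case -- fine for small or very sparse matrices.
    """
    n = len(m)
    if n == 1:
        return m[0][0]
    # Pick the row whose zero entries kill the most terms of the sum.
    i = max(range(n), key=lambda r: sum(1 for x in m[r] if x == 0))
    total = 0
    for j, entry in enumerate(m[i]):
        if entry == 0:
            continue                    # zero entry => term vanishes
        # Minor: delete row i and column j.
        minor = [row[:j] + row[j+1:] for k, row in enumerate(m) if k != i]
        total += (-1) ** (i + j) * entry * det_cofactor(minor)
    return total
```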
Example 5.8: Using Zeros Strategically

Compute \det\begin{pmatrix} 3 & 0 & 0 & 2 \\ 1 & 5 & 0 & 1 \\ 2 & 4 & 2 & 0 \\ 1 & 3 & 1 & 4 \end{pmatrix}.

Analysis: Row 1 has two zeros and column 3 has two zeros. Expand along row 1:

\det = 3 \cdot A_{11} + 0 + 0 + 2 \cdot A_{14}

Only two 3×3 minors to compute instead of four!

Example 5.9: Triangular via Cofactor

\det\begin{pmatrix} 2 & 3 & 7 \\ 0 & 5 & 1 \\ 0 & 0 & 4 \end{pmatrix}

Expand along column 1: only a_{11} = 2 is nonzero.

\det = 2 \cdot (+1) \cdot \det\begin{pmatrix} 5 & 1 \\ 0 & 4 \end{pmatrix} = 2 \cdot 20 = 40

Or directly: 2 \cdot 5 \cdot 4 = 40 (product of the diagonal).

Example 5.10: 4×4 with Sparse Column

Compute \det\begin{pmatrix} 2 & 0 & 1 & 3 \\ 1 & 0 & 4 & 2 \\ 3 & 0 & 2 & 1 \\ 4 & 1 & 0 & 5 \end{pmatrix}.

Analysis: Column 2 has three zeros! Expand along it:

\det = 0 \cdot A_{12} + 0 \cdot A_{22} + 0 \cdot A_{32} + 1 \cdot A_{42}

Only one 3×3 minor to compute: A_{42} = (-1)^{4+2} M_{42}

M_{42} = \det\begin{pmatrix} 2 & 1 & 3 \\ 1 & 4 & 2 \\ 3 & 2 & 1 \end{pmatrix}

Use Sarrus or row reduction for this 3×3.

Theorem 5.21: Alien Cofactor Theorem

Expanding along row i using cofactors from a different row k \neq i gives zero:

\sum_{j=1}^{n} a_{ij} A_{kj} = 0 \quad \text{for } i \neq k

This is because such expansion computes the determinant of a matrix with two identical rows.

Corollary 5.7: Cofactor Matrix Property

If \text{adj}(A) is the transpose of the matrix of cofactors (the adjugate), then:

A \cdot \text{adj}(A) = \det(A) \cdot I

3. Sarrus Rule for 3×3 Matrices

For 3×3 matrices specifically, the Sarrus rule provides a quick visual method. Warning: This only works for 3×3!

Theorem 5.17: Sarrus Rule

For A = \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix}:

\det(A) = aei + bfg + cdh - ceg - bdi - afh
Remark 5.11: Visual Method

Write the matrix with columns 1-2 repeated on the right:

\begin{matrix} a & b & c & | & a & b \\ d & e & f & | & d & e \\ g & h & i & | & g & h \end{matrix}
  • Add: products along diagonals going right-down (aei, bfg, cdh)
  • Subtract: products along diagonals going right-up (ceg, afh, bdi)
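The rule is small enough to write down directly (a sketch; the function name is ours, and it deliberately accepts only 3×3 input):

```python
def sarrus(m):
    """Sarrus rule -- valid for 3x3 matrices ONLY."""
    (a, b, c), (d, e, f), (g, h, i) = m   # unpacks exactly three rows of three
    # Right-down diagonals minus right-up diagonals.
    return (a*e*i + b*f*g + c*d*h) - (c*e*g + a*f*h + b*d*i)
```

The tuple unpacking raises an error for anything but a 3×3, which matches the warning below: the pattern does not extend to larger matrices.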
Example 5.10: Sarrus Computation

Compute \det\begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}.

Add: 1 \cdot 5 \cdot 9 + 2 \cdot 6 \cdot 7 + 3 \cdot 4 \cdot 8 = 45 + 84 + 96 = 225

Subtract: 3 \cdot 5 \cdot 7 + 1 \cdot 6 \cdot 8 + 2 \cdot 4 \cdot 9 = 105 + 48 + 72 = 225

Result: 225 - 225 = 0

Critical Warning: 3×3 Only!

Sarrus does NOT work for 4×4 or larger matrices. A 4×4 determinant has 24 terms (4!), but extending Sarrus would only give 8. Use cofactor expansion or row reduction instead.

Example 5.14: More 3×3 Practice

Compute \det\begin{pmatrix} 2 & -1 & 3 \\ 4 & 0 & -2 \\ 1 & 5 & 1 \end{pmatrix}.

Using Sarrus:

Add: 2·0·1 + (-1)·(-2)·1 + 3·4·5 = 0 + 2 + 60 = 62

Subtract: 3·0·1 + 2·(-2)·5 + (-1)·4·1 = 0 - 20 - 4 = -24

Result: 62 - (-24) = 62 + 24 = 86

Remark 5.15: 2×2 Determinant

The simplest case deserves mention:

\det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc

This can be seen as the base case of recursion, or as a degenerate Sarrus rule.

4. Special Determinants

Certain matrix structures have closed-form determinant formulas. Recognizing these patterns allows instant computation without row reduction or cofactor expansion.

Theorem 5.18: Vandermonde Determinant
\det\begin{pmatrix} 1 & x_1 & x_1^2 & \cdots & x_1^{n-1} \\ 1 & x_2 & x_2^2 & \cdots & x_2^{n-1} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_n & x_n^2 & \cdots & x_n^{n-1} \end{pmatrix} = \prod_{1 \le i < j \le n} (x_j - x_i)
Proof:

The determinant is a polynomial in x_1, \ldots, x_n. If x_i = x_j for some i \neq j, two rows are identical, so det = 0. Thus (x_j - x_i) divides the determinant for every pair i < j.

Degree counting: the determinant has total degree 0 + 1 + \cdots + (n-1) = \frac{n(n-1)}{2} in the variables, which equals the degree of the product of all (x_j - x_i) factors. Comparing leading coefficients (both are 1) gives equality.

Example 5.11: 3×3 Vandermonde
\det\begin{pmatrix} 1 & 1 & 1 \\ 1 & 2 & 4 \\ 1 & 3 & 9 \end{pmatrix} = (2-1)(3-1)(3-2) = 1 \cdot 2 \cdot 1 = 2
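The product formula turns an n×n determinant into a double loop (a sketch; the function name is ours):

```python
def vandermonde_det(xs):
    """Vandermonde determinant: product of (x_j - x_i) over all pairs i < j."""
    n = len(xs)
    det = 1
    for i in range(n):
        for j in range(i + 1, n):
            det *= xs[j] - xs[i]
    return det
```

Note this is O(n²) multiplications, far cheaper than any general method — one payoff of recognizing the pattern.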
Theorem 5.19: Block Diagonal Determinant

For a block diagonal matrix with diagonal blocks A_1, A_2, \ldots, A_k:

\det\begin{pmatrix} A_1 & 0 & \cdots & 0 \\ 0 & A_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & A_k \end{pmatrix} = \det(A_1) \cdot \det(A_2) \cdots \det(A_k)
Theorem 5.20: Block Triangular Determinant
\det\begin{pmatrix} A & B \\ 0 & D \end{pmatrix} = \det(A) \cdot \det(D)

Similarly for lower block triangular matrices.

Example 5.12: Circulant Matrix

A circulant matrix is one in which each row is the previous row shifted one position to the right:

C = \begin{pmatrix} c_0 & c_1 & c_2 \\ c_2 & c_0 & c_1 \\ c_1 & c_2 & c_0 \end{pmatrix}

Its determinant involves roots of unity: \det(C) = \prod_{k=0}^{n-1} (c_0 + c_1 \omega^k + c_2 \omega^{2k} + \cdots) where \omega = e^{2\pi i/n}.
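The roots-of-unity product can be evaluated numerically (a sketch using `cmath`, assuming the row-shift-right orientation shown above; the function name is ours). For first row (1, 2, 3) the circulant is \begin{pmatrix}1&2&3\\3&1&2\\2&3&1\end{pmatrix}, whose determinant is 18, and the formula agrees:

```python
import cmath

def circulant_det(c):
    """det of the n x n circulant with first row c, via roots of unity."""
    n = len(c)
    det = 1.0 + 0.0j
    for k in range(n):
        w = cmath.exp(2j * cmath.pi * k / n)       # k-th n-th root of unity
        det *= sum(c[j] * w**j for j in range(n))  # eigenvalue of C
    return det                                     # real up to round-off
```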

Example 5.15: Tridiagonal Matrix

A tridiagonal matrix with constant diagonals:

T_n = \begin{pmatrix} a & b & 0 & \cdots \\ c & a & b & \cdots \\ 0 & c & a & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}

Its determinant satisfies the recurrence \det(T_n) = a \cdot \det(T_{n-1}) - bc \cdot \det(T_{n-2}), with base cases \det(T_1) = a and \det(T_0) = 1 by convention.
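The recurrence gives an O(n) evaluation (a sketch; the function name is ours). For example, a = 2, b = c = -1 gives the discrete Laplacian, whose determinant is n + 1:

```python
def tridiag_det(a, b, c, n):
    """det of the n x n tridiagonal Toeplitz matrix: a on the diagonal,
    b on the superdiagonal, c on the subdiagonal (n >= 1).

    Uses D_k = a*D_{k-1} - b*c*D_{k-2}, D_0 = 1, D_1 = a.
    """
    d_prev, d_cur = 1, a
    for _ in range(n - 1):
        d_prev, d_cur = d_cur, a * d_cur - b * c * d_prev
    return d_cur
```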

Theorem 5.22: Cauchy Determinant
\det\begin{pmatrix} \frac{1}{x_1+y_1} & \cdots & \frac{1}{x_1+y_n} \\ \vdots & \ddots & \vdots \\ \frac{1}{x_n+y_1} & \cdots & \frac{1}{x_n+y_n} \end{pmatrix} = \frac{\prod_{i<j}(x_j-x_i)(y_j-y_i)}{\prod_{i,j}(x_i+y_j)}
Remark 5.16: Pattern Recognition

When faced with a determinant:

  1. Check if triangular or block triangular
  2. Look for rows/columns with many zeros
  3. Recognize Vandermonde or other special structures
  4. Consider factoring out common elements
  5. If nothing special, use row reduction

5. Computational Complexity

Understanding computational complexity helps choose the right method. For large matrices, the difference between O(n³) and O(n!) is astronomical.

Method Comparison

| Method | Complexity | Best For |
| --- | --- | --- |
| Permutation (Leibniz) formula | O(n \cdot n!) | Theory only |
| Naive cofactor expansion | O(n!) | Small n ≤ 4, sparse matrices |
| Row reduction (LU) | O(n^3) | General computation |
| Triangular | O(n) | Already triangular/diagonal |
| Strassen-like | O(n^{2.37}) | Very large n (theory) |
Example 5.13: Concrete Numbers

For a 10×10 matrix:

  • Row reduction: ~1,000 operations
  • Cofactor expansion: ~3,600,000 operations (10!)
  • Ratio: ~3600× slower!

For 20×20: row reduction ~8,000 ops vs cofactor ~2.4 × 10¹⁸ ops. Row reduction is essential.

Remark 5.12: When Cofactor Wins

Despite poor complexity, cofactor expansion can be faster for:

  • Very small matrices (n ≤ 3)
  • Matrices with rows/columns of mostly zeros
  • Symbolic computation (avoiding fractions)
  • When only specific minors are needed
Example 5.17: Operation Count Comparison

For n = 5:

  • Row reduction: ~40 multiplications (≈ n³/3)
  • Cofactor expansion: ~200 multiplications (full expansion touches on the order of 5! = 120 products)
  • Row reduction is ~5× faster

For n = 10:

  • Row reduction: ~333 multiplications
  • Cofactor expansion: ~3,628,800 multiplications (10!)
  • Row reduction is ~10,000× faster!
Theorem 5.23: Complexity Lower Bound

Computing the determinant of an n×n matrix requires Ω(n²) operations, since every entry can affect the result. The best known algorithms run in O(n^{2.37}) time, but the O(n³) LU decomposition remains the most practical.

6. Common Mistakes

Forgetting to track row swaps

Each swap negates det. Keep a counter: final det = (-1)^(swaps) × product of pivots.

Wrong cofactor sign

Remember (1)i+j(-1)^{i+j}: checkerboard pattern starting with + at (1,1). Use the pattern, don't compute each time.

Using Sarrus for 4×4+

Sarrus rule ONLY works for 3×3 matrices. For larger matrices, use row reduction or cofactor expansion.

Forgetting row scaling factors

If you multiply row by c, the determinant is multiplied by c. Track all scaling operations.

Arithmetic errors in elimination

Row reduction requires careful arithmetic. Double-check subtractions and verify pivots are correct.

Confusing minor and cofactor

Minor MijM_{ij} is unsigned; cofactor Aij=(1)i+jMijA_{ij} = (-1)^{i+j} M_{ij} includes the sign.

Not checking for zero determinant early

Identical rows, proportional rows, or a zero row/column means det = 0 immediately. Save time by checking first.

Remark 5.19: Verification Strategies

Always verify your answer when possible:

  • Compute using a different method
  • Check special cases (triangular, block diagonal)
  • Verify det(A) · det(A⁻¹) = 1 if A⁻¹ is known
  • Use technology for complex cases

7. Key Takeaways

Row Reduction

Best for n ≥ 4: O(n³) complexity. Track swaps and scaling.

Exploit Zeros

Expand along sparse rows/columns to minimize computation.

Special Patterns

Triangular, block diagonal, Vandermonde have shortcuts.

Track Operations

Count row swaps (det × -1), note scaling factors.

Sarrus Rule

Quick 3×3 method. Does NOT work for 4×4 or larger!

Zero Detection

Check for zero row, identical rows, or rank < n first.

Method Selection Flowchart

  1. Is the matrix triangular (upper or lower)? → Product of diagonal
  2. Is it block diagonal? → Product of block determinants
  3. Is it Vandermonde or another known special form? → Use formula
  4. Is n ≤ 3? → Use Sarrus (3×3) or direct formula (2×2)
  5. Does any row/column have many zeros? → Cofactor expansion along that row/column
  6. Otherwise → Row reduction (Gaussian elimination)
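The flowchart can be sketched as a small dispatcher (names ours; Vandermonde and block-structure detection are omitted for brevity, since they depend on recognizing the pattern by eye):

```python
def choose_method(m):
    """Recommend a determinant method following the flowchart above."""
    n = len(m)
    upper = all(m[i][j] == 0 for i in range(n) for j in range(i))         # zeros below diag
    lower = all(m[i][j] == 0 for i in range(n) for j in range(i + 1, n))  # zeros above diag
    if upper or lower:
        return "diagonal product"
    if n <= 3:
        return "direct formula / Sarrus"
    row_zeros = max(sum(x == 0 for x in row) for row in m)
    col_zeros = max(sum(m[i][j] == 0 for i in range(n)) for j in range(n))
    if max(row_zeros, col_zeros) >= n - 1:      # very sparse row or column
        return "cofactor expansion"
    return "row reduction"
```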

8. Practice Problems

Problem 1

Compute \det\begin{pmatrix} 2 & 1 & 3 \\ 0 & 4 & 5 \\ 0 & 0 & 6 \end{pmatrix} using the fastest method.

Problem 2

Compute \det\begin{pmatrix} 1 & 2 & 3 & 4 \\ 0 & 1 & 2 & 3 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 1 \end{pmatrix}.

Problem 3

Compute \det\begin{pmatrix} 1 & 1 & 1 \\ 1 & 2 & 4 \\ 1 & 3 & 9 \end{pmatrix} as a Vandermonde determinant.

Problem 4

Use row reduction to compute \det\begin{pmatrix} 2 & 4 & 6 \\ 1 & 2 & 5 \\ 3 & 1 & 4 \end{pmatrix}.

Solutions

Solution 1

Upper triangular! det = product of diagonal = 2 × 4 × 6 = 48

Solution 2

Upper triangular! det = 1 × 1 × 1 × 1 = 1

Solution 3

Vandermonde with x₁=1, x₂=2, x₃=3: det = (2-1)(3-1)(3-2) = 1·2·1 = 2

Solution 4

Row 1 has common factor 2: det(A) = 2 · det(A') where A' has row 1 = (1, 2, 3).

Row reduce: R₂ - R₁, R₃ - 3R₁:

\begin{pmatrix} 1 & 2 & 3 \\ 0 & 0 & 2 \\ 0 & -5 & -5 \end{pmatrix}

Swap R₂ ↔ R₃ (det × -1):

\begin{pmatrix} 1 & 2 & 3 \\ 0 & -5 & -5 \\ 0 & 0 & 2 \end{pmatrix}

det(A) = 2 × (-1) × (1 × (-5) × 2) = 2 × (-1) × (-10) = 20

Additional Practice

Problem 5

Compute \det\begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix} using the Sarrus rule.

Answer: 0 (the rows are linearly dependent: R_1 + R_3 = 2R_2)

Problem 6

Compute the 4×4 Vandermonde determinant V(1, 2, 3, 4).

Answer: (2-1)(3-1)(4-1)(3-2)(4-2)(4-3) = 1·2·3·1·2·1 = 12

Problem 7

Prove: If A is n×n with all entries equal to 1, then det(A) = 0 for n ≥ 2.

Hint: All rows are identical.

Problem 8

Compute \det\begin{pmatrix} 2 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 5 \end{pmatrix}.

Answer: 2 × 3 × 5 = 30 (diagonal matrix)

Problem 9 (Challenge)

Compute \det\begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & 2 & 3 & 4 \\ 1 & 4 & 9 & 16 \\ 1 & 8 & 27 & 64 \end{pmatrix}.

Hint: This is the transpose of the Vandermonde matrix V(1, 2, 3, 4). Since \det(A^T) = \det(A), the product formula applies; row reduction confirms the answer.

Problem 10 (Challenge)

For an n×n matrix with all diagonal entries equal to 2 and all off-diagonal entries equal to 1, find det(A).

Hint: Subtract row 1 from all other rows, then expand.

9. Quick Reference

Decision Guide: Which Method?

  • 2×2: ad - bc directly
  • 3×3: Sarrus rule or cofactor expansion
  • 4×4+: Row reduction (Gaussian elimination)
  • Triangular: Product of diagonal (any size)
  • Block diagonal: Product of block determinants
  • Sparse row/column: Cofactor expansion along that row/column
  • Vandermonde: Use product formula

Row Operation Effects

| Operation | Effect on det |
| --- | --- |
| R_i \leftrightarrow R_j (swap) | det → -det |
| R_i \to c \cdot R_i (scale) | det → c · det |
| R_i \to R_i + k \cdot R_j (add) | det → det (unchanged) |

Special Formulas

  • 2×2: \det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc
  • Triangular: \det(T) = \prod_i t_{ii} (diagonal product)
  • Block diagonal: \det\begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix} = \det(A) \cdot \det(B)
  • Vandermonde: \prod_{i < j} (x_j - x_i)

Cofactor Sign Pattern (Checkerboard)

\begin{pmatrix} + & - & + & - \\ - & + & - & + \\ + & - & + & - \\ - & + & - & + \end{pmatrix}

Sign at position (i,j) = (-1)^(i+j). Top-left is always +.

Common Determinant Values

  • Identity matrix I: det(I) = 1
  • Zero matrix O: det(O) = 0
  • Matrix with zero row: det = 0
  • Matrix with identical rows: det = 0
  • Scalar multiple: det(cA) = c^n · det(A)
  • Inverse: det(A⁻¹) = 1/det(A)
  • Transpose: det(Aᵀ) = det(A)

10. Historical Notes

Gaussian Elimination: Named after Carl Friedrich Gauss (1777-1855), though the method was known to Chinese mathematicians in "The Nine Chapters on the Mathematical Art" (circa 200 BCE). Gauss systematized it for astronomical calculations and solving large systems.

Sarrus Rule: Pierre Frédéric Sarrus (1798-1861), a French mathematician, discovered this mnemonic shortcut for 3×3 determinants in 1833. Despite its simplicity, it fundamentally cannot extend to larger matrices due to the factorial growth of terms.

Vandermonde: Alexandre-Théophile Vandermonde (1735-1796) was a French musician and mathematician who studied these matrices in connection with polynomial interpolation. The Vandermonde determinant is crucial for proving uniqueness of polynomial interpolation.

Leibniz Formula: Gottfried Wilhelm Leibniz (1646-1716) gave the first explicit formula for determinants using permutations, though the concept wasn't fully formalized until later.

Modern Algorithms: Today, numerical computation uses LU decomposition with partial pivoting for stability. Strassen-like algorithms achieve sub-cubic complexity (O(n^2.37)) but are mainly of theoretical interest for current hardware.

Etymology

The word "determinant" was coined by Gauss in 1801, from Latin "determinare" meaning "to determine" or "to set bounds." This reflects the determinant's role in determining whether a system has a unique solution.

11. Numerical Considerations

When computing determinants on a computer, numerical issues can arise. Understanding these helps avoid incorrect results.

Remark 5.17: Floating-Point Issues
  • Pivoting: Always use partial pivoting (swap to put largest element as pivot) to minimize round-off error
  • Scaling: Poorly scaled matrices (entries differing by many orders of magnitude) can cause problems
  • Near-singular: If det ≈ 0, the result may be unreliable; check the condition number instead
  • Overflow/Underflow: For large matrices, det can become astronomically large or tiny; use log-determinant
Example 5.16: Log-Determinant

For positive definite matrices, compute log|det(A)| instead:

\log|\det(A)| = \sum_{i=1}^{n} \log|u_{ii}|

where u_{ii} are the LU pivots. This avoids overflow for large matrices.
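The log-sum trick folds directly into the elimination loop (a sketch mirroring NumPy's sign/log-magnitude convention; the function name is ours):

```python
import math

def slogdet(rows):
    """(sign, log|det|) via Gaussian elimination with partial pivoting.

    Summing log|pivot| instead of multiplying pivots avoids
    overflow/underflow for large matrices.
    """
    a = [[float(x) for x in row] for row in rows]
    n = len(a)
    sign, logdet = 1.0, 0.0
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(a[r][col]))
        if a[p][col] == 0.0:
            return 0.0, float("-inf")       # singular: det = 0
        if p != col:
            a[col], a[p] = a[p], a[col]
            sign = -sign                    # row swap negates det
        piv = a[col][col]
        if piv < 0:
            sign = -sign                    # negative pivot flips the sign
        logdet += math.log(abs(piv))
        for r in range(col + 1, n):
            f = a[r][col] / piv
            a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    return sign, logdet
```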

Remark 5.18: When to Use What
  • Exact arithmetic: Use cofactor expansion for symbolic computation
  • Numerical computation: Use LU with partial pivoting
  • Checking invertibility: Don't compute det; compute rank or use SVD
  • Very large matrices: Consider iterative or randomized methods
Example 5.18: Ill-Conditioned Matrix

Consider the Hilbert matrix H_{ij} = \frac{1}{i+j-1}:

H_4 = \begin{pmatrix} 1 & 1/2 & 1/3 & 1/4 \\ 1/2 & 1/3 & 1/4 & 1/5 \\ 1/3 & 1/4 & 1/5 & 1/6 \\ 1/4 & 1/5 & 1/6 & 1/7 \end{pmatrix}

det(H₄) = 1/6048000 ≈ 1.65 × 10⁻⁷. Floating-point computation can suffer significant errors here because the matrix is nearly singular (severely ill-conditioned).

Software Libraries

Most scientific computing libraries provide efficient determinant computation:

  • NumPy (Python): numpy.linalg.det(A)
  • MATLAB: det(A)
  • Mathematica: Det[A] (exact for symbolic)
  • Julia: det(A) or logdet(A) for large matrices
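For reference, the NumPy calls listed above in action (assuming NumPy is installed):

```python
import numpy as np

A = np.array([[1.0, 2, 3, 4],
              [2, 3, 4, 1],
              [3, 4, 1, 2],
              [4, 1, 2, 3]])

d = np.linalg.det(A)                  # LU-based, O(n^3)
sign, logabs = np.linalg.slogdet(A)   # overflow-safe: det(A) = sign * exp(logabs)
```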

What's Next?

Now that you can compute determinants efficiently, explore:

  • Laplace Expansion: General theory of cofactor expansion along any row or column
  • Adjugate Matrix: The matrix of cofactors, its properties, and Cramer's Rule for solving systems
  • Eigenvalues: Finding eigenvalues via det(A - λI) = 0 (characteristic polynomial)
  • Applications: Area, volume, cross products, change of variables in integrals

Skills Mastered

  • Row reduction with operation tracking
  • Strategic cofactor expansion
  • Sarrus rule for 3×3 matrices
  • Special matrix formulas (triangular, block, Vandermonde)
  • Choosing the optimal computation method

Chapter Summary

This module covered the practical computation of determinants. The key insight is that different methods suit different situations: row reduction for large general matrices, cofactor expansion for sparse or small matrices, and direct formulas for special structures.


Related Topics

Matrix Inverse
Cramer's Rule
Eigenvalues
Characteristic Polynomial
LU Decomposition
Volume and Area
Cross Product
Determinant Computation Practice

  1. The most efficient general method for large matrices is:
  2. After row reducing to upper triangular form, det equals:
  3. When using cofactor expansion, choose a row/column with:
  4. The 3×3 Vandermonde determinant V(x_1, x_2, x_3) equals:
  5. Complexity of naive n×n cofactor expansion:
  6. If a 4×4 matrix has a row of zeros:
  7. For a diagonal matrix, det equals:
  8. Each row swap during reduction:
  9. The Sarrus rule applies to:
  10. If you multiply row 2 by 5 during reduction, the original det is:
  11. For a block diagonal matrix diag(A, B), det equals:
  12. Cofactor A_{ij} equals:
  13. If two rows are identical, then det =
  14. Time to compute a 10×10 determinant by row reduction vs cofactor expansion:
  15. Adding 3× row 1 to row 2 changes det by:

Frequently Asked Questions

Which method should I use for a 3×3 matrix?

For 3×3, the Sarrus rule is fastest: write the matrix, repeat columns 1-2 on the right, then take products along diagonals (add going right-down, subtract going right-up). Alternatively, use cofactor expansion along a row/column with zeros.

When should I use row reduction vs cofactor expansion?

Use row reduction for 4×4 and larger matrices—it's O(n³) vs O(n!) for cofactor expansion. Use cofactor expansion for small matrices (2×2, 3×3) or when a row/column has many zeros. For sparse matrices, cofactor expansion along the sparse row can be faster.

How do I handle row swaps when computing?

Keep a swap counter. Each swap negates det. Final det = (-1)^(swaps) × product of pivots. Tip: avoid swaps when possible by choosing pivot rows strategically, or just track them carefully.

What if I get fractions during row reduction?

Fractions are fine! The answer will be correct. To avoid them: (1) factor out common terms from rows, (2) clear denominators before dividing, or (3) use integer operations when possible. Some prefer to work symbolically.

Can I use column operations too?

Yes! Since det(A^T) = det(A), column operations follow identical rules: swap columns → negate, scale column → scale det, add multiple of column → unchanged. Use whichever is more convenient.

How do I know when det = 0 without computing?

det = 0 if: (1) any row/column is all zeros, (2) two rows/columns are identical, (3) two rows/columns are proportional, (4) rows/columns are linearly dependent, (5) rank < n. These shortcuts save computation.

What's the cofactor sign pattern?

The sign (-1)^{i+j} creates a checkerboard: + for positions where i+j is even (like a₁₁, a₁₃, a₂₂), - where i+j is odd (like a₁₂, a₂₁, a₂₃). Remember: top-left is always +.

Why doesn't Sarrus work for 4×4 matrices?

Sarrus rule is specific to 3×3. For 4×4+, the diagonal pattern doesn't capture all terms. A 4×4 determinant has 24 terms (4!), not the 6 (3!) that Sarrus covers. You must use cofactor expansion or row reduction.

How do I verify my answer?

Several checks: (1) compute by a different method, (2) verify det(A)det(A^{-1}) = 1 if you know A^{-1}, (3) check that row operations were tracked correctly, (4) use technology for complex cases.

What's the fastest way to compute a triangular determinant?

Just multiply the diagonal entries! For upper or lower triangular (including diagonal matrices), det = a₁₁ · a₂₂ · ... · aₙₙ. This is O(n), the fastest possible.

How does LU decomposition help?

If A = LU (lower × upper triangular), then det(A) = det(L)·det(U) = (product of L's diagonal) × (product of U's diagonal). With partial pivoting (PA = LU), include det(P) = (-1)^(swaps).

Can determinants be computed in parallel?

Yes! Cofactor expansion naturally parallelizes (each minor is independent). Matrix operations in row reduction can also be parallelized. Modern numerical libraries exploit this for large matrices.