MathIsimple · LA-C8

Course 8: Determinants

The determinant is a fundamental scalar-valued function on square matrices that measures how a matrix scales volume and determines invertibility. This course covers the definition, properties, computation methods, and applications of determinants.

12-15 hours · Core Level · 10 Objectives
Learning Objectives
  • State and apply the axiomatic definition of determinants (multilinear, alternating, normalized).
  • Master the permutation (Leibniz) formula with inversion numbers.
  • Prove and apply multiplicativity: det(AB) = det(A)det(B).
  • Understand the connection between determinant and invertibility.
  • Compute determinants using row reduction and cofactor expansion.
  • Apply Laplace expansion along any row or column strategically.
  • Understand the adjugate matrix and the inverse formula A⁻¹ = adj(A)/det(A).
  • Apply Cramer's rule to solve linear systems.
  • Recognize special determinants (triangular, Vandermonde, block).
  • Understand the geometric interpretation as signed volume.
Prerequisites
  • LA-C7: Matrix Inverses, Elementary Matrices & Dual Spaces
  • Matrix operations and elementary matrices
  • Permutations and basic combinatorics
  • Gaussian elimination
  • Basic properties of linear maps
Historical Context

Determinants were first studied by Gottfried Wilhelm Leibniz (1646–1716) and Seki Takakazu (1642–1708) independently in the late 17th century. The modern axiomatic definition was developed by Karl Weierstrass and others in the 19th century. Pierre-Simon Laplace (1749–1827) developed the cofactor expansion method, while Gabriel Cramer (1704–1752) gave the rule for solving linear systems. The geometric interpretation as signed volume was recognized early, and determinants remain central to linear algebra, differential geometry, and many areas of mathematics.

1. Definition of Determinant

The determinant can be defined in multiple equivalent ways: axiomatically (multilinear, alternating, normalized), via permutations (Leibniz formula), or recursively via cofactor expansion.

Definition 1.1: Axiomatic Definition

The determinant is the unique function $\det: M_n(F) \to F$ satisfying:

  1. Multilinearity: linear in each row (and each column)
  2. Alternating: swapping two rows (or columns) negates the determinant
  3. Normalization: $\det(I) = 1$
Definition 1.2: Permutation (Leibniz) Formula

For an $n \times n$ matrix $A$:

$$\det(A) = \sum_{\sigma \in S_n} \text{sgn}(\sigma) \prod_{i=1}^n a_{i,\sigma(i)}$$

where $S_n$ is the symmetric group and $\text{sgn}(\sigma) = (-1)^{\text{inv}(\sigma)}$ is the sign of the permutation $\sigma$, with $\text{inv}(\sigma)$ its number of inversions.
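The Leibniz formula can be translated directly into code. A minimal sketch in Python (the names `perm_sign` and `leibniz_det` are ours; the brute-force sum over all $n!$ permutations is practical only for small $n$):

```python
from itertools import permutations
from math import prod

def perm_sign(p):
    """Sign of a permutation: (-1) raised to the number of inversions."""
    inv = sum(1 for i in range(len(p))
              for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def leibniz_det(A):
    """Determinant via the Leibniz formula: sum over all n! permutations."""
    n = len(A)
    return sum(perm_sign(p) * prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))
```

For example, `leibniz_det([[1, 2], [3, 4]])` gives $1 \cdot 4 - 2 \cdot 3 = -2$, matching the $ad - bc$ formula below.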

Example 1.1: 2×2 Determinant

For $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$:

$$\det(A) = ad - bc$$

This comes from the two permutations of $S_2$: the identity (contributing $+ad$) and the swap (contributing $-bc$).

Remark 1.1: Geometric Interpretation

For $2 \times 2$ matrices, $|\det(A)|$ is the area of the parallelogram spanned by the columns; for $3 \times 3$, $|\det(A)|$ is the volume of the parallelepiped. The sign of $\det(A)$ indicates orientation.

Definition 1.3: Recursive Definition via Minors

For $n > 1$, expanding along the first row:

$$\det(A) = \sum_{j=1}^n (-1)^{1+j} a_{1j} \det(M_{1j})$$

where $M_{1j}$ is the $(n-1) \times (n-1)$ matrix obtained by deleting row $1$ and column $j$.

Example 1.2: 3×3 Determinant

For $A = \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix}$:

$$\det(A) = a(ei - fh) - b(di - fg) + c(dh - eg)$$

Expanding the products gives the "Sarrus rule" form $aei + bfg + cdh - ceg - bdi - afh$ (a mnemonic valid only for $3 \times 3$ matrices).
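The recursive definition translates into a short routine. A sketch (the name `det_cofactor` is ours; exponential time, for illustration only):

```python
def det_cofactor(A):
    """Determinant by cofactor expansion along the first row.

    With 0-based indices, the sign (-1)^(1+j) from the 1-based formula
    becomes (-1)^j.
    """
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # Minor M_{1j}: delete the first row and column j
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det_cofactor(minor)
    return total
```

For `[[1, 2, 3], [4, 5, 6], [7, 8, 10]]` this gives $1(50-48) - 2(40-42) + 3(32-35) = -3$.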

2. Properties of Determinants

Determinants have beautiful algebraic properties that make them powerful tools for understanding matrices and linear maps.

Theorem 2.1: Multiplicativity

$\det(AB) = \det(A)\det(B)$ for any $n \times n$ matrices $A, B$.

Theorem 2.2: Transpose Property

$\det(A^T) = \det(A)$.

Theorem 2.3: Invertibility Criterion

An $n \times n$ matrix $A$ is invertible if and only if $\det(A) \neq 0$.

Corollary 2.1: Inverse Determinant

If $A$ is invertible, then $\det(A^{-1}) = 1/\det(A)$.

Theorem 2.4: Row Operations
  1. Swapping two rows: $\det \to -\det$
  2. Scaling a row by $c$: $\det \to c \cdot \det$
  3. Adding a multiple of one row to another: $\det$ unchanged
Theorem 2.5: Triangular Matrices

For triangular (upper or lower) matrices, $\det(A) = \prod_{i=1}^n a_{ii}$, the product of the diagonal entries.

Example 2.1: Scalar Multiple

For an $n \times n$ matrix $A$ and scalar $c$:

$$\det(cA) = c^n \det(A)$$

Each of the $n$ rows is multiplied by $c$, so the determinant scales by $c^n$.
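These properties are easy to spot-check numerically. A quick sanity check with NumPy (random matrices, floating-point comparison via `np.isclose`):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
c = 2.5

# Multiplicativity: det(AB) = det(A) det(B)
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))
# Transpose invariance: det(A^T) = det(A)
assert np.isclose(np.linalg.det(A.T), np.linalg.det(A))
# Scalar multiple: det(cA) = c^n det(A), here with n = 4
assert np.isclose(np.linalg.det(c * A), c**4 * np.linalg.det(A))
```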

Theorem 2.6: Block Matrix Determinant

For block matrices:

  • If $A = \begin{pmatrix} B & C \\ 0 & D \end{pmatrix}$ with $B, D$ square, then $\det(A) = \det(B)\det(D)$
  • Similarly for block lower triangular matrices
Corollary 2.2: Determinant of Similar Matrices

If $B = P^{-1} A P$, then $\det(B) = \det(A)$: similar matrices have the same determinant.

3. Computation Methods

There are several methods for computing determinants, each with different computational complexity and practical advantages.

Definition 3.1: Row Reduction Method

Row reduce $A$ to upper triangular form $U$, tracking:

  • each row swap multiplies the determinant by $-1$
  • each row scaling by $c$ multiplies the determinant by $c$
  • row additions do not change the determinant

Then $\det(A) = (-1)^{\text{number of swaps}} \cdot \det(U) \,/\, (\text{product of scaling factors})$.
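The procedure above can be sketched as follows (the name `det_by_elimination` is ours; real entries assumed). This version uses only row swaps and row additions, so no scaling factors need to be divided out:

```python
def det_by_elimination(A):
    """Determinant via Gaussian elimination to upper triangular form.

    Only row swaps (each flips the sign) and row additions (which leave
    the determinant unchanged) are used, so det(A) = sign * prod(diagonal).
    """
    M = [list(map(float, row)) for row in A]   # work on a copy
    n = len(M)
    sign = 1
    for k in range(n):
        # Partial pivoting: pick the largest entry in column k for stability
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        if M[p][k] == 0.0:
            return 0.0                         # no pivot: singular matrix
        if p != k:
            M[k], M[p] = M[p], M[k]
            sign = -sign
        for r in range(k + 1, n):              # eliminate below the pivot
            factor = M[r][k] / M[k][k]
            for c in range(k, n):
                M[r][c] -= factor * M[k][c]
    det = float(sign)
    for k in range(n):
        det *= M[k][k]
    return det
```

For example, `det_by_elimination([[2, 1], [1, 3]])` returns `5.0`.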

Remark 3.1: Complexity

Row reduction is $O(n^3)$, making it the preferred method for large matrices.

Definition 3.2: Cofactor Expansion (Laplace)

For any row $i$ or column $j$:

$$\det(A) = \sum_{j=1}^n a_{ij} A_{ij} = \sum_{i=1}^n a_{ij} A_{ij}$$

where $A_{ij} = (-1)^{i+j} M_{ij}$ is the cofactor and $M_{ij}$ is the minor (the determinant of the submatrix obtained by deleting row $i$ and column $j$). The first sum expands along row $i$, the second along column $j$.

Remark 3.2: Strategic Choice

Choose a row or column with many zeros to minimize computation. Cofactor expansion is $O(n!)$ in general but can be efficient for sparse matrices.

Example 3.1: Vandermonde Determinant

For $V(x_1, \ldots, x_n) = \det\begin{pmatrix} 1 & x_1 & x_1^2 & \cdots \\ 1 & x_2 & x_2^2 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}$:

$$\det(V) = \prod_{1 \leq i < j \leq n} (x_j - x_i)$$

This is crucial in polynomial interpolation theory.
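The product formula is easy to verify numerically, using NumPy's `np.vander` with `increasing=True` so that row $i$ is $(1, x_i, x_i^2, \ldots)$, matching the matrix above:

```python
import numpy as np
from math import prod

x = [1.0, 2.0, 4.0, 7.0]
n = len(x)
V = np.vander(x, increasing=True)   # row i is (1, x_i, x_i^2, x_i^3)
formula = prod(x[j] - x[i] for i in range(n) for j in range(i + 1, n))
assert np.isclose(np.linalg.det(V), formula)   # both equal 540 here
```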

Example 3.2: Determinant via LU Decomposition

If $A = LU$ where $L$ is lower triangular and $U$ is upper triangular, then:

$$\det(A) = \det(L)\det(U) = \Big(\prod_i l_{ii}\Big)\Big(\prod_i u_{ii}\Big)$$

This is efficient when $A$ can be factored as $LU$.

Remark 3.3: Computational Strategies

  • For small matrices ($n \leq 3$): use the direct formula
  • For sparse matrices: use cofactor expansion along rows/columns with many zeros
  • For general matrices: use row reduction (most efficient)
  • For structured matrices: use specialized formulas (Vandermonde, block, etc.)

4. Laplace Expansion and Adjugate

Laplace expansion provides a recursive method for computing determinants, and the adjugate matrix gives an explicit formula for the inverse.

Theorem 4.1: Laplace Expansion

For any row $i$:

$$\det(A) = \sum_{j=1}^n a_{ij} A_{ij}$$

Similarly for any column $j$. The expansion is valid along any row or column.

Theorem 4.2: Alien Cofactor Theorem

If $i \neq k$, then $\sum_{j=1}^n a_{ij} A_{kj} = 0$.

This sum is the cofactor expansion of the determinant of the matrix obtained from $A$ by replacing row $k$ with row $i$; that matrix has two identical rows, so its determinant is zero.

Definition 4.1: Adjugate Matrix

The adjugate (or classical adjoint) of $A$ is:

$$[\text{adj}(A)]_{ij} = A_{ji}$$

where $A_{ji}$ is the $(j, i)$ cofactor. That is, $\text{adj}(A)$ is the transpose of the cofactor matrix.

Theorem 4.3: Fundamental Adjugate Identity

$A \cdot \text{adj}(A) = \text{adj}(A) \cdot A = \det(A) \cdot I$.

Corollary 4.1: Inverse Formula

If $\det(A) \neq 0$, then $A^{-1} = \frac{\text{adj}(A)}{\det(A)}$.

Example 4.1: 2×2 Adjugate

For $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$:

$$\text{adj}(A) = \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$$

Note: swap the diagonal entries, negate the off-diagonal entries.
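Definition 4.1 and the fundamental identity can be checked with a short routine (a sketch; `adjugate` and the helper `_det` are our names):

```python
def _det(A):
    """Recursive determinant via first-row cofactor expansion."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * _det([r[:j] + r[j + 1:] for r in A[1:]])
               for j in range(len(A)))

def adjugate(A):
    """Adjugate of A: the transpose of the cofactor matrix."""
    n = len(A)
    adj = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            # Minor M_ij: delete row i and column j
            minor = [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]
            adj[j][i] = (-1) ** (i + j) * _det(minor)   # note the transpose
    return adj
```

For example, `adjugate([[1, 2], [3, 4]])` returns `[[4, -2], [-3, 1]]`, and multiplying back gives $\det(A) \cdot I = -2I$.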

Definition 4.2: Cramer's Rule

For the system $Ax = b$ with $\det(A) \neq 0$:

$$x_i = \frac{\det(A_i)}{\det(A)}$$

where $A_i$ is $A$ with column $i$ replaced by $b$.

Remark 4.1: Computational Note

Cramer's rule is elegant but computationally expensive ($O(n \cdot n!)$). For practical computation, use Gaussian elimination ($O(n^3)$).

Example 4.2: Cramer's Rule Example

Solve $\begin{cases} 2x + y = 5 \\ x + 3y = 7 \end{cases}$ using Cramer's rule.

$A = \begin{pmatrix} 2 & 1 \\ 1 & 3 \end{pmatrix}$, $b = \begin{pmatrix} 5 \\ 7 \end{pmatrix}$

$\det(A) = 5$, $\det(A_1) = \det\begin{pmatrix} 5 & 1 \\ 7 & 3 \end{pmatrix} = 8$, $\det(A_2) = \det\begin{pmatrix} 2 & 5 \\ 1 & 7 \end{pmatrix} = 9$

So $x = 8/5$, $y = 9/5$.
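The computation above can be automated. A sketch using NumPy (`cramer_solve` is our name; practical only for small systems):

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b via Cramer's rule: x_i = det(A_i) / det(A)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                  # replace column i with b
        x[i] = np.linalg.det(Ai) / d
    return x
```

For the system above, `cramer_solve([[2, 1], [1, 3]], [5, 7])` returns approximately `[1.6, 1.8]`, i.e. $x = 8/5$, $y = 9/5$.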

Theorem 4.4: Adjugate and Inverse

The adjugate provides an explicit formula for the inverse:

$$A^{-1} = \frac{1}{\det(A)} \text{adj}(A)$$

This is particularly useful for symbolic computation and theoretical analysis.

5. Advanced Determinant Techniques

Advanced techniques for computing and manipulating determinants, including special matrix types and computational tricks.

Theorem 5.1: Determinant of Block Diagonal

If $A = \text{diag}(A_1, A_2, \ldots, A_k)$ is block diagonal, then:

$$\det(A) = \prod_{i=1}^k \det(A_i)$$

Theorem 5.2: Schur Complement Formula

For the block matrix $A = \begin{pmatrix} B & C \\ D & E \end{pmatrix}$ with $B$ invertible:

$$\det(A) = \det(B) \det(E - DB^{-1}C)$$

The matrix $E - DB^{-1}C$ is called the Schur complement of $B$ in $A$.

Example 5.1: Computing Large Determinants

For a $10 \times 10$ matrix with block structure, use the Schur complement to reduce the computation to smaller determinants.
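The Schur complement formula is easy to verify numerically. A sketch with random blocks (a random Gaussian $B$ is invertible with probability 1, which the formula assumes):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((2, 2))
C = rng.standard_normal((2, 3))
D = rng.standard_normal((3, 2))
E = rng.standard_normal((3, 3))
A = np.block([[B, C], [D, E]])          # assemble the 5x5 block matrix

schur = E - D @ np.linalg.inv(B) @ C    # Schur complement of B in A
assert np.isclose(np.linalg.det(A), np.linalg.det(B) * np.linalg.det(schur))
```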

Definition 5.1: Wronskian

For functions $f_1, \ldots, f_n$, the Wronskian is:

$$W(f_1, \ldots, f_n) = \det\begin{pmatrix} f_1 & f_2 & \cdots & f_n \\ f_1' & f_2' & \cdots & f_n' \\ \vdots & \vdots & \ddots & \vdots \\ f_1^{(n-1)} & f_2^{(n-1)} & \cdots & f_n^{(n-1)} \end{pmatrix}$$

If $W \neq 0$ at some point, the functions are linearly independent.

Theorem 5.3: Cauchy-Binet Formula

For $A \in M_{m \times n}(F)$ and $B \in M_{n \times m}(F)$ with $m \leq n$:

$$\det(AB) = \sum_{S} \det(A_S) \det(B_S)$$

where the sum is over all $m$-element subsets $S$ of $\{1, \ldots, n\}$; $A_S$ is the $m \times m$ submatrix of $A$ formed by the columns indexed by $S$, and $B_S$ is the submatrix of $B$ formed by the rows indexed by $S$.

Example 5.2: Determinant of Product of Non-Square Matrices

For $A \in M_{2 \times 3}$ and $B \in M_{3 \times 2}$, $\det(AB)$ can be computed using the Cauchy-Binet formula.
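For instance, the sum over the three 2-element column subsets can be checked directly. A sketch with NumPy and `itertools.combinations`:

```python
import numpy as np
from itertools import combinations

A = np.array([[1., 2., 3.],
              [4., 5., 6.]])            # 2x3
B = np.array([[1., 0.],
              [0., 1.],
              [2., 3.]])                # 3x2

# Sum of det(A_S) det(B_S) over all 2-element subsets S of {0, 1, 2}:
# A_S keeps the columns of A indexed by S, B_S keeps the rows of B indexed by S.
cb = sum(np.linalg.det(A[:, S]) * np.linalg.det(B[S, :])
         for S in map(list, combinations(range(3), 2)))
assert np.isclose(np.linalg.det(A @ B), cb)   # both equal -15 here
```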

6. Determinants and Geometry

Determinants have deep geometric interpretations: they measure volumes, areas, and orientation. This connection between algebra and geometry is fundamental to linear algebra.

Theorem 6.1: Volume Interpretation

For $A \in M_n(\mathbb{R})$, $|\det(A)|$ is the $n$-dimensional volume of the parallelepiped spanned by the columns (or rows) of $A$.

Example 6.1: Area in ℝ²

For $A = \begin{pmatrix} a & c \\ b & d \end{pmatrix}$, the columns are $(a, b)$ and $(c, d)$.

The area of the parallelogram is $|ad - bc| = |\det(A)|$.

Example 6.2: Volume in ℝ³

For a $3 \times 3$ matrix $A$, $|\det(A)|$ is the volume of the parallelepiped spanned by the three column vectors.

If $\det(A) = 0$, the vectors are coplanar (the volume is zero).

Definition 6.1: Orientation

The sign of $\det(A)$ indicates orientation:

  • $\det(A) > 0$: preserves orientation (right-handed)
  • $\det(A) < 0$: reverses orientation (left-handed)
  • $\det(A) = 0$: collapses to a lower dimension
Theorem 6.2: Change of Variables

For a linear transformation $T: \mathbb{R}^n \to \mathbb{R}^n$ with matrix $A$, if $S$ is a region with volume $V$, then $T(S)$ has volume $|\det(A)| \cdot V$.

Example 6.3: Scaling and Rotation

  • Scaling by factor $k$: $\det = k^n$ (volume scales by $k^n$)
  • Rotation: $\det = 1$ (preserves volume and orientation)
  • Reflection: $\det = -1$ (preserves volume, reverses orientation)

Theorem 6.3: Cross Product and Determinant

In $\mathbb{R}^3$, the cross product $u \times v$ can be computed as the formal determinant:

$$u \times v = \det\begin{pmatrix} e_1 & e_2 & e_3 \\ u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \end{pmatrix}$$

expanded along the first row, where $e_1, e_2, e_3$ are the standard basis vectors. The magnitude $|u \times v|$ equals the area of the parallelogram spanned by $u$ and $v$.

Example 6.4: Triangle Area

For a triangle with vertices $P, Q, R$ in $\mathbb{R}^2$, the area is:

$$\text{Area} = \frac{1}{2} \left|\det\begin{pmatrix} 1 & 1 & 1 \\ P_x & Q_x & R_x \\ P_y & Q_y & R_y \end{pmatrix}\right|$$

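This formula is a staple of computational geometry. A sketch (`triangle_area` is our name):

```python
import numpy as np

def triangle_area(P, Q, R):
    """Area of triangle PQR in the plane via the 3x3 determinant formula."""
    M = np.array([[1.0,  1.0,  1.0],
                  [P[0], Q[0], R[0]],
                  [P[1], Q[1], R[1]]])
    return abs(np.linalg.det(M)) / 2.0
```

For a right triangle with legs 4 and 3, `triangle_area((0, 0), (4, 0), (0, 3))` gives area 6; collinear points give area 0, which doubles as a collinearity test.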
Remark 6.1: Geometric Applications

Determinants are used in:

  • Computing areas and volumes in computational geometry
  • Testing for collinearity/coplanarity
  • Finding equations of lines/planes through given points
  • Change of variables in multiple integrals
Course 8 Practice Quiz (10 questions)

  1. The determinant of $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$ is: (Easy)
  2. $\det(AB) =$ (Easy)
  3. $\det(A^T) =$ (Easy)
  4. If $A$ is invertible, $\det(A^{-1}) =$ (Medium)
  5. The determinant of an upper triangular matrix equals: (Easy)
  6. Laplace expansion along row $i$ gives: (Medium)
  7. The adjugate matrix $\text{adj}(A)$ satisfies: (Medium)
  8. Cramer's rule solves $Ax = b$ when: (Easy)
  9. $\det(cA)$ for an $n \times n$ matrix $A$ equals: (Medium)
  10. If a matrix has two identical rows, then: (Easy)

Frequently Asked Questions

What is the geometric meaning of the determinant?

For a 2×2 matrix, det(A) is the signed area of the parallelogram spanned by the columns. For 3×3, it's the signed volume of the parallelepiped. The sign indicates orientation (right-handed vs left-handed).

Why does det(AB) = det(A)det(B)?

One standard proof writes an invertible A as a product of elementary matrices, for which multiplicativity is checked directly; if A is singular, both sides are zero. Geometrically, composing linear maps multiplies their volume scaling factors.

When should I use row reduction vs cofactor expansion?

Row reduction is O(n³) and preferred for large matrices. Cofactor expansion is O(n!) but useful for symbolic computation, small matrices, or when a row/column has many zeros.

What is the adjugate matrix and why is it important?

The adjugate adj(A) is the transpose of the cofactor matrix. It satisfies A·adj(A) = det(A)·I, giving the inverse formula A^{-1} = adj(A)/det(A) when det(A) ≠ 0.

How does Cramer's rule work?

For Ax = b with det(A) ≠ 0, Cramer's rule gives x_i = det(A_i)/det(A) where A_i is A with column i replaced by b. It's elegant but computationally expensive (O(n·n!)).