The determinant is a fundamental scalar-valued function on square matrices that measures how a matrix scales volume and determines invertibility. This course covers the definition, properties, computation methods, and applications of determinants.
Determinants were first studied by Gottfried Wilhelm Leibniz (1646–1716) and Seki Takakazu (1642–1708) independently in the late 17th century. The modern axiomatic definition was developed by Karl Weierstrass and others in the 19th century. Pierre-Simon Laplace (1749–1827) developed the cofactor expansion method, while Gabriel Cramer (1704–1752) gave the rule for solving linear systems. The geometric interpretation as signed volume was recognized early, and determinants remain central to linear algebra, differential geometry, and many areas of mathematics.
The determinant can be defined in multiple equivalent ways: axiomatically (multilinear, alternating, normalized), via permutations (Leibniz formula), or recursively via cofactor expansion.
The determinant is the unique function $\det : M_n(\mathbb{R}) \to \mathbb{R}$ satisfying:
1. Multilinear: linear in each row separately.
2. Alternating: $\det(A) = 0$ whenever two rows are equal (equivalently, swapping two rows flips the sign).
3. Normalized: $\det(I) = 1$.
For an $n \times n$ matrix $A$:
$$\det(A) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^{n} a_{i,\sigma(i)}$$
where $S_n$ is the symmetric group on $n$ elements and $\operatorname{sgn}(\sigma)$ is the sign of the permutation $\sigma$.
For $n = 2$:
$$\det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc$$
This comes from two permutations: the identity (which gives $+ad$) and the swap (which gives $-bc$).
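As an illustration, the Leibniz formula can be evaluated directly by brute force over all permutations. The sketch below (assuming only the standard library) is only practical for small $n$, since it sums $n!$ terms.

```python
from itertools import permutations

def perm_sign(perm):
    """Sign of a permutation (given as a tuple), computed by counting inversions."""
    inversions = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
                     if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def det_leibniz(A):
    """Determinant via the Leibniz formula: sum of sgn(sigma) * prod a[i, sigma(i)]."""
    n = len(A)
    total = 0
    for perm in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= A[i][perm[i]]
        total += perm_sign(perm) * prod
    return total

print(det_leibniz([[1, 2], [3, 4]]))                      # -2, i.e. ad - bc
print(det_leibniz([[2, 0, 1], [1, 3, 2], [0, 1, 4]]))     # 21
```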
For $n = 2$, $|\det(A)|$ is the area of the parallelogram spanned by the columns. For $n = 3$, $\det(A)$ is the signed volume of the parallelepiped. The sign indicates orientation.
For an $n \times n$ matrix $A$, expanding along the first row:
$$\det(A) = \sum_{j=1}^{n} (-1)^{1+j}\, a_{1j} \det(M_{1j})$$
where $M_{1j}$ is the $(n-1) \times (n-1)$ matrix obtained by deleting row 1 and column $j$.
For $n = 3$:
$$\det(A) = a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{13}a_{22}a_{31} - a_{11}a_{23}a_{32} - a_{12}a_{21}a_{33}$$
This is the "Sarrus rule" for $3 \times 3$ matrices (valid only for $n = 3$).
Determinants have beautiful algebraic properties that make them powerful tools for understanding matrices and linear maps.
$\det(AB) = \det(A)\det(B)$ for any $n \times n$ matrices $A$ and $B$.
$\det(A^{T}) = \det(A)$.
An $n \times n$ matrix $A$ is invertible if and only if $\det(A) \neq 0$.
If $A$ is invertible, then $\det(A^{-1}) = \dfrac{1}{\det(A)}$.
For triangular (upper or lower) matrices, $\det(A) = a_{11}a_{22}\cdots a_{nn}$ (product of diagonal entries).
For an $n \times n$ matrix $A$ and scalar $c$:
$$\det(cA) = c^{n}\det(A)$$
Each of the $n$ rows gets multiplied by $c$, so the determinant scales by $c^{n}$.
For block triangular matrices:
$$\det\begin{pmatrix} A & B \\ 0 & D \end{pmatrix} = \det(A)\det(D)$$
If $B = P^{-1}AP$, then $\det(B) = \det(A)$: similar matrices have the same determinant.
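A quick numerical check of these properties, as an illustrative sketch using NumPy (the random matrices are assumed invertible, which holds with probability 1):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))
P = rng.standard_normal((n, n))    # assumed invertible
c = 2.5

print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))  # det(AB) = det(A)det(B)
print(np.isclose(np.linalg.det(A.T), np.linalg.det(A)))                       # det(A^T) = det(A)
print(np.isclose(np.linalg.det(c * A), c**n * np.linalg.det(A)))              # det(cA) = c^n det(A)
print(np.isclose(np.linalg.det(np.linalg.inv(P) @ A @ P), np.linalg.det(A)))  # similarity invariance
```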
There are several methods for computing determinants, each with different computational complexity and practical advantages.
Row reduce $A$ to upper triangular form $U$, tracking:
- each row swap (which flips the sign of the determinant),
- each scaling of a row by $c$ (which multiplies the determinant by $c$),
- adding a multiple of one row to another (which leaves the determinant unchanged).

Then $\det(A) = (-1)^{s}\,\dfrac{u_{11}u_{22}\cdots u_{nn}}{c_1 c_2 \cdots c_k}$, where $s$ is the number of row swaps and $c_1, \dots, c_k$ are the row-scaling factors used.
Row reduction is $O(n^{3})$, making it the preferred method for large matrices.
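A minimal sketch of the row-reduction method (illustrative code with partial pivoting; only row swaps and row additions are used, so the determinant is the signed product of the pivots):

```python
import numpy as np

def det_by_elimination(A):
    """Determinant via Gaussian elimination: reduce to upper triangular form,
    flipping the sign for each row swap, then multiply the diagonal entries."""
    U = np.array(A, dtype=float)
    n = U.shape[0]
    sign = 1.0
    for k in range(n):
        # Partial pivoting: bring up the row with the largest entry in column k.
        p = k + np.argmax(np.abs(U[k:, k]))
        if np.isclose(U[p, k], 0.0):
            return 0.0                      # no usable pivot: the matrix is singular
        if p != k:
            U[[k, p]] = U[[p, k]]
            sign = -sign                    # each swap flips the sign
        # Adding multiples of the pivot row leaves the determinant unchanged.
        U[k+1:, k:] -= np.outer(U[k+1:, k] / U[k, k], U[k, k:])
    return sign * np.prod(np.diag(U))

A = [[2, 1, 1], [4, -6, 0], [-2, 7, 2]]
print(det_by_elimination(A), np.linalg.det(A))   # both should agree
```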
For any row $i$ or column $j$:
$$\det(A) = \sum_{j=1}^{n} a_{ij} C_{ij} = \sum_{i=1}^{n} a_{ij} C_{ij}$$
where $C_{ij} = (-1)^{i+j} M_{ij}$ is the cofactor and $M_{ij}$ is the minor (the determinant of the matrix obtained by deleting row $i$ and column $j$).
Choose a row/column with many zeros to minimize computation. Cofactor expansion is $O(n!)$ in general but can be efficient for sparse matrices.
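A recursive cofactor-expansion sketch (illustrative; it expands along the first row, so it runs in $O(n!)$ time and is only reasonable for small or very sparse matrices):

```python
def det_cofactor(A):
    """Determinant by cofactor (Laplace) expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        if A[0][j] == 0:
            continue                         # zero entries contribute nothing
        # Minor M_{1j}: delete row 0 and column j.
        minor = [row[:j] + row[j+1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det_cofactor(minor)
    return total

print(det_cofactor([[2, 0, 1], [1, 3, 2], [0, 1, 4]]))   # 21
```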
For the Vandermonde matrix $V$ with rows $(1, x_i, x_i^{2}, \dots, x_i^{n-1})$:
$$\det(V) = \prod_{1 \le i < j \le n} (x_j - x_i)$$
This is crucial in polynomial interpolation theory.
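A quick check of the Vandermonde formula against a numerical determinant (illustrative sketch; `np.vander` with `increasing=True` builds the rows as $(1, x_i, x_i^2, \dots)$):

```python
import numpy as np
from itertools import combinations
from math import prod

x = np.array([1.0, 2.0, 4.0, 7.0])
V = np.vander(x, increasing=True)                       # rows (1, x_i, x_i^2, x_i^3)
formula = prod(x[j] - x[i] for i, j in combinations(range(len(x)), 2))
print(np.isclose(np.linalg.det(V), formula))            # True
```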
If $A = LU$ where $L$ is lower triangular and $U$ is upper triangular, then:
$$\det(A) = \det(L)\det(U) = \left(\prod_{i} \ell_{ii}\right)\left(\prod_{i} u_{ii}\right)$$
This is efficient when $A$ can be factored as $A = LU$.
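As a sketch using SciPy's LU factorization (illustrative; `scipy.linalg.lu` returns $P$, $L$, $U$ with $A = PLU$ and unit diagonal on $L$, so $\det(A) = \det(P)\prod_i u_{ii}$ with $\det(P) = \pm 1$):

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[2.0, 1.0, 1.0], [4.0, -6.0, 0.0], [-2.0, 7.0, 2.0]])
P, L, U = lu(A)                                   # A = P @ L @ U, L has unit diagonal
det_A = np.linalg.det(P) * np.prod(np.diag(U))    # det(P) is +1 or -1 (permutation parity)
print(np.isclose(det_A, np.linalg.det(A)))        # True
```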
Laplace expansion provides a recursive method for computing determinants, and the adjugate matrix gives an explicit formula for the inverse.
For any row $i$:
$$\det(A) = \sum_{j=1}^{n} a_{ij} C_{ij}$$
Similarly, for any column $j$: $\det(A) = \sum_{i=1}^{n} a_{ij} C_{ij}$. The expansion is valid along any row or column.
If $k \neq i$, then $\sum_{j=1}^{n} a_{kj} C_{ij} = 0$.
This is like computing the determinant of a matrix with two identical rows.
The adjugate (or classical adjoint) of $A$ is:
$$\operatorname{adj}(A)_{ij} = C_{ji}$$
where $C_{ij} = (-1)^{i+j} M_{ij}$ is the cofactor. That is, $\operatorname{adj}(A)$ is the transpose of the cofactor matrix.
$A \operatorname{adj}(A) = \operatorname{adj}(A)\, A = \det(A)\, I$.
If $\det(A) \neq 0$, then $A^{-1} = \dfrac{1}{\det(A)} \operatorname{adj}(A)$.
For $2 \times 2$ matrices:
$$\operatorname{adj}\begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$$
Note: swap the diagonal entries, negate the off-diagonal entries.
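A small sketch that builds the adjugate from cofactors and verifies $A\operatorname{adj}(A) = \det(A)I$ (illustrative code using NumPy):

```python
import numpy as np

def adjugate(A):
    """Adjugate: transpose of the cofactor matrix, C[i, j] = (-1)^(i+j) det(minor_ij)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T                                # transpose of the cofactor matrix

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(adjugate(A))                            # [[ 4, -2], [-3, 1]]: swap diagonal, negate off-diagonal
print(np.allclose(A @ adjugate(A), np.linalg.det(A) * np.eye(2)))   # A adj(A) = det(A) I
```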
For the system $Ax = b$ with $\det(A) \neq 0$:
$$x_i = \frac{\det(A_i)}{\det(A)}$$
where $A_i$ is $A$ with column $i$ replaced by $b$.
Cramer's rule is elegant but computationally expensive ($O(n \cdot n!)$ with cofactor expansion of each determinant). For practical computation, use Gaussian elimination ($O(n^{3})$).
For example, solve the system $2x + y = 5$, $x + 3y = 10$ using Cramer's rule.
$\det(A) = \det\begin{pmatrix} 2 & 1 \\ 1 & 3 \end{pmatrix} = 6 - 1 = 5$,
$\det(A_1) = \det\begin{pmatrix} 5 & 1 \\ 10 & 3 \end{pmatrix} = 15 - 10 = 5$, $\quad \det(A_2) = \det\begin{pmatrix} 2 & 5 \\ 1 & 10 \end{pmatrix} = 20 - 5 = 15$,
So $x = 5/5 = 1$, $y = 15/5 = 3$.
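A direct implementation of Cramer's rule (an illustrative sketch; fine for small systems, but Gaussian elimination should be preferred in practice):

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b by Cramer's rule: x_i = det(A_i) / det(A), where A_i is A
    with column i replaced by b. Requires det(A) != 0."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.det(A)
    if np.isclose(d, 0.0):
        raise ValueError("det(A) = 0: Cramer's rule does not apply")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                          # replace column i by b
        x[i] = np.linalg.det(Ai) / d
    return x

A = [[2.0, 1.0], [1.0, 3.0]]
b = [5.0, 10.0]
print(cramer_solve(A, b))                     # [1. 3.]
print(np.linalg.solve(A, b))                  # same answer via elimination
```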
The adjugate provides an explicit formula for the inverse:
$$A^{-1} = \frac{1}{\det(A)} \operatorname{adj}(A)$$
This is particularly useful for symbolic computation and theoretical analysis.
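For symbolic work, the same formula can be applied with SymPy; a brief sketch (assuming SymPy's `Matrix.adjugate` and `Matrix.det` methods):

```python
from sympy import Matrix, symbols, simplify

a, b, c, d = symbols('a b c d')
A = Matrix([[a, b], [c, d]])

inverse = A.adjugate() / A.det()      # A^{-1} = adj(A) / det(A)
print(simplify(A * inverse))          # identity matrix, provided a*d - b*c != 0
```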
Advanced techniques for computing and manipulating determinants, including special matrix types and computational tricks.
If $M$ is block diagonal with diagonal blocks $A_1, \dots, A_k$, then:
$$\det(M) = \det(A_1)\det(A_2)\cdots\det(A_k)$$
For a block matrix $M = \begin{pmatrix} A & B \\ C & D \end{pmatrix}$ with $A$ invertible:
$$\det(M) = \det(A)\,\det(D - CA^{-1}B)$$
The matrix $D - CA^{-1}B$ is called the Schur complement of $A$ in $M$.
For a large matrix with block structure, use the Schur complement to reduce the computation to smaller determinants, as in the sketch below.
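A numerical check of the factorization $\det(M) = \det(A)\det(D - CA^{-1}B)$ (illustrative NumPy sketch; the diagonal shift makes $A$ invertible in practice):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 3 * np.eye(3)   # shifted so A is (almost surely) invertible
B = rng.standard_normal((3, 2))
C = rng.standard_normal((2, 3))
D = rng.standard_normal((2, 2))

M = np.block([[A, B], [C, D]])
schur = D - C @ np.linalg.inv(A) @ B              # Schur complement of A in M
print(np.isclose(np.linalg.det(M), np.linalg.det(A) * np.linalg.det(schur)))  # True
```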
For functions $f_1, \dots, f_n$ that are $(n-1)$ times differentiable, the Wronskian is:
$$W(f_1, \dots, f_n)(x) = \det\begin{pmatrix} f_1(x) & f_2(x) & \cdots & f_n(x) \\ f_1'(x) & f_2'(x) & \cdots & f_n'(x) \\ \vdots & \vdots & & \vdots \\ f_1^{(n-1)}(x) & f_2^{(n-1)}(x) & \cdots & f_n^{(n-1)}(x) \end{pmatrix}$$
If $W \neq 0$ at some point, the functions are linearly independent.
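A symbolic sketch of the Wronskian test (illustrative; the matrix of derivatives is built directly in SymPy):

```python
from sympy import Matrix, symbols, exp, sin, cos, diff, simplify

x = symbols('x')
fs = [exp(x), sin(x), cos(x)]
n = len(fs)

# Row k holds the k-th derivatives of the functions.
W = Matrix(n, n, lambda i, j: diff(fs[j], x, i))
print(simplify(W.det()))     # -2*exp(x): nonzero, so the functions are linearly independent
```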
For $A \in \mathbb{R}^{m \times n}$ and $B \in \mathbb{R}^{n \times m}$ with $m \le n$:
$$\det(AB) = \sum_{S} \det(A_S)\,\det(B_S)$$
where the sum is over all $m$-element subsets $S$ of $\{1, \dots, n\}$, and $A_S$, $B_S$ are the corresponding $m \times m$ submatrices ($A_S$ keeps the columns of $A$ indexed by $S$, and $B_S$ keeps the rows of $B$ indexed by $S$).
For rectangular matrices $A$ and $B$ of compatible sizes, $\det(AB)$ can be computed using the Cauchy–Binet formula, as in the check below.
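A brute-force verification of Cauchy–Binet over all $m$-element subsets (illustrative sketch):

```python
import numpy as np
from itertools import combinations

m, n = 2, 4
rng = np.random.default_rng(2)
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, m))

# Sum det(A_S) * det(B_S) over all m-element subsets S of {0, ..., n-1}.
total = sum(np.linalg.det(A[:, list(S)]) * np.linalg.det(B[list(S), :])
            for S in combinations(range(n), m))
print(np.isclose(total, np.linalg.det(A @ B)))   # True
```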
Determinants have deep geometric interpretations: they measure volumes, areas, and orientation. This connection between algebra and geometry is fundamental to linear algebra.
For an $n \times n$ matrix $A$, $|\det(A)|$ is the $n$-dimensional volume of the parallelepiped spanned by the columns (or rows) of $A$.
For $A = \begin{pmatrix} a & c \\ b & d \end{pmatrix}$, the columns are $\begin{pmatrix} a \\ b \end{pmatrix}$ and $\begin{pmatrix} c \\ d \end{pmatrix}$.
The area of the parallelogram they span is $|\det(A)| = |ad - bc|$.
For a $3 \times 3$ matrix $A$, $|\det(A)|$ is the volume of the parallelepiped spanned by the three column vectors.
If $\det(A) = 0$, the vectors are coplanar (the volume is zero).
The sign of $\det(A)$ indicates orientation: $\det(A) > 0$ means the columns form a positively oriented (right-handed) system, while $\det(A) < 0$ means the orientation is reversed (left-handed).
For a linear transformation $T$ with matrix $A$, if $R$ is a region with volume $V$, then $T(R)$ has volume $|\det(A)| \cdot V$.
In $\mathbb{R}^3$, the cross product can be computed as:
$$\mathbf{u} \times \mathbf{v} = \det\begin{pmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \end{pmatrix}$$
The magnitude $\|\mathbf{u} \times \mathbf{v}\|$ equals the area of the parallelogram spanned by $\mathbf{u}$ and $\mathbf{v}$.
For a triangle with vertices $(x_1, y_1)$, $(x_2, y_2)$, $(x_3, y_3)$ in $\mathbb{R}^2$, the area is:
$$\text{Area} = \frac{1}{2}\left|\det\begin{pmatrix} x_2 - x_1 & x_3 - x_1 \\ y_2 - y_1 & y_3 - y_1 \end{pmatrix}\right|$$
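A short sketch computing parallelogram and triangle areas from $2 \times 2$ determinants (illustrative code):

```python
import numpy as np

def parallelogram_area(u, v):
    """Unsigned area of the parallelogram spanned by 2D vectors u and v."""
    return abs(np.linalg.det(np.column_stack([u, v])))

def triangle_area(p1, p2, p3):
    """Area of the triangle with vertices p1, p2, p3 (half the parallelogram area)."""
    return 0.5 * parallelogram_area(np.subtract(p2, p1), np.subtract(p3, p1))

print(parallelogram_area([3, 0], [1, 2]))        # 6.0
print(triangle_area([0, 0], [4, 0], [0, 3]))     # 6.0
```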
Determinants are used in: testing invertibility, solving linear systems via Cramer's rule, computing areas and volumes, determining orientation, evaluating cross products, the Wronskian test for linear independence of functions, and the Vandermonde determinant in polynomial interpolation.
For a 2×2 matrix, det(A) is the signed area of the parallelogram spanned by the columns. For 3×3, it's the signed volume of the parallelepiped. The sign indicates orientation (right-handed vs left-handed).
This follows from the axiomatic definition. The map A ↦ det(A) is multiplicative: det(AB) = det(A)·det(B). Geometrically, composing linear maps multiplies their volume scaling factors.
Row reduction is O(n³) and preferred for large matrices. Cofactor expansion is O(n!) but useful for symbolic computation, small matrices, or when a row/column has many zeros.
The adjugate adj(A) is the transpose of the cofactor matrix. It satisfies A·adj(A) = det(A)·I, giving the inverse formula A^{-1} = adj(A)/det(A) when det(A) ≠ 0.
For Ax = b with det(A) ≠ 0, Cramer's rule gives x_i = det(A_i)/det(A) where A_i is A with column i replaced by b. It's elegant but computationally expensive (O(n·n!)).