The determinant is a scalar-valued function on square matrices that captures essential properties of linear transformations, including invertibility and volume scaling. We present three equivalent definitions: axiomatic, permutation, and cofactor expansion.
The determinant emerged from attempts to solve systems of linear equations. Gottfried Wilhelm Leibniz (1693) first discovered determinants while studying elimination methods for solving linear systems, though he did not publish his findings.
Gabriel Cramer (1750) independently discovered determinants and published what we now call Cramer's Rule for solving linear systems. The term "determinant" was coined by Carl Friedrich Gauss in 1801.
Augustin-Louis Cauchy (1812) systematically developed determinant theory, proving the product formula det(AB) = det(A)det(B). The axiomatic approach was later formalized in the 20th century, showing that the three axioms uniquely determine the determinant.
The determinant is one of the most important quantities associated with a square matrix. It arises naturally from multiple perspectives and has profound theoretical and practical significance. Understanding why determinants matter helps appreciate the three equivalent definitions we present.
For 2×2 systems, det determines when unique solutions exist: the system ax + by = e, cx + dy = f has the unique solution x = (ed − bf)/(ad − bc), y = (af − ec)/(ad − bc) whenever ad − bc ≠ 0. The denominator ad − bc is precisely the determinant!
det(A) measures how A scales volumes: |det(A)| is the scaling factor, and the sign indicates whether orientation is preserved (+) or reversed (−).
A matrix A is invertible if and only if det(A) ≠ 0. This provides a single-number criterion for invertibility.
Eigenvalues are roots of det(A − λI) = 0. The determinant connects linear algebra to polynomial equations.
The axiomatic approach defines the determinant by specifying the properties it must satisfy. Remarkably, these three simple properties completely determine a unique function. This approach reveals the essential nature of determinants and provides the foundation for proving many properties.
The determinant is the unique function det: ℝⁿˣⁿ → ℝ satisfying:
1. Multilinearity: det is a linear function of each row separately, with the other rows held fixed.
2. Alternating: swapping two rows multiplies the determinant by −1 (equivalently, the determinant is 0 whenever two rows are equal).
3. Normalization: det(I) = 1.
These three axioms completely characterize the determinant.
The proof of uniqueness uses the fact that any matrix can be row-reduced to the identity (or a matrix with a zero row), and row operations have known effects on det.
There exists exactly one function satisfying the three axioms.
Uniqueness: Let D be any function satisfying the axioms. Any invertible matrix A can be written as a product of elementary matrices: A = E₁E₂⋯Eₖ.
By multilinearity and the alternating property, each elementary row operation has a known effect on D: swapping two rows multiplies D by −1, scaling a row by c multiplies D by c, and adding a multiple of one row to another leaves D unchanged.
Starting from D(I) = 1, these rules determine D(A) for every matrix A (if A is not invertible, row reduction produces a zero row and D(A) = 0), so any two functions satisfying the axioms must agree.
From the three axioms, we can immediately derive:
Equal rows give det = 0. Proof: Swapping the two identical rows gives the same matrix but negates det, so det = −det, hence det = 0.
A zero row gives det = 0. Proof: By homogeneity, det(..., 0, ...) = det(..., 0·α, ...) = 0·det(..., α, ...) = 0.
Row addition preserves det: replacing αᵢ by αᵢ + cαⱼ (i ≠ j) leaves det unchanged. Proof: det(..., αᵢ + cαⱼ, ..., αⱼ, ...) = det(..., αᵢ, ..., αⱼ, ...) + c·det(..., αⱼ, ..., αⱼ, ...) = det(A) + c·0 = det(A).
Compute $\det\begin{pmatrix} 2 & 6 \\ 1 & 4 \end{pmatrix}$ using only the axioms:
Step 1: Use R₁ → R₁ - 2R₂ (row addition, preserves det): $\det\begin{pmatrix} 2 & 6 \\ 1 & 4 \end{pmatrix} = \det\begin{pmatrix} 0 & -2 \\ 1 & 4 \end{pmatrix}$
Step 2: Swap rows (negates det): $= -\det\begin{pmatrix} 1 & 4 \\ 0 & -2 \end{pmatrix}$
Step 3: Factor out -2 from row 2 (homogeneity): $= -(-2)\det\begin{pmatrix} 1 & 4 \\ 0 & 1 \end{pmatrix} = 2\det\begin{pmatrix} 1 & 4 \\ 0 & 1 \end{pmatrix}$
Step 4: Use R₁ → R₁ - 4R₂: $= 2\det\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = 2\det(I) = 2$
Verification: Using ad - bc: 2(4) - 6(1) = 8 - 6 = 2 ✓
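The same bookkeeping can be automated. The sketch below (our own illustration; the name det_by_row_reduction is not from the text) computes a determinant using nothing but the effects the axioms assign to row operations, with exact rational arithmetic so no rounding intrudes:

```python
from fractions import Fraction

def det_by_row_reduction(rows):
    """Determinant via Gaussian elimination, tracking only the rules
    derived from the axioms: a swap negates det, scaling a row scales
    det, and row addition leaves det unchanged."""
    a = [[Fraction(x) for x in row] for row in rows]  # exact arithmetic
    n = len(a)
    sign = 1
    for col in range(n):
        # Find a pivot; no pivot in this column means det = 0.
        pivot = next((r for r in range(col, n) if a[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]  # swap: negate det
            sign = -sign
        for r in range(col + 1, n):
            factor = a[r][col] / a[col][col]
            # Row addition preserves det.
            a[r] = [x - factor * y for x, y in zip(a[r], a[col])]
    result = Fraction(sign)
    for i in range(n):  # triangular: det = product of diagonal entries
        result *= a[i][i]
    return result

print(det_by_row_reduction([[2, 6], [1, 4]]))  # 2, matching the example
```

Note that the function never needs a determinant formula: the three axioms (plus their triangular-matrix consequence) suffice, which is precisely the content of the uniqueness proof.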
The permutation definition gives an explicit formula for computing determinants. While not practical for large matrices (n! terms!), it provides theoretical insight and proves that a function satisfying the three axioms actually exists.
A permutation of {1, 2, ..., n} is a bijection σ: {1, ..., n} → {1, ..., n}.
We write σ = (k₁, k₂, ..., kₙ) where kᵢ = σ(i).
An inversion is a pair (i, j) with i < j but kᵢ > kⱼ.
The inversion number τ(σ) counts all inversions. The sign of σ is sgn(σ) = (−1)^τ(σ).
(a) For σ = (2, 1, 3): the only inversion is the pair k₁ = 2 > k₂ = 1.
Total: τ = 1, sign = (−1)¹ = −1 (odd permutation)
(b) For σ = (3, 2, 1): the inversions are (3, 2), (3, 1), and (2, 1).
Total: τ = 3, sign = (−1)³ = −1 (odd)
(c) Identity permutation σ = (1, 2, ..., n): τ = 0, sign = +1
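Counting inversions is mechanical, so a short Python helper (our own; the names inversion_number and sign are illustrative) can check examples like these:

```python
from itertools import combinations

def inversion_number(perm):
    """Count pairs (i, j) with i < j but perm[i] > perm[j]."""
    return sum(1 for i, j in combinations(range(len(perm)), 2)
               if perm[i] > perm[j])

def sign(perm):
    """(-1)**tau: +1 for even permutations, -1 for odd."""
    return (-1) ** inversion_number(perm)

print(inversion_number((2, 1, 3)), sign((2, 1, 3)))  # 1 -1
print(inversion_number((3, 2, 1)), sign((3, 2, 1)))  # 3 -1
print(inversion_number((1, 2, 3)), sign((1, 2, 3)))  # 0 1
```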
For an n×n matrix A = (aᵢⱼ):
$$\det(A) = \sum_{\sigma \in S_n} (-1)^{\tau(\sigma)}\, a_{1\sigma(1)} a_{2\sigma(2)} \cdots a_{n\sigma(n)}$$
where Sₙ is the symmetric group (all n! permutations of {1, ..., n}).
Each term picks exactly one entry from each row, with column indices forming a permutation. The sign is determined by the parity of the permutation: sgn(σ) = (−1)^τ(σ), so even permutations contribute with + and odd permutations with −.
Sketch: We verify the formula satisfies the three axioms.
Multilinearity: Each term is linear in each row (product of one entry from each row).
Alternating: Swapping rows i and j re-indexes the sum by composing each permutation with the transposition (i j); a transposition changes the inversion number by an odd amount, so every term flips sign.
Normalization: For I, only the identity permutation gives a non-zero term: 1·1·...·1 with sign +1.
For n = 2, there are 2! = 2 permutations: the identity (1, 2) with τ = 0 and sign +1, contributing +a₁₁a₂₂, and the transposition (2, 1) with τ = 1 and sign −1, contributing −a₁₂a₂₁.
Therefore: det(A) = a₁₁a₂₂ − a₁₂a₂₁, the familiar ad − bc ✓
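The Leibniz formula translates directly into code. The following sketch (ours, not from the original text) sums one signed product per permutation; at O(n·n!) it is only practical for small n, but it is the existence proof in executable form:

```python
from itertools import permutations

def det_leibniz(a):
    """Determinant by the permutation (Leibniz) formula."""
    n = len(a)
    total = 0
    for perm in permutations(range(n)):
        # Sign from the inversion number of this permutation.
        tau = sum(1 for i in range(n) for j in range(i + 1, n)
                  if perm[i] > perm[j])
        term = (-1) ** tau
        for row, col in enumerate(perm):
            term *= a[row][col]  # exactly one entry from each row
        total += term
    return total

print(det_leibniz([[2, 6], [1, 4]]))  # 2, agreeing with ad - bc
```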
For 3×3 matrices, there are 3! = 6 permutations:
| σ | τ(σ) | Sign | Term |
|---|---|---|---|
| (1,2,3) | 0 | + | +a₁₁a₂₂a₃₃ |
| (1,3,2) | 1 | − | −a₁₁a₂₃a₃₂ |
| (2,1,3) | 1 | − | −a₁₂a₂₁a₃₃ |
| (2,3,1) | 2 | + | +a₁₂a₂₃a₃₁ |
| (3,1,2) | 2 | + | +a₁₃a₂₁a₃₂ |
| (3,2,1) | 3 | − | −a₁₃a₂₂a₃₁ |
This gives the Sarrus formula: det = a₁₁a₂₂a₃₃ + a₁₂a₂₃a₃₁ + a₁₃a₂₁a₃₂ − a₁₃a₂₂a₃₁ − a₁₁a₂₃a₃₂ − a₁₂a₂₁a₃₃
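The Sarrus formula is equally easy to transcribe. This small sketch (our own det_sarrus, with an example matrix of our choosing) hard-codes the six signed products from the table above:

```python
def det_sarrus(a):
    """Rule of Sarrus -- valid for 3x3 matrices only."""
    return (a[0][0] * a[1][1] * a[2][2]
          + a[0][1] * a[1][2] * a[2][0]
          + a[0][2] * a[1][0] * a[2][1]
          - a[0][2] * a[1][1] * a[2][0]
          - a[0][0] * a[1][2] * a[2][1]
          - a[0][1] * a[1][0] * a[2][2])

print(det_sarrus([[1, 2, 3], [4, 0, 6], [7, 8, 9]]))  # 60
```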
For small matrices, explicit formulas are practical. These formulas are derived from the permutation definition and are essential to memorize for quick calculations.
The determinant of a 1×1 matrix is just the entry itself.
$\det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc$: product of main diagonal minus product of anti-diagonal.
Memory aid: Add products along ↘ diagonals, subtract products along ↙ diagonals. (Extend the matrix by copying first two columns to the right.)
Calculate $\det\begin{pmatrix} 2 & 1 & 3 \\ 1 & 0 & 2 \\ 4 & 1 & 5 \end{pmatrix}$: by Sarrus, det = 2·0·5 + 1·2·4 + 3·1·1 − 3·0·4 − 2·2·1 − 1·1·5 = 0 + 8 + 3 − 0 − 4 − 5 = 2.
Calculate $\det\begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}$: by Sarrus, det = 1·5·9 + 2·6·7 + 3·4·8 − 3·5·7 − 1·6·8 − 2·4·9 = 45 + 84 + 96 − 105 − 48 − 72 = 0.
The determinant is 0 because the rows are linearly dependent: Row 2 = (Row 1 + Row 3)/2.
The Rule of Sarrus does NOT generalize to 4×4 or larger matrices! For n ≥ 4, use cofactor expansion or row reduction.
The cofactor expansion provides a recursive method to compute determinants. It reduces an n×n determinant to a sum of (n-1)×(n-1) determinants. This is the most practical method for hand calculations.
For an n×n matrix A, the minor Mᵢⱼ is the determinant of the (n−1)×(n−1) matrix obtained by deleting row i and column j, and the cofactor is Aᵢⱼ = (−1)^(i+j)·Mᵢⱼ.
The signs (−1)^(i+j) follow a checkerboard pattern:
$$\begin{pmatrix} + & - & + & \cdots \\ - & + & - & \cdots \\ + & - & + & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$
Row expansion (along row i): $\det(A) = a_{i1}A_{i1} + a_{i2}A_{i2} + \cdots + a_{in}A_{in}$
Column expansion (along column j): $\det(A) = a_{1j}A_{1j} + a_{2j}A_{2j} + \cdots + a_{nj}A_{nj}$
Calculate $\det\begin{pmatrix} 1 & 2 & 3 \\ 4 & 0 & 6 \\ 7 & 8 & 9 \end{pmatrix}$ by expanding along row 2 (which has a zero):
Since a₂₂ = 0, we only need two cofactors: det = 4·A₂₁ + 6·A₂₃ = $-4\det\begin{pmatrix} 2 & 3 \\ 8 & 9 \end{pmatrix} - 6\det\begin{pmatrix} 1 & 2 \\ 7 & 8 \end{pmatrix}$ = −4(18 − 24) − 6(8 − 14) = 24 + 36 = 60.
Calculate $\det\begin{pmatrix} 1 & 2 & 3 & 4 \\ 5 & 6 & 7 & 8 \\ 0 & 0 & 2 & 0 \\ 1 & 0 & 0 & 1 \end{pmatrix}$ by expanding along row 3:
Row 3 has only one non-zero entry at position (3,3), so: det = 2·(−1)^(3+3)·$\det\begin{pmatrix} 1 & 2 & 4 \\ 5 & 6 & 8 \\ 1 & 0 & 1 \end{pmatrix}$
Expand this 3×3 along column 2: $-2\det\begin{pmatrix} 5 & 8 \\ 1 & 1 \end{pmatrix} + 6\det\begin{pmatrix} 1 & 4 \\ 1 & 1 \end{pmatrix}$ = −2(5 − 8) + 6(1 − 4) = 6 − 18 = −12, so the original determinant is 2·(−12) = −24.
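The recursion maps naturally onto code. Here is a compact Python sketch (our own det_cofactor) that expands along the first row and skips zero entries; a more careful version would expand along whichever row or column has the most zeros, as in the examples above:

```python
def det_cofactor(a):
    """Laplace (cofactor) expansion along row 0."""
    n = len(a)
    if n == 1:
        return a[0][0]
    total = 0
    for j in range(n):
        if a[0][j] == 0:
            continue  # a zero entry kills its whole term
        minor = [row[:j] + row[j + 1:] for row in a[1:]]  # delete row 0, col j
        total += (-1) ** j * a[0][j] * det_cofactor(minor)  # sign (-1)**(0+j)
    return total

print(det_cofactor([[1, 2, 3], [4, 0, 6], [7, 8, 9]]))  # 60, as above
```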
The determinant has a beautiful geometric meaning: it measures how a linear transformation scales volumes and whether it preserves or reverses orientation.
Let A be an n×n matrix with rows α₁, ..., αₙ. Then |det(A)| is the n-dimensional volume of the parallelepiped spanned by α₁, ..., αₙ, and the sign of det(A) records the orientation of that ordered set of vectors.
In 2D: vectors (a, b) and (c, d) span a parallelogram with area |ad − bc|.
In 3D: vectors u, v, w span a parallelepiped whose volume is the absolute value of the determinant of the matrix with rows u, v, w (the scalar triple product |u · (v × w)|).
If T is a linear transformation with matrix A, then for any measurable region Ω: vol(T(Ω)) = |det(A)|·vol(Ω).
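A quick numeric check of the 2D case, with vectors of our own choosing:

```python
# The parallelogram spanned by u = (3, 1) and v = (1, 2) has area
# equal to |det| of the 2x2 matrix with rows u and v.
u, v = (3, 1), (1, 2)
print(abs(u[0] * v[1] - u[1] * v[0]))  # 5
```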
Determinants have several counterintuitive properties. Understanding these common errors will help you avoid them in calculations and proofs.
Wrong: det(cA) = c·det(A)
Correct: det(cA) = cⁿ·det(A) for an n×n matrix A
Why? Each of the n rows gets multiplied by c, and det is linear in each row. Example: For 3×3, det(2A) = 2³·det(A) = 8·det(A).
Wrong: det(A + B) = det(A) + det(B)
Reality: No simple formula exists for det(A + B)
Counterexample: Let A = B = I₂. Then det(A) = det(B) = 1, but det(A + B) = det(2I) = 4 ≠ 2.
Wrong: Forgetting the sign factor (−1)^(i+j) in the cofactor
Correct: Aᵢⱼ = (−1)^(i+j)·Mᵢⱼ
Tip: Use the checkerboard pattern. Position (1,1) has +, (1,2) has −, (2,1) has −, (2,2) has +, etc.
Wrong: Extending the "diagonal rule" to 4×4 matrices
Reality: Sarrus rule only works for 3×3
For larger matrices: Use cofactor expansion or row reduction.
Correct effects: swapping two rows multiplies det by −1; scaling a row by c multiplies det by c; adding a multiple of one row to another leaves det unchanged.
Common error: Forgetting to track sign changes when using row reduction.
Axiomatic, permutation, and cofactor—all equivalent.
The expansion has n! terms, one per permutation.
|det| = volume scaling, sign = orientation.
A invertible ⟺ det(A) ≠ 0.
Work through these problems to solidify your understanding. Solutions are provided below.
Problem 1 (Easy)
Calculate $\det\begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}$ using cofactor expansion.
Problem 2 (Medium)
Prove that det(Aᵀ) = det(A) using the permutation definition.
Problem 3 (Easy)
Find the inversion number of σ = (4, 2, 3, 1) and determine if it's odd or even.
Problem 4 (Easy)
If det(A) = d, what is det(cA) for a 4×4 matrix A?
Problem 5 (Medium)
Calculate $\det\begin{pmatrix} 2 & 1 & 0 & 0 \\ 3 & 4 & 0 & 0 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 5 & 6 \end{pmatrix}$.
Problem 6 (Hard)
Prove: If A is n×n and det(A) = 0, then there exists a non-zero vector x such that Ax = 0.
Solution 1
Expand along row 1: det = 1·A₁₁ + 2·A₁₂ + 3·A₁₃ = 1(45 − 48) − 2(36 − 42) + 3(32 − 35) = −3 + 12 − 9 = 0
The determinant is 0 because the rows are linearly dependent (row 2 = average of rows 1 and 3).
Solution 2
Using the Leibniz formula: $\det(A^T) = \sum_{\sigma \in S_n} (-1)^{\tau(\sigma)}\, a_{\sigma(1)1} a_{\sigma(2)2} \cdots a_{\sigma(n)n}$, where each product collects one entry from each column of A.
The products are the same (just reordered), and the bijection σ → σ⁻¹ shows the sums are equal since τ(σ) = τ(σ⁻¹).
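As a sanity check of this claim (assuming the det_leibniz sketch from the permutation section is in scope):

```python
# det(A) = det(A^T) on a sample matrix; transpose via zip.
m = [[1, 2, 3], [4, 0, 6], [7, 8, 9]]
mt = [list(col) for col in zip(*m)]
print(det_leibniz(m), det_leibniz(mt))  # 60 60
```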
Solution 3
For σ = (4, 2, 3, 1), count inversions: (4, 2), (4, 3), (4, 1), (2, 1), (3, 1).
Total: τ = 5 (odd), so sign = (−1)⁵ = −1. This is an odd permutation.
Solution 4
For an n×n matrix: det(cA) = cⁿ·det(A). With n = 4: det(cA) = c⁴·det(A) = c⁴d.
Solution 5
This is a block diagonal matrix! For block diagonal matrices the determinant is the product of the block determinants: det = $\det\begin{pmatrix} 2 & 1 \\ 3 & 4 \end{pmatrix} \cdot \det\begin{pmatrix} 1 & 2 \\ 5 & 6 \end{pmatrix}$ = (8 − 3)(6 − 10) = 5·(−4) = −20.
Solution 6
Proof: If det(A) = 0, then A is not invertible (by the criterion det(A) ≠ 0 ⟺ A invertible), so the columns of A are linearly dependent.
This means there exist scalars c₁, ..., cₙ (not all zero) such that c₁a₁ + ... + cₙaₙ = 0, where aⱼ are the columns of A.
Let x = (c₁, ..., cₙ)ᵀ. Then Ax = c₁a₁ + ... + cₙaₙ = 0, and x ≠ 0. ∎
Certain types of matrices have determinants that can be computed immediately without expansion. Recognizing these patterns greatly simplifies calculations.
For upper or lower triangular matrices, the determinant is the product of the diagonal entries: det(A) = a₁₁a₂₂⋯aₙₙ.
Diagonal matrices are a special case of triangular: det(diag(d₁, ..., dₙ)) = d₁d₂⋯dₙ.
For block diagonal matrices, the determinant factors over the blocks: $\det\begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix} = \det(A)\det(B)$.
The same formula applies to block triangular matrices: $\det\begin{pmatrix} A & C \\ 0 & B \end{pmatrix} = \det(A)\det(B)$.
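These shortcuts are easy to confirm numerically. A self-contained Python sketch (the example matrix and the tiny det helper are our own):

```python
def det(a):
    """Recursive cofactor expansion along row 0, as in the earlier sketch."""
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j] * det([r[:j] + r[j + 1:] for r in a[1:]])
               for j in range(len(a)))

m = [[2, 1, 0, 0],
     [3, 4, 0, 0],
     [0, 0, 1, 2],
     [0, 0, 5, 6]]
top, bottom = [[2, 1], [3, 4]], [[1, 2], [5, 6]]
print(det(m))                  # -20
print(det(top) * det(bottom))  # 5 * (-4) = -20
```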
Leibniz (1693): First introduced determinants while studying systems of linear equations. He discovered the n! term expansion but did not publish his findings.
Cramer (1750): Published the famous rule bearing his name for solving linear systems using determinants. His work brought determinants to wider attention.
Vandermonde (1771): First systematic treatment of determinants as independent objects. The Vandermonde determinant is named after him.
Laplace (1772): Developed the cofactor expansion method that bears his name, generalizing the recursive approach to computing determinants.
Cauchy (1812): Used the word "déterminant" in its modern sense and established many fundamental properties including the product formula det(AB) = det(A)det(B).
Jacobi (1841): Introduced the Jacobian determinant for change of variables in multivariable calculus, connecting determinants to differential geometry.
Sylvester (1850): Contributed to the theory of invariants and developed the terminology of minors and cofactors.
| Definition | Key Feature |
|---|---|
| Axiomatic | Multilinear, alternating, det(I) = 1 |
| Permutation | det(A) = Σ_σ (−1)^τ(σ) a₁σ(1)⋯aₙσ(n) |
| Cofactor | det(A) = Σⱼ aᵢⱼAᵢⱼ (Laplace expansion) |
| Geometric | Signed volume scaling factor |
| Formula | Expression |
|---|---|
| 2×2 det | ad − bc |
| Cofactor | Aᵢⱼ = (−1)^(i+j)·Mᵢⱼ |
| Scalar multiple | det(cA) = cⁿ·det(A) |
| Triangular | Product of diagonal entries |
| Block diagonal | Product of block determinants |
| Operation | Effect on det |
|---|---|
| Swap rows | Multiply by −1 |
| Scale row by c | Multiply by c |
| Add multiple of row | No change |
Now that you understand the definition, the next topics explore properties of determinants (the product formula, det(Aᵀ) = det(A)), Cramer's rule for linear systems, and eigenvalues via the characteristic polynomial det(A − λI) = 0.
**Why are there three different definitions of the determinant?** Each definition serves different purposes: the axiomatic definition (multilinear, alternating, normalized) reveals fundamental properties and proves uniqueness; the permutation (Leibniz) formula gives an explicit computational formula; and the cofactor expansion provides a recursive algorithm. All three are mathematically equivalent—they define the same function on matrices.
**What does the determinant mean geometrically?** The determinant measures the signed volume scaling factor. If A is applied to the unit n-cube, |det(A)| is the volume of the resulting parallelepiped. The sign indicates orientation: positive preserves orientation, negative reverses it. In 2D, |det| gives the area of a parallelogram; in 3D, the volume of a parallelepiped.
**Why is the normalization det(I) = 1 needed?** The normalization det(I) = 1 makes the determinant unique. Given only multilinearity and the alternating property, we could multiply by any constant and still satisfy those axioms. The requirement det(I) = 1 pins down exactly one function. It also makes geometric sense: the identity transformation preserves volume.
**What does det(A) = 0 tell us?** A matrix A is invertible if and only if det(A) ≠ 0. Geometrically, det = 0 means the transformation collapses space to a lower dimension (zero volume). Algebraically, det = 0 means the columns are linearly dependent, so A has non-trivial kernel and cannot be injective.
**What is the best way to compute a 3×3 determinant by hand?** Method 1: Rule of Sarrus—add products of main diagonals, subtract products of anti-diagonals. Method 2: Cofactor expansion along a row/column with zeros. Method 3: Row reduce to upper triangular form (product of diagonal entries). Always look for zeros to simplify cofactor expansion!
**What exactly is an inversion?** For permutation σ = (k₁, k₂, ..., kₙ), an inversion is a pair of positions (i, j) where i < j but kᵢ > kⱼ (a 'larger number appears before a smaller one'). The inversion number τ(σ) counts all such pairs. An even inversion number gives sign +1; an odd one gives −1.
**Why does swapping two rows change the sign?** This is the alternating property, fundamental to the axiomatic definition. Geometrically, swapping two basis vectors reverses the orientation of the coordinate system. Algebraically, each row swap corresponds to composing with a transposition, which has sign -1 in the symmetric group.
**Is the determinant really unchanged by row addition?** Yes! Adding a multiple of row j to row i (with i ≠ j) preserves the determinant. This follows from multilinearity: det(..., αᵢ + cαⱼ, ..., αⱼ, ...) = det(..., αᵢ, ..., αⱼ, ...) + c·det(..., αⱼ, ..., αⱼ, ...) = det(A) + c·0 = det(A).
**What is the difference between a minor and a cofactor?** The minor Mᵢⱼ is the (n-1)×(n-1) determinant obtained by deleting row i and column j. The cofactor Aᵢⱼ = (-1)^(i+j) × Mᵢⱼ includes a sign factor. The signs follow a checkerboard pattern: + - + - ... for the first row, - + - + ... for the second, etc.
**Why is det(AB) = det(A)·det(B)?** This multiplicativity follows from the axiomatic definition. The map B ↦ det(AB) is multilinear and alternating in the columns of B, with det(AI) = det(A). By uniqueness, it must equal det(A)·det(B). Geometrically: composing transformations multiplies their volume scaling factors.
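A numeric spot-check of this multiplicativity, with 2×2 matrices of our own choosing:

```python
def det2(m):
    """Determinant of a 2x2 matrix [[a, b], [c, d]]."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def matmul2(a, b):
    """Product of two 2x2 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a, b = [[1, 2], [3, 4]], [[0, 1], [5, 2]]
print(det2(matmul2(a, b)))  # 10
print(det2(a) * det2(b))    # (-2) * (-5) = 10
```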