Determinants possess remarkable algebraic properties that make them indispensable tools for matrix analysis. These properties—multiplicativity, behavior under transpose, and effects of elementary operations—form the computational backbone of linear algebra.
The most fundamental property of determinants is multiplicativity: the determinant of a product equals the product of determinants. This connects matrix algebra to scalar algebra and has geometric meaning—composing transformations multiplies volume scaling factors.
For any $n \times n$ matrices $A$ and $B$ over a field $F$: $\det(AB) = \det(A)\det(B)$.
Method 1 (Axiomatic): Fix the matrix $A$ and define $f(B) = \det(AB)$.
We verify that $f$ is multilinear and alternating in the rows of $B$, with $f(I) = \det(A)$.
By the uniqueness of alternating multilinear forms, $f(B) = \det(A)\det(B)$.
Method 2 (Elementary matrices): If $A$ is invertible, write $A = E_1 E_2 \cdots E_k$ as a product of elementary matrices. Since $\det(EB) = \det(E)\det(B)$ for each elementary matrix $E$, by induction $\det(AB) = \det(E_1)\cdots\det(E_k)\det(B) = \det(A)\det(B)$.
If $A$ is not invertible, then $AB$ is also not invertible, so both sides equal $0$.
Let and .
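Multiplicativity is easy to sanity check numerically. A minimal pure-Python sketch (the $2 \times 2$ matrices below are illustrative, not the ones from the original example):

```python
def det2(m):
    # determinant of a 2x2 matrix [[a, b], [c, d]] is ad - bc
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def matmul(a, b):
    # standard matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

A = [[1, 2], [3, 4]]   # det(A) = -2
B = [[2, 0], [1, 3]]   # det(B) = 6
assert det2(matmul(A, B)) == det2(A) * det2(B)   # -12 on both sides
```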
For any $n \times n$ matrix $A$ and positive integer $k$: $\det(A^k) = (\det A)^k$.
By induction using multiplicativity: $\det(A^k) = \det(A)\det(A^{k-1}) = \det(A)(\det A)^{k-1} = (\det A)^k$.
If $S$ and $T$ are linear transformations with matrices $A$ and $B$, their composition has matrix $AB$, and $\det(AB) = \det(A)\det(B)$: the volume scaling factors multiply.
The transpose exchanges rows and columns. Remarkably, this preserves the determinant, establishing a powerful duality: every row property has a column counterpart.
For any $n \times n$ matrix $A$: $\det(A^T) = \det(A)$.
Using the permutation formula: We have $(A^T)_{ij} = A_{ji}$, so: $\det(A^T) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^{n} a_{\sigma(i),\,i}$.
The map $\sigma \mapsto \sigma^{-1}$ is a bijection on $S_n$ with $\operatorname{sgn}(\sigma^{-1}) = \operatorname{sgn}(\sigma)$.
The products are identical (just reordered), so the sums are equal.
Since $\det(A^T) = \det(A)$, all row properties have column versions:
If $U$ is upper triangular, then $U^T$ is lower triangular, and $\det(U) = \det(U^T) = u_{11}u_{22}\cdots u_{nn}$: both equal the product of the diagonal entries.
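The transpose identity can be checked directly with a small cofactor-expansion helper (the matrix here is an illustrative, non-symmetric choice):

```python
def det3(m):
    # cofactor expansion along the first row of a 3x3 matrix
    a, b, c = m[0]
    return (a * (m[1][1]*m[2][2] - m[1][2]*m[2][1])
            - b * (m[1][0]*m[2][2] - m[1][2]*m[2][0])
            + c * (m[1][0]*m[2][1] - m[1][1]*m[2][0]))

A = [[1, 2, 3], [0, 4, 5], [1, 0, 6]]        # deliberately non-symmetric
At = [list(col) for col in zip(*A)]          # transpose: rows become columns
assert det3(A) == det3(At)                   # both equal 22
```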
The connection between determinants and invertibility is fundamental. A matrix is invertible if and only if its determinant is nonzero—a simple scalar test for an otherwise complex property.
If $A$ is an invertible $n \times n$ matrix: $\det(A^{-1}) = \dfrac{1}{\det(A)}$.
Apply multiplicativity to $AA^{-1} = I$: $\det(A)\det(A^{-1}) = \det(I) = 1$.
Since $A$ is invertible, $\det(A) \neq 0$, so $\det(A^{-1}) = 1/\det(A)$.
For any $n \times n$ matrix $A$: $A$ is invertible $\iff \det(A) \neq 0$.
(⟹) If $A$ is invertible, write $A = E_1 E_2 \cdots E_k$ as a product of elementary matrices. Each $\det(E_i) \neq 0$, so $\det(A) = \det(E_1)\cdots\det(E_k) \neq 0$.
(⟸) If $A$ is not invertible, its columns are linearly dependent, so some column is a linear combination of the others. By multilinearity and the alternating property, $\det(A) = 0$.
Is $A$ invertible?
Since $\det(A) = 0$, $A$ is not invertible.
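The determinant gives a one-number invertibility test, and for an invertible $2 \times 2$ matrix the adjugate formula produces the inverse explicitly. A minimal sketch with illustrative matrices:

```python
from fractions import Fraction

def det2(m):
    # determinant of [[a, b], [c, d]] is ad - bc
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

A = [[1, 2], [2, 4]]        # second row is twice the first: dependent rows
B = [[1, 2], [3, 4]]
assert det2(A) == 0         # det = 0  =>  A is singular (not invertible)
assert det2(B) == -2        # det != 0 =>  B is invertible

# 2x2 adjugate formula: B^{-1} = (1/det) * [[d, -b], [-c, a]]
d = Fraction(det2(B))
Binv = [[ B[1][1] / d, -B[0][1] / d],
        [-B[1][0] / d,  B[0][0] / d]]
# spot-check one entry of Binv @ B = I
assert Binv[0][0] * B[0][0] + Binv[0][1] * B[1][0] == 1
```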
For an $n \times n$ matrix $A$, these are equivalent: $A$ is invertible; $\det(A) \neq 0$; the columns of $A$ are linearly independent; the rows of $A$ are linearly independent; $Ax = 0$ has only the trivial solution.
Understanding how elementary operations affect determinants is crucial for computation. These effects follow from the axiomatic definition and enable efficient algorithms.
Let $A$ be $n \times n$ and let $B$ result from a single row operation on $A$:

| Operation | Notation | Effect |
|---|---|---|
| Swap rows $i$, $j$ | $R_i \leftrightarrow R_j$ | $\det(B) = -\det(A)$ |
| Scale row $i$ by $c$ | $R_i \to cR_i$ | $\det(B) = c\,\det(A)$ |
| Add $k$ × row $j$ to row $i$ | $R_i \to R_i + kR_j$ | $\det(B) = \det(A)$ |
Type I (Swap): Follows from the alternating property.
Type II (Scale): By multilinearity: $\det(\ldots, cR_i, \ldots) = c\,\det(\ldots, R_i, \ldots)$.
Type III (Add): $\det(\ldots, R_i + kR_j, \ldots) = \det(\ldots, R_i, \ldots) + k\,\det(\ldots, R_j, \ldots) = \det(A) + k \cdot 0$, since the second matrix has row $R_j$ appearing twice (alternating property).
Compute .
Step 1: Factor 2 from row 1:
Step 2:
Step 3:
Step 4: Upper triangular:
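The strategy above — track sign flips from swaps and factors from scalings, reduce to triangular form, multiply the diagonal — can be sketched as a self-contained function using exact fractions (the test matrix is illustrative, not the one from the example):

```python
from fractions import Fraction

def det_by_elimination(m):
    # Gaussian elimination: swaps flip the sign, Type III operations leave
    # the determinant unchanged; finish with the diagonal product.
    a = [[Fraction(x) for x in row] for row in m]
    n, sign = len(a), 1
    for col in range(n):
        pivot = next((r for r in range(col, n) if a[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)              # no pivot in this column => det = 0
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            sign = -sign                    # each row swap flips the sign
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    prod = Fraction(sign)
    for i in range(n):
        prod *= a[i][i]                     # triangular: product of diagonal
    return prod

A = [[2, 4, 2], [1, 3, 1], [1, 1, 3]]       # illustrative 3x3 matrix
assert det_by_elimination(A) == 4
```

This is the $O(n^3)$ algorithm the section describes, as opposed to the $O(n!)$ permutation formula.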
By the transpose property, column operations have identical effects on the determinant.
Elementary matrices are obtained by performing one row operation on the identity. Their determinants follow directly from the effect on det(I) = 1.
The three types have determinants: $\det(E_{ij}) = -1$ (swap), $\det(E_i(c)) = c$ (scale), $\det(E_{ij}(k)) = 1$ (add).
Then $\det(EA) = \det(E)\det(A)$, confirming the row operation effects.
Right multiplication performs column operations: $\det(AE) = \det(A)\det(E)$.
Same determinant effects as the corresponding row operations.
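Each elementary matrix is built by applying one row operation to the identity, so its determinant can be verified directly (a pure-Python sketch; the size and constants are illustrative):

```python
def det3(m):
    # cofactor expansion along the first row of a 3x3 matrix
    a, b, c = m[0]
    return (a * (m[1][1]*m[2][2] - m[1][2]*m[2][1])
            - b * (m[1][0]*m[2][2] - m[1][2]*m[2][0])
            + c * (m[1][0]*m[2][1] - m[1][1]*m[2][0]))

def identity(n):
    return [[int(i == j) for j in range(n)] for i in range(n)]

E_swap = identity(3)
E_swap[0], E_swap[1] = E_swap[1], E_swap[0]   # swap rows 1 and 2
E_scale = identity(3)
E_scale[1][1] = 5                             # scale row 2 by c = 5
E_add = identity(3)
E_add[2][0] = 7                               # add 7 * row 1 to row 3

assert det3(E_swap) == -1
assert det3(E_scale) == 5
assert det3(E_add) == 1
```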
If , we can row reduce:
:
:
So
Verify:
For an $n \times n$ matrix $A$ and scalar $k$: $\det(kA) = k^n \det(A)$.
Writing $kA$ with rows $kR_1, kR_2, \ldots, kR_n$:
Each of the $n$ rows contributes a factor of $k$ by multilinearity, so $\det(kA) = k^n\det(A)$.
If $A$ is $3 \times 3$ with $\det(A) = 5$: $\det(2A) = 2^3 \cdot 5 = 40$.
Warning: $\det(kA) \neq k\,\det(A)$ unless $n = 1$!
For an $n \times n$ matrix: $\det(-A) = (-1)^n \det(A)$.
Example: For a 3×3 matrix with $\det(A) = 5$: $\det(-A) = (-1)^3 \cdot 5 = -5$.
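A quick check that scaling the whole matrix multiplies the determinant by $k^n$, not $k$ (the $3 \times 3$ matrix below is illustrative):

```python
def det3(m):
    # cofactor expansion along the first row of a 3x3 matrix
    a, b, c = m[0]
    return (a * (m[1][1]*m[2][2] - m[1][2]*m[2][1])
            - b * (m[1][0]*m[2][2] - m[1][2]*m[2][0])
            + c * (m[1][0]*m[2][1] - m[1][1]*m[2][0]))

A = [[1, 2, 0], [0, 3, 1], [2, 0, 1]]        # det(A) = 7
kA = [[2 * x for x in row] for row in A]     # every one of the n = 3 rows is scaled
assert det3(kA) == 2**3 * det3(A)            # k^n factor: 56 = 8 * 7
assert det3(kA) != 2 * det3(A)               # NOT a single factor of k
```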
If $B = P^{-1}AP$ (similar matrices), then: $\det(B) = \det(P^{-1})\det(A)\det(P) = \frac{1}{\det(P)}\det(A)\det(P) = \det(A)$.
Conclusion: Similar matrices have the same determinant.
For upper/lower triangular or diagonal matrices: $\det(A) = a_{11}a_{22}\cdots a_{nn}$, the product of the diagonal entries.
For block triangular $M = \begin{pmatrix} A & C \\ O & B \end{pmatrix}$ ($A$ is $k \times k$, $B$ is $r \times r$): $\det(M) = \det(A)\det(B)$.
Expand along the first column/row repeatedly. The block of zeros means only one cofactor is nonzero in each expansion step.
With and :
$\det\begin{pmatrix} O & A \\ B & O \end{pmatrix} = (-1)^{kr}\det(A)\det(B)$, where $A$ is $k \times k$ and $B$ is $r \times r$ (the off-diagonal blocks must be square for their determinants to be defined).
For block matrix $M = \begin{pmatrix} A & B \\ C & D \end{pmatrix}$ with $A$ invertible: $\det(M) = \det(A)\,\det(D - CA^{-1}B)$.
$D - CA^{-1}B$ is called the Schur complement of $A$ in $M$.
If $D$ is invertible: $\det(M) = \det(D)\,\det(A - BD^{-1}C)$.
Block LU decomposition (block Gaussian elimination): $\begin{pmatrix} A & B \\ C & D \end{pmatrix} = \begin{pmatrix} I & O \\ CA^{-1} & I \end{pmatrix}\begin{pmatrix} A & B \\ O & D - CA^{-1}B \end{pmatrix}$.
The left factor has determinant $1$ (block lower triangular with $I$ blocks on the diagonal), so: $\det(M) = \det(A)\,\det(D - CA^{-1}B)$.
Compute $\det(M)$ using the Schur complement.
Take , , , .
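The Schur complement identity can be verified numerically. The $2 \times 2$ blocks below are illustrative choices (not the values from the original example), picked so that $A$ is invertible:

```python
from fractions import Fraction

def det(m):
    # Laplace expansion along the first row (fine for small matrices)
    if len(m) == 1:
        return m[0][0]
    return sum((-1)**j * m[0][j] * det([r[:j] + r[j+1:] for r in m[1:]])
               for j in range(len(m)))

A = [[2, 1], [1, 1]]                 # det(A) = 1, so A is invertible
B = [[1, 0], [0, 1]]
C = [[0, 1], [1, 0]]
D = [[3, 0], [0, 3]]

# assemble the 4x4 block matrix M = [[A, B], [C, D]]
M = [A[0] + B[0], A[1] + B[1], C[0] + D[0], C[1] + D[1]]

# Schur complement S = D - C A^{-1} B, with A^{-1} from the 2x2 adjugate
dA = det(A)
Ainv = [[Fraction(A[1][1], dA), Fraction(-A[0][1], dA)],
        [Fraction(-A[1][0], dA), Fraction(A[0][0], dA)]]
CAinv = [[sum(C[i][k] * Ainv[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
CAinvB = [[sum(CAinv[i][k] * B[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]
S = [[D[i][j] - CAinvB[i][j] for j in range(2)] for i in range(2)]

assert det(M) == det(A) * det(S)     # det(M) = det(A) det(D - C A^{-1} B)
```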
A classic special determinant is the Vandermonde determinant: $\det\begin{pmatrix} 1 & x_1 & \cdots & x_1^{n-1} \\ 1 & x_2 & \cdots & x_2^{n-1} \\ \vdots & \vdots & & \vdots \\ 1 & x_n & \cdots & x_n^{n-1} \end{pmatrix} = \prod_{1 \le i < j \le n} (x_j - x_i)$.
This is nonzero iff all $x_i$ are distinct, which is key in polynomial interpolation.
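The product formula can be checked against a direct cofactor expansion (the nodes are an illustrative choice):

```python
from itertools import combinations

def det(m):
    # Laplace expansion along the first row (fine for small matrices)
    if len(m) == 1:
        return m[0][0]
    return sum((-1)**j * m[0][j] * det([r[:j] + r[j+1:] for r in m[1:]])
               for j in range(len(m)))

xs = [1, 2, 4]                                    # distinct nodes
V = [[x**j for j in range(len(xs))] for x in xs]  # row i = (1, x_i, x_i^2, ...)

prod = 1
for (i, xi), (j, xj) in combinations(enumerate(xs), 2):
    prod *= (xj - xi)                             # product over i < j of (x_j - x_i)

assert det(V) == prod                             # (2-1)(4-1)(4-2) = 6
```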
Determinants have counterintuitive properties. Here are critical errors to avoid.
Wrong! Determinant is NOT additive.
Example: A = B = I₂. det(A) = det(B) = 1, but det(A+B) = det(2I) = 4 ≠ 2.
Wrong! It's $\det(kA) = k^n\det(A)$ for n×n matrices.
Example: det(2I₃) = 2³ = 8, not 2.
Each row swap multiplies det by -1.
Tip: Count swaps during row reduction and adjust sign at the end.
Scaling ONE row by c → multiply det by c
Scaling WHOLE matrix by c → multiply det by c^n
True! This one is actually correct: det(AB) = det(A)det(B) = det(B)det(A) = det(BA).
Note: Even though AB ≠ BA in general, their determinants are always equal.
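Both points — non-additivity and det(AB) = det(BA) — can be checked directly (the non-commuting pair below is an illustrative choice):

```python
def det2(m):
    # determinant of [[a, b], [c, d]] is ad - bc
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2))
             for j in range(2)] for i in range(2)]

# NOT additive: A = B = I gives det(A + B) = det(2I) = 4, not 1 + 1 = 2
assert det2([[2, 0], [0, 2]]) == 4

# det(AB) == det(BA) even when AB != BA
A, B = [[1, 1], [0, 1]], [[1, 0], [1, 1]]
assert matmul(A, B) != matmul(B, A)                      # they do not commute
assert det2(matmul(A, B)) == det2(matmul(B, A)) == 1     # same determinant
```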
det(AB) = det(A)det(B)
det(A^T) = det(A)
A invertible ⟺ det(A) ≠ 0
det(kA) = k^n det(A)
Problem 1
If and , find .
Problem 2
Let be 3×3 with . Find and .
Problem 3
Compute .
Problem 4
Prove: If $A^2 = A$ (idempotent), then $\det(A) = 0$ or $\det(A) = 1$.
Problem 5
If $Q$ is orthogonal ($Q^T Q = I$), prove $\det(Q) = \pm 1$.
Problem 6
Compute .
Solution 1
Solution 2
Solution 3
Lower triangular, so det = product of diagonal entries.
Solution 4
From $A^2 = A$: $\det(A^2) = \det(A)$.
So $(\det A)^2 = \det A$, meaning $\det A(\det A - 1) = 0$.
Therefore $\det(A) = 0$ or $\det(A) = 1$.
Solution 5
From $Q^T Q = I$: $\det(Q^T)\det(Q) = \det(I) = 1$, so $(\det Q)^2 = 1$.
Therefore $\det(Q) = \pm 1$.
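As a concrete check, the two basic $2 \times 2$ orthogonal matrices — a rotation and a reflection — realize both signs:

```python
def det2(m):
    # determinant of [[a, b], [c, d]] is ad - bc
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2))
             for j in range(2)] for i in range(2)]

rotation   = [[0, -1], [1, 0]]    # 90-degree rotation: preserves orientation
reflection = [[1, 0], [0, -1]]    # reflection across the x-axis: reverses it

for Q, expected in [(rotation, 1), (reflection, -1)]:
    Qt = [list(col) for col in zip(*Q)]
    assert matmul(Qt, Q) == [[1, 0], [0, 1]]   # Q^T Q = I, so Q is orthogonal
    assert det2(Q) == expected                 # det(Q) is +1 or -1
```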
Solution 6
Use row/column operations. Subtract row 1 from rows 2, 3:
Expand along column 3:
| Property | Formula |
|---|---|
| Multiplicativity | $\det(AB) = \det(A)\det(B)$ |
| Transpose | $\det(A^T) = \det(A)$ |
| Inverse | $\det(A^{-1}) = 1/\det(A)$ |
| Power | $\det(A^k) = (\det A)^k$ |
| Scalar multiple | $\det(kA) = k^n \det(A)$ |
| Invertibility | $A$ invertible $\iff \det(A) \neq 0$ |
| Type | Matrix | Determinant |
|---|---|---|
| Row swap | $E_{ij}$ | $-1$ |
| Row scale | $E_i(c)$ | $c$ |
| Row addition | $E_{ij}(k)$ | $1$ |
| Type | Formula |
|---|---|
| Triangular | Product of diagonal: $\det(A) = a_{11}a_{22}\cdots a_{nn}$ |
| Block triangular | $\det\begin{pmatrix} A & C \\ O & B \end{pmatrix} = \det(A)\det(B)$ |
| Anti-block diagonal | $\det\begin{pmatrix} O & A \\ B & O \end{pmatrix} = (-1)^{kr}\det(A)\det(B)$ |
| Schur complement | $\det\begin{pmatrix} A & B \\ C & D \end{pmatrix} = \det(A)\det(D - CA^{-1}B)$ |
| Similar matrices | $\det(P^{-1}AP) = \det(A)$ |
| Operation | Effect on det |
|---|---|
| Swap two rows | Multiply by $-1$ |
| Scale a row by $c$ | Multiply by $c$ |
| Add a multiple of one row to another | No change |
Cauchy (1812): Proved the multiplicative property det(AB) = det(A)det(B), establishing determinants as a proper algebraic theory beyond just computational formulas. His work laid the foundation for understanding determinants as algebraic objects with consistent rules.
Jacobi (1841): Systematically developed properties of determinants and introduced the Jacobian determinant, connecting linear algebra to multivariable calculus. The Jacobian measures how volume changes under coordinate transformations.
Sylvester (1851): Coined the term "matrix" and developed the theory of matrix determinants alongside Arthur Cayley. Introduced the concept of minors and contributed to the understanding of block matrices.
Weierstrass: Contributed to the axiomatic understanding, showing that the three axioms (multilinear, alternating, normalized) uniquely determine the determinant. This abstract approach became foundational for modern linear algebra.
Modern view: Determinant is now understood as the unique alternating multilinear form on n vectors that equals 1 on the standard basis—connecting to exterior algebra and differential forms. In this view, det is a volume form.
Determinant properties are used throughout mathematics and applications. Here are key areas where these properties are essential.
Solve $Ax = b$ using determinants (Cramer's rule): $x_i = \det(A_i)/\det(A)$, where $A_i$ is $A$ with its $i$-th column replaced by $b$. Requires det(A) ≠ 0 (invertibility).
Eigenvalues satisfy $\det(A - \lambda I) = 0$. The characteristic polynomial uses det properties extensively.
In multivariable integrals (change of variables): $dV = \lvert\det J\rvert\,du_1 \cdots du_n$, where $J$ is the Jacobian matrix.
Vectors $v_1, \ldots, v_n \in \mathbb{R}^n$ are linearly independent iff $\det[v_1\ \cdots\ v_n] \neq 0$.
Area of parallelogram: $\lvert\det[u\ v]\rvert$. Volume of parallelepiped: $\lvert\det[u\ v\ w]\rvert$.
Sign of det determines orientation. Positive = preserves handedness, negative = reverses (like reflection).
Now that you've mastered determinant properties, the next topics explore:
This module covered the fundamental algebraic properties of determinants. The key insight is that determinant behaves like a multiplicative homomorphism from matrices to scalars: it respects products but not sums. Combined with the transpose property, this gives determinants their computational power.
The function f(B) = det(AB) is multilinear and alternating in the rows of B, with f(I) = det(A). By the uniqueness of determinants, f(B) must equal det(A)·det(B). Geometrically, composing linear transformations multiplies their volume scaling factors.
There are three types: (1) Swap two rows → multiply det by -1; (2) Scale a row by c → multiply det by c; (3) Add a multiple of one row to another → det unchanged. These follow directly from the axiomatic definition (alternating, multilinear, normalized).
For both upper and lower triangular matrices, det = product of diagonal entries. This makes Gaussian elimination extremely efficient: reduce to triangular form (tracking sign changes from swaps), then multiply the diagonal. This is O(n³) vs O(n!) for direct computation.
A square matrix A is invertible if and only if det(A) ≠ 0. Geometrically, det = 0 means the transformation collapses space (zero volume). Algebraically, det = 0 means the columns are linearly dependent, so A has a non-trivial kernel and cannot be injective.
NO! This is a common mistake. Determinant is NOT additive. It's only linear in each row separately (multilinear). Counterexample: A = B = I₂ gives det(A) = det(B) = 1, but det(A+B) = det(2I) = 4 ≠ 2.
In the permutation formula, transposing swaps row and column indices. Each permutation σ appears paired with its inverse σ⁻¹ (which has the same sign), so the sum is unchanged. This duality means all row properties have column versions.
For block triangular matrices: det([A,B;O,D]) = det(A)·det(D) and det([A,O;C,B]) = det(A)·det(B). For general block matrices [A,B;C,D], if A is invertible: det = det(A)·det(D - CA⁻¹B). The quantity D - CA⁻¹B is called the Schur complement.
det(E_ij) = -1 (row swap), det(E_i(c)) = c (scale row by c), det(E_ij(k)) = 1 (add k times row j to row i). These follow from applying each operation to the identity matrix and using the determinant axioms.
When you multiply a matrix by scalar k, EVERY row gets multiplied by k. Since det is linear in each row, multiplying n rows by k gives k^n factor. For example, det(2·I₃) = 2³ = 8, not 2.
Yes! The sign of det indicates orientation. det > 0 means the transformation preserves orientation, det < 0 means it reverses orientation (like a reflection). Only |det| measures volume scaling.
For invertible A: det(A⁻¹) = 1/det(A). This follows from det(A)·det(A⁻¹) = det(A·A⁻¹) = det(I) = 1. Note that A must be invertible (det(A) ≠ 0) for this to make sense.
Multiplying ONE row by scalar c multiplies det by c (multilinearity). This is different from multiplying the WHOLE matrix by c, which gives c^n·det(A). Common mistake: confusing these two operations.