MathIsimple
LA-5.1

Determinant Definition

The determinant is a scalar-valued function on square matrices that captures essential properties of linear transformations, including invertibility and volume scaling. We present three equivalent definitions: axiomatic, permutation, and cofactor expansion.

Learning Objectives
  • State and apply the axiomatic definition of determinants (multilinear, alternating, normalized)
  • Master the permutation (Leibniz) formula with inversion numbers
  • Use cofactor expansion to compute determinants recursively
  • Prove that different definitions are equivalent
  • Calculate determinants of 2×2, 3×3, and 4×4 matrices efficiently
  • Understand the geometric interpretation as signed volume
  • Connect det ≠ 0 with matrix invertibility
  • Apply row operations to simplify determinant computation
  • Recognize special determinants (triangular, diagonal, block)
  • Prove basic properties from the axiomatic definition
Prerequisites
  • Matrix operations and notation (LA-4.2)
  • Elementary matrices and row operations (LA-4.4)
  • Permutations and the symmetric group Sₙ
  • Basic properties of linear maps
  • Field axioms and scalar operations
Historical Context

The determinant emerged from attempts to solve systems of linear equations. Gottfried Wilhelm Leibniz (1693) first discovered determinants while studying elimination methods for solving linear systems, though he did not publish his findings.

Gabriel Cramer (1750) independently discovered determinants and published what we now call Cramer's Rule for solving linear systems. The term "determinant" was coined by Carl Friedrich Gauss in 1801.

Augustin-Louis Cauchy (1812) systematically developed determinant theory, proving the product formula det(AB) = det(A)det(B). The axiomatic approach was later formalized in the 20th century, showing that the three axioms uniquely determine the determinant.

1. Motivation: Why Determinants?

The determinant is one of the most important quantities associated with a square matrix. It arises naturally from multiple perspectives and has profound theoretical and practical significance. Understanding why determinants matter helps appreciate the three equivalent definitions we present.

Solving Linear Systems

For the 2×2 system $ax + by = d_1$, $cx + dy = d_2$, a unique solution exists exactly when $ad - bc \neq 0$; Cramer's rule gives $x = \frac{d_1 d - b d_2}{ad - bc}$. The denominator $ad - bc$ is precisely the determinant!

Volume Scaling

det(A) measures how A scales volumes. |det(A)| is the scaling factor, the sign indicates whether orientation is preserved (+) or reversed (−).

Invertibility Test

A matrix A is invertible if and only if det(A) ≠ 0. This provides a single-number criterion for invertibility.

Eigenvalue Theory

Eigenvalues are roots of det(A − λI) = 0. The determinant connects linear algebra to polynomial equations.

2. Axiomatic Definition

The axiomatic approach defines the determinant by specifying the properties it must satisfy. Remarkably, these three simple properties completely determine a unique function. This approach reveals the essential nature of determinants and provides the foundation for proving many properties.

Definition 5.1: Axiomatic Definition of Determinant

The determinant is the unique function $\det: M_n(F) \to F$ satisfying:

  1. Multilinearity: $\det$ is linear in each row (or column):
    $\det(\ldots, \alpha_i + \beta_i, \ldots) = \det(\ldots, \alpha_i, \ldots) + \det(\ldots, \beta_i, \ldots)$
    $\det(\ldots, c\alpha_i, \ldots) = c \cdot \det(\ldots, \alpha_i, \ldots)$
  2. Alternating: Swapping two rows negates the determinant:
    $\det(\ldots, \alpha_i, \ldots, \alpha_j, \ldots) = -\det(\ldots, \alpha_j, \ldots, \alpha_i, \ldots)$
  3. Normalization: $\det(I_n) = 1$
Remark 5.1: Uniqueness and Existence

These three axioms completely characterize the determinant:

  • Uniqueness: Any function satisfying all three axioms must be the determinant
  • Existence: The permutation formula (Section 3) provides an explicit construction

The proof of uniqueness uses the fact that any matrix can be row-reduced to the identity (or a matrix with a zero row), and row operations have known effects on det.

Theorem 5.0: Uniqueness of Determinant

There exists exactly one function $\det: M_n(F) \to F$ satisfying the three axioms.

Proof:

Uniqueness: Let $f$ be any function satisfying the axioms. Any invertible matrix $A$ can be written as a product of elementary matrices: $A = E_1 E_2 \cdots E_k$.

By multilinearity and the alternating property:

  • Row swap $E_{ij}$: $f(E_{ij}B) = -f(B)$
  • Row scaling $E_i(c)$: $f(E_i(c)B) = c \cdot f(B)$
  • Row addition $E_{ij}(k)$: $f(E_{ij}(k)B) = f(B)$

Starting from $f(I) = 1$, these rules determine $f(A)$ for every invertible matrix; a singular matrix row-reduces to a matrix with a zero row, forcing $f(A) = 0$. Hence $f$ is uniquely determined.

Corollary 5.1: Immediate Consequences from Axioms

From the three axioms, we can immediately derive:

  • Identical rows: If $\alpha_i = \alpha_j$ for $i \neq j$, then $\det = 0$

    Proof: Swapping the two identical rows leaves the matrix unchanged but negates det, so det = −det, hence det = 0.

  • Zero row: If some row is the zero vector, then $\det = 0$

    Proof: By homogeneity, det(..., 0, ...) = det(..., 0·α, ...) = 0·det(..., α, ...) = 0.

  • Row addition: Adding $c \cdot \alpha_j$ to row $i$ (for $i \neq j$) preserves det

    Proof: det(..., αᵢ + cαⱼ, ..., αⱼ, ...) = det(..., αᵢ, ..., αⱼ, ...) + c·det(..., αⱼ, ..., αⱼ, ...) = det(A) + c·0 = det(A).

Example 5.0: Using the Axioms

Compute $\det\begin{pmatrix} 2 & 6 \\ 1 & 4 \end{pmatrix}$ using only the axioms:

Step 1: Use R₁ → R₁ − 2R₂ (row addition, preserves det):

$$\det\begin{pmatrix} 2 & 6 \\ 1 & 4 \end{pmatrix} = \det\begin{pmatrix} 0 & -2 \\ 1 & 4 \end{pmatrix}$$

Step 2: Swap rows (negates det):

$$= -\det\begin{pmatrix} 1 & 4 \\ 0 & -2 \end{pmatrix}$$

Step 3: Factor out −2 from row 2 (homogeneity):

$$= -(-2)\det\begin{pmatrix} 1 & 4 \\ 0 & 1 \end{pmatrix} = 2\det\begin{pmatrix} 1 & 4 \\ 0 & 1 \end{pmatrix}$$

Step 4: Use R₁ → R₁ − 4R₂:

$$= 2\det\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = 2 \cdot 1 = 2$$

Verification: Using ad - bc: 2(4) - 6(1) = 8 - 6 = 2 ✓
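The uniqueness proof is effectively an algorithm: row-reduce while tracking how each operation changes the determinant. A minimal Python sketch of that idea (the function name `det_by_row_reduction` is ours; exact `Fraction` arithmetic avoids rounding):

```python
from fractions import Fraction

def det_by_row_reduction(rows):
    """Determinant via Gaussian elimination, applying the effect of each
    row operation exactly as the axioms dictate (assumes a square matrix
    with rational entries)."""
    a = [[Fraction(x) for x in row] for row in rows]
    n = len(a)
    det = Fraction(1)
    for col in range(n):
        # Find a pivot; a row swap multiplies det by -1 (alternating axiom).
        pivot = next((r for r in range(col, n) if a[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)          # no pivot: matrix is singular
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            det = -det
        # Eliminate below; adding a multiple of a row leaves det unchanged.
        for r in range(col + 1, n):
            factor = a[r][col] / a[col][col]
            a[r] = [x - factor * y for x, y in zip(a[r], a[col])]
    # Now triangular: det is the tracked sign times the diagonal product.
    for i in range(n):
        det *= a[i][i]
    return det

print(det_by_row_reduction([[2, 6], [1, 4]]))  # prints 2, as in Example 5.0
```

This is also the practical method for large matrices: O(n³) arithmetic instead of the n! terms of the permutation formula.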

3. Permutation (Leibniz) Definition

The permutation definition gives an explicit formula for computing determinants. While not practical for large matrices (n! terms!), it provides theoretical insight and proves that a function satisfying the three axioms actually exists.

Definition 5.2: Permutation and Inversion

A permutation of $\{1, 2, \ldots, n\}$ is a bijection $\sigma: \{1, \ldots, n\} \to \{1, \ldots, n\}$.

We write $\sigma = (k_1, k_2, \ldots, k_n)$ where $k_i = \sigma(i)$.

An inversion is a pair $(i, j)$ with $i < j$ but $\sigma(i) > \sigma(j)$.

The inversion number $\tau(\sigma)$ counts all inversions. The sign of $\sigma$ is $(-1)^{\tau(\sigma)}$.

Example 5.1: Computing Inversion Numbers

(a) For $\sigma = (3, 1, 4, 2)$:

  • Compare with position 1 (value 3): 3 > 1 ✓, 3 > 2 ✓ → 2 inversions
  • Compare with position 2 (value 1): 1 < 4, 1 < 2 → 0 inversions
  • Compare with position 3 (value 4): 4 > 2 ✓ → 1 inversion

Total: $\tau = 3$, sign = $(-1)^3 = -1$ (odd permutation)

(b) For $\sigma = (2, 1, 3)$:

  • 2 > 1 → 1 inversion

Total: $\tau = 1$, sign = $(-1)^1 = -1$ (odd)

(c) Identity permutation $(1, 2, 3, \ldots, n)$: $\tau = 0$, sign = +1
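The pairwise comparison in these examples translates directly into code. A short sketch (helper names `inversion_number` and `sign` are ours):

```python
def inversion_number(perm):
    """Count pairs (i, j) with i < j but perm[i] > perm[j]."""
    return sum(1 for i in range(len(perm))
                 for j in range(i + 1, len(perm))
                 if perm[i] > perm[j])

def sign(perm):
    """Sign of the permutation: +1 if tau is even, -1 if odd."""
    return (-1) ** inversion_number(perm)

print(inversion_number((3, 1, 4, 2)), sign((3, 1, 4, 2)))  # prints 3 -1
```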

Theorem 5.1: Leibniz Formula for Determinant

For $A = (a_{ij})_{n \times n}$:

$$\det(A) = \sum_{\sigma \in S_n} (-1)^{\tau(\sigma)} a_{1,\sigma(1)} a_{2,\sigma(2)} \cdots a_{n,\sigma(n)}$$

where $S_n$ is the symmetric group (all $n!$ permutations of $\{1, \ldots, n\}$).

Remark 5.2: Understanding the Formula

Each term picks exactly one entry from each row, with column indices forming a permutation. The sign is determined by the parity of the permutation:

  • Even permutation (even inversions) → +1
  • Odd permutation (odd inversions) → −1
Proof:

Sketch: We verify the formula satisfies the three axioms.

Multilinearity: Each term is linear in each row (product of one entry from each row).

Alternating: Swapping rows $i$ and $j$ composes each permutation with the transposition $(i\ j)$; a transposition always flips parity, so every term changes sign.

Normalization: For I, only the identity permutation gives a non-zero term: 1·1·...·1 with sign +1.

Example 5.2: 2×2 via Permutation Formula

For $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$, there are 2! = 2 permutations:

  • $\sigma = (1, 2)$: τ = 0, term = (+1)·a₁₁a₂₂ = ad
  • $\sigma = (2, 1)$: τ = 1, term = (−1)·a₁₂a₂₁ = −bc

Therefore: $\det(A) = ad - bc$

Example 5.3: 3×3 via Permutation Formula

For 3×3 matrices, there are 3! = 6 permutations:

| σ | τ(σ) | Sign | Term |
|---|------|------|------|
| (1,2,3) | 0 | + | +a₁₁a₂₂a₃₃ |
| (1,3,2) | 1 | − | −a₁₁a₂₃a₃₂ |
| (2,1,3) | 1 | − | −a₁₂a₂₁a₃₃ |
| (2,3,1) | 2 | + | +a₁₂a₂₃a₃₁ |
| (3,1,2) | 2 | + | +a₁₃a₂₁a₃₂ |
| (3,2,1) | 3 | − | −a₁₃a₂₂a₃₁ |

This gives the Sarrus formula: det = a₁₁a₂₂a₃₃ + a₁₂a₂₃a₃₁ + a₁₃a₂₁a₃₂ − a₁₃a₂₂a₃₁ − a₁₁a₂₃a₃₂ − a₁₂a₂₁a₃₃
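Theorem 5.1 can be transcribed into code almost verbatim — useful for checking small examples, even though the n! terms make it impractical beyond n ≈ 10. A sketch (the function name `det_leibniz` is ours):

```python
from itertools import permutations

def det_leibniz(a):
    """Leibniz formula: sum over all n! permutations of signed products
    of one entry per row (direct transcription of Theorem 5.1)."""
    n = len(a)
    total = 0
    for perm in permutations(range(n)):
        # Inversion number of this permutation (0-indexed).
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if perm[i] > perm[j])
        product = 1
        for i in range(n):
            product *= a[i][perm[i]]    # one entry from each row
        total += (-1) ** inversions * product
    return total

print(det_leibniz([[3, 7], [2, 5]]))   # prints 1  (= 3*5 - 7*2)
```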

4. Explicit Formulas for Small Matrices

For small matrices, explicit formulas are practical. These formulas are derived from the permutation definition and are essential to memorize for quick calculations.

1×1 Determinant
$\det(a) = a$

The determinant of a 1×1 matrix is just the entry itself.

2×2 Determinant
$\det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc$

Product of main diagonal minus product of anti-diagonal.

3×3 Determinant (Rule of Sarrus)
$\det\begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix} = aei + bfg + cdh - ceg - bdi - afh$

Memory aid: Add products along ↘ diagonals, subtract products along ↙ diagonals. (Extend the matrix by copying first two columns to the right.)

Example 5.4: Numerical 2×2 Example

Calculate $\det\begin{pmatrix} 3 & 7 \\ 2 & 5 \end{pmatrix}$:

$\det = 3 \cdot 5 - 7 \cdot 2 = 15 - 14 = 1$
Example 5.5: Numerical 3×3 Example

Calculate $\det\begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}$:

$= 1(45) + 2(42) + 3(32) - 3(35) - 1(48) - 2(36)$
$= 45 + 84 + 96 - 105 - 48 - 72 = 0$

The determinant is 0 because the rows are linearly dependent: Row 2 = (Row 1 + Row 3)/2.
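The Sarrus computation can be packaged as a tiny function (the name `sarrus` is ours; it is hard-coded to 3×3, matching the warning below that the rule does not generalize):

```python
def sarrus(m):
    """Rule of Sarrus -- valid for 3x3 matrices ONLY."""
    (a, b, c), (d, e, f), (g, h, i) = m
    # Add the three "down-right" diagonal products,
    # subtract the three "down-left" diagonal products.
    return a*e*i + b*f*g + c*d*h - c*e*g - b*d*i - a*f*h

print(sarrus([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))  # prints 0
```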

Remark 5.3: Warning: Sarrus Only for 3×3!

The Rule of Sarrus does NOT generalize to 4×4 or larger matrices! For n ≥ 4, use cofactor expansion or row reduction.

5. Cofactor Expansion (Recursive Definition)

The cofactor expansion provides a recursive method to compute determinants. It reduces an n×n determinant to a sum of (n-1)×(n-1) determinants. This is the most practical method for hand calculations.

Definition 5.3: Minor and Cofactor

For $A = (a_{ij})_{n \times n}$:

  • Minor $M_{ij}$: the $(n-1) \times (n-1)$ determinant obtained by deleting row $i$ and column $j$
  • Cofactor: $A_{ij} = (-1)^{i+j} M_{ij}$
Remark 5.4: Checkerboard Sign Pattern

The signs $(-1)^{i+j}$ follow a checkerboard pattern:

$$\begin{pmatrix} + & - & + & - & \cdots \\ - & + & - & + & \cdots \\ + & - & + & - & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$
Theorem 5.2: Cofactor Expansion (Laplace Expansion)

Row expansion (along row $i$):

$$\det(A) = \sum_{j=1}^{n} a_{ij} A_{ij} = a_{i1}A_{i1} + a_{i2}A_{i2} + \cdots + a_{in}A_{in}$$

Column expansion (along column $j$):

$$\det(A) = \sum_{i=1}^{n} a_{ij} A_{ij} = a_{1j}A_{1j} + a_{2j}A_{2j} + \cdots + a_{nj}A_{nj}$$
Example 5.6: 3×3 Cofactor Expansion (Complete)

Calculate $\det\begin{pmatrix} 2 & 1 & 3 \\ -1 & 0 & 2 \\ 1 & 5 & -2 \end{pmatrix}$ by expanding along row 2 (which has a zero):

$$\det = (-1) \cdot A_{21} + 0 \cdot A_{22} + 2 \cdot A_{23}$$

Since $a_{22} = 0$, we only need two cofactors:

$$A_{21} = (-1)^{2+1} \det\begin{pmatrix} 1 & 3 \\ 5 & -2 \end{pmatrix} = -(1 \cdot (-2) - 3 \cdot 5) = -(-17) = 17$$
$$A_{23} = (-1)^{2+3} \det\begin{pmatrix} 2 & 1 \\ 1 & 5 \end{pmatrix} = -(2 \cdot 5 - 1 \cdot 1) = -9$$
$$\det = (-1)(17) + 2(-9) = -17 - 18 = -35$$
Example 5.7: 4×4 Determinant

Calculate $\det\begin{pmatrix} 1 & 0 & 2 & 0 \\ 3 & 1 & 0 & 4 \\ 0 & 0 & 1 & 0 \\ 2 & 0 & 0 & 3 \end{pmatrix}$ by expanding along row 3:

Row 3 has only one non-zero entry at position (3,3), so:

$$\det = 1 \cdot A_{33} = (+1) \det\begin{pmatrix} 1 & 0 & 0 \\ 3 & 1 & 4 \\ 2 & 0 & 3 \end{pmatrix}$$

Expand this 3×3 along column 2:

$$= 1 \cdot (-1)^{2+2} \det\begin{pmatrix} 1 & 0 \\ 2 & 3 \end{pmatrix} = 1 \cdot (3 - 0) = 3$$
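The recursive structure of Laplace expansion maps naturally onto a recursive function. A sketch expanding along the first row (the name `det_cofactor` is ours; skipping zero entries mirrors the strategy used in the examples above):

```python
def det_cofactor(a):
    """Recursive cofactor (Laplace) expansion along the first row."""
    n = len(a)
    if n == 1:
        return a[0][0]                    # base case: 1x1 determinant
    total = 0
    for j in range(n):
        if a[0][j] == 0:
            continue                      # zero entries contribute nothing
        # Minor: delete row 0 and column j; (-1)**j is the cofactor sign
        # (-1)^(1+(j+1)) in 1-indexed notation.
        minor = [row[:j] + row[j + 1:] for row in a[1:]]
        total += (-1) ** j * a[0][j] * det_cofactor(minor)
    return total

m = [[1, 0, 2, 0],
     [3, 1, 0, 4],
     [0, 0, 1, 0],
     [2, 0, 0, 3]]
print(det_cofactor(m))  # prints 3, matching Example 5.7
```

Note the cost is O(n!) in the worst case; for large matrices, row reduction is the practical choice.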

6. Geometric Interpretation

The determinant has a beautiful geometric meaning: it measures how a linear transformation scales volumes and whether it preserves or reverses orientation.

Theorem 5.3: Determinant as Signed Volume

Let $A$ be an $n \times n$ matrix with rows $v_1, \ldots, v_n \in \mathbb{R}^n$. Then:

  • $|\det(A)|$ is the $n$-dimensional volume of the parallelepiped spanned by $v_1, \ldots, v_n$
  • The sign of $\det(A)$ indicates orientation: + preserves, − reverses
Example 5.8: 2D: Area of Parallelogram

Vectors $v = (3, 0)$ and $w = (1, 2)$ span a parallelogram.

$$\text{Area} = \left|\det\begin{pmatrix} 3 & 0 \\ 1 & 2 \end{pmatrix}\right| = |6 - 0| = 6$$
Example 5.9: 3D: Volume of Parallelepiped

Vectors $v_1 = (1, 0, 0)$, $v_2 = (0, 2, 0)$, $v_3 = (0, 0, 3)$:

$$\text{Volume} = \left|\det\begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{pmatrix}\right| = |1 \cdot 2 \cdot 3| = 6$$
Theorem 5.4: Determinant and Linear Transformations

If $T: \mathbb{R}^n \to \mathbb{R}^n$ is a linear transformation with matrix $A$, then for any measurable region $R$:

$$\text{Volume}(T(R)) = |\det(A)| \cdot \text{Volume}(R)$$
Remark 5.5: Orientation
  • det > 0: Transformation preserves orientation (e.g., rotation)
  • det < 0: Transformation reverses orientation (e.g., reflection)
  • det = 0: Transformation collapses dimension (not invertible)

7. Common Mistakes

Determinants have several counterintuitive properties. Understanding these common errors will help you avoid them in calculations and proofs.

Mistake 1: Confusing det(cA) with c·det(A)

Wrong: $\det(2A) = 2\det(A)$

Correct: $\det(cA) = c^n \det(A)$ for an $n \times n$ matrix

Why? Each of the n rows gets multiplied by c, and det is linear in each row. Example: For 3×3, det(2A) = 2³·det(A) = 8·det(A).

Mistake 2: det(A + B) ≠ det(A) + det(B)

Wrong: $\det(A + B) = \det(A) + \det(B)$

Reality: No simple formula exists for det(A + B)

Counterexample: Let A = B = I₂. Then det(A) = det(B) = 1, but det(A + B) = det(2I) = 4 ≠ 2.

Mistake 3: Wrong Sign in Cofactor

Wrong: Forgetting the $(-1)^{i+j}$ factor in the cofactor

Correct: $A_{ij} = (-1)^{i+j} M_{ij}$

Tip: Use the checkerboard pattern. Position (1,1) has +, (1,2) has −, (2,1) has −, (2,2) has +, etc.

Mistake 4: Using Sarrus for n > 3

Wrong: Extending the "diagonal rule" to 4×4 matrices

Reality: Sarrus rule only works for 3×3

For larger matrices: Use cofactor expansion or row reduction.

Mistake 5: Row Operations and Determinant

Correct effects:

  • Swap rows → multiply det by −1
  • Scale row by c → multiply det by c
  • Add multiple of row → det unchanged

Common error: Forgetting to track sign changes when using row reduction.

8. Key Takeaways

Three Definitions

Axiomatic, permutation, and cofactor—all equivalent.

n! Terms

The expansion has n! terms, one per permutation.

Signed Volume

|det| = volume scaling, sign = orientation.

Invertibility

A invertible ⟺ det(A) ≠ 0.

9. Additional Practice Problems

Work through these problems to solidify your understanding. Solutions are provided below.

Problem 1 (Easy)

Calculate $\det\begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}$ using cofactor expansion.

Problem 2 (Medium)

Prove that $\det(A^T) = \det(A)$ using the permutation definition.

Problem 3 (Easy)

Find the inversion number of $(4, 2, 3, 1)$ and determine whether it is odd or even.

Problem 4 (Easy)

If $\det(A) = 5$, what is $\det(3A)$ for a 4×4 matrix $A$?

Problem 5 (Medium)

Calculate $\det\begin{pmatrix} 1 & 2 & 0 & 0 \\ 3 & 4 & 0 & 0 \\ 0 & 0 & 5 & 6 \\ 0 & 0 & 7 & 8 \end{pmatrix}$.

Problem 6 (Hard)

Prove: If A is n×n and det(A) = 0, then there exists a non-zero vector x such that Ax = 0.

Solutions

Solution 1

Expand along row 1: det = 1·A₁₁ + 2·A₁₂ + 3·A₁₃

$= 1(5 \cdot 9 - 6 \cdot 8) - 2(4 \cdot 9 - 6 \cdot 7) + 3(4 \cdot 8 - 5 \cdot 7)$
$= 1(-3) - 2(-6) + 3(-3) = -3 + 12 - 9 = 0$

The determinant is 0 because the rows are linearly dependent (row 2 = average of rows 1 and 3).

Solution 2

Using the Leibniz formula:

$$\det(A) = \sum_{\sigma} (-1)^{\tau(\sigma)} a_{1,\sigma(1)} \cdots a_{n,\sigma(n)}$$
$$\det(A^T) = \sum_{\sigma} (-1)^{\tau(\sigma)} a_{\sigma(1),1} \cdots a_{\sigma(n),n}$$

The products are the same (just reordered), and the bijection σ → σ⁻¹ shows the sums are equal since τ(σ) = τ(σ⁻¹).
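This identity is easy to spot-check numerically with the Leibniz sum. A brief sketch (helper names `det_leibniz` and `transpose` are ours), using the matrix from Example 5.6:

```python
from itertools import permutations

def det_leibniz(a):
    """Leibniz formula: signed sum over all permutations."""
    n = len(a)
    total = 0
    for p in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        prod = 1
        for i in range(n):
            prod *= a[i][p[i]]
        total += (-1) ** inv * prod
    return total

def transpose(a):
    """Rows become columns and vice versa."""
    return [list(col) for col in zip(*a)]

A = [[2, 1, 3], [-1, 0, 2], [1, 5, -2]]
print(det_leibniz(A), det_leibniz(transpose(A)))  # prints -35 -35
```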

Solution 3

For σ = (4, 2, 3, 1), count inversions:

  • 4 > 2, 4 > 3, 4 > 1 → 3 inversions
  • 2 > 1 → 1 inversion
  • 3 > 1 → 1 inversion

Total: τ = 5 (odd), so sign = (−1)⁵ = −1. This is an odd permutation.

Solution 4

For n×n matrix: det(cA) = cⁿ·det(A)

$\det(3A) = 3^4 \cdot \det(A) = 81 \cdot 5 = 405$

Solution 5

This is a block diagonal matrix! For block diagonal matrices:

$$\det\begin{pmatrix} A & O \\ O & B \end{pmatrix} = \det(A) \cdot \det(B)$$
$$= \det\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \cdot \det\begin{pmatrix} 5 & 6 \\ 7 & 8 \end{pmatrix}$$
$$= (1 \cdot 4 - 2 \cdot 3)(5 \cdot 8 - 6 \cdot 7) = (-2)(-2) = 4$$

Solution 6

Proof: If det(A) = 0, then the columns of A are linearly dependent.

This means there exist scalars c₁, ..., cₙ (not all zero) such that c₁a₁ + ... + cₙaₙ = 0, where aⱼ are the columns of A.

Let x = (c₁, ..., cₙ)ᵀ. Then Ax = c₁a₁ + ... + cₙaₙ = 0, and x ≠ 0. ∎

10. Special Determinants

Certain types of matrices have determinants that can be computed immediately without expansion. Recognizing these patterns greatly simplifies calculations.

Triangular Matrices

For upper or lower triangular matrices, the determinant is the product of diagonal entries:

$$\det\begin{pmatrix} a_{11} & * & * \\ 0 & a_{22} & * \\ 0 & 0 & a_{33} \end{pmatrix} = a_{11} a_{22} a_{33}$$

Diagonal Matrices

Special case of triangular:

$$\det(\text{diag}(d_1, d_2, \ldots, d_n)) = d_1 d_2 \cdots d_n$$

Block Diagonal Matrices

Determinant factors over blocks:

$$\det\begin{pmatrix} A & O \\ O & B \end{pmatrix} = \det(A) \cdot \det(B)$$

Block Triangular Matrices

Same formula applies:

$$\det\begin{pmatrix} A & C \\ O & B \end{pmatrix} = \det(A) \cdot \det(B)$$
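As a sanity check on the block formulas, a brief sketch (helper names `det2` and `det_cofactor` are ours) comparing a 4×4 block upper-triangular determinant against the product of its diagonal blocks — the upper-right block is arbitrary and does not affect the result:

```python
def det2(m):
    """2x2 determinant: ad - bc."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def det_cofactor(a):
    """Recursive Laplace expansion along the first row."""
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j]
               * det_cofactor([r[:j] + r[j + 1:] for r in a[1:]])
               for j in range(len(a)))

A = [[1, 2], [3, 4]]              # det(A) = -2
B = [[5, 6], [7, 8]]              # det(B) = -2
block = [[1, 2, 9, 9],            # arbitrary entries in the upper-right
         [3, 4, 9, 9],            # block do not change the determinant
         [0, 0, 5, 6],
         [0, 0, 7, 8]]
print(det_cofactor(block), det2(A) * det2(B))  # prints 4 4
```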

11. Historical Notes

Leibniz (1693): First introduced determinants while studying systems of linear equations. He discovered the n! term expansion but did not publish his findings.

Cramer (1750): Published the famous rule bearing his name for solving linear systems using determinants. His work brought determinants to wider attention.

Vandermonde (1771): First systematic treatment of determinants as independent objects. The Vandermonde determinant $\prod_{i<j}(x_j - x_i)$ is named after him.

Laplace (1772): Developed the cofactor expansion method that bears his name, generalizing the recursive approach to computing determinants.

Cauchy (1812): Used the word "déterminant" and established many fundamental properties including the product formula det(AB) = det(A)det(B).

Jacobi (1841): Introduced the Jacobian determinant for change of variables in multivariable calculus, connecting determinants to differential geometry.

Sylvester (1850): Contributed to the theory of invariants and developed the terminology of minors and cofactors.

12. Quick Reference Summary

Definitions

| Definition | Key Feature |
|---|---|
| Axiomatic | Multilinear, alternating, det(I) = 1 |
| Permutation | $\sum_{\sigma} (-1)^{\tau(\sigma)} \prod a_{i,\sigma(i)}$ |
| Cofactor | $\sum_j a_{ij} A_{ij}$ (Laplace expansion) |
| Geometric | Signed volume scaling factor |

Key Formulas

| Formula | Expression |
|---|---|
| 2×2 det | $ad - bc$ |
| Cofactor | $A_{ij} = (-1)^{i+j} M_{ij}$ |
| Scalar multiple | $\det(cA) = c^n \det(A)$ |
| Triangular | Product of diagonal entries |
| Block diagonal | $\det(A) \cdot \det(B)$ |

Row Operations Effects

| Operation | Effect on det |
|---|---|
| Swap rows | Multiply by −1 |
| Scale row by c | Multiply by c |
| Add multiple of row | No change |

What's Next?

Now that you understand the definition, the next topics explore:

  • Properties of Determinants: How det behaves under matrix operations
  • Computation Methods: Efficient techniques for calculating det
  • Laplace Expansion: Generalized cofactor expansion
Determinant Definition Practice

  1. The determinant of $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$ is: (Easy)
  2. Swapping two rows of a matrix: (Easy)
  3. If a matrix has two identical rows, its determinant is: (Easy)
  4. The determinant of the identity matrix $I_n$ is: (Easy)
  5. The inversion number of permutation $(3, 1, 2)$ is: (Medium)
  6. A permutation with odd inversion number contributes what sign? (Medium)
  7. The cofactor $A_{ij}$ equals: (Medium)
  8. How many terms are in the expansion of an $n \times n$ determinant? (Medium)
  9. $\det(A) \neq 0$ implies: (Easy)
  10. Multiplying one row by scalar $c$ multiplies the determinant by: (Easy)
  11. What is $\det(cA)$ for an $n \times n$ matrix $A$? (Medium)
  12. Adding a multiple of one row to another row: (Medium)

Frequently Asked Questions

Why are there multiple definitions of determinant?

Each definition serves different purposes: the axiomatic definition (multilinear, alternating, normalized) reveals fundamental properties and proves uniqueness; the permutation (Leibniz) formula gives an explicit computational formula; and the cofactor expansion provides a recursive algorithm. All three are mathematically equivalent—they define the same function on matrices.

What does the determinant measure geometrically?

The determinant measures the signed volume scaling factor. If A maps the unit n-cube, |det(A)| is the volume of the resulting parallelepiped. The sign indicates orientation: positive preserves orientation, negative reverses it. In 2D, |det| gives the area of a parallelogram; in 3D, the volume of a parallelepiped.

Why is det(I) = 1 important?

The normalization det(I) = 1 makes the determinant unique. Given only multilinearity and the alternating property, we could multiply by any constant and still satisfy those axioms. The requirement det(I) = 1 pins down exactly one function. It also makes geometric sense: the identity transformation preserves volume.

What's the relationship between determinant and invertibility?

A matrix A is invertible if and only if det(A) ≠ 0. Geometrically, det = 0 means the transformation collapses space to a lower dimension (zero volume). Algebraically, det = 0 means the columns are linearly dependent, so A has non-trivial kernel and cannot be injective.

How do I calculate a 3×3 determinant quickly?

Method 1: Rule of Sarrus—add products of main diagonals, subtract products of anti-diagonals. Method 2: Cofactor expansion along a row/column with zeros. Method 3: Row reduce to upper triangular form (product of diagonal entries). Always look for zeros to simplify cofactor expansion!

What is an inversion in a permutation?

For permutation σ = (k₁, k₂, ..., kₙ), an inversion is a pair of positions (i, j) where i < j but kᵢ > kⱼ (a 'larger number appears before a smaller one'). The inversion number τ(σ) counts all such pairs. Even inversions give sign +1, odd give -1.

Why does swapping rows negate the determinant?

This is the alternating property, fundamental to the axiomatic definition. Geometrically, swapping two basis vectors reverses the orientation of the coordinate system. Algebraically, each row swap corresponds to composing with a transposition, which has sign -1 in the symmetric group.

Can I add rows without changing the determinant?

Yes! Adding a multiple of row j to row i (with i ≠ j) preserves the determinant. This follows from multilinearity: det(..., αᵢ + cαⱼ, ..., αⱼ, ...) = det(..., αᵢ, ..., αⱼ, ...) + c·det(..., αⱼ, ..., αⱼ, ...) = det(A) + c·0 = det(A).

What is the difference between minor and cofactor?

The minor Mᵢⱼ is the (n-1)×(n-1) determinant obtained by deleting row i and column j. The cofactor Aᵢⱼ = (-1)^(i+j) × Mᵢⱼ includes a sign factor. The signs follow a checkerboard pattern: + - + - ... for the first row, - + - + ... for the second, etc.

Why does det(AB) = det(A)det(B)?

This multiplicativity follows from the axiomatic definition. The map B ↦ det(AB) is multilinear and alternating in the columns of B, with det(AI) = det(A). By uniqueness, it must equal det(A)·det(B). Geometrically: composing transformations multiplies their volume scaling factors.