The adjugate (classical adjoint) matrix is the transpose of the cofactor matrix. It provides an explicit closed-form formula for matrix inversion: A⁻¹ = adj(A)/det(A). While computationally expensive for large matrices, the adjugate is invaluable for theoretical analysis and symbolic computation.
The adjugate matrix (also called the classical adjoint) is constructed from the cofactors of a matrix. It plays a central role in deriving the inverse formula and connecting to Cramer's rule.
The cofactor matrix of A is the n×n matrix C whose (i,j)-entry is the cofactor Aᵢⱼ = (-1)^{i+j} Mᵢⱼ, where Mᵢⱼ is the minor obtained by deleting row i and column j of A.
The adjugate (or classical adjoint) of A is the transpose of the cofactor matrix: adj(A) = Cᵀ.
Equivalently: [adj(A)]ᵢⱼ = Aⱼᵢ — note the swapped indices!
In older texts, "adjoint" referred to this matrix. Modern usage reserves "adjoint" for the conjugate transpose (A* or A†), so "adjugate" or "classical adjoint" is preferred to avoid confusion.
For a general 2×2 matrix A = (a,b; c,d):
Step 1: Compute cofactors: A₁₁ = d, A₁₂ = -c, A₂₁ = -b, A₂₂ = a.
Step 2: Form cofactor matrix and transpose: C = (d,-c; -b,a), so adj(A) = Cᵀ = (d,-b; -c,a).
Memory aid: Swap diagonal entries, negate off-diagonal entries.
Check that A · adj(A) = det(A) · I: (a,b; c,d)(d,-b; -c,a) = (ad-bc, 0; 0, ad-bc) = (ad-bc) · I ✓
The transpose is essential. Without it, the identity would fail: A · C = (ad-b², ab-ac; cd-bd, ad-c²), which is not det(A) · I in general.
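This is easy to check numerically; a minimal sketch using A = (2,3; 1,4) (an illustrative choice) that also shows the un-transposed cofactor matrix failing the identity:

```python
def matmul2(X, Y):
    """2x2 matrix product over nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 3], [1, 4]]
det_A = A[0][0] * A[1][1] - A[0][1] * A[1][0]      # ad - bc = 5

# Adjugate: swap diagonal, negate off-diagonal (transpose of the cofactor matrix).
adjA = [[A[1][1], -A[0][1]], [-A[1][0], A[0][0]]]  # [[4, -3], [-1, 2]]
assert matmul2(A, adjA) == [[5, 0], [0, 5]]        # det(A)·I ✓

# The un-transposed cofactor matrix does NOT satisfy the identity:
C = [[A[1][1], -A[1][0]], [-A[0][1], A[0][0]]]     # [[4, -1], [-3, 2]]
assert matmul2(A, C) != [[5, 0], [0, 5]]
```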
The fundamental identity connects the adjugate with the determinant, providing the foundation for the explicit inverse formula.
For any n×n matrix A: A · adj(A) = adj(A) · A = det(A) · I
This holds for all matrices, even singular ones (where det(A) = 0).
Computing the (i,j)-entry of A · adj(A): [A · adj(A)]ᵢⱼ = Σₖ aᵢₖ [adj(A)]ₖⱼ = Σₖ aᵢₖ Aⱼₖ
Case 1: If i = j:
Σₖ aᵢₖ Aᵢₖ is the Laplace expansion along row i = det(A)
Case 2: If i ≠ j:
Σₖ aᵢₖ Aⱼₖ is an alien cofactor sum = 0 (it is the determinant of a matrix with two equal rows)
Conclusion: [A · adj(A)]ᵢⱼ equals det(A) when i = j and 0 otherwise, which is det(A) · I.
The proof for adj(A) · A is similar, using column expansion instead of row expansion. Both products equal det(A) · I.
If det(A) ≠ 0, dividing both sides by det(A): A⁻¹ = adj(A)/det(A)
Find A⁻¹ for A = (3,1; 2,4).
Step 1: det(A) = 3(4) - 1(2) = 10
Step 2: adj(A) = (4,-1; -2,3) (swap diagonal, negate off-diagonal)
Step 3: A⁻¹ = adj(A)/det(A) = (1/10)(4,-1; -2,3)
When det(A) = 0: A is singular, A⁻¹ does not exist, and the identity reduces to A · adj(A) = 0.
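This inversion can be checked with exact rational arithmetic; a minimal sketch using Python's `fractions`:

```python
from fractions import Fraction

A = [[3, 1], [2, 4]]
det_A = A[0][0] * A[1][1] - A[0][1] * A[1][0]         # 10
adjA = [[A[1][1], -A[0][1]], [-A[1][0], A[0][0]]]     # [[4, -1], [-2, 3]]

# A⁻¹ = adj(A)/det(A), kept exact with Fractions
A_inv = [[Fraction(entry, det_A) for entry in row] for row in adjA]

# Verify A · A⁻¹ = I exactly (no floating-point error).
prod = [[sum(A[i][k] * A_inv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
assert prod == [[1, 0], [0, 1]]
```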
The adjugate satisfies many elegant algebraic properties that parallel those of the inverse.
For an n×n matrix A: det(adj(A)) = det(A)^{n-1}
Take determinants of both sides of A · adj(A) = det(A) · I: det(A) · det(adj(A)) = det(det(A) · I) = det(A)ⁿ
If det(A) ≠ 0, divide by det(A) to get det(adj(A)) = det(A)^{n-1}.
The formula extends to singular matrices by continuity.
Like the inverse, the adjugate reverses the order of products: adj(AB) = adj(B) · adj(A)
For invertible A, B: adj(AB) = det(AB) · (AB)⁻¹ = det(A) det(B) · B⁻¹A⁻¹.
Meanwhile, adj(B) · adj(A) = det(B) B⁻¹ · det(A) A⁻¹ = det(A) det(B) · B⁻¹A⁻¹, which equals the same.
For scalar c: adj(cA) = c^{n-1} adj(A)
Each cofactor is an (n-1)×(n-1) determinant, so each scales by .
For invertible A: adj(adj(A)) = det(A)^{n-2} · A
For a 3×3 matrix with det(A) = 2:
adj(adj(A)) = 2^{3-2} · A = 2 · A
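All four properties can be spot-checked numerically; a sketch using NumPy and a small cofactor-based `adjugate` helper (an assumed name, not a library function):

```python
import numpy as np

def adjugate(A):
    """Adjugate via cofactors: [adj(A)]_{ij} = (-1)^{i+j} · det(A with row j, col i removed)."""
    n = A.shape[0]
    adj = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, j, axis=0), i, axis=1)
            adj[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return adj

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, (3, 3)).astype(float)
B = rng.integers(-3, 4, (3, 3)).astype(float)
n, c = 3, 2.0
d = np.linalg.det(A)

assert np.isclose(np.linalg.det(adjugate(A)), d ** (n - 1))      # det(adj A) = det(A)^{n-1}
assert np.allclose(adjugate(c * A), c ** (n - 1) * adjugate(A))  # adj(cA) = c^{n-1} adj(A)
assert np.allclose(adjugate(A @ B), adjugate(B) @ adjugate(A))   # adj(AB) = adj(B) adj(A)
assert np.allclose(adjugate(adjugate(A)), d ** (n - 2) * A)      # adj(adj A) = det(A)^{n-2} A
print("all adjugate identities verified")
```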
Computing the adjugate requires finding all n² cofactors and then transposing. Here is the step-by-step process.
Compute adj(A) for A = (1,2,3; 0,1,4; 5,6,0).
Step 1: Compute all 9 cofactors:
A₁₁ = +(1·0 - 4·6) = -24
A₁₂ = -(0·0 - 4·5) = 20
A₁₃ = +(0·6 - 1·5) = -5
A₂₁ = -(2·0 - 3·6) = 18
A₂₂ = +(1·0 - 3·5) = -15
A₂₃ = -(1·6 - 2·5) = 4
A₃₁ = +(2·4 - 3·1) = 5
A₃₂ = -(1·4 - 3·0) = -4
A₃₃ = +(1·1 - 2·0) = 1
Step 2: Form cofactor matrix: C = (-24,20,-5; 18,-15,4; 5,-4,1)
Step 3: Transpose: adj(A) = Cᵀ = (-24,18,5; 20,-15,-4; -5,4,1)
Verify: det(A) = 1(0-24) - 2(0-20) + 3(0-5) = -24 + 40 - 15 = 1
Check A · adj(A) = det(A) · I = 1 · I = I ✓
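The three steps above can be run exactly in integer arithmetic; a minimal sketch:

```python
def cofactor_matrix(A):
    """All n² cofactors A_ij = (-1)^{i+j} · det(minor_ij); minors are 2×2 here."""
    def det2(m):
        return m[0][0] * m[1][1] - m[0][1] * m[1][0]
    n = len(A)
    return [[(-1) ** (i + j) * det2([[A[r][c] for c in range(n) if c != j]
                                     for r in range(n) if r != i])
             for j in range(n)] for i in range(n)]

A = [[1, 2, 3], [0, 1, 4], [5, 6, 0]]
C = cofactor_matrix(A)
adjA = [list(row) for row in zip(*C)]   # transpose of the cofactor matrix
print(adjA)                             # [[-24, 18, 5], [20, -15, -4], [-5, 4, 1]]

# det(A) = 1, so A · adj(A) should be the identity.
prod = [[sum(A[i][k] * adjA[k][j] for k in range(3)) for j in range(3)]
        for i in range(3)]
assert prod == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```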
Computing adj(A) requires: n² cofactors, each an (n-1)×(n-1) determinant, followed by a transpose.
adj(A) is the transpose of the cofactor matrix, not the cofactor matrix itself! [adj(A)]ᵢⱼ = Aⱼᵢ
In modern usage, "adjoint" often means conjugate transpose (A*). Use "adjugate" or "classical adjoint" for the cofactor transpose.
Remember: Aᵢⱼ = (-1)^{i+j} Mᵢⱼ. The checkerboard pattern starts with + at (1,1).
Like the inverse, adjugate reverses order: adj(AB) = adj(B) · adj(A)
Adjugate is O(n·n!) vs O(n³) for row reduction. For n > 4, always use row reduction for numerical work.
Despite computational limitations, the adjugate has important theoretical and practical applications.
Cramer's rule can be derived from: x = A⁻¹b = adj(A) · b / det(A)
The i-th component involves the i-th row of adj(A), whose entries Aⱼᵢ are the cofactors from column i of A.
The adjugate formula preserves polynomial structure:
Find the inverse of A = (t,1; 1,t) for t ≠ ±1.
det(A) = t² - 1 = (t-1)(t+1)
adj(A) = (t,-1; -1,t) (swap diagonal, negate off-diagonal), so A⁻¹ = (1/(t² - 1))(t,-1; -1,t)
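SymPy's built-in `Matrix.adjugate()` handles the symbolic case directly; a minimal check of this example:

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[t, 1], [1, t]])

detA = A.det()                 # t**2 - 1
adjA = A.adjugate()            # Matrix([[t, -1], [-1, t]])
A_inv = adjA / detA            # valid for t ≠ ±1

# A · A⁻¹ simplifies to the identity matrix.
assert sp.simplify(A * A_inv - sp.eye(2)) == sp.zeros(2, 2)
print(adjA)
```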
For the characteristic matrix A - λI: (A - λI) · adj(A - λI) = det(A - λI) · I = p(λ) · I
This identity is used in proving the Cayley-Hamilton theorem.
For a singular matrix A (det(A) = 0): A · adj(A) = 0, so every column of adj(A) lies in the null space of A.
| Formula | Note |
|---|---|
| adj(A) = Cᵀ | transpose of cofactor matrix: [adj(A)]ᵢⱼ = Aⱼᵢ |
| A · adj(A) = det(A) · I | works for all matrices |
| A⁻¹ = adj(A)/det(A) | when det(A) ≠ 0 |
| det(adj(A)) = det(A)^{n-1} | power of n-1, not n |
Problem 1
Find adj(A) for A = (2,3; 1,4) and verify A · adj(A) = det(A) · I.
Problem 2
If det(A) = 5 for a 4×4 matrix, find det(adj(A)).
Problem 3
Prove: adj(2A) = 2^{n-1} adj(A) for an n×n matrix.
Problem 4
Find adj(adj(A)) for a 2×2 matrix A with det(A) = 3.
Solution 1
adj(A) = (4,-3; -1,2) (swap diagonal, negate off-diagonal)
det(A) = 8-3 = 5
A·adj(A) = (2,3; 1,4)(4,-3; -1,2) = (5,0; 0,5) = 5·I ✓
Solution 2
det(adj(A)) = det(A)^{n-1} = 5^{4-1} = 5³ = 125
Solution 3
Each cofactor of 2A is an (n-1)×(n-1) determinant with entries scaled by 2.
So each cofactor scales by 2^{n-1}, giving adj(2A) = 2^{n-1} adj(A).
Solution 4
det(A) = 3, n = 2
adj(adj(A)) = det(A)^{n-2} · A = 3^0 · A = 1 · A = A
Arthur Cayley (1821-1895): English mathematician who pioneered matrix theory. He developed the adjugate matrix concept as part of his work on matrix algebra and the theory of invariants. His 1858 paper "A Memoir on the Theory of Matrices" established the foundations.
James Joseph Sylvester (1814-1897): Cayley's colleague who coined many matrix terms. Together they developed much of 19th-century matrix algebra, including determinant theory.
Terminology Evolution: The term "adjoint" was historically used for this matrix. As functional analysis developed in the 20th century, "adjoint" acquired new meanings (conjugate transpose, Hermitian adjoint), leading to adoption of "adjugate" to avoid confusion.
Modern Usage: Today, most advanced texts use "adjugate" for the cofactor transpose, reserving "adjoint" for the conjugate transpose in complex vector spaces. Some older texts still use "classical adjoint" as a compromise.
adj(I) = I, since all cofactors of I are 0 except the diagonal ones, which are 1.
For diagonal D = diag(d₁, d₂, ..., dₙ): adj(D) = diag(det(D)/d₁, ..., det(D)/dₙ), where each diagonal entry is the product of the other dⱼ.
For D = diag(2, 3, 5):
det(D) = 2 · 3 · 5 = 30
adj(D) = diag(30/2, 30/3, 30/5) = diag(15, 10, 6)
For upper (or lower) triangular A, adj(A) is also upper (or lower) triangular.
For orthogonal Q (where Q⁻¹ = Qᵀ): adj(Q) = det(Q) · Q⁻¹ = det(Q) · Qᵀ = ±Qᵀ
Since det(Q) = ±1 for orthogonal matrices.
For the 2D rotation R(θ) = (cos θ, -sin θ; sin θ, cos θ):
det(R) = cos²θ + sin²θ = 1
adj(R) = det(R) · Rᵀ = 1 · Rᵀ = R⁻¹ = R(-θ)
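A numerical check of the rotation example with NumPy (θ = 0.7 is an arbitrary choice):

```python
import numpy as np

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# 2x2 adjugate: swap diagonal, negate off-diagonal.
adjR = np.array([[R[1, 1], -R[0, 1]],
                 [-R[1, 0], R[0, 0]]])

assert np.isclose(np.linalg.det(R), 1.0)   # rotations have det 1
assert np.allclose(adjR, R.T)              # adj(R) = det(R)·Rᵀ = Rᵀ
assert np.allclose(R @ adjR, np.eye(2))    # so adj(R) = R⁻¹ = R(-θ)
```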
For block diagonal M = (A, 0; 0, B): adj(M) = (det(B) · adj(A), 0; 0, det(A) · adj(B))
The adjugate appears in the proof of the Cayley-Hamilton theorem. For characteristic polynomial p(λ) = det(A - λI): (A - λI) · adj(A - λI) = p(λ) · I
Comparing coefficients of λ leads to p(A) = 0.
Outline: The entries of adj(A - λI) are polynomials in λ of degree at most n-1.
Write adj(A - λI) = B₀ + B₁λ + ... + B_{n-1}λ^{n-1} for some matrices Bₖ.
Expand (A - λI)·adj(A - λI) = p(λ)·I and compare powers of λ.
Multiply the k-th equation by A^k and sum to get p(A) = 0.
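For a 2×2 matrix the conclusion p(A) = 0 can be verified directly, since p(λ) = λ² - tr(A)·λ + det(A); a sketch with an arbitrary integer matrix:

```python
# Cayley-Hamilton for 2×2: A² - tr(A)·A + det(A)·I = 0.
A = [[2, 1], [3, 4]]                         # illustrative choice
tr = A[0][0] + A[1][1]                       # 6
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]  # 5

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A2 = matmul(A, A)
pA = [[A2[i][j] - tr * A[i][j] + det * (1 if i == j else 0)
       for j in range(2)] for i in range(2)]
assert pA == [[0, 0], [0, 0]]
print("p(A) = 0 verified")
```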
The solution to Ax = b can be written: x = A⁻¹b = adj(A) · b / det(A)
The i-th component is: xᵢ = (1/det(A)) Σⱼ Aⱼᵢ bⱼ
This sum equals det(Aᵢ) where Aᵢ has column i replaced by b, giving Cramer's rule.
For A = (a,b; c,d) and b = (e, f):
So x = (de-bf)/(ad-bc) = det(e,b; f,d)/det(A) ✓
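A sketch of this 2×2 case with concrete numbers (A = (3,1; 2,4) and b = (5, 6) are illustrative choices), solved via x = adj(A)·b/det(A) and checked against Cramer's rule:

```python
from fractions import Fraction

a, b_, c, d = 3, 1, 2, 4          # A = [[3, 1], [2, 4]]
e, f = 5, 6                       # right-hand side b = (5, 6)

detA = a * d - b_ * c             # 10
# adj(A) = [[d, -b], [-c, a]]; components of adj(A)·b / det(A):
x = Fraction(d * e - b_ * f, detA)
y = Fraction(-c * e + a * f, detA)

# Cramer: x = det([[e, b], [f, d]]) / det(A) — same cofactor combination.
assert x == Fraction(e * d - b_ * f, detA)
# The solution actually satisfies Ax = b:
assert a * x + b_ * y == e and c * x + d * y == f
```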
Problem 5
If A is orthogonal with det(A) = 1, prove adj(A) = Aᵀ.
Hint: Use adj(A) = det(A)·A⁻¹ and the fact that A⁻¹ = Aᵀ for orthogonal matrices.
Problem 6
For nilpotent matrix N (where N^k = 0 for some k), what is adj(I - N)?
Hint: Use the geometric series (I - N)⁻¹ = I + N + N² + ...
Problem 7 (Challenge)
Prove: For rank n-1 matrix A, rank(adj(A)) = 1 and adj(A) = uvᵀ for some vectors u, v.
Problem 8
Compute adj(A) for a 3×3 Vandermonde matrix A = (1, x₁, x₁²; 1, x₂, x₂²; 1, x₃, x₃²).
For an n×n matrix A: rank(adj(A)) = n if rank(A) = n, 1 if rank(A) = n-1, and 0 if rank(A) ≤ n-2.
When rank(A) = n-1, some (n-1)×(n-1) minor is nonzero, so adj(A) ≠ 0; since A · adj(A) = 0 and the null space of A is one-dimensional, rank(adj(A)) = 1.
When rank(A) < n-1, all (n-1)×(n-1) minors are zero, so adj(A) = 0.
For a linear map T: ℝⁿ → ℝⁿ with matrix A, the adjugate is (up to standard identifications) the matrix of the induced map on the exterior power Λⁿ⁻¹ℝⁿ.
For A = (1,2; 2,4) (rank 1):
det(A) = 4 - 4 = 0
adj(A) = (4, -2; -2, 1) ≠ 0
Note: rank(adj(A)) = 1 (both rows are multiples of (2, -1))
Verify: A · adj(A) = (1,2; 2,4)(4,-2; -2,1) = (0,0; 0,0) ✓
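The rank-1 example can be checked directly in a few lines:

```python
A = [[1, 2], [2, 4]]                                 # rank 1, det = 0
adjA = [[A[1][1], -A[0][1]], [-A[1][0], A[0][0]]]    # [[4, -2], [-2, 1]]

# A · adj(A) = det(A)·I = 0 even though adj(A) ≠ 0.
prod = [[sum(A[i][k] * adjA[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
assert prod == [[0, 0], [0, 0]]

# Each column of adj(A) is killed by A, i.e. lies in the null space:
for col in ([4, -2], [-2, 1]):
    assert [A[0][0] * col[0] + A[0][1] * col[1],
            A[1][0] * col[0] + A[1][1] * col[1]] == [0, 0]
```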
If B = P⁻¹AP (A and B are similar), then: adj(B) = P⁻¹ adj(A) P
In advanced linear algebra, the adjugate relates to the exterior (wedge) product: up to transpose and sign conventions, adj(A) represents the induced map Λⁿ⁻¹A on the (n-1)-st exterior power.
Challenge 1
Prove that for any matrix A: A · adj(A) = adj(A) · A (both products equal det(A)·I).
Challenge 2
For 3×3 matrix A with det(A) = 2, find det(adj(adj(adj(A)))).
Hint: Use det(adj(B)) = det(B)^{n-1} iteratively.
Challenge 3
Show that if A² = A (idempotent), then adj(A) is also related to A by a simple formula.
Challenge 4
Prove: tr(adj(A)) equals the sum of (n-1)×(n-1) principal minors of A.
Input: n×n matrix A
Output: adj(A)
1. Initialize n×n matrix C
2. FOR i = 1 to n:
FOR j = 1 to n:
M = A with row i, col j deleted
C[i,j] = (-1)^{i+j} · det(M)
3. RETURN Cᵀ
Time complexity: O(n² · T(n-1)) where T(k) is time to compute k×k determinant
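A direct Python translation of this pseudocode (determinants by Laplace expansion, so exponential time — suitable only for small matrices):

```python
def minor(M, i, j):
    """M with row i and column j deleted."""
    return [[M[r][c] for c in range(len(M)) if c != j]
            for r in range(len(M)) if r != i]

def det(M):
    """Determinant by Laplace expansion along the first row — O(n!)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det(minor(M, 0, j)) for j in range(n))

def adjugate(M):
    """Cofactor matrix C[i][j] = (-1)^{i+j}·det(minor), then transpose."""
    n = len(M)
    C = [[(-1) ** (i + j) * det(minor(M, i, j)) for j in range(n)]
         for i in range(n)]
    return [list(row) for row in zip(*C)]   # Cᵀ

A = [[1, 2, 3], [0, 1, 4], [5, 6, 0]]
print(adjugate(A))   # [[-24, 18, 5], [20, -15, -4], [-5, 4, 1]]
```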
With adjugate matrices mastered, you're ready for eigenvalue analysis.
This module covered the adjugate matrix, the transpose of the cofactor matrix. The fundamental identity A · adj(A) = det(A) · I leads to the explicit inverse formula A⁻¹ = adj(A)/det(A).
| Formula | Note |
|---|---|
| adj(a,b; c,d) = (d,-b; -c,a) | det(A) = ad - bc |
| adj(I) = I | all cofactors are identity cofactors |
| adj(diag(d₁,...,dₙ)) = diag(det/d₁,...,det/dₙ) | each diagonal scaled by det/entry |
| adj(Q) = det(Q) · Qᵀ = ±Qᵀ | since det(Q) = ±1 |
| adj(cA) = c^{n-1} adj(A) | power is n-1, not n |
| adj(AB) = adj(B) · adj(A) | order reverses like inverse |
| adj(Aᵀ) = (adj(A))ᵀ | adjugate and transpose commute |
The adjugate bridges determinant theory to eigenvalue analysis.
| Method | Complexity | Best For | Limitations |
|---|---|---|---|
| Adjugate Formula | O(n·n!) | Symbolic, small matrices | Exponential growth |
| Row Reduction | O(n³) | Numerical computation | May lose exact values |
| LU Decomposition | O(n³) | Multiple systems, speed | Requires pivoting |
| Cayley-Hamilton | O(n³) | Theoretical analysis | Requires char. poly |
Find A⁻¹ for A = (1,2,0; 0,1,1; 2,0,1).
Step 1: Compute determinant
det(A) = 1(1·1 - 1·0) - 2(0·1 - 1·2) + 0 = 1 + 4 = 5
Step 2: Compute cofactors
A₁₁ = +(1-0) = 1
A₁₂ = -(0-2) = 2
A₁₃ = +(0-2) = -2
A₂₁ = -(2-0) = -2
A₂₂ = +(1-0) = 1
A₂₃ = -(0-4) = 4
A₃₁ = +(2-0) = 2
A₃₂ = -(1-0) = -1
A₃₃ = +(1-0) = 1
Step 3: Form adj(A) = Cᵀ = (1,-2,2; 2,1,-1; -2,4,1)
Step 4: Compute inverse: A⁻¹ = (1/5)(1,-2,2; 2,1,-1; -2,4,1)
Given det(A) = 3 for a 4×4 matrix, find det(adj(2A)).
Solution:
Step 1: adj(2A) = 2^{4-1} adj(A) = 8·adj(A)
Step 2: det(adj(2A)) = det(8·adj(A)) = 8^4 · det(adj(A))
Step 3: det(adj(A)) = det(A)^{4-1} = 3³ = 27
Step 4: det(adj(2A)) = 4096 · 27 = 110,592
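The four steps reduce to pure integer arithmetic, using det(cB) = cⁿ·det(B) for an n×n matrix B:

```python
# Steps from the worked example, as exact integer arithmetic.
n, detA, c = 4, 3, 2

det_adj_A  = detA ** (n - 1)                    # det(adj A) = det(A)^{n-1} = 27
# adj(cA) = c^{n-1}·adj(A), and det of a scalar multiple picks up (c^{n-1})^n:
det_adj_cA = (c ** (n - 1)) ** n * det_adj_A    # 2^{n(n-1)} · det(adj A)

assert det_adj_cA == 4096 * 27 == 110592
```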
The adjugate gives explicit formulas (not algorithms), making it invaluable for proofs and symbolic manipulation.
O(n!) complexity makes adjugate impractical for numerical work with n > 4. Use row reduction instead.
Even when A is singular, adj(A) exists. The columns of adj(A) lie in the null space of A.
Adjugate connects inverse, Cramer's rule, and Cayley-Hamilton - a central concept in matrix theory.
The adjugate (classical adjoint) is the transpose of the cofactor matrix. In modern linear algebra, 'adjoint' usually means the conjugate transpose (A* or A†), so 'adjugate' is used to avoid confusion.
It gives an explicit closed-form formula for the inverse: A^{-1} = adj(A)/det(A). This is valuable for theoretical proofs, symbolic computation, and deriving Cramer's rule.
Step 1: Compute all 9 cofactors A_{ij} (remember the checkerboard signs). Step 2: Arrange them in a 3×3 matrix. Step 3: Transpose to get adj(A). Remember: [adj(A)]_{ij} = A_{ji}.
Then A is singular and A^{-1} doesn't exist. However, adj(A) still exists! The identity becomes A·adj(A) = 0·I = 0. This means columns of adj(A) are in the null space of A.
The transpose is needed to make the identity A·adj(A) = det(A)·I work. Without transposing, the (i,j)-entry of the product would involve row i of A with cofactors of row j - not what we want.
No! For orthogonal Q, we have Q^{-1} = Q^T. The adjugate is adj(Q) = det(Q)·Q^T = ±Q^T (since det(Q) = ±1).
Cramer's rule x_i = det(A_i)/det(A) can be written as x = A^{-1}b = adj(A)b/det(A). The columns of adj(A) contain exactly the information needed for Cramer's formula.
For specific applications (like finding one entry of A^{-1}), you might only need certain cofactors. But for the full adj(A), all n² cofactors are required.
Computing adj(A) directly requires n² cofactors, each an (n-1)×(n-1) determinant. This is O(n² · (n-1)!) ≈ O(n·n!). Row reduction for A^{-1} is only O(n³), vastly faster for large n.
No! Like the inverse, adjugate reverses order: adj(AB) = adj(B)·adj(A). This can be proved from the definition using (AB)^{-1} = B^{-1}A^{-1}.