The characteristic polynomial encodes all eigenvalues and their multiplicities in a single polynomial expression.
This fundamental object connects eigenvalue theory to polynomial algebra, providing computational methods for finding eigenvalues and deep theoretical insights about matrix structure.
In the previous section, we saw that eigenvalues are the scalars $\lambda$ for which $A - \lambda I$ is singular. This condition defines a polynomial equation whose roots are precisely the eigenvalues.
For $A \in M_n(\mathbb{F})$, the characteristic polynomial is:

$$p_A(\lambda) = \det(\lambda I - A)$$

This is a monic polynomial of degree $n$ in $\lambda$.
Some texts define $p_A(\lambda) = \det(A - \lambda I)$ instead. The two conventions differ by a factor of $(-1)^n$, but have exactly the same roots. We use $\det(\lambda I - A)$ to ensure the polynomial is monic (leading coefficient 1).
For a linear operator $T$ on a finite-dimensional space $V$, the characteristic polynomial is defined as the characteristic polynomial of any matrix representation:

$$p_T(\lambda) = p_{[T]_{\mathcal{B}}}(\lambda)$$

where $[T]_{\mathcal{B}}$ is the matrix of $T$ in any basis $\mathcal{B}$. This is well-defined since similar matrices have the same characteristic polynomial.
$\lambda_0$ is an eigenvalue of $A$ if and only if $p_A(\lambda_0) = 0$.
$\lambda_0$ is an eigenvalue ⟺ $Av = \lambda_0 v$ for some $v \neq 0$
⟺ $(A - \lambda_0 I)v = 0$ has a non-trivial solution
⟺ $A - \lambda_0 I$ is singular
⟺ $\det(A - \lambda_0 I) = 0$
⟺ $p_A(\lambda_0) = 0$
For a $2 \times 2$ matrix $A$ with $\operatorname{tr}(A) = 7$ and $\det(A) = 10$:

$$p_A(\lambda) = \lambda^2 - 7\lambda + 10 = (\lambda - 2)(\lambda - 5)$$

Eigenvalues: $\lambda_1 = 2$, $\lambda_2 = 5$.
Check: tr(A) = 7 = 2 + 5 ✓, det(A) = 10 = 2 × 5 ✓
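These checks are easy to reproduce numerically; a small NumPy sketch, where the matrix entries are an illustrative assumption (any $2 \times 2$ matrix with trace 7 and determinant 10 behaves the same):

```python
import numpy as np

# Illustrative 2x2 matrix with trace 7 and determinant 10,
# so its characteristic polynomial is l^2 - 7l + 10 = (l - 2)(l - 5).
A = np.array([[3.0, 1.0],
              [2.0, 4.0]])

eigs = np.sort(np.linalg.eigvals(A).real)
print(eigs)  # ~ [2. 5.]

# Vieta checks: sum of eigenvalues = trace, product = determinant
assert np.isclose(eigs.sum(), np.trace(A))        # 7
assert np.isclose(eigs.prod(), np.linalg.det(A))  # 10
```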
For an upper triangular $3 \times 3$ matrix whose diagonal entries all equal $\mu$:

$$p_A(\lambda) = (\lambda - \mu)^3$$

Single eigenvalue $\mu$ with algebraic multiplicity 3.
When an eigenvalue appears as a repeated root of the characteristic polynomial, we need to distinguish between how many times it appears as a root versus how many independent eigenvectors it has.
The algebraic multiplicity of eigenvalue $\lambda_0$, denoted $m_a(\lambda_0)$ or $\mu_A(\lambda_0)$, is the largest integer $k$ such that $(\lambda - \lambda_0)^k$ divides $p_A(\lambda)$.
Equivalently, it is the exponent of $(\lambda - \lambda_0)$ in the factored form of $p_A(\lambda)$.
The geometric multiplicity of eigenvalue $\lambda_0$, denoted $m_g(\lambda_0)$ or $\gamma_A(\lambda_0)$, is:

$$m_g(\lambda_0) = \dim \ker(A - \lambda_0 I)$$

This is the number of linearly independent eigenvectors for $\lambda_0$.
For every eigenvalue $\lambda_0$:

$$1 \le m_g(\lambda_0) \le m_a(\lambda_0)$$
Lower bound ($1 \le m_g$): If $\lambda_0$ is an eigenvalue, there exists at least one eigenvector, so $\dim \ker(A - \lambda_0 I) \ge 1$.
Upper bound ($m_g \le m_a$): Let $g = m_g(\lambda_0)$. Choose a basis of the eigenspace $E_{\lambda_0}$ and extend it to a basis of the whole space.
In this basis, $A$ has the form:

$$\begin{pmatrix} \lambda_0 I_g & B \\ 0 & C \end{pmatrix}$$

Then $p_A(\lambda) = (\lambda - \lambda_0)^g \, p_C(\lambda)$, so $m_a(\lambda_0) \ge g$.
For $A = \begin{pmatrix} c & 0 \\ 0 & c \end{pmatrix}$:
$p_A(\lambda) = (\lambda - c)^2$, so $m_a(c) = 2$
$\ker(A - cI)$ is the whole plane, so $m_g(c) = 2$
Here $m_g = m_a$, and the matrix is diagonalizable (already diagonal!).
For $A = \begin{pmatrix} c & 1 \\ 0 & c \end{pmatrix}$:
$p_A(\lambda) = (\lambda - c)^2$, so $m_a(c) = 2$
$\ker(A - cI) = \operatorname{span}\{e_1\}$, so $m_g(c) = 1$
Here $m_g = 1 < 2 = m_a$. This matrix is defective and cannot be diagonalized.
A matrix is diagonalizable if and only if $m_g(\lambda) = m_a(\lambda)$ for every eigenvalue $\lambda$. When $m_g < m_a$, there aren't enough eigenvectors to form a basis.
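Both multiplicities can be estimated numerically; a minimal sketch, where the tolerance and the example matrices are illustrative assumptions:

```python
import numpy as np

def multiplicities(A, lam, tol=1e-6):
    """Numerically estimate (algebraic, geometric) multiplicity of lam."""
    eigs = np.linalg.eigvals(A)
    alg = int(np.sum(np.abs(eigs - lam) < tol))   # root count of the char poly
    geo = A.shape[0] - np.linalg.matrix_rank(A - lam * np.eye(A.shape[0]))
    return alg, geo                                # geo = dim ker(A - lam I)

J = np.array([[2.0, 1.0],
              [0.0, 2.0]])      # defective: m_g = 1 < 2 = m_a
D = 2.0 * np.eye(2)             # diagonal: m_g = m_a = 2
print(multiplicities(J, 2.0))   # (2, 1)
print(multiplicities(D, 2.0))   # (2, 2)
```

The rank computation is the numerical counterpart of finding $\dim \ker(A - \lambda_0 I)$ by row reduction.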
The sum of all algebraic multiplicities equals $n$ (the size of the matrix), counting eigenvalues over $\mathbb{C}$:

$$\sum_i m_a(\lambda_i) = n$$

The sum of all geometric multiplicities is at most $n$:

$$\sum_i m_g(\lambda_i) \le n$$
Equality holds if and only if the matrix is diagonalizable.
The coefficients of the characteristic polynomial have beautiful interpretations in terms of the matrix entries and eigenvalues.
For $A \in M_n$, write:

$$p_A(\lambda) = \lambda^n + c_{n-1}\lambda^{n-1} + \cdots + c_1\lambda + c_0$$

Then:

$$c_{n-1} = -\operatorname{tr}(A), \qquad c_0 = (-1)^n \det(A)$$
If $\lambda_1, \dots, \lambda_n$ are the eigenvalues (with multiplicity):

$$\operatorname{tr}(A) = \lambda_1 + \cdots + \lambda_n, \qquad \det(A) = \lambda_1 \cdots \lambda_n$$

More generally: the coefficient of $\lambda^{n-k}$ is $(-1)^k e_k(\lambda_1, \dots, \lambda_n)$, where $e_k$ is the $k$-th elementary symmetric polynomial of the eigenvalues.
Since $p_A(\lambda) = \prod_{i=1}^n (\lambda - \lambda_i)$, expanding and comparing coefficients:
Coefficient of $\lambda^{n-1}$: $-(\lambda_1 + \cdots + \lambda_n) = c_{n-1} = -\operatorname{tr}(A)$
Constant term: $(-1)^n \lambda_1 \cdots \lambda_n = c_0 = (-1)^n \det(A)$
Given $p_A(\lambda) = \lambda^3 - 6\lambda^2 + 11\lambda - 6$, find the eigenvalues, trace, and determinant.
Solution: Factor:

$$p_A(\lambda) = (\lambda - 1)(\lambda - 2)(\lambda - 3)$$

Eigenvalues: 1, 2, 3
tr(A) = 1 + 2 + 3 = 6 ✓ (check: the coefficient of $\lambda^2$ is $-6$)
det(A) = 1 × 2 × 3 = 6 ✓ (check: the constant term is $-6 = (-1)^3 \cdot 6$)
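The same worked example, checked numerically with NumPy's polynomial root finder:

```python
import numpy as np

# p(l) = l^3 - 6 l^2 + 11 l - 6, coefficients in descending powers
p = [1, -6, 11, -6]
roots = np.sort(np.roots(p).real)
print(roots)  # ~ [1. 2. 3.]

# Vieta: trace = -(l^2 coefficient) = 6, det = (-1)^3 * (constant term) = 6
assert np.isclose(roots.sum(), 6.0)
assert np.isclose(roots.prod(), 6.0)
```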
A 3×3 matrix has eigenvalues 2, 5, and $\lambda_3$. If tr(A) = 10, find $\lambda_3$.
Solution: $2 + 5 + \lambda_3 = 10$, so $\lambda_3 = 3$.
These formulas are powerful for sanity-checking computed eigenvalues and for recovering an unknown eigenvalue from the trace or determinant.
For a triangular (upper or lower) matrix $T$ with diagonal entries $t_{11}, \dots, t_{nn}$:

$$p_T(\lambda) = (\lambda - t_{11})(\lambda - t_{22}) \cdots (\lambda - t_{nn})$$

The eigenvalues are exactly the diagonal entries.
For upper triangular $T$, $\lambda I - T$ is also upper triangular. The determinant of a triangular matrix is the product of its diagonal entries:

$$\det(\lambda I - T) = \prod_{i=1}^n (\lambda - t_{ii})$$
For block diagonal $A = \begin{pmatrix} B & 0 \\ 0 & C \end{pmatrix}$:

$$p_A(\lambda) = p_B(\lambda)\, p_C(\lambda)$$

so the eigenvalues of $A$ are those of $B$ together with those of $C$.
If $B = P^{-1}AP$, then $p_B(\lambda) = p_A(\lambda)$.
Trace, determinant, and eigenvalues are similarity invariants—they are the same for all similar matrices.
Every polynomial of degree $n \ge 1$ with complex coefficients has exactly $n$ roots in $\mathbb{C}$ (counting multiplicity).
Every $n \times n$ complex matrix has exactly $n$ eigenvalues (counting algebraic multiplicity).
For a real matrix, complex eigenvalues occur in conjugate pairs: if $\lambda$ is an eigenvalue, so is $\bar{\lambda}$.
If $p_A(\lambda) = 0$ and $A$ is real, then $p_A(\bar{\lambda}) = \overline{p_A(\lambda)} = 0$.
The rotation matrix $R = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$ (90° rotation):

$$p_R(\lambda) = \lambda^2 + 1$$

Eigenvalues: $\lambda = \pm i$ (complex conjugate pair, no real eigenvalues).
When working over $\mathbb{R}$: the characteristic polynomial factors into linear factors and irreducible quadratic factors, so a real matrix may have fewer than $n$ real eigenvalues.
When working over $\mathbb{C}$: the characteristic polynomial always factors completely into linear factors.
For a real matrix, each irreducible quadratic factor over $\mathbb{R}$ contributes a conjugate pair of complex eigenvalues over $\mathbb{C}$.
Compute $p_A(\lambda)$ for an upper triangular matrix $A$ with diagonal entries 1, 4, 6.
Method 1 (Direct): Since $A$ is triangular:

$$p_A(\lambda) = (\lambda - 1)(\lambda - 4)(\lambda - 6) = \lambda^3 - 11\lambda^2 + 34\lambda - 24$$

Method 2 (Expansion): Expand $\det(\lambda I - A)$ by cofactors and verify.
Verification: tr(A) = 11 = 1+4+6 ✓, det(A) = 24 = 1×4×6 ✓
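NumPy can carry out Method 2 for us: `np.poly` returns the coefficients of $\det(\lambda I - A)$. The entries above the diagonal here are placeholder assumptions, since they do not affect the eigenvalues of a triangular matrix:

```python
import numpy as np

# Upper triangular with diagonal 1, 4, 6; the entries above the
# diagonal are placeholder values and do not change the eigenvalues.
A = np.array([[1.0, 2.0, 3.0],
              [0.0, 4.0, 5.0],
              [0.0, 0.0, 6.0]])

coeffs = np.poly(A)  # monic coefficients of det(lI - A), descending powers
print(coeffs)        # ~ [1. -11. 34. -24.], i.e. (l-1)(l-4)(l-6)
```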
Computing the characteristic polynomial via determinant expansion means expanding $\det(\lambda I - A)$, typically by cofactor expansion along a row or column with many zeros.
$\det(\lambda I - A)$ vs. $\det(A - \lambda I)$: the two differ by a factor of $(-1)^n$. Both give the same eigenvalues, but keep conventions consistent.
Algebraic (root multiplicity) ≠ geometric (eigenspace dimension) in general. They coincide for every eigenvalue exactly when the matrix is diagonalizable.
"Same characteristic polynomial implies similar" is FALSE! Different, non-similar matrices can have identical characteristic polynomials. The characteristic polynomial does not uniquely determine a matrix.
When computing trace/det from eigenvalues, count repeated eigenvalues multiple times according to their algebraic multiplicity.
For the upper triangular matrix $A = \begin{pmatrix} 2 & 1 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{pmatrix}$, find the eigenvalues and their multiplicities.
Step 1: Characteristic polynomial
Since $A$ is upper triangular:

$$p_A(\lambda) = (\lambda - 2)^2 (\lambda - 3)$$

Step 2: Eigenvalues and algebraic multiplicities
$\lambda = 2$ with $m_a = 2$; $\lambda = 3$ with $m_a = 1$.
Step 3: Geometric multiplicities
For $\lambda = 2$: $A - 2I = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}$ has rank 2, so $m_g = 3 - 2 = 1$.
For $\lambda = 3$: $A - 3I$ has rank 2, so $m_g = 1$.
Conclusion: For $\lambda = 2$: $m_g = 1 < 2 = m_a$, so A is NOT diagonalizable.
A 3×3 matrix has eigenvalues 1, 1, 2. Find the characteristic polynomial.

$$p(\lambda) = (\lambda - 1)^2(\lambda - 2) = \lambda^3 - 4\lambda^2 + 5\lambda - 2$$

Verify: tr(A) = 1+1+2 = 4 ✓ (the coefficient of $\lambda^2$ is $-4$)
det(A) = 1×1×2 = 2 ✓ (the constant term is $-2 = (-1)^3 \cdot 2$)
Let $P$ be a projection matrix ($P^2 = P$) of rank $r$ on $\mathbb{R}^n$.
Eigenvalues satisfy $\lambda^2 = \lambda$, so $\lambda \in \{0, 1\}$.
Since rank(P) = $r$, eigenvalue 1 has algebraic multiplicity $r$.
Since nullity(P) = $n - r$, eigenvalue 0 has algebraic multiplicity $n - r$. Hence $p_P(\lambda) = \lambda^{n-r}(\lambda - 1)^r$.
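A concrete rank-1 projection illustrates this; the direction vector `u` below is an arbitrary illustrative choice:

```python
import numpy as np

# Orthogonal projection onto span{u}: P = u u^T / (u.u), a rank-1 projection in R^3
u = np.array([1.0, 2.0, 2.0])
P = np.outer(u, u) / (u @ u)

assert np.allclose(P @ P, P)   # idempotent: P^2 = P
coeffs = np.poly(P).real       # char poly coefficients of P
print(np.round(coeffs, 6))     # ~ [1. -1. 0. 0.], i.e. l^2 (l - 1)
```

The coefficients match $p_P(\lambda) = \lambda^{n-r}(\lambda - 1)^r$ with $n = 3$, $r = 1$.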
If $A^2 = I$, find the possible eigenvalues of $A$.
Solution: If $\lambda$ is an eigenvalue of $A$ with eigenvector $v$, then:

$$v = A^2 v = \lambda^2 v \implies \lambda^2 = 1$$

Possible eigenvalues: $\lambda = \pm 1$.
The companion matrix of the monic polynomial $p(\lambda) = \lambda^n + c_{n-1}\lambda^{n-1} + \cdots + c_1\lambda + c_0$:

$$C_p = \begin{pmatrix} 0 & 0 & \cdots & 0 & -c_0 \\ 1 & 0 & \cdots & 0 & -c_1 \\ 0 & 1 & \cdots & 0 & -c_2 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & -c_{n-1} \end{pmatrix}$$

The characteristic polynomial of $C_p$ is exactly $p(\lambda)$!
This is a key construction: any monic polynomial is the characteristic polynomial of its companion matrix.
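A sketch of the construction in NumPy; the coefficient ordering taken as input is an implementation choice:

```python
import numpy as np

def companion(tail):
    """Companion matrix of p(l) = l^n + tail[0] l^(n-1) + ... + tail[-1]."""
    c = np.asarray(tail, dtype=float)   # [c_{n-1}, ..., c_1, c_0]
    n = len(c)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)          # ones on the subdiagonal
    C[:, -1] = -c[::-1]                 # last column: -c_0, -c_1, ..., -c_{n-1}
    return C

C = companion([-6.0, 11.0, -6.0])          # p(l) = l^3 - 6l^2 + 11l - 6
print(np.sort(np.linalg.eigvals(C).real))  # ~ [1. 2. 3.]: the roots of p
```

Checking `np.poly(C)` recovers the input coefficients, confirming that the characteristic polynomial of the companion matrix is the given polynomial.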
The minimal polynomial $m_A(\lambda)$ is the monic polynomial of smallest degree such that $m_A(A) = 0$.
The minimal polynomial divides the characteristic polynomial and has the same roots, possibly with smaller multiplicities.
For $A = I_n$:

$$m_A(\lambda) = \lambda - 1, \qquad p_A(\lambda) = (\lambda - 1)^n$$

The minimal polynomial has degree 1 while the characteristic polynomial has degree $n$.
The minimal polynomial is important because its degree gives the shortest polynomial relation satisfied by $A$, and a matrix is diagonalizable exactly when its minimal polynomial has no repeated roots.
The characteristic polynomial roots determine system stability: stable if all roots have negative real part (continuous) or magnitude < 1 (discrete).
Linear recurrences like Fibonacci have solutions determined by roots of a characteristic polynomial.
Solutions to $\dot{x} = Ax$ involve terms $e^{\lambda t}$ for each eigenvalue $\lambda$.
The characteristic polynomial of a graph's adjacency matrix encodes structural properties of the graph; the spectrum of the closely related Laplacian matrix even counts spanning trees (matrix-tree theorem).
The Fibonacci sequence has matrix form:

$$\begin{pmatrix} F_{n+1} \\ F_n \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} F_n \\ F_{n-1} \end{pmatrix}$$

The characteristic polynomial $\lambda^2 - \lambda - 1$ has roots $\varphi = \frac{1+\sqrt{5}}{2}$ and $\psi = \frac{1-\sqrt{5}}{2}$ (golden ratio!).
This gives Binet's formula:

$$F_n = \frac{\varphi^n - \psi^n}{\sqrt{5}}$$

$\varphi \approx 1.618$, $\psi \approx -0.618$
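Binet's formula can be checked directly against integer powers of the Fibonacci matrix:

```python
import numpy as np

PHI = (1 + np.sqrt(5)) / 2   # root of l^2 - l - 1 (golden ratio)
PSI = (1 - np.sqrt(5)) / 2   # the other root

def fib_binet(n):
    """Binet's formula, rounded to the nearest integer."""
    return round((PHI**n - PSI**n) / np.sqrt(5))

Q = np.array([[1, 1], [1, 0]])   # Fibonacci matrix: Q^n[0, 1] = F_n
for n in range(1, 11):
    assert fib_binet(n) == np.linalg.matrix_power(Q, n)[0, 1]
print([fib_binet(n) for n in range(1, 11)])  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```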
Similar matrices have identical characteristic polynomials.
Problem 1
Find the characteristic polynomial of a $2 \times 2$ matrix $A$ in terms of $\operatorname{tr}(A)$ and $\det(A)$.
Answer: $p_A(\lambda) = \lambda^2 - \operatorname{tr}(A)\lambda + \det(A)$
Problem 2
A 4×4 matrix has eigenvalues 1, 1, 2, 2. Find its trace and determinant.
Answer: tr = 6, det = 4
Problem 3
If $p_A(\lambda) = \lambda^4 - 10\lambda^3 + c\lambda^2 - 50\lambda + 24$ with eigenvalues 1, 2, 3, 4, find $c$.
Hint: $c$ = sum of products of pairs = 1·2+1·3+1·4+2·3+2·4+3·4 = 35
Problem 4
For what values of $t$ does $A = \begin{pmatrix} t & 1 \\ 0 & t+2 \end{pmatrix}$ have repeated eigenvalues?
Answer: Never (the discriminant of $p_A$ is $(2t+2)^2 - 4t(t+2) = 4 > 0$ for all $t$)
Problem 5 (Challenge)
Prove that $A$ and $A^T$ have the same characteristic polynomial.
Every matrix satisfies its own characteristic equation:

$$p_A(A) = 0$$
This powerful result will be explored in detail in a later section.
Any power $A^k$ for $k \ge n$ can be expressed as a linear combination of $I, A, A^2, \dots, A^{n-1}$.
For a $2 \times 2$ matrix $A$, express $A^2$ as a linear combination of $A$ and $I$.
By Cayley-Hamilton:

$$A^2 - \operatorname{tr}(A)A + \det(A)I = 0$$

Rearranging:

$$A^2 = \operatorname{tr}(A)A - \det(A)I$$
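A numerical spot-check of this identity; the matrix entries are an arbitrary illustrative choice (any $2 \times 2$ matrix works):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])   # arbitrary 2x2 example

# Cayley-Hamilton for 2x2: A^2 = tr(A) A - det(A) I
lhs = A @ A
rhs = np.trace(A) * A - np.linalg.det(A) * np.eye(2)
print(np.allclose(lhs, rhs))  # True
```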
For block upper triangular $A = \begin{pmatrix} B & D \\ 0 & C \end{pmatrix}$:

$$p_A(\lambda) = p_B(\lambda)\, p_C(\lambda)$$
This theorem is extremely useful for computing characteristic polynomials of large matrices that have block structure.
A population model with three age groups is governed by a Leslie matrix whose entries record fertility and survival rates.
The dominant eigenvalue determines the long-term population growth rate.
A largest real root of approximately 1.1 indicates roughly 10% population growth per generation.
A transition matrix for weather states (sunny, cloudy, rainy) is a stochastic matrix.
For stochastic matrices: $\lambda = 1$ is always an eigenvalue, and every eigenvalue satisfies $|\lambda| \le 1$.
Two masses connected by springs have equations of motion of the form $M\ddot{x} = -Kx$, determined by the mass matrix $M$ and stiffness matrix $K$.
Natural frequencies: $\omega_i = \sqrt{\lambda_i}$, where the $\lambda_i$ are the eigenvalues of $M^{-1}K$.
The Faddeev-LeVerrier algorithm computes the characteristic polynomial coefficients iteratively from traces of matrix products.
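A minimal sketch of the Faddeev-LeVerrier recurrence, assuming the standard formulation $M_k = AM_{k-1} + c_{k-1}I$, $c_k = -\operatorname{tr}(AM_k)/k$:

```python
import numpy as np

def char_poly_coeffs(A):
    """Faddeev-LeVerrier: coefficients [1, c_{n-1}, ..., c_0] of det(lI - A)."""
    n = A.shape[0]
    coeffs = [1.0]
    M = np.zeros_like(A, dtype=float)
    c = 1.0
    for k in range(1, n + 1):
        M = A @ M + c * np.eye(n)   # M_k = A M_{k-1} + c_{k-1} I
        c = -np.trace(A @ M) / k    # next coefficient from a trace
        coeffs.append(c)
    return coeffs

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(char_poly_coeffs(A))  # [1.0, -5.0, -2.0], i.e. l^2 - 5l - 2
```

For this example, $\operatorname{tr}(A) = 5$ and $\det(A) = -2$, matching $c_1 = -5$ and $c_0 = -2$.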
In practice, directly computing the characteristic polynomial and finding its roots is numerically unstable for large matrices. Instead, eigenvalues are computed directly by iterative methods such as the QR algorithm.
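In code this means calling an eigensolver rather than forming polynomial coefficients; e.g. with NumPy, where the symmetric example matrix (eigenvalues 1, 2, 4) is an arbitrary illustrative choice:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# QR-algorithm-based solver: no explicit characteristic polynomial is formed
eigs = np.sort(np.linalg.eigvals(A).real)
print(eigs)  # ~ [1. 2. 4.]
```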
For $p(\lambda) = \lambda^3 - 6\lambda^2 + 11\lambda - 6$:
Possible rational roots: $\pm 1, \pm 2, \pm 3, \pm 6$ (divisors of the constant term)
Testing: $p(1) = 1 - 6 + 11 - 6 = 0$ ✓
Factor: $p(\lambda) = (\lambda - 1)(\lambda^2 - 5\lambda + 6) = (\lambda - 1)(\lambda - 2)(\lambda - 3)$
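The rational root test is easy to script; a pure-Python sketch for a monic cubic (the polynomial here is an illustrative choice):

```python
# Rational root test for the monic cubic p(l) = l^3 - 6 l^2 + 11 l - 6.
def p(x):
    return x**3 - 6 * x**2 + 11 * x - 6

# Leading coefficient is 1, so any rational root must divide the constant term 6.
candidates = [s * d for d in (1, 2, 3, 6) for s in (1, -1)]
roots = sorted(c for c in candidates if p(c) == 0)
print(roots)  # [1, 2, 3]
```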
If $\lambda_1, \dots, \lambda_k$ are distinct eigenvalues:

$$E_{\lambda_1} + E_{\lambda_2} + \cdots + E_{\lambda_k} \text{ is a direct sum}$$

The sum is always direct (no overlap except at 0).
Eigenvectors corresponding to distinct eigenvalues are linearly independent (proved in the previous section). If $v_1 + \cdots + v_k = 0$ with each $v_i \in E_{\lambda_i}$, any nonzero terms would be linearly dependent eigenvectors for distinct eigenvalues, which is impossible. Hence every $v_i = 0$, so the sum is direct.
For distinct eigenvalues $\lambda_1, \dots, \lambda_k$:

$$\dim(E_{\lambda_1} \oplus \cdots \oplus E_{\lambda_k}) = \sum_{i=1}^k \dim E_{\lambda_i} \le n$$
For any matrix $A \in M_n(\mathbb{C})$:

$$\operatorname{tr}(A) = \sum_{i=1}^n \lambda_i, \qquad \det(A) = \prod_{i=1}^n \lambda_i$$

where $\lambda_1, \dots, \lambda_n$ are eigenvalues counted with multiplicity.
If $p_A(\lambda) = \lambda^3 - 2\lambda^2 - 5\lambda + 6$, find $\operatorname{tr}(A)$ and $\det(A)$.
Solution: $\operatorname{tr}(A) = 2$ (the negative of the $\lambda^2$ coefficient) and $\det(A) = (-1)^3 \cdot 6 = -6$.
| Matrix Type | Characteristic Polynomial | Notes |
|---|---|---|
| Identity $I_n$ | $(\lambda - 1)^n$ | Single eigenvalue 1 |
| Scalar $cI_n$ | $(\lambda - c)^n$ | Single eigenvalue $c$ |
| Diagonal $\operatorname{diag}(d_1, \dots, d_n)$ | $\prod_i (\lambda - d_i)$ | Eigenvalues = diagonal |
| Triangular | $\prod_i (\lambda - t_{ii})$ | Same as diagonal case |
| Nilpotent | $\lambda^n$ | Only eigenvalue is 0 |
| Projection | $\lambda^{n-r}(\lambda - 1)^r$ | $r$ = rank(P) |
| 2D Rotation by θ | $\lambda^2 - 2\cos\theta\,\lambda + 1$ | Eigenvalues $e^{\pm i\theta}$ |
| Companion matrix of $p$ | $p(\lambda)$ | Char poly = input poly |
The characteristic polynomial encodes all eigenvalue information. Its roots are the eigenvalues, with algebraic multiplicity given by root multiplicity. Vieta's formulas connect coefficients to trace and determinant. The fundamental theorem of algebra guarantees a full set of $n$ eigenvalues over $\mathbb{C}$.
Cauchy (1826): First studied eigenvalue equations systematically for quadratic forms.
Sylvester (1852): Introduced the term "matrix" and studied determinantal equations.
Cayley (1858): Stated the Cayley-Hamilton theorem: every matrix satisfies its own characteristic equation.
Frobenius (1878): Developed the theory of characteristic and minimal polynomials systematically.
Weierstrass (1868): Introduced elementary divisors, refining the study of characteristic polynomials.
Terminology: "Characteristic" comes from the idea that this polynomial characterizes essential properties of the matrix.
Q1: Can two non-similar matrices have the same characteristic polynomial?
Yes! Example: $\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$ and $\begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$ both have $p(\lambda) = \lambda^2$, but are not similar.
Q2: If all eigenvalues are 0, must the matrix be 0?
No! Nilpotent matrices have all eigenvalues 0 but may be non-zero. Example: $\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$.
Q3: Can eigenvalues be complex for a real matrix?
Yes! But they come in conjugate pairs. Example: $\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$ has eigenvalues $\pm i$.
Q4: Is the characteristic polynomial unique up to scalar multiple?
It's completely unique, not just up to scalar multiple: we define it as a monic polynomial (leading coefficient = 1).
Q5: Does A invertible imply all eigenvalues are non-zero?
Yes! det(A) = product of eigenvalues, so A invertible ⟺ det(A) ≠ 0 ⟺ no eigenvalue is 0.
Challenge 1
Prove that if $A$ is idempotent ($A^2 = A$), then the only possible eigenvalues are 0 and 1.
Challenge 2
If $p_A(\lambda) = \lambda^n + c_{n-1}\lambda^{n-1} + c_{n-2}\lambda^{n-2} + \cdots + c_0$, prove that $c_{n-2}$ equals the sum of all 2×2 principal minors of $A$.
Challenge 3
Let $A$ be a 3×3 matrix with $p_A(\lambda) = (\lambda - 1)^2(\lambda + 2)$. Find $\operatorname{tr}(A)$ and $\det(A)$.
Hint: Eigenvalues are 1, 1, -2. Use $\operatorname{tr}(A) = \sum \lambda_i$ and $\det(A) = \prod \lambda_i$.
Challenge 4
Show that for any monic polynomial $p$ of degree $n$, there exists an $n \times n$ matrix whose characteristic polynomial is $p$.
Hint: Use the companion matrix construction.
Challenge 5
Prove that $p_{AB}(\lambda) = p_{BA}(\lambda)$ for any $n \times n$ matrices $A$ and $B$.
After computing eigenvalues, always verify: their sum equals $\operatorname{tr}(A)$ and their product equals $\det(A)$.
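Those checks are easy to automate; a small helper sketch, where the diagonal test matrix is an illustrative assumption:

```python
import numpy as np

def check_eigenvalues(A, eigs, tol=1e-8):
    """Sanity-check a claimed eigenvalue list against trace and determinant."""
    eigs = np.asarray(eigs, dtype=complex)
    sum_ok = abs(eigs.sum() - np.trace(A)) < tol     # sum should equal tr(A)
    prod_ok = abs(np.prod(eigs) - np.linalg.det(A)) < tol  # product = det(A)
    return bool(sum_ok and prod_ok)

A = np.diag([1.0, 4.0, 6.0])
print(check_eigenvalues(A, [1, 4, 6]))  # True
print(check_eigenvalues(A, [1, 4, 5]))  # False
```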
With the characteristic polynomial understood, you're ready for diagonalization, the subject of the next section.
Quick examples:
- Eigenvalues 2, 4: real and distinct.
- Eigenvalues $\pm bi$: a pure imaginary conjugate pair.
- A rank-1 matrix: eigenvalue 0 has $m_g = n - 1$.
- A $3 \times 3$ cyclic permutation matrix: eigenvalues 1, $\omega$, $\omega^2$ (the cube roots of 1).
2×2 Formula
$p_A(\lambda) = \lambda^2 - \operatorname{tr}(A)\lambda + \det(A)$: "Trace minus lambda, then add det"
Multiplicity Inequality
"Geometric ≤ Algebraic, like g ≤ a alphabetically"
Vieta's Formulas
"Sum = Trace, Product = Determinant"
Triangular Matrices
"Eigenvalues on the diagonal"
In this module, you learned how the characteristic polynomial encodes all eigenvalue information. You can now compute $p_A(\lambda) = \det(\lambda I - A)$, read the trace and determinant off its coefficients, and distinguish algebraic from geometric multiplicity.
Q: Why is it called the "characteristic" polynomial?
It 'characterizes' the matrix's essential spectral properties: its eigenvalues. Similar matrices have the same characteristic polynomial, making it an invariant under similarity transformations.
Q: What if the characteristic polynomial has no real roots?
Then there are no real eigenvalues. Over ℂ, by the Fundamental Theorem of Algebra, it always factors completely into linear terms, giving exactly n eigenvalues (counting multiplicity).
Q: What is the difference between algebraic and geometric multiplicity?
Algebraic = exponent in char poly (how many times λ₀ is a root). Geometric = dim(ker(A - λ₀I)) = number of linearly independent eigenvectors. Always: 1 ≤ geometric ≤ algebraic.
Q: How do I find the eigenvalues of a 3×3 matrix by hand?
Expand det(A - λI) to get a cubic polynomial. Try rational roots (±divisors of constant/leading coeff), look for special structure, or factor by grouping. Numerical methods work for harder cases.
Q: Should I use det(λI - A) or det(A - λI)?
They differ by (-1)ⁿ. Using λI - A makes the polynomial monic (leading coefficient 1). Both give the same eigenvalues.
Q: How does the minimal polynomial relate to the characteristic polynomial?
The minimal polynomial divides the characteristic polynomial. They have the same roots (eigenvalues) but possibly different multiplicities. The minimal polynomial has the smallest degree annihilating A.
Q: What happens to the characteristic polynomial under a similarity transformation?
It's invariant! If B = P⁻¹AP, then det(λI - B) = det(P⁻¹(λI - A)P) = det(λI - A). This is why eigenvalues are similarity invariants.
Q: Can non-similar matrices have the same characteristic polynomial?
Yes! Non-similar matrices can have the same char poly if their Jordan structures differ. For example, I₂ and a non-identity 2×2 with eigenvalue 1 (multiplicity 2) both have char poly (λ-1)².