Eigenvalues and eigenvectors reveal the intrinsic structure of linear transformations. They enable diagonalization, simplify matrix computations, and provide deep insights into the behavior of linear systems. This course covers the complete spectral theory of matrices.
The concept of eigenvalues emerged in the 18th century through the work of Leonhard Euler and Joseph-Louis Lagrange on differential equations. Augustin-Louis Cauchy (1789–1857) developed the characteristic polynomial. Camille Jordan (1838–1922) introduced the Jordan normal form in 1870, providing a canonical form for all matrices. The Cayley–Hamilton theorem was stated by Arthur Cayley (1821–1895); William Rowan Hamilton (1805–1865) verified a special case, and Ferdinand Frobenius gave the first general proof in 1878. Eigenvalue theory is central to quantum mechanics, stability analysis, and many areas of applied mathematics.
An eigenvalue of a matrix A is a scalar λ such that Av = λv for some nonzero vector v (the eigenvector). Eigenvectors represent invariant directions under the linear transformation.
For an n×n matrix A, a scalar λ is an eigenvalue if there exists a nonzero vector v such that Av = λv. The vector v is an eigenvector corresponding to λ.
The eigenspace of λ is E_λ = {v : Av = λv} = ker(A - λI).
Eigenvectors corresponding to distinct eigenvalues are linearly independent.
For a triangular matrix, the eigenvalues are the diagonal entries; e.g. A = [[2, 1], [0, 3]] has eigenvalues 2 and 3.
Eigenvectors define directions that are preserved (possibly scaled) by the transformation. The eigenvalue gives the scaling factor.
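The defining relation Av = λv can be checked numerically. A minimal NumPy sketch, using an arbitrary upper-triangular example matrix (not from the original text):

```python
import numpy as np

# Arbitrary illustrative matrix: upper triangular, so its
# eigenvalues are the diagonal entries 2 and 3.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose
# columns are the corresponding eigenvectors.
eigvals, eigvecs = np.linalg.eig(A)

for lam, v in zip(eigvals, eigvecs.T):
    # Av = λv: the direction of v is preserved, only the scale changes.
    assert np.allclose(A @ v, lam * v)
```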
The characteristic polynomial provides an algebraic method to find eigenvalues and encodes fundamental information about the matrix.
The characteristic polynomial of A is χ_A(λ) = det(A - λI).
A scalar λ is an eigenvalue of A if and only if χ_A(λ) = 0.
For an eigenvalue λ, the algebraic multiplicity is the multiplicity of λ as a root of χ_A, and the geometric multiplicity is dim(ker(A - λI)), the dimension of the eigenspace.
For any eigenvalue, 1 ≤ geometric multiplicity ≤ algebraic multiplicity.
For an n×n matrix A, χ_A has degree n, so A has exactly n eigenvalues in ℂ counted with multiplicity; their sum equals tr(A) and their product equals det(A).
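These facts can be verified numerically. In this NumPy sketch the matrix is an arbitrary example; np.poly of a square array returns the monic characteristic polynomial coefficients:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigvals = np.linalg.eigvals(A)

# Characteristic polynomial coefficients, highest degree first:
# for a 2x2 matrix, [1, -tr(A), det(A)].
coeffs = np.poly(A)
assert np.allclose(coeffs, [1.0, -np.trace(A), np.linalg.det(A)])

# The eigenvalues are exactly the roots of the characteristic polynomial.
assert np.allclose(np.sort(np.roots(coeffs)), np.sort(eigvals))

# Sum of eigenvalues = trace, product of eigenvalues = determinant.
assert np.isclose(eigvals.sum(), np.trace(A))
assert np.isclose(eigvals.prod(), np.linalg.det(A))
```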
A matrix is diagonalizable if it can be written as A = PDP⁻¹ where D is diagonal. This simplifies many computations, especially matrix powers.
An n×n matrix A is diagonalizable if there exists an invertible matrix P and a diagonal matrix D such that A = PDP⁻¹.
A is diagonalizable if and only if, for each eigenvalue, the geometric multiplicity equals the algebraic multiplicity.
If A has n distinct eigenvalues, then A is diagonalizable.
If A = PDP⁻¹, then Aᵏ = PDᵏP⁻¹ for any k ≥ 1.
Step 1: Find eigenvalues via det(A - λI) = 0. Step 2: For each eigenvalue λ, find eigenvectors by solving (A - λI)v = 0. Step 3: Form P from the eigenvectors (as columns) and D from the corresponding eigenvalues. Step 4: Verify A = PDP⁻¹.
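The steps above map directly onto NumPy calls. A sketch with an arbitrary example matrix:

```python
import numpy as np

# Steps 1-2: np.linalg.eig finds eigenvalues and eigenvectors together.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
eigvals, P = np.linalg.eig(A)   # Step 3: columns of P are eigenvectors
D = np.diag(eigvals)            # Step 3: eigenvalues on the diagonal of D

# Step 4: verify A = P D P^{-1}.
assert np.allclose(A, P @ D @ np.linalg.inv(P))

# Payoff: A^k = P D^k P^{-1}, where D^k is computed entrywise.
k = 5
Ak = P @ np.diag(eigvals**k) @ np.linalg.inv(P)
assert np.allclose(Ak, np.linalg.matrix_power(A, k))
```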
When a matrix is not diagonalizable, Jordan normal form provides the best possible canonical form. Every matrix over ℂ is similar to a Jordan form.
A Jordan block of size k with eigenvalue λ is the k×k matrix J_k(λ) with λ on the main diagonal, 1 on the superdiagonal, and 0 elsewhere; e.g. J_2(λ) = [[λ, 1], [0, λ]].
A matrix is in Jordan normal form if it is block-diagonal with Jordan blocks along the diagonal.
Every matrix over ℂ is similar to a Jordan normal form, unique up to the order of the blocks.
A generalized eigenvector of rank k for eigenvalue λ is a vector v such that (A - λI)ᵏv = 0 but (A - λI)ᵏ⁻¹v ≠ 0.
Jordan form is needed when geometric multiplicity < algebraic multiplicity for some eigenvalue. The number of Jordan blocks for λ equals the geometric multiplicity of λ.
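A sketch of these ideas using SymPy's Matrix.jordan_form (assuming SymPy is available); the example is a standard defective 2×2 matrix, not one from the original text:

```python
from sympy import Matrix

# A defective matrix: eigenvalue 2 has algebraic multiplicity 2
# but only a one-dimensional eigenspace, so it is not diagonalizable.
A = Matrix([[2, 1],
            [0, 2]])

P, J = A.jordan_form()   # A = P J P^{-1}, with J in Jordan normal form
assert A == P * J * P.inv()
assert J == Matrix([[2, 1], [0, 2]])   # a single 2x2 Jordan block J_2(2)

# Generalized eigenvector check: the second chain vector v satisfies
# (A - 2I)^2 v = 0 but (A - 2I) v != 0.
v = P.col(1)
N = A - 2 * Matrix.eye(2)
assert (N**2) * v == Matrix.zeros(2, 1)
assert N * v != Matrix.zeros(2, 1)
```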
The Cayley-Hamilton theorem states that every matrix satisfies its own characteristic polynomial. This provides powerful methods for computing matrix inverses and powers.
If χ_A(λ) is the characteristic polynomial of A, then χ_A(A) = 0 (the zero matrix).
If χ_A(λ) = λⁿ + c₁λⁿ⁻¹ + ... + cₙ, then Aⁿ + c₁Aⁿ⁻¹ + ... + cₙI = 0.
A⁻¹ = -(Aⁿ⁻¹ + c₁Aⁿ⁻² + ... + cₙ₋₁I)/cₙ when cₙ ≠ 0 (i.e., when A is invertible, since cₙ = (-1)ⁿ det(A)).
The minimal polynomial m_A(λ) is the monic polynomial of smallest degree such that m_A(A) = 0.
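Both identities can be checked numerically. A NumPy sketch with an arbitrary invertible example matrix:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
n = A.shape[0]

# Monic characteristic polynomial coefficients, leading term first:
# chi(lambda) = lambda^n + c1*lambda^(n-1) + ... + cn.
c = np.poly(A)

# Cayley-Hamilton: chi(A) = A^n + c1*A^(n-1) + ... + cn*I = 0.
chi_A = sum(c[i] * np.linalg.matrix_power(A, n - i) for i in range(n + 1))
assert np.allclose(chi_A, np.zeros((n, n)))

# Rearranging: A^{-1} = -(A^{n-1} + c1*A^{n-2} + ... + c_{n-1}*I) / cn.
A_inv = -sum(c[i] * np.linalg.matrix_power(A, n - 1 - i) for i in range(n)) / c[n]
assert np.allclose(A_inv, np.linalg.inv(A))
```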
The minimal polynomial divides the characteristic polynomial and has the same roots (eigenvalues).
If A is 2×2, then χ_A(λ) = λ² - tr(A)λ + det(A), so A² = tr(A)A - det(A)I. Higher powers of A can then be expressed in terms of A and I.
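A quick NumPy verification for an arbitrary 2×2 example:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
t, d = np.trace(A), np.linalg.det(A)
I = np.eye(2)

# Cayley-Hamilton for 2x2: A^2 = tr(A)*A - det(A)*I.
assert np.allclose(A @ A, t * A - d * I)

# Higher powers need no new matrix products, only A and I:
# A^3 = A * A^2 = (t^2 - d)*A - t*d*I.
assert np.allclose(np.linalg.matrix_power(A, 3), (t**2 - d) * A - t * d * I)
```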
An eigenvector's direction is preserved by the linear map. It may be stretched (|λ| > 1), shrunk (|λ| < 1), or flipped (λ < 0), but its direction stays the same. Eigenvectors define 'invariant directions' of the transformation.
A matrix is diagonalizable iff for each eigenvalue, the geometric multiplicity equals the algebraic multiplicity. Equivalently, it has n linearly independent eigenvectors. Matrices with n distinct eigenvalues are always diagonalizable.
Jordan form is the 'best possible' canonical form for any matrix over ℂ. It's needed when a matrix isn't diagonalizable (geometric < algebraic multiplicity). Every matrix over ℂ is similar to a Jordan form, which is block-diagonal with Jordan blocks.
If χ_A(λ) = λⁿ + c₁λⁿ⁻¹ + ... + cₙ, then Aⁿ + c₁Aⁿ⁻¹ + ... + cₙI = 0. Rearranging gives A^{-1} = -(Aⁿ⁻¹ + c₁Aⁿ⁻² + ... + cₙ₋₁I)/cₙ when cₙ ≠ 0.
Algebraic multiplicity = number of times λ appears as root of det(A - λI) = 0. Geometric multiplicity = dim(ker(A - λI)) = number of linearly independent eigenvectors. Always: 1 ≤ geometric ≤ algebraic.
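Both multiplicities can be computed numerically. A NumPy sketch using the defective matrix [[2, 1], [0, 2]], where the inequality is strict:

```python
import numpy as np

# lambda = 2 is a double root of det(A - lambda*I) = 0,
# but ker(A - 2I) is only one-dimensional.
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
lam = 2.0
n = A.shape[0]

# Algebraic multiplicity: how often lam occurs among the eigenvalues.
alg_mult = int(np.sum(np.isclose(np.linalg.eigvals(A), lam)))

# Geometric multiplicity: dim ker(A - lam*I) = n - rank(A - lam*I).
geo_mult = n - np.linalg.matrix_rank(A - lam * np.eye(n))

assert alg_mult == 2
assert geo_mult == 1
assert 1 <= geo_mult <= alg_mult   # always holds for an eigenvalue
```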