
Course 9 (LA-C9): Eigenvalues & Eigenvectors

Eigenvalues and eigenvectors reveal the intrinsic structure of linear transformations. They enable diagonalization, simplify matrix computations, and provide deep insights into the behavior of linear systems. This course covers the complete spectral theory of matrices.

Estimated time: 15–18 hours · Level: Advanced · Objectives: 10
Learning Objectives
  • Define eigenvalues and eigenvectors for matrices and linear operators.
  • Compute eigenvalues using the characteristic polynomial det(A - λI) = 0.
  • Understand algebraic and geometric multiplicities and their relationship.
  • Prove that eigenvectors for distinct eigenvalues are linearly independent.
  • Master diagonalization: when and how to diagonalize matrices.
  • Understand Jordan normal form for non-diagonalizable matrices.
  • Apply the Cayley-Hamilton theorem to compute matrix inverses and powers.
  • Connect eigenvalues to determinant, trace, and invertibility.
  • Compute matrix powers efficiently using diagonalization.
  • Understand the geometric meaning of eigenvectors as invariant directions.
Prerequisites
  • LA-C8: Determinants
  • Matrix operations and inverses
  • Linear independence and basis
  • Polynomial algebra
  • Complex numbers (for complex eigenvalues)
Historical Context

The concept of eigenvalues emerged in the 18th century through the work of Leonhard Euler and Joseph-Louis Lagrange on differential equations. Augustin-Louis Cauchy (1789–1857) developed the characteristic polynomial. Camille Jordan (1838–1922) introduced the Jordan normal form in 1870, providing a canonical form for all matrices. The Cayley-Hamilton theorem was stated by Arthur Cayley (1821–1895) after William Rowan Hamilton (1805–1865) had verified a special case; Ferdinand Georg Frobenius gave the first general proof in 1878. Eigenvalue theory is central to quantum mechanics, stability analysis, and many areas of applied mathematics.

1. Eigenvalues and Eigenvectors

An eigenvalue $\lambda$ of a matrix $A$ is a scalar such that $Av = \lambda v$ for some nonzero vector $v$ (the eigenvector). Eigenvectors represent invariant directions under the linear transformation.

Definition 1.1: Eigenvalue and Eigenvector

For an $n \times n$ matrix $A$, a scalar $\lambda$ is an eigenvalue if there exists a nonzero vector $v$ such that $Av = \lambda v$. The vector $v$ is an eigenvector corresponding to $\lambda$.

Definition 1.2: Eigenspace

The eigenspace of $\lambda$ is $E_\lambda = \ker(A - \lambda I) = \{v : Av = \lambda v\}$.

Theorem 1.1: Independence of Eigenvectors

Eigenvectors corresponding to distinct eigenvalues are linearly independent.

Example 1.1: Finding Eigenvalues

For $A = \begin{pmatrix} 2 & 1 \\ 0 & 3 \end{pmatrix}$, the eigenvalues are $\lambda = 2, 3$ (the diagonal entries of a triangular matrix).
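A quick numerical check, sketched here with NumPy's `np.linalg.eig` (any linear algebra library would do; the matrix is the one from the example):

```python
import numpy as np

# Eigenvalues of a triangular matrix are its diagonal entries.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)                        # [2. 3.]

# Columns of `eigenvectors` are unit-norm eigenvectors:
# (1, 0) for lambda = 2 and (1, 1)/sqrt(2) for lambda = 3.
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)    # Av = lambda * v
```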

Remark 1.1: Geometric Meaning

Eigenvectors define directions that are preserved (possibly scaled) by the transformation. The eigenvalue gives the scaling factor.

2. Characteristic Polynomial

The characteristic polynomial provides an algebraic method to find eigenvalues and encodes fundamental information about the matrix.

Definition 2.1: Characteristic Polynomial

The characteristic polynomial of $A$ is $\chi_A(\lambda) = \det(A - \lambda I)$.

Theorem 2.1: Eigenvalues as Roots

$\lambda$ is an eigenvalue of $A$ if and only if $\chi_A(\lambda) = 0$.
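To see the theorem numerically: `np.poly` returns the coefficients of the monic polynomial $\det(\lambda I - A)$, which has the same roots as $\det(A - \lambda I)$ (a sketch, reusing the triangular matrix from Example 1.1):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
coeffs = np.poly(A)              # [ 1. -5.  6.], i.e. lambda^2 - 5*lambda + 6
print(np.roots(coeffs))          # [3. 2.]: roots of the characteristic polynomial
print(np.linalg.eigvals(A))      # [2. 3.]: the eigenvalues -- same set of values
```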

Definition 2.2: Multiplicities

For an eigenvalue $\lambda_0$:

  • Algebraic multiplicity $a$: the exponent of $(\lambda - \lambda_0)$ in $\chi_A(\lambda)$
  • Geometric multiplicity $g$: $\dim(E_{\lambda_0})$
Theorem 2.2: Multiplicity Inequality

For any eigenvalue, $1 \leq g \leq a$.
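The strict inequality $g < a$ occurs exactly for defective matrices. A sketch measuring both multiplicities for the classic $2 \times 2$ example, using `scipy.linalg.null_space` to get the eigenspace dimension:

```python
import numpy as np
from scipy.linalg import null_space

# chi_A(lambda) = (lambda - 2)^2, so algebraic multiplicity a = 2.
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
E = null_space(A - 2.0 * np.eye(2))   # orthonormal basis of ker(A - 2I)
print(E.shape[1])                     # geometric multiplicity g = 1 < a = 2
```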

Theorem 2.3: Vieta's Formulas

For an $n \times n$ matrix $A$, with eigenvalues counted with algebraic multiplicity:

  • Sum of eigenvalues $= \text{tr}(A)$
  • Product of eigenvalues $= \det(A)$
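Both identities are easy to sanity-check numerically (a sketch; for a real matrix the complex eigenvalues come in conjugate pairs, so the sum and product are real):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
eig = np.linalg.eigvals(A)                            # complex in general

assert np.isclose(eig.sum().real, np.trace(A))        # sum = trace
assert np.isclose(eig.prod().real, np.linalg.det(A))  # product = determinant
```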

3. Diagonalization

A matrix is diagonalizable if it can be written as $A = PDP^{-1}$ where $D$ is diagonal. This simplifies many computations, especially matrix powers.

Definition 3.1: Diagonalizable Matrix

An $n \times n$ matrix $A$ is diagonalizable if there exists an invertible matrix $P$ and a diagonal matrix $D$ such that $A = PDP^{-1}$.

Theorem 3.1: Diagonalizability Criterion

$A$ is diagonalizable if and only if its characteristic polynomial splits into linear factors over the field (automatic over $\mathbb{C}$) and, for each eigenvalue, geometric multiplicity $=$ algebraic multiplicity.

Corollary 3.1: Distinct Eigenvalues

If $A$ has $n$ distinct eigenvalues, then $A$ is diagonalizable.

Theorem 3.2: Matrix Powers

If $A = PDP^{-1}$, then $A^k = PD^kP^{-1}$ for every positive integer $k$ (and every integer $k$ if $A$ is invertible).

Example 3.1: Diagonalization Algorithm

Step 1: Find the eigenvalues via $\det(A - \lambda I) = 0$.
Step 2: For each eigenvalue, find eigenvectors by solving $(A - \lambda I)v = 0$.
Step 3: Form $P$ from the eigenvectors and $D$ from the eigenvalues.
Step 4: Verify $A = PDP^{-1}$.
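The four steps translate directly to NumPy; a sketch (assuming distinct eigenvalues, so $P$ is guaranteed invertible):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# Steps 1-2: eig returns eigenvalues and eigenvectors together.
eigenvalues, P = np.linalg.eig(A)        # columns of P are eigenvectors
D = np.diag(eigenvalues)                 # Step 3

# Step 4: verify A = P D P^{-1}.
assert np.allclose(A, P @ D @ np.linalg.inv(P))

# Payoff (Theorem 3.2): A^k = P D^k P^{-1}, with D^k computed elementwise.
k = 10
Ak = P @ np.diag(eigenvalues**k) @ np.linalg.inv(P)
assert np.allclose(Ak, np.linalg.matrix_power(A, k))
```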

4. Jordan Normal Form

When a matrix is not diagonalizable, Jordan normal form provides the best possible canonical form. Every matrix over $\mathbb{C}$ is similar to a Jordan form.

Definition 4.1: Jordan Block

A Jordan block of size $k$ with eigenvalue $\lambda$ is:

$$J_k(\lambda) = \begin{pmatrix} \lambda & 1 & 0 & \cdots \\ 0 & \lambda & 1 & \cdots \\ \vdots & \ddots & \ddots & \ddots \\ 0 & \cdots & 0 & \lambda \end{pmatrix}$$
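Constructing a Jordan block programmatically is a one-liner (a sketch; `jordan_block` is an illustrative helper, not a library routine):

```python
import numpy as np

# k x k Jordan block: lam on the diagonal, ones on the superdiagonal.
def jordan_block(lam, k):
    return lam * np.eye(k) + np.diag(np.ones(k - 1), 1)

print(jordan_block(5.0, 3))
# [[5. 1. 0.]
#  [0. 5. 1.]
#  [0. 0. 5.]]
```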
Definition 4.2: Jordan Normal Form

A matrix is in Jordan normal form if it is block-diagonal with Jordan blocks along the diagonal.

Theorem 4.1: Jordan Theorem

Every matrix over $\mathbb{C}$ is similar to a matrix in Jordan normal form, unique up to the order of the blocks.

Definition 4.3: Generalized Eigenvector

A generalized eigenvector of rank $k$ for eigenvalue $\lambda$ is a vector $v$ such that $(A - \lambda I)^k v = 0$ but $(A - \lambda I)^{k-1} v \neq 0$.
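For the defective matrix from the multiplicity example, a rank-2 generalized eigenvector can be exhibited directly (a sketch):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
N = A - 2.0 * np.eye(2)               # (A - 2I); nilpotent here

v = np.array([0.0, 1.0])              # candidate generalized eigenvector
assert not np.allclose(N @ v, 0.0)    # (A - 2I)v != 0
assert np.allclose(N @ (N @ v), 0.0)  # (A - 2I)^2 v = 0  -> rank exactly 2
# N @ v = (1, 0) is an ordinary eigenvector: the Jordan chain v -> Nv.
```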

Remark 4.1: When Jordan Form is Needed

Jordan form is needed when geometric multiplicity $<$ algebraic multiplicity for some eigenvalue. The number of Jordan blocks for $\lambda$ equals its geometric multiplicity.
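NumPy has no Jordan-form routine (the computation is numerically unstable), but SymPy computes it exactly; a sketch using `Matrix.jordan_form`, which returns $P$ and $J$ with $A = PJP^{-1}$:

```python
from sympy import Matrix

A = Matrix([[3, 1],
            [-1, 1]])        # chi_A(lambda) = (lambda - 2)^2, but g = 1 < a = 2
P, J = A.jordan_form()
print(J)                     # Matrix([[2, 1], [0, 2]]): a single 2x2 Jordan block
assert A == P * J * P.inv()
```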

5. Cayley-Hamilton Theorem

The Cayley-Hamilton theorem states that every matrix satisfies its own characteristic polynomial. This provides powerful methods for computing matrix inverses and powers.

Theorem 5.1: Cayley-Hamilton

If $\chi_A(\lambda) = \det(A - \lambda I)$ is the characteristic polynomial of $A$, then $\chi_A(A) = 0$ (the zero matrix).
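A numerical spot-check on a random matrix: evaluate the characteristic polynomial at $A$ with a Horner loop, using `np.poly` for the monic coefficients (a sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
coeffs = np.poly(A)                 # monic char poly: [1, c1, c2, c3]

# Horner: X <- X A + c I accumulates A^3 + c1 A^2 + c2 A + c3 I.
X = np.zeros_like(A)
for c in coeffs:
    X = X @ A + c * np.eye(3)
print(np.max(np.abs(X)))            # ~1e-14: chi_A(A) is numerically zero
```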

Corollary 5.1: Matrix Inverse via Cayley-Hamilton

If $\chi_A(\lambda) = \lambda^n + c_1\lambda^{n-1} + \cdots + c_n$, then:

$$A^{-1} = -\frac{A^{n-1} + c_1 A^{n-2} + \cdots + c_{n-1}I}{c_n}$$

when $c_n \neq 0$ (i.e., $\det(A) \neq 0$).
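The same coefficients yield the inverse without a linear solver, exactly as in the corollary (a sketch; `np.poly` is monic, matching the hypothesis):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
n = A.shape[0]
c = np.poly(A)                      # [1, c1, ..., cn]; cn != 0 iff det(A) != 0

# Horner on all but the constant term: A^{n-1} + c1 A^{n-2} + ... + c_{n-1} I.
X = np.zeros_like(A)
for ck in c[:-1]:
    X = X @ A + ck * np.eye(n)
A_inv = -X / c[-1]                  # the corollary's formula

assert np.allclose(A_inv, np.linalg.inv(A))
```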

Definition 5.1: Minimal Polynomial

The minimal polynomial $m_A(\lambda)$ is the monic polynomial of smallest degree such that $m_A(A) = 0$.

Theorem 5.2: Minimal Polynomial Properties

The minimal polynomial divides the characteristic polynomial and has the same roots (eigenvalues).
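There is no one-call NumPy routine for the minimal polynomial, but its degree can be found by locating the first linear dependence among $I, A, A^2, \dots$; a sketch (`minimal_poly_degree` is an illustrative helper, not a library function):

```python
import numpy as np

# Degree of the minimal polynomial: the smallest k such that A^k lies in
# span{I, A, ..., A^{k-1}} (compare vectorized powers by matrix rank).
def minimal_poly_degree(A, tol=1e-10):
    n = A.shape[0]
    powers = [np.eye(n).ravel()]
    Ak = np.eye(n)
    for k in range(1, n + 1):
        Ak = Ak @ A
        powers.append(Ak.ravel())
        M = np.stack(powers, axis=1)
        if np.linalg.matrix_rank(M, tol=tol) < len(powers):
            return k
    return n

print(minimal_poly_degree(np.array([[2.0, 0.0], [0.0, 2.0]])))  # 1: m(x) = x - 2
print(minimal_poly_degree(np.array([[2.0, 1.0], [0.0, 2.0]])))  # 2: m(x) = (x-2)^2
```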

Example 5.1: Using Cayley-Hamilton

If $\chi_A(\lambda) = \lambda^2 - 5\lambda + 6$, then $A^2 - 5A + 6I = 0$, so $A^2 = 5A - 6I$. Higher powers can be expressed in terms of $A$ and $I$.
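For a concrete check, $A = \begin{pmatrix} 2 & 1 \\ 0 & 3 \end{pmatrix}$ has exactly this characteristic polynomial; a sketch reducing $A^{10}$ to the form $\alpha A + \beta I$ by iterating the relation on coefficients:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])          # chi_A(lambda) = lambda^2 - 5*lambda + 6
assert np.allclose(A @ A, 5 * A - 6 * np.eye(2))   # Cayley-Hamilton

# If A^k = alpha*A + beta*I, then A^{k+1} = alpha*A^2 + beta*A
#                                        = (5*alpha + beta)*A - 6*alpha*I.
alpha, beta = 1.0, 0.0              # A^1
for _ in range(9):                  # climb to A^10
    alpha, beta = 5 * alpha + beta, -6 * alpha
assert np.allclose(alpha * A + beta * np.eye(2),
                   np.linalg.matrix_power(A, 10))
```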

Course 9 Practice Quiz (10 questions)

1. If $Av = \lambda v$ with $v \neq 0$, then $\lambda$ is called: (Easy)
2. The characteristic polynomial of $A$ is: (Easy)
3. The sum of the eigenvalues (with multiplicity) equals: (Medium)
4. A matrix is diagonalizable iff it has: (Easy)
5. If $A = PDP^{-1}$ with $D$ diagonal, then $A^k =$ (Medium)
6. Cayley-Hamilton states that $\chi_A(A) =$ (Easy)
7. Geometric multiplicity satisfies: (Hard)
8. Every matrix over $\mathbb{C}$ has: (Easy)
9. If $\lambda = 0$ is an eigenvalue: (Medium)
10. Eigenvectors for different eigenvalues are: (Medium)

Frequently Asked Questions

What's the geometric meaning of eigenvectors?

An eigenvector's direction is preserved by the linear map. It may be stretched (|λ| > 1), shrunk (|λ| < 1), or flipped (λ < 0), but its direction stays the same. Eigenvectors define 'invariant directions' of the transformation.

When is a matrix diagonalizable?

A matrix is diagonalizable iff for each eigenvalue, the geometric multiplicity equals the algebraic multiplicity. Equivalently, it has n linearly independent eigenvectors. Matrices with n distinct eigenvalues are always diagonalizable.

What is Jordan normal form and when is it needed?

Jordan form is the 'best possible' canonical form for any matrix over ℂ. It's needed when a matrix isn't diagonalizable (geometric < algebraic multiplicity). Every matrix over ℂ is similar to a Jordan form, which is block-diagonal with Jordan blocks.

How does Cayley-Hamilton help compute matrix inverses?

If χ_A(λ) = λⁿ + c₁λⁿ⁻¹ + ... + cₙ, then Aⁿ + c₁Aⁿ⁻¹ + ... + cₙI = 0. Rearranging gives A⁻¹ = -(Aⁿ⁻¹ + c₁Aⁿ⁻² + ... + cₙ₋₁I)/cₙ when cₙ ≠ 0.

What's the difference between algebraic and geometric multiplicity?

Algebraic multiplicity = number of times λ appears as root of det(A - λI) = 0. Geometric multiplicity = dim(ker(A - λI)) = number of linearly independent eigenvectors. Always: 1 ≤ geometric ≤ algebraic.