This course deepens your understanding of matrix theory by covering inverses, the building blocks of row operations (elementary matrices), the fundamental concept of rank, and the elegant theory of dual spaces. These topics connect matrix computations to abstract linear algebra.
The concept of matrix inverses emerged naturally from solving systems of linear equations, with Arthur Cayley (1821–1895) formalizing matrix algebra in 1858. Elementary matrices, though implicit in Gaussian elimination, were explicitly studied in the early 20th century. The rank of a matrix, a fundamental invariant, was recognized by James Joseph Sylvester in the 1850s. The beautiful theorem that row rank equals column rank was proven by various mathematicians, with Emmy Noether providing elegant abstract proofs. Dual spaces, introduced by Hermann Grassmann in the 1840s and later developed by Jean Dieudonné and others, provide a powerful framework for understanding linear maps and their adjoints.
The inverse A^{-1} of a matrix A is the unique matrix such that AA^{-1} = A^{-1}A = I. Invertible matrices correspond to isomorphisms and are fundamental for solving linear systems.
An n×n matrix A is invertible (or nonsingular) if there exists an n×n matrix B such that AB = BA = I; we then write B = A^{-1}.
If an inverse exists, it is unique.
If B and C are both inverses of A, then B = BI = B(AC) = (BA)C = IC = C.
For an n×n matrix A, the following are equivalent: (1) A is invertible; (2) Ax = 0 has only the trivial solution; (3) Ax = b has a unique solution for every b; (4) rank(A) = n; (5) A is row equivalent to I; (6) A is a product of elementary matrices; (7) det(A) ≠ 0.
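For a concrete matrix, a few of these conditions can be spot-checked numerically. The sketch below is not part of the original notes; it assumes NumPy and uses an arbitrary 2×2 example.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])

# Spot-check a few of the equivalent conditions for this particular A.
print(np.linalg.det(A))                               # 1.0 (nonzero)
print(np.linalg.matrix_rank(A) == A.shape[0])         # True: rank(A) = n
print(np.allclose(np.linalg.inv(A) @ A, np.eye(2)))   # True: an inverse exists
```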
To find A^{-1}, row reduce the augmented matrix [A | I] to [I | A^{-1}]. For example, if A = [[1, 2], [3, 4]], then A^{-1} = [[-2, 1], [3/2, -1/2]].
For a 2×2 matrix A = [[a, b], [c, d]] with ad − bc ≠ 0: A^{-1} = (1/(ad − bc)) [[d, −b], [−c, a]].
Computing A^{-1} via Gaussian elimination takes O(n^3) operations. For large matrices, iterative methods or specialized algorithms may be more efficient.
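To make the [A | I] method concrete, here is a minimal NumPy sketch (not from the original notes); the function name inverse_gauss_jordan is illustrative, and partial pivoting is added for numerical stability.

```python
import numpy as np

def inverse_gauss_jordan(A):
    """Invert A by row reducing the augmented matrix [A | I] to [I | A^{-1}]."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])            # augmented matrix [A | I]
    for col in range(n):
        # Partial pivoting: bring up the row with the largest entry in this column
        # (a row swap, i.e. left-multiplication by a Type I elementary matrix).
        pivot = col + np.argmax(np.abs(M[col:, col]))
        if np.isclose(M[pivot, col], 0.0):
            raise ValueError("matrix is singular")
        M[[col, pivot]] = M[[pivot, col]]
        M[col] /= M[col, col]                # scale the pivot row (Type II)
        for row in range(n):
            if row != col:
                M[row] -= M[row, col] * M[col]   # eliminate the column (Type III)
    return M[:, n:]                          # the right half is A^{-1}

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(inverse_gauss_jordan(A))               # [[-2.   1. ] [ 1.5 -0.5]]
print(np.linalg.inv(A))                      # same result
```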
Elementary matrices are obtained by applying one elementary row operation to the identity matrix. They are the building blocks of Gaussian elimination and provide a factorization of invertible matrices.
An elementary matrix is obtained by applying one elementary row operation to the identity matrix I.
Three types: (I) interchange two rows; (II) multiply a row by a nonzero scalar c; (III) add a multiple of one row to another row.
Every elementary matrix is invertible, and its inverse is also elementary.
Left-multiplying by an elementary matrix performs the corresponding row operation: if E is obtained from I by a row operation, then EA is the matrix obtained by applying that same operation to A.
An n×n matrix A is invertible if and only if it is a product of elementary matrices.
If A is invertible, row reduce A to I: E_k ⋯ E_1 A = I, so A = E_1^{-1} ⋯ E_k^{-1}. Conversely, products of invertible matrices are invertible.
If E_k ⋯ E_1 A = U (row echelon form), then A = E_1^{-1} ⋯ E_k^{-1} U. This factorization is the basis for the LU decomposition.
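As a quick illustration of this factorization (not part of the original notes, and assuming SciPy is available), the following sketch computes an LU decomposition with partial pivoting; the factor L collects the inverses of the elimination steps.

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])

# SciPy's LU with partial pivoting: A = P @ L @ U, with L unit lower triangular
# (the accumulated inverses of the elimination steps) and U upper triangular.
P, L, U = lu(A)
print(np.allclose(P @ L @ U, A))   # True
```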
For 3×3 matrices, the elementary matrices look like this, for example: swapping rows 1 and 2 gives E = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]; multiplying row 2 by c gives E = [[1, 0, 0], [0, c, 0], [0, 0, 1]]; adding c times row 1 to row 3 gives E = [[1, 0, 0], [0, 1, 0], [c, 0, 1]].
The inverse of an elementary matrix is also elementary: a row swap is its own inverse, the inverse of multiplying a row by c is multiplying that row by 1/c, and the inverse of adding c times row i to row j is adding −c times row i to row j.
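A small NumPy sketch (illustrative, not from the notes) that builds one elementary matrix of each type, checks that left-multiplication performs the row operation, and checks that each inverse is again elementary:

```python
import numpy as np

I = np.eye(3)

E_swap = I[[1, 0, 2]]                 # Type I: swap the first two rows
E_scale = np.diag([1.0, 5.0, 1.0])    # Type II: multiply the second row by 5
E_add = np.eye(3)                     # Type III: add 2 * (row 1) to row 3
E_add[2, 0] = 2.0

A = np.arange(9.0).reshape(3, 3)

# Left-multiplying performs the corresponding row operation on A.
print(E_swap @ A)                     # first two rows of A swapped
print(E_add @ A)                      # third row replaced by row 3 + 2 * row 1

# Each inverse is again elementary: the same type of operation, undone.
print(np.linalg.inv(E_swap))          # the same swap
print(np.linalg.inv(E_scale))         # multiply the second row by 1/5
print(np.linalg.inv(E_add))           # add -2 * (row 1) to row 3
```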
The rank of a matrix is the dimension of its row space (or column space). It is a fundamental invariant that determines solvability of linear systems and invertibility.
The rank of a matrix A, denoted rank(A), is the dimension of its row space (equivalently, of its column space).
For any matrix A, row rank = column rank.
Both equal dim(im A) = dim(im A^T), viewing A as a linear map x ↦ Ax. Alternatively, row operations preserve both row rank and column rank, reducing the claim to a matrix in reduced row echelon form, where the two are visibly equal.
An n×n matrix A is invertible if and only if rank(A) = n.
For an m×n matrix A: rank(A) ≤ min(m, n).
For an m×n matrix A, rank(A) + nullity(A) = n, where nullity(A) = dim(null(A)) is the nullity of A.
For matrices A and B of compatible sizes: rank(AB) ≤ min(rank(A), rank(B)), and rank(A) + rank(B) − n ≤ rank(AB),
where n is the number of columns of A (or rows of B).
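A quick numerical illustration of these facts (not from the original notes), assuming NumPy and SciPy; the matrices are arbitrary examples.

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)

# A 4x6 matrix of rank 2, built as a product of a 4x2 and a 2x6 factor.
A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 6))
r = np.linalg.matrix_rank(A)
print(r, np.linalg.matrix_rank(A.T))        # 2 2: row rank = column rank

# Rank-nullity: rank(A) + nullity(A) = number of columns (6 here).
print(r + null_space(A).shape[1])           # 6

# Product bounds: rank(A) + rank(B) - n <= rank(AB) <= min(rank(A), rank(B)).
B = rng.standard_normal((6, 5))
rb, rab = np.linalg.matrix_rank(B), np.linalg.matrix_rank(A @ B)
print(r + rb - 6 <= rab <= min(r, rb))      # True
```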
The dual space V* consists of all linear functionals on V (linear maps from V to F). It has the same dimension as V and provides a natural way to study vectors through their evaluations.
A linear functional on V is a linear map f: V → F.
The dual space of V is V* = L(V, F), the space of all linear functionals on V.
If dim(V) = n (finite), then dim(V*) = n.
V* = L(V, F), so dim(V*) = dim(V) · dim(F) = n · 1 = n.
If {v_1, …, v_n} is a basis for V, the dual basis {f_1, …, f_n} ⊆ V* is defined by f_i(v_j) = δ_ij (Kronecker delta).
The dual basis is indeed a basis for V*.
For a subspace W ⊆ V, the annihilator is W^0 = {f ∈ V* : f(w) = 0 for all w ∈ W}.
If dim(V) = n and dim(W) = k, then dim(W^0) = n − k.
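A numerical sketch of this dimension count (not from the original notes), under the usual identification of (R^n)* with R^n via f_a(x) = a · x; it assumes SciPy's null_space and uses an arbitrary example subspace.

```python
import numpy as np
from scipy.linalg import null_space

# For W = span of the columns of M, the annihilator W^0 corresponds to the
# null space of M^T: all a with a . w = 0 for every w in W.
rng = np.random.default_rng(1)
M = rng.standard_normal((5, 2))          # dim V = 5, dim W = 2

W0 = null_space(M.T)                     # columns form a basis of W^0
print(W0.shape[1])                       # 3 = n - k

# Every basis functional really vanishes on W (up to floating-point error).
print(np.allclose(W0.T @ M, 0.0))        # True
```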
The double dual is V** = (V*)*. The evaluation map ev: V → V** sends v to ev_v, where ev_v(f) = f(v).
For finite-dimensional V, the evaluation map V → V** is an isomorphism. This is a natural isomorphism (it doesn't depend on a choice of basis).
For a linear map T: V → W, the dual map (or transpose) T*: W* → V* is defined by T*(g) = g ∘ T.
For V = F^n, every linear functional f is of the form:
f(x_1, …, x_n) = a_1 x_1 + ⋯ + a_n x_n
for some a_1, …, a_n ∈ F. So (F^n)* ≅ F^n.
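A small NumPy check of this identification, and of the dual map defined above, in coordinates (an illustrative sketch, not part of the notes):

```python
import numpy as np

rng = np.random.default_rng(4)

# A functional on R^3 is "dot with a fixed vector a": f(x) = a . x.
a = np.array([2.0, -1.0, 0.5])

# The dual map of T(x) = Ax acts by composition, so in coordinates it is A^T:
# (T* f)(v) = f(T v) = a . (A v) = (A^T a) . v.
A = rng.standard_normal((3, 3))
v = rng.standard_normal(3)
print(np.isclose(a @ (A @ v), (A.T @ a) @ v))   # True
```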
If {v_1, …, v_n} is a basis of V, then the dual basis {f_1, …, f_n} provides coordinates for V: for v = c_1 v_1 + ⋯ + c_n v_n, f_i(v) = c_i.
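One concrete way to compute a dual basis in R^n (a sketch, not from the notes, assuming the basis vectors are the columns of an invertible matrix B): the dual basis functionals are the rows of B^{-1}.

```python
import numpy as np

# Basis of R^3 as the columns of B; the dual basis functionals are the rows of B^{-1}.
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
F = np.linalg.inv(B)                      # row i of F represents f_i

print(np.allclose(F @ B, np.eye(3)))      # True: f_i(v_j) = delta_ij

# Coordinates: if v = c1*v1 + c2*v2 + c3*v3, then f_i(v) = c_i.
c = np.array([2.0, -1.0, 3.0])
v = B @ c
print(F @ v)                              # [ 2. -1.  3.]
```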
Matrix inverses are fundamental for solving linear systems, computing matrix functions, and many other applications in mathematics and engineering.
For an invertible matrix A, the system Ax = b has the unique solution x = A^{-1}b.
Multiply both sides of Ax = b on the left by A^{-1}: A^{-1}(Ax) = A^{-1}b, so x = A^{-1}b.
Solve the system x_1 + 2x_2 = 5, 3x_1 + 4x_2 = 6.
In matrix form: Ax = b with A = [[1, 2], [3, 4]] and b = (5, 6).
Using the inverse: x = A^{-1}b = [[-2, 1], [3/2, -1/2]] (5, 6) = (-4, 9/2).
While x = A^{-1}b is theoretically correct, in practice it's usually more efficient to solve Ax = b directly via Gaussian elimination rather than computing A^{-1} first. However, if you need to solve for many different b, computing A^{-1} once may be more efficient.
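A short NumPy comparison of the two approaches (illustrative only; the matrix sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((500, 500))
b = rng.standard_normal(500)

# Preferred: solve Ax = b directly (an LU factorization under the hood).
x = np.linalg.solve(A, b)

# Also correct, but computes all of A^{-1} first and is generally less accurate.
x_via_inverse = np.linalg.inv(A) @ b
print(np.allclose(x, x_via_inverse))      # True (to floating-point tolerance)

# Many right-hand sides can be handled in one call without forming A^{-1}.
B = rng.standard_normal((500, 20))        # 20 right-hand sides as columns
X = np.linalg.solve(A, B)
print(np.allclose(A @ X, B))              # True
```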
The columns of an invertible n×n matrix A form a basis for F^n. Conversely, if the columns of A form a basis, then A is invertible.
If {v_1, …, v_n} is a basis of F^n, the change of basis matrix P from this basis to the standard basis has columns v_1, …, v_n.
Then P^{-1} converts standard coordinates to coordinates with respect to {v_1, …, v_n}: the coordinate vector of v is P^{-1}v.
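A minimal NumPy sketch of this coordinate change (not from the original notes), with an arbitrary example basis:

```python
import numpy as np

# Basis {(1, 1), (1, -1)} of R^2 as the columns of the change-of-basis matrix P.
P = np.array([[1.0, 1.0],
              [1.0, -1.0]])

v = np.array([3.0, 1.0])                  # a vector in standard coordinates
coords = np.linalg.inv(P) @ v             # its coordinates in the new basis
print(coords)                             # [2. 1.]
print(np.allclose(P @ coords, v))         # True: v = 2*(1, 1) + 1*(1, -1)
```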
For invertible A and B, the matrix equation AX = C has the unique solution X = A^{-1}C, and XB = C has the unique solution X = CB^{-1}.
If AXB = C where A and B are invertible, then X = A^{-1}CB^{-1}.
This is useful for solving systems of matrix equations and for finding matrix square roots (matrices X with X^2 = A).
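A sketch of solving both equation types with NumPy, without explicitly forming the inverses (illustrative; the example sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((3, 3))
C = rng.standard_normal((4, 3))

# AX = C  =>  X = A^{-1} C, solved column by column without forming A^{-1}.
X1 = np.linalg.solve(A, C)
print(np.allclose(A @ X1, C))             # True

# XB = C  =>  X = C B^{-1}; transposing gives B^T X^T = C^T, same solver again.
X2 = np.linalg.solve(B.T, C.T).T
print(np.allclose(X2 @ B, C))             # True
```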
For an n×n matrix A, invertibility is equivalent to: det(A) ≠ 0, rank(A) = n, columns/rows linearly independent, Ax = 0 has only trivial solution, Ax = b has unique solution for all b, A is product of elementary matrices, A is row equivalent to I, and more.
Each step of Gaussian elimination is a row operation, which equals left-multiplying by an elementary matrix. The entire process is E_k ⋯ E_1 A = U, so Gaussian elimination is just matrix multiplication!
This beautiful theorem has multiple proofs. One key insight: both equal dim(im A) = dim(im A^T), viewing A as a linear map. Alternatively, row operations preserve both row rank and column rank, and for a matrix in reduced row echelon form the two are visibly equal.
The dual space V* is the space of all linear functionals (linear maps from V to F). It has the same dimension as V and provides a natural way to study vectors through their 'evaluations' on functionals. The double dual V** is naturally isomorphic to V.
The annihilator W^0 of a subspace W ⊆ V is the set of all functionals that vanish on W. If dim(V) = n and dim(W) = k, then dim(W^0) = n - k. This follows from the rank-nullity theorem applied to the restriction map V* → W*, f ↦ f|_W, whose kernel is exactly W^0.