Every linear map between finite-dimensional spaces can be represented by a matrix—once we choose bases. This fundamental correspondence bridges abstract linear algebra with concrete numerical computation.
The connection between linear maps and matrices was gradually understood through the 19th century. Arthur Cayley (1821–1895) developed matrix algebra in 1858, treating matrices as algebraic objects in their own right. He recognized that matrix multiplication corresponds to composition of linear transformations.
The modern perspective—that matrices are representations of abstract linear maps—emerged in the early 20th century with the axiomatization of vector spaces. This view emphasizes that the matrix depends on the choice of basis, while the underlying linear map does not.
Understanding this representation is crucial: it allows us to compute with linear maps using matrix arithmetic, while recognizing that properties like rank and eigenvalues are intrinsic to the map itself, independent of any particular matrix representation.
We begin with a fundamental question: how can we represent abstract linear maps with concrete numbers? The key insight is that a linear map is completely determined by its action on a basis. Once we choose bases for the domain and codomain, we can record the images of basis vectors as columns of a matrix.
Consider first the special case of linear maps $T: \mathbb{R}^n \to \mathbb{R}^m$. Any such map satisfies:
$$T(x) = T\Big(\sum_{j=1}^n x_j e_j\Big) = \sum_{j=1}^n x_j\, T(e_j).$$
If we let $a_j = T(e_j) \in \mathbb{R}^m$, then $T(x) = Ax$, where $A$ is the $m \times n$ matrix with columns $a_1, \ldots, a_n$.
Any linear map $T: \mathbb{R}^n \to \mathbb{R}^m$ can be written as $T(x) = Ax$ for a unique $m \times n$ matrix $A$.
Existence: Define $A$ by setting column $j$ equal to $T(e_j)$. Then by linearity:
$$T(x) = \sum_{j=1}^n x_j\, T(e_j) = \sum_{j=1}^n x_j\, A e_j = Ax.$$
Uniqueness: If $Ax = Bx$ for all $x \in \mathbb{R}^n$, taking $x = e_j$ shows that the $j$-th columns of $A$ and $B$ are equal.
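To make this concrete, here is a minimal NumPy sketch (the map `T` below is a hypothetical example, not one from the text). It builds $A$ column by column from the images $T(e_j)$ and checks that $T(x) = Ax$:

```python
import numpy as np

# Hypothetical linear map T: R^3 -> R^2, used only for illustration.
def T(x):
    x1, x2, x3 = x
    return np.array([2 * x1 - x3, x1 + 4 * x2])

n = 3
# Column j of A is T(e_j): apply T to each standard basis vector.
A = np.column_stack([T(e) for e in np.eye(n)])

x = np.array([1.0, -2.0, 5.0])
assert np.allclose(T(x), A @ x)   # T(x) = Ax
print(A)
```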
For abstract vector spaces $V$ and $W$, we use coordinate maps to reduce to the case above. Given bases $B$ of $V$ and $C$ of $W$, the coordinate map $[\,\cdot\,]_B : V \to \mathbb{F}^n$ is an isomorphism, and we can represent any linear map $T: V \to W$ via the commutative diagram:
$$\begin{array}{ccc} V & \xrightarrow{\;T\;} & W \\ {[\,\cdot\,]_B}\ \big\downarrow & & \big\downarrow\ {[\,\cdot\,]_C} \\ \mathbb{F}^n & \xrightarrow{\;A\;} & \mathbb{F}^m \end{array}$$
Let $T: V \to W$, where $\dim V = n$ and $\dim W = m$. Let $B = \{v_1, \ldots, v_n\}$ be a basis of $V$ and $C = \{w_1, \ldots, w_m\}$ be a basis of $W$.
Since each $T(v_j) \in W$, we can write:
$$T(v_j) = \sum_{i=1}^m a_{ij}\, w_i, \qquad j = 1, \ldots, n.$$
The matrix representation of $T$ with respect to $B$ and $C$ is:
$$[T]_B^C = (a_{ij}) \in \mathbb{F}^{m \times n}.$$
Column $j$ contains the coordinates of $T(v_j)$ in the basis $C$.
We often write the definition compactly as:
$$\big(T(v_1), \ldots, T(v_n)\big) = (w_1, \ldots, w_m)\, [T]_B^C.$$
This notation emphasizes that we multiply the row of basis vectors by the matrix to get the row of images.
The matrix is $m \times n$, where $m = \dim W$ (number of rows) and $n = \dim V$ (number of columns).
Note the reversal: rows correspond to the codomain, columns to the domain.
The matrix representation has a crucial property: it transforms coordinates. If we know the coordinates of a vector in the domain basis, matrix multiplication gives us the coordinates of its image in the codomain basis.
Let $T: V \to W$ have matrix $A = [T]_B^C$ with respect to bases $B$ and $C$. If $v$ has coordinates $[v]_B$ (in $B$) and $T(v)$ has coordinates $[T(v)]_C$ (in $C$), then:
$$[T(v)]_C = A\,[v]_B.$$
Let $v = \sum_{j=1}^n x_j v_j$, so that $[v]_B = (x_1, \ldots, x_n)^\top$. By linearity:
$$T(v) = \sum_{j=1}^n x_j\, T(v_j).$$
Substituting the expressions for $T(v_j)$ in terms of the $w_i$:
$$T(v) = \sum_{j=1}^n x_j \sum_{i=1}^m a_{ij}\, w_i = \sum_{i=1}^m \Big( \sum_{j=1}^n a_{ij} x_j \Big) w_i.$$
The coefficient of $w_i$ is $\sum_{j=1}^n a_{ij} x_j$, which is the $i$-th entry of $A\,[v]_B$.
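The following NumPy sketch illustrates this theorem with hypothetical bases of $\mathbb{R}^2$ (none of these numbers come from the text). It computes $[T]_B^C$ by solving for the $C$-coordinates of each $T(v_j)$, then checks $[T(v)]_C = A\,[v]_B$:

```python
import numpy as np

# Hypothetical example: T acts on R^2 with standard-basis matrix M.
M = np.array([[1.0, 2.0],
              [0.0, 3.0]])

B = np.array([[1.0, 1.0],
              [1.0, -1.0]]).T      # columns are the domain basis vectors
C = np.array([[2.0, 0.0],
              [0.0, 3.0]]).T       # columns are the codomain basis vectors

# Column j of A holds the C-coordinates of T(v_j): solve C a_j = M v_j.
A = np.linalg.solve(C, M @ B)

v = np.array([4.0, -1.0])
v_B = np.linalg.solve(B, v)        # [v]_B
Tv_C = np.linalg.solve(C, M @ v)   # [T(v)]_C
assert np.allclose(Tv_C, A @ v_B)  # [T(v)]_C = A [v]_B
```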
For fixed bases, there is a one-to-one correspondence between linear maps $T: V \to W$ and $m \times n$ matrices over $\mathbb{F}$. Each linear map has a unique matrix, and each matrix determines a unique linear map.
The matrix representation is more than just a correspondence—it is an isomorphism of vector spaces. This means we can transfer questions about linear maps to questions about matrices.
Let $V$ and $W$ be vector spaces of dimensions $n$ and $m$ over $\mathbb{F}$. Then:
$$\mathcal{L}(V, W) \cong \mathbb{F}^{m \times n}.$$
In particular, $\dim \mathcal{L}(V, W) = mn$.
The matrix representation map $\Phi: \mathcal{L}(V, W) \to \mathbb{F}^{m \times n}$, defined by $\Phi(T) = [T]_B^C$, is linear and bijective: it is injective because a linear map is determined by its values on a basis, and surjective because every matrix arises from the map defined by its columns.
The space $\mathbb{F}^{m \times n}$ has a natural basis: the matrices $E_{ij}$ with a 1 in position $(i, j)$ and 0 elsewhere. There are $mn$ such matrices.
Problem: Let with .
Find the matrix in the standard basis.
Solution: Compute the images of basis vectors:
The matrix is formed by placing these as columns:
Problem: Find the matrix of the rotation of $\mathbb{R}^2$ by angle $\theta$ counterclockwise.
Solution: The rotation sends:
$$e_1 \mapsto (\cos\theta, \sin\theta), \qquad e_2 \mapsto (-\sin\theta, \cos\theta).$$
Therefore:
$$R_\theta = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}.$$
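As a quick numerical check of this formula, the sketch below builds $R_\theta$ and confirms that its columns are the images of $e_1$ and $e_2$ (here $\theta = \pi/2$ is just an arbitrary test value):

```python
import numpy as np

def rotation_matrix(theta):
    # Matrix of counterclockwise rotation by theta in the standard basis of R^2.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

R = rotation_matrix(np.pi / 2)                 # 90-degree rotation
print(np.round(R @ np.array([1.0, 0.0]), 8))   # e1 -> (0, 1)
print(np.round(R @ np.array([0.0, 1.0]), 8))   # e2 -> (-1, 0)
```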
Problem: Find the matrix of the orthogonal projection of $\mathbb{R}^2$ onto the line spanned by a unit vector $u$.
Solution: The projection formula onto a unit vector $u$ is $P_u(x) = (u \cdot x)\,u$.
The projection matrix is:
$$P = u\,u^\top = \begin{pmatrix} u_1^2 & u_1 u_2 \\ u_1 u_2 & u_2^2 \end{pmatrix}.$$
Problem: Find the matrix of where , using standard bases.
Solution: Basis of : . Basis of : .
In coordinates:
What happens to the matrix when we change bases? This is fundamental for finding nice representations.
Let $B = \{v_1, \ldots, v_n\}$ and $B' = \{v_1', \ldots, v_n'\}$ be bases of $V$. The change of basis matrix from $B'$ to $B$ is the matrix $P$ such that:
$$(v_1', \ldots, v_n') = (v_1, \ldots, v_n)\,P.$$
Column $j$ of $P$ contains the coordinates of the $j$-th vector of $B'$ in the basis $B$.
Let $T: V \to W$ have matrix $A = [T]_B^C$ with respect to bases $B, C$. If we change to new bases $B', C'$, the new matrix is:
$$[T]_{B'}^{C'} = Q^{-1} A P,$$
where $P$ is the change of basis matrix from $B'$ to $B$, and $Q$ is the change of basis matrix from $C'$ to $C$.
For any $v \in V$, we have $[T(v)]_C = A\,[v]_B$. Using $[v]_B = P\,[v]_{B'}$ and $[T(v)]_C = Q\,[T(v)]_{C'}$:
$$Q\,[T(v)]_{C'} = A\,P\,[v]_{B'}.$$
Therefore $[T(v)]_{C'} = Q^{-1} A P\,[v]_{B'}$, so $[T]_{B'}^{C'} = Q^{-1} A P$.
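A short NumPy sketch can sanity-check this formula. The matrices $A$, $P$, $Q$ below are hypothetical stand-ins, chosen only so that $P$ and $Q$ are invertible:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [3.0, -1.0]])        # [T]_B^C for some T: V (dim 2) -> W (dim 3)
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])         # columns: B' vectors expressed in basis B
Q = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])    # columns: C' vectors expressed in basis C

A_new = np.linalg.solve(Q, A @ P)  # Q^{-1} A P = [T]_{B'}^{C'}

# Both routes from [v]_{B'} to [T(v)]_{C'} agree:
v_Bp = np.array([2.0, -1.0])
via_new = A_new @ v_Bp                          # apply the new matrix directly
via_old = np.linalg.solve(Q, A @ (P @ v_Bp))    # convert, apply A, convert back
assert np.allclose(via_new, via_old)
```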
For a linear operator $T: V \to V$, if we use the same basis for domain and codomain, changing from basis $B$ to a new basis $B'$ gives:
$$[T]_{B'} = P^{-1}\,[T]_B\,P.$$
Two matrices related by $B = P^{-1} A P$ are called similar.
Similar matrices share many properties (they represent the same operator!): determinant, trace, eigenvalues, characteristic polynomial, and rank.
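The sketch below verifies these shared invariants numerically for a hypothetical pair of similar matrices (any invertible $P$ would do):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
P = np.array([[1.0, 2.0],
              [1.0, 3.0]])            # invertible (det = 1)
B = np.linalg.inv(P) @ A @ P          # B = P^{-1} A P is similar to A

print(np.trace(A), np.trace(B))                              # equal traces
print(np.linalg.det(A), np.linalg.det(B))                    # equal determinants
print(np.sort(np.linalg.eigvals(A)),
      np.sort(np.linalg.eigvals(B)))                         # same eigenvalues {2, 3}
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(B))    # same rank
```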
Problem: For , find and .
Solution:
These three vectors span , so and is surjective.
For the kernel, solve :
This gives , so .
Problem: The reflection about the line has matrix in the standard basis. Find its matrix in the basis .
Solution: The change of basis matrix from to standard is:
The new matrix is:
The matrix is diagonal! The basis vectors are eigenvectors.
Problem: Let and . Find .
Solution: First, find the individual matrices:
Then multiply (in the correct order!):
Problem: Find the matrix of the differentiation map $D: \mathcal{P}_n \to \mathcal{P}_{n-1}$ using the basis $\{1, x, x^2, \ldots, x^n\}$ for $\mathcal{P}_n$ and $\{1, x, \ldots, x^{n-1}\}$ for $\mathcal{P}_{n-1}$.
Solution: Compute derivatives:
$$D(1) = 0, \quad D(x) = 1, \quad D(x^2) = 2x, \quad \ldots, \quad D(x^n) = n x^{n-1}.$$
The matrix is:
$$[D] = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 2 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & n \end{pmatrix} \in \mathbb{F}^{n \times (n+1)}.$$
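Here is a minimal construction of this matrix for the concrete case $n = 3$ (an assumption made only for illustration), together with a check on a sample cubic:

```python
import numpy as np

# Matrix of D: P_3 -> P_2 in the monomial bases {1, x, x^2, x^3} and {1, x, x^2}.
n = 3
D = np.zeros((n, n + 1))
for k in range(1, n + 1):
    D[k - 1, k] = k                 # D(x^k) = k x^{k-1}

# Check on p(x) = 5 + 2x - x^3 (coefficients listed in increasing degree).
p = np.array([5.0, 2.0, 0.0, -1.0])
print(D @ p)                        # -> [2, 0, -3], i.e. p'(x) = 2 - 3x^2
```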
A map $T: V \to W$ with $\dim V = n$ and $\dim W = m$ has an $m \times n$ matrix, NOT $n \times m$. Rows = codomain, columns = domain.
Columns of the matrix are the images of the basis vectors, not rows. Writing $[T(v_j)]_C$ as a row instead of a column produces the transpose of the correct matrix.
For $S \circ T$, the matrix is $[S][T]$, not $[T][S]$. The rightmost matrix acts first on the vector.
For operators, the change of basis formula is $P^{-1} A P$, not $P A P^{-1}$ or $P A P$. The positions of $P$ and $P^{-1}$ matter!
A matrix alone is meaningless—you must specify the bases. The same linear map has different matrices in different bases. Always track which bases are being used.
Column $j$ of $[T]_B^C$ is the coordinate vector $[T(v_j)]_C$. Apply $T$ to each domain basis vector and express the result in the codomain basis.
$[T(v)]_C = [T]_B^C\,[v]_B$. The matrix transforms input coordinates to output coordinates via matrix-vector multiplication.
$\mathcal{L}(V, W) \cong \mathbb{F}^{m \times n}$. Linear maps and matrices are in one-to-one correspondence (for fixed bases).
$[T]_{B'}^{C'} = Q^{-1} A P$ (general) or $[T]_{B'} = P^{-1}\,[T]_B\,P$ (operators). Similar matrices represent the same linear map in different bases.
Problem 1
Find the matrix of defined by in the standard bases.
Problem 2
Let be the matrix of in standard basis. Find the matrix of in the basis .
Problem 3
Find the matrix of the linear operator defined by using the standard basis .
Problem 4
Show that the matrices and are similar by finding a matrix .
Problem 5
Find the matrix of the integration operator defined by .
Problem 6
Prove that if $A$ is similar to a diagonal matrix, then $A^k$ is also similar to a diagonal matrix for all $k \geq 1$.
Matrix representation is the bridge between abstract linear algebra and computation. It connects to virtually every other topic in linear algebra.
The kernel of $T$ corresponds to the null space of its matrix $A$: solutions to $Ax = 0$. The image of $T$ corresponds to the column space of $A$. These correspondences allow us to compute kernels and images using row reduction.
For the matrix $A$ of $T$: $\operatorname{rank}(T) = \operatorname{rank}(A)$ and $\dim\ker(T) = \dim\operatorname{null}(A)$.
The rank equals the number of pivot columns, and nullity equals the number of free variables.
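In practice a computer algebra system can do the row reduction. A small SymPy sketch (the matrix is a hypothetical example, not taken from the text):

```python
from sympy import Matrix

A = Matrix([[1, 2, 0, 1],
            [0, 1, 1, 0],
            [1, 3, 1, 1]])          # third row = first + second

rref_form, pivots = A.rref()
print(pivots)                       # pivot columns -> rank(A) = 2
print(A.shape[1] - len(pivots))     # free variables -> nullity = 2
print(A.nullspace())                # basis vectors of the null space
```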
For a linear operator , the determinant is independent of the choice of basis (similar matrices have equal determinants). It measures the "volume scaling factor" of the transformation.
If we can find a basis of eigenvectors, the matrix in that basis is diagonal. Finding the "right" basis to simplify the matrix is a central theme: eigenvalue decomposition, Jordan form, and SVD are all about choosing bases to reveal structure.
In an orthonormal basis, inner products become dot products: $\langle u, v \rangle = [u]_B \cdot [v]_B$. The adjoint $T^*$ has matrix $A^* = \overline{A}^{\,\top}$. Self-adjoint operators have real eigenvalues and orthogonal eigenvectors.
The matrix representation respects linear algebra operations:
(1) $[S + T] = [S] + [T]$
(2) $[cT] = c\,[T]$
(3) $[S \circ T] = [S][T]$
(4) $[\mathrm{id}_V] = I$
(5) $[T^{-1}] = [T]^{-1}$ (when $T$ is invertible)
(1) and (2): Follow from the linearity of the representation map.
(3): For any $v \in V$:
$$[(S \circ T)(v)]_D = [S(T(v))]_D = [S]_C^D\,[T(v)]_C = [S]_C^D\,[T]_B^C\,[v]_B.$$
This equals $\big([S]_C^D\,[T]_B^C\big)[v]_B$ for every $v$, so $[S \circ T]_B^D = [S]_C^D\,[T]_B^C$.
(4): $\mathrm{id}(v_j) = v_j$ has coordinates $e_j$, so column $j$ is $e_j$ and the matrix is $I$.
(5): From $[T][T^{-1}] = [T \circ T^{-1}] = [\mathrm{id}] = I$, we get $[T^{-1}] = [T]^{-1}$.
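The composition and inverse rules are easy to check numerically; the matrices below are hypothetical standard-basis matrices chosen only for illustration:

```python
import numpy as np

T = np.array([[1.0, 2.0],
              [0.0, 1.0]])          # matrix of T (a shear)
S = np.array([[0.0, -1.0],
              [1.0,  0.0]])         # matrix of S (a 90-degree rotation)

x = np.array([3.0, -2.0])
# Composition: applying T first and then S matches the single matrix S @ T.
assert np.allclose(S @ (T @ x), (S @ T) @ x)

# Inverse: the matrix of T^{-1} is the inverse matrix.
T_inv = np.linalg.inv(T)
assert np.allclose(T_inv @ (T @ x), x)
```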
The following properties of a linear operator are independent of the choice of basis: rank, nullity, determinant, trace, eigenvalues, and the characteristic polynomial.
Properties that don't depend on the basis choice are called intrinsic—they belong to the linear map itself. Properties that depend on the basis (like individual matrix entries) are extrinsic. The goal of much of linear algebra is to find intrinsic properties and canonical forms.
If $B = P^{-1} A P$, then:
$$\operatorname{tr}(B) = \operatorname{tr}(A) \quad \text{and} \quad \det(B) = \det(A).$$
For trace: $\operatorname{tr}(P^{-1} A P) = \operatorname{tr}(A P P^{-1}) = \operatorname{tr}(A)$, using $\operatorname{tr}(XY) = \operatorname{tr}(YX)$.
For determinant: $\det(P^{-1} A P) = \det(P^{-1})\det(A)\det(P) = \det(A)$.
Matrix representation gives concrete geometric meaning to linear transformations.
The columns of a matrix show where the standard basis vectors go. For a $2 \times 2$ matrix $A$:
Column 1 is $A e_1$: where $e_1 = (1, 0)$ gets mapped.
Column 2 is $A e_2$: where $e_2 = (0, 1)$ gets mapped.
To visualize the transformation, draw the unit square and see where it maps—the columns are the images of the two edges from the origin.
Rotation by θ: $\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$
Reflection about x-axis: $\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$
Scaling by factors a and b: $\begin{pmatrix} a & 0 \\ 0 & b \end{pmatrix}$
Shear (horizontal, factor k): $\begin{pmatrix} 1 & k \\ 0 & 1 \end{pmatrix}$
Projection onto x-axis: $\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$
For a $2 \times 2$ matrix, $|\det A|$ is the factor by which areas scale. For $3 \times 3$, it's the volume scaling factor. A negative determinant indicates orientation reversal (like a reflection).
If $\det A = 0$, the transformation collapses dimension—the image is a lower-dimensional subspace (like projecting 3D to a plane).
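A quick numerical illustration of the area interpretation (the matrices are arbitrary examples): the unit square maps to the parallelogram spanned by the columns of $A$, whose area equals $|\det A|$.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

u, v = A[:, 0], A[:, 1]                  # images of e1 and e2
area = abs(u[0] * v[1] - u[1] * v[0])    # area of the image parallelogram
print(area, abs(np.linalg.det(A)))       # both 5.0

B = np.array([[1.0, 2.0],
              [2.0, 4.0]])               # det = 0: columns are parallel
print(np.linalg.det(B))                  # ~0, so the image collapses to a line
```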
Let $A, B \in \mathbb{F}^{n \times n}$. Prove that if $B = P^{-1} A P$, then for any polynomial $p$:
$$p(B) = P^{-1}\, p(A)\, P.$$
Hint: First prove $B^k = P^{-1} A^k P$ for all $k \geq 1$ by induction, then extend by linearity.
Prove that for any $A, B \in \mathbb{F}^{n \times n}$:
$$\operatorname{tr}(AB) = \operatorname{tr}(BA).$$
Conclude that no matrices $A$, $B$ satisfy $AB - BA = I$.
Let $A \in \mathbb{F}^{m \times n}$ and $B \in \mathbb{F}^{n \times p}$. Prove:
$$\operatorname{rank}(AB) \leq \operatorname{rank}(A).$$
Hint: Think about the column space of $AB$ relative to that of $A$.
Always construct matrices one column at a time. Each column answers: "Where does this basis vector go, expressed in the target basis?"
Before computing, verify that dimensions match. A map $T: V \to W$ with $\dim V = n$ and $\dim W = m$ needs an $m \times n$ matrix.
After computing a matrix, verify by testing: compute $T(v)$ directly and via $A\,[v]_B$ for a sample vector $v$. They should match.
Always write down which bases you're using. Notations like $[T]_B^C$ help prevent confusion between different matrix representations.
| Concept | Formula/Description |
|---|---|
| Matrix Representation | Column $j$ = $[T(v_j)]_C$ |
| Matrix Dimensions | $m \times n$, where $m = \dim W$, $n = \dim V$ |
| Coordinate Transform | $[T(v)]_C = [T]_B^C\,[v]_B$ |
| Isomorphism | $\mathcal{L}(V, W) \cong \mathbb{F}^{m \times n}$, $\dim \mathcal{L}(V, W) = mn$ |
| Change of Basis (General) | $[T]_{B'}^{C'} = Q^{-1} A P$ |
| Change of Basis (Operator) | $[T]_{B'} = P^{-1}\,[T]_B\,P$ |
| Matrix of Composition | $[S \circ T] = [S][T]$ |
| Similar Matrices | Same det, trace, eigenvalues, char. poly, rank |
The Development of Matrix Notation: The word "matrix" was coined by James Joseph Sylvester in 1850, derived from the Latin word for "womb" (as a matrix is a rectangular array from which determinants can be "born"). Arthur Cayley then developed matrix algebra in his 1858 paper "A Memoir on the Theory of Matrices."
From Equations to Transformations: Initially, matrices were viewed as abbreviations for systems of linear equations. The shift to viewing them as representations of linear transformations came later, influenced by the work of mathematicians like Hermann Grassmann and Giuseppe Peano on abstract vector spaces.
The Cambridge School: Cayley and Sylvester, working in Cambridge, laid the foundations of matrix theory. They discovered that matrices form a ring under addition and multiplication, and that matrix multiplication is generally non-commutative—a surprising departure from ordinary number arithmetic.
Modern Perspective: The 20th century saw the abstraction of linear algebra, with emphasis on vector spaces defined axiomatically. In this view, matrices are representations that depend on basis choice—a powerful perspective that unifies seemingly different matrices as representations of the same abstract linear map.
Problem: Find the matrix of the left shift operator $S: \mathbb{F}^n \to \mathbb{F}^n$ defined by $S(x_1, x_2, \ldots, x_n) = (x_2, x_3, \ldots, x_n, 0)$.
Solution: Compute images of basis vectors:
$$S(e_1) = 0, \qquad S(e_j) = e_{j-1} \ \text{ for } j \geq 2.$$
The matrix is:
$$A = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ 0 & 0 & 0 & \cdots & 0 \end{pmatrix}.$$
This is a nilpotent matrix: $A^n = 0$.
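A short check of nilpotency in NumPy, taking $n = 4$ as an arbitrary concrete size:

```python
import numpy as np

n = 4
A = np.diag(np.ones(n - 1), k=1)     # ones on the superdiagonal: the shift matrix
print(A @ np.eye(n)[:, 1])           # S(e_2) = e_1
print(np.linalg.matrix_power(A, n))  # A^n = 0, so the operator is nilpotent
```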
Problem: Find the matrix of .
Solution: Use basis for and for .
The matrix (a row vector) is:
Problem: Verify that gives the same result via direct computation and matrix multiplication.
Solution: Matrix in standard basis:
For :
Now that you understand how to represent linear maps as matrices, the next topics build on this foundation:
These concepts transform abstract linear algebra into computational techniques that power everything from computer graphics to machine learning.
The matrix encodes how the linear map transforms coordinates. Different bases mean different coordinate systems, so the same abstract map gets different numerical representations. The key insight is that while the matrix changes, the intrinsic properties of the map (rank, nullity, eigenvalues) remain unchanged.
Similar matrices represent the same linear operator in different bases. They share many properties: determinant, trace, eigenvalues, characteristic polynomial, and rank. If $B = P^{-1}AP$, then $A$ and $B$ are similar, connected by the change of basis matrix $P$.
Write each vector of the new basis as a linear combination of the old basis vectors. The coefficients form the columns of the change of basis matrix. For example, if $b_j = \sum_i p_{ij} c_i$, then column $j$ of $P$ contains the coefficients $p_{1j}, p_{2j}, \ldots$.
It depends on the problem. For eigenvalue problems, an eigenbasis diagonalizes the matrix. For inner product calculations, an orthonormal basis simplifies computations. For computational efficiency, sparse or structured bases may be preferred.
Yes! The abstract approach works without coordinates—defining linear maps, kernels, images, etc. without choosing bases. However, for numerical computation, we eventually need bases and matrices. The power of the theory is that results (like rank) don't depend on the arbitrary basis choice.
The matrix must multiply an $n$-dimensional coordinate vector (column with $n$ entries) and produce an $m$-dimensional result. For this multiplication to work, the matrix must have $m$ rows and $n$ columns: $(m \times n) \cdot (n \times 1) = (m \times 1)$.
Matrix multiplication corresponds exactly to composition of linear maps. If $T$ has matrix $A$ and $S$ has matrix $B$, then $S \circ T$ has matrix $BA$ (note the order!). This is why matrix multiplication is associative but not commutative.
The notation $[T]_B^C$ denotes the matrix of $T: V \to W$ using basis $B$ for the domain $V$ and basis $C$ for the codomain $W$. When $T: V \to V$ is an operator and we use the same basis for both, we often write $[T]_B$ as shorthand for $[T]_B^B$.
Yes, but only if we use different bases! For fixed bases, the correspondence between linear maps and matrices is one-to-one (an isomorphism). Two maps with the same matrix in the same bases are identical.
Check that for each basis vector $v_j$, the $j$th column equals $[T(v_j)]_C$ (the coordinates of the image in the codomain basis). You can also verify by computing $[T(v)]_C = A[v]_B$ for a few test vectors.