The kernel tells us what a linear map annihilates; the image tells us what it can produce. Together, they completely characterize the map's behavior and reveal its fundamental structure.
The concepts of kernel and image emerged as mathematicians sought to understand the structure of linear transformations. The term "kernel" comes from the German "Kern" (core), reflecting the idea that the kernel captures the "core" of what gets collapsed to zero.
These concepts are central to the First Isomorphism Theorem, which states that the image of a linear map is isomorphic to the quotient of the domain by the kernel: $V/\ker(T) \cong \operatorname{im}(T)$.
Every linear map has two fundamental subspaces associated with it: the kernel (what gets sent to zero) and the image (what can be reached).
The kernel (or null space) of a linear map $T: V \to W$ is the set of all vectors in $V$ that $T$ maps to zero: $\ker(T) = \{v \in V : T(v) = 0\}$.
The image (or range) of a linear map $T: V \to W$ is the set of all vectors in $W$ that are outputs of $T$: $\operatorname{im}(T) = \{T(v) : v \in V\}$.
Different textbooks use different notation: the kernel may be written $\ker(T)$, $N(T)$, or $\operatorname{null}(T)$, and the image may be written $\operatorname{im}(T)$, $R(T)$, or $\operatorname{range}(T)$.
Zero map $0: V \to W$, $v \mapsto 0$: $\ker(0) = V$ and $\operatorname{im}(0) = \{0\}$.
Identity map $\operatorname{id}: V \to V$, $v \mapsto v$: $\ker(\operatorname{id}) = \{0\}$ and $\operatorname{im}(\operatorname{id}) = V$.
For any linear map $T: V \to W$, the kernel $\ker(T)$ is a subspace of $V$.
We verify the three subspace conditions: $T(0) = 0$, so $0 \in \ker(T)$; if $u, v \in \ker(T)$, then $T(u + v) = T(u) + T(v) = 0 + 0 = 0$; and if $v \in \ker(T)$ and $c$ is a scalar, then $T(cv) = cT(v) = 0$.
For any linear map $T: V \to W$, the image $\operatorname{im}(T)$ is a subspace of $W$.
We verify the three subspace conditions: $0 = T(0) \in \operatorname{im}(T)$; if $w_1 = T(v_1)$ and $w_2 = T(v_2)$, then $w_1 + w_2 = T(v_1 + v_2) \in \operatorname{im}(T)$; and if $w = T(v)$, then $cw = T(cv) \in \operatorname{im}(T)$.
If $\{v_1, \dots, v_n\}$ is a basis of $V$, then: $\operatorname{im}(T) = \operatorname{span}\{T(v_1), \dots, T(v_n)\}$.
Any $w \in \operatorname{im}(T)$ equals $T(v)$ for some $v = c_1 v_1 + \cdots + c_n v_n$. By linearity, $T(v) = c_1 T(v_1) + \cdots + c_n T(v_n)$, which is in the span.
The kernel and image provide elegant characterizations of when a linear map is injective (one-to-one) or surjective (onto).
A linear map $T: V \to W$ is injective if and only if $\ker(T) = \{0\}$.
(⇒) Suppose $T$ is injective. If $v \in \ker(T)$, then $T(v) = 0 = T(0)$. By injectivity, $v = 0$. So $\ker(T) = \{0\}$.
(⇐) Suppose $\ker(T) = \{0\}$. If $T(u) = T(v)$, then $T(u - v) = T(u) - T(v) = 0$, so $u - v \in \ker(T) = \{0\}$. Thus $u = v$, and $T$ is injective.
A linear map $T: V \to W$ is surjective if and only if $\operatorname{im}(T) = W$.
This is essentially the definition of surjectivity: $T$ is surjective iff every element of $W$ is hit by some element of $V$, which is exactly when $\operatorname{im}(T) = W$.
$T: V \to W$ is bijective (an isomorphism) iff $\ker(T) = \{0\}$ and $\operatorname{im}(T) = W$.
Here we present systematic methods for computing these fundamental subspaces.
Problem: Find $\ker(T)$ for $T: \mathbb{R}^3 \to \mathbb{R}^2$ defined by $T(x, y, z) = (x + y, y + z)$.
Solution: We need $T(x, y, z) = (0, 0)$: $x + y = 0$ and $y + z = 0$.
From the first equation: $x = -y$. From the second: $z = -y$.
So $\ker(T) = \{(t, -t, t) : t \in \mathbb{R}\} = \operatorname{span}\{(1, -1, 1)\}$.
Problem: Find $\operatorname{im}(T)$ for the same $T$.
Solution: Apply $T$ to the standard basis: $T(e_1) = (1, 0)$, $T(e_2) = (1, 1)$, $T(e_3) = (0, 1)$.
$\operatorname{im}(T) = \operatorname{span}\{(1, 0), (1, 1), (0, 1)\}$. Since $(1, 0)$ and $(0, 1)$ are linearly independent, $\operatorname{im}(T) = \mathbb{R}^2$.
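These computations can be checked with a computer algebra system. Below is a minimal SymPy sketch, assuming the map in this example is $T(x, y, z) = (x + y, y + z)$ (an illustrative choice; any matrix map works the same way):

```python
from sympy import Matrix

# Matrix of the (assumed) map T(x, y, z) = (x + y, y + z)
A = Matrix([[1, 1, 0],
            [0, 1, 1]])

kernel_basis = A.nullspace()    # basis for ker(T)
image_basis = A.columnspace()   # basis for im(T)

print([list(v) for v in kernel_basis])  # one basis vector: the kernel is a line
print([list(v) for v in image_basis])   # two independent columns: im(T) = R^2
```

SymPy's `nullspace` may return a different scalar multiple of the same basis vector, which describes the same line.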
For the projection $P: \mathbb{R}^3 \to \mathbb{R}^3$ defined by $P(x, y, z) = (x, y, 0)$: $\ker(P)$ is the z-axis and $\operatorname{im}(P)$ is the xy-plane.
Note: $\dim \ker(P) + \dim \operatorname{im}(P) = 1 + 2 = 3 = \dim \mathbb{R}^3$.
For differentiation $D: \mathcal{P}_n \to \mathcal{P}_n$ defined by $D(p) = p'$: $\ker(D)$ is the constant polynomials and $\operatorname{im}(D) = \mathcal{P}_{n-1}$.
For a linear map $T: V \to W$: the rank of $T$ is $\operatorname{rank}(T) = \dim \operatorname{im}(T)$, and the nullity of $T$ is $\operatorname{nullity}(T) = \dim \ker(T)$.
For a linear map $T: V \to W$ with $V$ finite-dimensional: $\dim V = \operatorname{rank}(T) + \operatorname{nullity}(T)$.
The domain dimension "splits" into two parts: what gets collapsed to zero (nullity) and what survives as output (rank). This is the central dimension formula of linear algebra.
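The theorem is easy to sanity-check numerically. A short SymPy sketch on an arbitrary sample matrix (any matrix works):

```python
from sympy import Matrix

# Rank-nullity check for the map T: R^3 -> R^2 given by a sample 2x3 matrix
A = Matrix([[1, 2, 3],
            [4, 5, 6]])

rank = A.rank()                 # dim im(T)
nullity = len(A.nullspace())    # dim ker(T)
n = A.cols                      # dimension of the domain R^n

print(rank, nullity, n)
assert rank + nullity == n      # the rank-nullity theorem
```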
$\ker(T) \subseteq V$ (in the domain), while $\operatorname{im}(T) \subseteq W$ (in the codomain). They live in different spaces!
$\ker(T) = \{0\}$ means $T$ is injective, not trivial. The trivial (zero) map has $\ker(T) = V$!
$\operatorname{im}(T)$ is often a proper subspace of $W$. Only when $T$ is surjective do we have $\operatorname{im}(T) = W$.
$0 \in \ker(T)$ and $0 \in \operatorname{im}(T)$ always. Neither kernel nor image is ever empty.
$\ker(T) = \{v \in V : T(v) = 0\}$. Subspace of $V$. Equals $\{0\}$ iff $T$ is injective.
$\operatorname{im}(T) = \{T(v) : v \in V\}$. Subspace of $W$. Equals $W$ iff $T$ is surjective.
$\dim V = \operatorname{rank}(T) + \operatorname{nullity}(T)$. The fundamental relationship.
For a matrix $A$ acting as $x \mapsto Ax$: $\ker$ = null space of $A$, $\operatorname{im}$ = column space of $A$.
Problem: Let $T: \mathbb{R}^2 \to \mathbb{R}^2$ be defined by $T(x, y) = (x + 2y, 2x + 4y)$. Find $\ker(T)$ and $\operatorname{im}(T)$.
Solution:
For the kernel, solve $T(x, y) = (0, 0)$: $x + 2y = 0$ and $2x + 4y = 0$.
The second equation is 2× the first, so we have one independent equation: $x + 2y = 0$, i.e., $x = -2y$. Thus $\ker(T) = \operatorname{span}\{(-2, 1)\}$.
For the image, note $T(x, y) = (x + 2y)(1, 2)$, so $\operatorname{im}(T) = \operatorname{span}\{(1, 2)\}$.
Verification: $\operatorname{nullity}(T) + \operatorname{rank}(T) = 1 + 1 = 2 = \dim \mathbb{R}^2$ ✓
Problem: Let $D: \mathcal{P}_3 \to \mathcal{P}_3$ be defined by $D(p) = p'$. Find $\ker(D)$ and $\operatorname{im}(D)$.
Solution:
For $p = a_0 + a_1 x + a_2 x^2 + a_3 x^3$: $D(p) = a_1 + 2a_2 x + 3a_3 x^2$.
$\ker(D)$ = constant polynomials = $\operatorname{span}\{1\}$, and $\operatorname{im}(D) = \mathcal{P}_2$.
Verification: $1 + 3 = 4 = \dim \mathcal{P}_3$ ✓
Problem: Let $\operatorname{tr}: M_{n \times n}(\mathbb{R}) \to \mathbb{R}$ be the trace map. Find its kernel and image.
Solution:
$\ker(\operatorname{tr})$ = traceless matrices = $\{A : a_{11} + a_{22} + \cdots + a_{nn} = 0\}$
Dimension: $n^2 - 1$ (one constraint on $n^2$ entries)
$\operatorname{im}(\operatorname{tr}) = \mathbb{R}$ since $\operatorname{tr}(\operatorname{diag}(a, 0, \dots, 0)) = a$ for any $a \in \mathbb{R}$.
Verification: $(n^2 - 1) + 1 = n^2 = \dim M_{n \times n}$ ✓
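One way to check the dimension count mechanically: represent the trace as a $1 \times n^2$ matrix acting on matrices flattened entry by entry, then compare rank and nullity. A SymPy sketch (for $n = 3$):

```python
from sympy import Matrix

n = 3
# The trace map M_{n x n} -> R, written as a 1 x n^2 row vector acting on
# matrices flattened row by row: it picks out the diagonal entries.
T = Matrix([[1 if i == j else 0 for i in range(n) for j in range(n)]])

nullity = len(T.nullspace())   # dimension of the traceless matrices
rank = T.rank()                # dimension of the image (all of R)

assert nullity == n**2 - 1 and rank == 1
```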
Problem: Is $T: \mathbb{R}^2 \to \mathbb{R}^3$ defined by $T(x, y) = (x, y, x + y)$ injective?
Solution: Check whether $\ker(T) = \{0\}$.
Solve $T(x, y) = (0, 0, 0)$: $x = 0$, $y = 0$, $x + y = 0$.
From equations 1 and 2: $x = 0$ and $y = 0$.
So $\ker(T) = \{0\}$, and $T$ is injective.
Problem: Is $T: \mathbb{R}^3 \to \mathbb{R}^2$ defined by $T(x, y, z) = (x + z, y + z)$ surjective?
Solution: Check whether $\operatorname{im}(T) = \mathbb{R}^2$.
Compute $T$ on the standard basis: $T(e_1) = (1, 0)$, $T(e_2) = (0, 1)$, $T(e_3) = (1, 1)$.
Since $(1, 0)$ and $(0, 1)$ are in the span, $\operatorname{im}(T) = \mathbb{R}^2$.
$T$ is surjective.
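For matrix maps, both tests reduce to a rank computation. A sketch, using matrices that correspond to the (assumed) maps $T(x, y) = (x, y, x + y)$ and $T(x, y, z) = (x + z, y + z)$:

```python
from sympy import Matrix

def is_injective(A):
    # x -> Ax is injective iff the null space is trivial,
    # i.e. rank(A) equals the number of columns (the domain dimension).
    return A.rank() == A.cols

def is_surjective(A):
    # x -> Ax is surjective iff the columns span the codomain,
    # i.e. rank(A) equals the number of rows (the codomain dimension).
    return A.rank() == A.rows

A = Matrix([[1, 0], [0, 1], [1, 1]])  # 3x2: a map R^2 -> R^3
B = Matrix([[1, 0, 1], [0, 1, 1]])    # 2x3: a map R^3 -> R^2
print(is_injective(A), is_surjective(A))  # True False
print(is_injective(B), is_surjective(B))  # False True
```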
Let $T: V \to W$ be linear and $U \subseteq W$ be a subspace. Then the preimage $T^{-1}(U) = \{v \in V : T(v) \in U\}$ is a subspace of $V$.
Let $T: V \to W$ be linear and $U \subseteq V$ be a subspace. Then $T(U) = \{T(u) : u \in U\}$ is a subspace of $W$.
For linear maps $T: U \to V$ and $S: V \to W$: $\ker(T) \subseteq \ker(S \circ T)$ and $\operatorname{im}(S \circ T) \subseteq \operatorname{im}(S)$.
The First Isomorphism Theorem states that for any linear map $T: V \to W$: $V/\ker(T) \cong \operatorname{im}(T)$.
This says the quotient of $V$ by its kernel is isomorphic to the image. The map $v + \ker(T) \mapsto T(v)$ is a well-defined isomorphism.
When a linear map is represented by a matrix, the kernel and image have concrete interpretations.
For an $m \times n$ matrix $A$, viewed as the map $x \mapsto Ax$ from $\mathbb{R}^n$ to $\mathbb{R}^m$: the kernel is the null space $\{x \in \mathbb{R}^n : Ax = 0\}$, and the image is the column space of $A$.
To find the null space of $A$: row reduce, identify the free variables, and express the general solution of $Ax = 0$ as a span with one basis vector per free variable.
To find the column space of $A$: row reduce to locate the pivot columns; the corresponding columns of the original $A$ form a basis.
Problem: Find the null space and column space of $A = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \end{pmatrix}$.
Solution:
Row reduce: $\begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \end{pmatrix} \to \begin{pmatrix} 1 & 2 & 3 \\ 0 & 0 & 0 \end{pmatrix}$
Pivot column: 1. Free variables: $x_2, x_3$. Null space basis: $(-2, 1, 0)$ and $(-3, 0, 1)$.
Column space: $\operatorname{span}\{(1, 2)\}$ (first column of original $A$)
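The whole procedure is automated in SymPy. A sketch assuming, for illustration, the rank-1 matrix $A = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \end{pmatrix}$:

```python
from sympy import Matrix

# An assumed rank-1 example: the second row is twice the first
A = Matrix([[1, 2, 3],
            [2, 4, 6]])

R, pivots = A.rref()   # reduced row echelon form and pivot column indices
print(R)               # second row becomes zero: one pivot row
print(pivots)          # (0,) -- only the first column is a pivot column

null_basis = A.nullspace()    # one basis vector per free variable
col_basis = A.columnspace()   # the pivot columns of the original A
```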
| Concept | Definition | Key Property |
|---|---|---|
| Kernel | $\ker(T) = \{v \in V : T(v) = 0\}$ | Subspace of $V$ |
| Image | $\operatorname{im}(T) = \{T(v) : v \in V\}$ | Subspace of $W$ |
| Rank | $\operatorname{rank}(T) = \dim \operatorname{im}(T)$ | = column rank of matrix |
| Nullity | $\operatorname{nullity}(T) = \dim \ker(T)$ | = # free variables |
| Injective | $\ker(T) = \{0\}$ | nullity = 0 |
| Surjective | $\operatorname{im}(T) = W$ | rank = $\dim W$ |
Problem 1
Let $T: \mathbb{R}^3 \to \mathbb{R}^2$ be defined by $T(x, y, z) = (x - y, y - z)$. Find bases for $\ker(T)$ and $\operatorname{im}(T)$.
Problem 2
Let $T: \mathbb{R}^2 \to \mathbb{R}^3$ be defined by $T(x, y) = (x + y, x - y, 2x)$. Determine whether $T$ is injective, surjective, or neither.
Problem 3
Let $T: \mathcal{P}_2 \to \mathcal{P}_2$ be defined by $T(p)(x) = p(x) + p'(x)$. Find $\ker(T)$ and $\operatorname{im}(T)$.
Problem 4
If $T: V \to V$ satisfies $T^2 = 0$ (nilpotent of order 2), prove that $\operatorname{im}(T) \subseteq \ker(T)$.
Problem 5
Let $S, T: V \to V$ be linear operators with $S \circ T = 0$. Prove that $\operatorname{im}(T) \subseteq \ker(S)$.
Problem 6
Let $T: V \to W$ be linear with $\dim V = n$. Prove that if $\ker(T) \neq \{0\}$, then $T$ maps any set of $n$ linearly independent vectors to a linearly dependent set.
For a linear operator $T: V \to V$, the kernels of its powers form an increasing chain: $\{0\} \subseteq \ker(T) \subseteq \ker(T^2) \subseteq \ker(T^3) \subseteq \cdots$
This chain eventually stabilizes when $V$ is finite-dimensional: there exists $k$ such that $\ker(T^k) = \ker(T^{k+1}) = \ker(T^{k+2}) = \cdots$
For a linear operator $T: V \to V$, the images form a decreasing chain: $V \supseteq \operatorname{im}(T) \supseteq \operatorname{im}(T^2) \supseteq \cdots$
This chain also stabilizes eventually (again when $V$ is finite-dimensional).
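The stabilization is easy to watch numerically. A SymPy sketch using a $3 \times 3$ nilpotent Jordan block (an illustrative choice), tracking $\dim \ker(N^k)$ via rank-nullity:

```python
from sympy import Matrix, eye

# A nilpotent 3x3 Jordan block: N^3 = 0, so the kernel chain grows, then stops.
N = Matrix([[0, 1, 0],
            [0, 0, 1],
            [0, 0, 0]])

dims = []
P = eye(3)
for k in range(1, 5):
    P = P * N                  # P = N^k
    dims.append(3 - P.rank())  # dim ker(N^k) by rank-nullity

print(dims)  # [1, 2, 3, 3]: strictly increasing, then stable
```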
If $P: V \to V$ is idempotent ($P^2 = P$), then: $V = \ker(P) \oplus \operatorname{im}(P)$.
Moreover, $P(w) = w$ for every $w \in \operatorname{im}(P)$, and $\ker(P) = \operatorname{im}(I - P)$.
Direct sum: We show $\ker(P) \cap \operatorname{im}(P) = \{0\}$.
If $v \in \ker(P) \cap \operatorname{im}(P)$, then $P(v) = 0$ and $v = P(u)$ for some $u \in V$.
Thus $v = P(u) = P^2(u) = P(P(u)) = P(v) = 0$.
Spanning: Any $v \in V$ can be written as: $v = (v - P(v)) + P(v)$,
where $P(v - P(v)) = P(v) - P^2(v) = 0$, so $v - P(v) \in \ker(P)$ and $P(v) \in \operatorname{im}(P)$.
Idempotent operators are called projections. The decomposition shows that projections split the space into two complementary subspaces. This is fundamental in many areas, including functional analysis and quantum mechanics.
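The decomposition can be verified concretely. A NumPy sketch using the projection onto the xy-plane as the idempotent operator:

```python
import numpy as np

# Projection onto the xy-plane in R^3: idempotent, P @ P == P
P = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])
assert np.allclose(P @ P, P)

v = np.array([2.0, -1.0, 5.0])
img_part = P @ v       # component in im(P): (2, -1, 0)
ker_part = v - P @ v   # component in ker(P): (0, 0, 5)

# The identity v = (v - Pv) + Pv realizes V = ker(P) (+) im(P)
assert np.allclose(ker_part + img_part, v)
assert np.allclose(P @ ker_part, 0)          # kernel part is annihilated
assert np.allclose(P @ img_part, img_part)   # P is the identity on im(P)
```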
For $V$ and $W$ finite-dimensional with $\dim V = \dim W$, and $T: V \to W$ linear: $T$ is injective iff $T$ is surjective iff $T$ is an isomorphism.
Problem: Let $T: \mathcal{P}_n \to \mathcal{P}_{n+1}$ be defined by $T(p)(x) = x\,p(x)$. Find $\ker(T)$ and $\operatorname{im}(T)$.
Solution:
For the kernel:
$\ker(T) = \{0\}$ since $x\,p(x) = 0$ for all $x$ implies $p = 0$.
$\operatorname{im}(T)$ = polynomials in $\mathcal{P}_{n+1}$ with zero constant term.
Conclusion: $T$ is injective but not surjective.
Problem: Let $T: \mathbb{R}^2 \to \mathbb{R}^2$ be defined by $T(x, y) = (x + y, x - y)$. Find $\ker(T)$ and $\operatorname{im}(T)$.
Solution:
$\ker(T) = \{0\}$ since $x + y = 0$ and $x - y = 0$ together imply $x = y = 0$.
$\operatorname{im}(T) = \mathbb{R}^2$ since $T\left(\tfrac{a+b}{2}, \tfrac{a-b}{2}\right) = (a, b)$ for any $(a, b) \in \mathbb{R}^2$.
Conclusion: $T$ is an isomorphism (bijective linear map).
Problem: Let $T: M_{2 \times 2}(\mathbb{R}) \to M_{2 \times 2}(\mathbb{R})$ be defined by $T(A) = A - A^T$. Analyze $T$.
Solution:
$\ker(T)$ = symmetric matrices ($A^T = A$)
$\operatorname{im}(T)$ = skew-symmetric matrices ($B^T = -B$)
For $2 \times 2$ matrices: $\operatorname{nullity}(T) = 3$, $\operatorname{rank}(T) = 1$
Check: $3 + 1 = 4 = \dim M_{2 \times 2}$ ✓
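A quick numeric check of this example, assuming the map $T(A) = A - A^T$: outputs are skew-symmetric, and symmetric inputs land in the kernel.

```python
import numpy as np

# T(A) = A - A^T on 2x2 matrices
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
T_A = A - A.T

assert np.allclose(T_A, -T_A.T)   # every output is skew-symmetric

S = np.array([[1.0, 5.0],
              [5.0, 2.0]])         # a symmetric matrix ...
assert np.allclose(S - S.T, 0)    # ... lies in the kernel of T
```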
Problem: Let $T: \mathcal{P}_n \to \mathbb{R}^{n+1}$ be defined by $T(p) = (p(x_0), p(x_1), \dots, p(x_n))$ for distinct points $x_0, \dots, x_n$. Analyze $T$.
Solution:
A polynomial of degree at most $n$ with $n + 1$ distinct roots must be the zero polynomial.
So $\ker(T) = \{0\}$, and $T$ is injective.
Since $\dim \mathcal{P}_n = n + 1 = \dim \mathbb{R}^{n+1}$, $T$ is also surjective.
Conclusion: $T$ is an isomorphism.
For a linear system $Ax = b$: the system is consistent iff $b \in \operatorname{im}(A)$ (the column space), and if $x_0$ is any particular solution, the full solution set is $x_0 + \ker(A)$.
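The structure of the solution set can be illustrated with SymPy; the coefficient matrix below is an assumed example, not tied to any earlier problem:

```python
from sympy import Matrix, symbols, linsolve

# An assumed consistent system Ax = b with a one-dimensional kernel
A = Matrix([[1, 1, 0],
            [0, 1, 1]])
b = Matrix([3, 2])

x, y, z = symbols('x y z')
print(linsolve((A, b), x, y, z))  # one free parameter: a line x0 + t*k

x0 = Matrix([1, 2, 0])        # a particular solution: A*x0 == b
assert A * x0 == b
k = A.nullspace()[0]          # a kernel direction
assert A * (x0 + 5 * k) == b  # shifting by the kernel stays a solution
```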
For a linear differential operator $L$: the kernel of $L$ is the solution space of the homogeneous equation $L(y) = 0$, and the general solution of $L(y) = f$ is one particular solution plus an arbitrary element of the kernel.
Transformations in computer graphics use kernel/image concepts: projecting a 3D scene onto the screen collapses the viewing direction (the kernel) and produces the 2D picture (the image).
Linear codes use kernel and image: the code is the image of the generator map, and a received vector is a valid codeword exactly when it lies in the kernel of the parity-check map.
Understanding kernel and image geometrically provides powerful intuition for linear maps.
Consider projection from $\mathbb{R}^3$ onto the xy-plane: the kernel is the z-axis (everything vertical is crushed to the origin), and the image is the xy-plane.
Consider rotation by angle $\theta$ in $\mathbb{R}^2$: the kernel is $\{0\}$ and the image is all of $\mathbb{R}^2$; rotations are isomorphisms.
Consider reflection about the x-axis in $\mathbb{R}^2$, $(x, y) \mapsto (x, -y)$: the kernel is $\{0\}$ and the image is $\mathbb{R}^2$.
Consider the shear $(x, y) \mapsto (x + ky, y)$ in $\mathbb{R}^2$: the kernel is $\{0\}$ and the image is $\mathbb{R}^2$; shears are invertible.
After computing kernel and image, check that $\dim \ker(T) + \dim \operatorname{im}(T) = \dim V$. This catches computational errors.
When computing the image, apply to the standard basis vectors first. The image is the span of these outputs.
Finding the kernel always reduces to solving $T(v) = 0$. For matrix maps, this is the homogeneous system $Ax = 0$: use row reduction!
Visualize the kernel as what gets "crushed" to zero, and the image as the "shadow" or "footprint" of the domain in the codomain.
Now that you understand kernel and image, the next steps explore the Rank-Nullity Theorem in greater depth, along with isomorphisms and quotient spaces.
The Rank-Nullity Theorem is the crown jewel of this material—it precisely quantifies the trade-off between kernel and image. Understanding this trade-off is essential for all of linear algebra and its applications.
Because it's the set of vectors T sends to zero (null). The terminology 'null space' is common for matrices, while 'kernel' is used for abstract linear maps. Both refer to the same concept.
For T represented by matrix A: ker(T) = null space of A (solution space of Ax = 0), im(T) = column space of A. This connects abstract linear algebra to matrix computations and row reduction.
The kernel is what T 'collapses' to zero. For a projection onto a plane, the kernel is the line perpendicular to that plane. For differentiation, the kernel is constant functions. It measures the 'dimension loss' of the map.
For T: V → V, yes! If v is in both ker(T) and im(T), then T(v) = 0 and v = T(u) for some u. This means T²(u) = 0. Nilpotent operators have this property.
rank(T) = dim(im(T)). This equals the column rank of any matrix representing T. The Rank-Nullity theorem says dim(V) = rank(T) + nullity(T), where nullity(T) = dim(ker T).
Set T(v) = 0 and solve for v. For maps given by matrices, this reduces to solving the homogeneous system Ax = 0 using row reduction.
Method 1: Apply T to a basis of V and take the span. Method 2: For matrix A, the image is the column space—find a basis by identifying pivot columns after row reduction.
For S ∘ T: ker(T) ⊆ ker(S ∘ T) and im(S ∘ T) ⊆ im(S). The composition can only make the kernel larger and the image smaller.
Not by kernel alone! Knowing ker(T) tells you what T sends to 0, but not where other vectors go. However, if you also know im(T), the Rank-Nullity theorem constrains the possibilities.
T is invertible iff ker(T) = {0} AND im(T) = W. For maps between spaces of the same finite dimension, either condition implies both (and invertibility).