MathIsimple
LA-3.2

Kernel & Image

The kernel tells us what a linear map annihilates; the image tells us what it can produce. Together, they completely characterize the map's behavior and reveal its fundamental structure.

3-4 hours · Core Level · 10 Objectives
Learning Objectives
  • Define the kernel and image of a linear map
  • Prove that kernel and image are always subspaces
  • Characterize injectivity through the kernel
  • Characterize surjectivity through the image
  • Compute kernels and images for specific linear maps
  • Understand the geometric meaning of kernel and image
  • Calculate the rank and nullity of linear maps
  • Apply kernel-image concepts to determine map properties
  • Prove fundamental properties of kernel and image
  • Relate kernel and image to matrix null space and column space
Prerequisites
  • Linear map definition (LA-3.1)
  • Vector spaces and subspaces (LA-2.2)
  • Span and linear combinations (LA-2.3)
  • Basis and dimension (LA-2.4)
  • Linear independence
Historical Context

The concepts of kernel and image emerged as mathematicians sought to understand the structure of linear transformations. The term "kernel" comes from the German "Kern" (core), reflecting the idea that the kernel captures the "core" of what gets collapsed to zero.

These concepts are central to the First Isomorphism Theorem, which states that the image of a linear map is isomorphic to the quotient of the domain by the kernel: $V/\ker(T) \cong \text{im}(T)$.

1. Definitions

Every linear map $T: V \to W$ has two fundamental subspaces associated with it: the kernel (what gets sent to zero) and the image (what can be reached).

Definition 3.3: Kernel (Null Space)

The kernel (or null space) of a linear map $T: V \to W$ is the set of all vectors in $V$ that $T$ maps to zero:

$$\ker(T) = \text{null}(T) = \{v \in V : T(v) = 0_W\}$$

Definition 3.4: Image (Range)

The image (or range) of a linear map $T: V \to W$ is the set of all vectors in $W$ that are outputs of $T$:

$$\text{im}(T) = \text{range}(T) = T(V) = \{T(v) : v \in V\}$$
Remark 3.5: Notation Variations

Different textbooks use different notation:

  • Kernel: $\ker(T)$, $\text{null}(T)$, $N(T)$
  • Image: $\text{im}(T)$, $\text{range}(T)$, $R(T)$, $T(V)$
Example 3.9: Basic Examples

Zero map $0: V \to W$:

  • $\ker(0) = V$ (everything maps to zero)
  • $\text{im}(0) = \{0_W\}$ (only zero is produced)

Identity map $I_V: V \to V$:

  • $\ker(I_V) = \{0_V\}$ (only zero maps to zero)
  • $\text{im}(I_V) = V$ (everything is reachable)

2. Kernel and Image are Subspaces

Theorem 3.12: Kernel is a Subspace

For any linear map $T: V \to W$, the kernel $\ker(T)$ is a subspace of $V$.

Proof:

We verify the three subspace conditions:

  1. Non-empty: $T(0_V) = 0_W$, so $0_V \in \ker(T)$.
  2. Closed under addition: If $u, v \in \ker(T)$, then $T(u + v) = T(u) + T(v) = 0 + 0 = 0$, so $u + v \in \ker(T)$.
  3. Closed under scalar multiplication: If $v \in \ker(T)$ and $\alpha \in F$, then $T(\alpha v) = \alpha T(v) = \alpha \cdot 0 = 0$, so $\alpha v \in \ker(T)$.
Theorem 3.13: Image is a Subspace

For any linear map $T: V \to W$, the image $\text{im}(T)$ is a subspace of $W$.

Proof:

We verify the three subspace conditions:

  1. Non-empty: $T(0_V) = 0_W$, so $0_W \in \text{im}(T)$.
  2. Closed under addition: If $w_1, w_2 \in \text{im}(T)$, then $w_1 = T(v_1)$ and $w_2 = T(v_2)$ for some $v_1, v_2 \in V$. Thus $w_1 + w_2 = T(v_1) + T(v_2) = T(v_1 + v_2) \in \text{im}(T)$.
  3. Closed under scalar multiplication: If $w \in \text{im}(T)$, then $w = T(v)$ for some $v \in V$. Thus $\alpha w = \alpha T(v) = T(\alpha v) \in \text{im}(T)$.
Theorem 3.14: Image Equals Span of Basis Images

If $\{v_1, \ldots, v_n\}$ is a basis of $V$, then:

$$\text{im}(T) = \text{span}\{T(v_1), \ldots, T(v_n)\}$$

Proof:

Any $w \in \text{im}(T)$ equals $T(v)$ for some $v = \sum \alpha_i v_i \in V$. By linearity, $w = T(\sum \alpha_i v_i) = \sum \alpha_i T(v_i)$, which is in the span.

3. Characterizing Injectivity and Surjectivity

The kernel and image provide elegant characterizations of when a linear map is injective (one-to-one) or surjective (onto).

Theorem 3.15: Injectivity Characterization

A linear map $T: V \to W$ is injective if and only if $\ker(T) = \{0\}$.

Proof:

(⇒) Suppose $T$ is injective. If $v \in \ker(T)$, then $T(v) = 0 = T(0)$. By injectivity, $v = 0$. So $\ker(T) = \{0\}$.

(⇐) Suppose $\ker(T) = \{0\}$. If $T(u) = T(v)$, then $T(u - v) = T(u) - T(v) = 0$, so $u - v \in \ker(T) = \{0\}$. Thus $u = v$, and $T$ is injective.

Theorem 3.16: Surjectivity Characterization

A linear map $T: V \to W$ is surjective if and only if $\text{im}(T) = W$.

Proof:

This is essentially the definition of surjectivity: $T$ is surjective iff every element of $W$ is hit by some element of $V$, which is exactly when $\text{im}(T) = W$.

Corollary 3.2: Bijectivity

$T$ is bijective (an isomorphism) iff $\ker(T) = \{0\}$ and $\text{im}(T) = W$.

4. Computing Kernel and Image

Here we present systematic methods for computing these fundamental subspaces.

Example 3.10: Computing Kernel

Problem: Find $\ker(T)$ for $T: \mathbb{R}^3 \to \mathbb{R}^2$ defined by $T(x, y, z) = (x + y, 2z - x)$.

Solution: We need $T(x, y, z) = (0, 0)$:

$$\begin{cases} x + y = 0 \\ 2z - x = 0 \end{cases}$$

From the first equation: $y = -x$. From the second: $z = x/2$.

So $\ker(T) = \{(t, -t, t/2) : t \in \mathbb{R}\} = \text{span}\{(1, -1, 1/2)\}$.

Example 3.11: Computing Image

Problem: Find $\text{im}(T)$ for the same $T$.

Solution: Apply $T$ to the standard basis:

  • $T(1,0,0) = (1, -1)$
  • $T(0,1,0) = (1, 0)$
  • $T(0,0,1) = (0, 2)$

$\text{im}(T) = \text{span}\{(1,-1), (1,0), (0,2)\}$. Since $(1,-1)$ and $(0,2)$ are linearly independent, $\text{im}(T) = \mathbb{R}^2$.
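These two computations can be checked numerically. Below is a quick sketch using SymPy, where the matrix of $T$ with respect to the standard bases is written out by hand:

```python
from sympy import Matrix

# Matrix of T(x, y, z) = (x + y, 2z - x) with respect to the standard bases
A = Matrix([[1, 1, 0],
            [-1, 0, 2]])

kernel_basis = A.nullspace()    # basis of ker(T) as column vectors
image_basis = A.columnspace()   # basis of im(T) from the pivot columns

# One kernel vector (a scalar multiple of (1, -1, 1/2));
# two image vectors, so im(T) = R^2
```

SymPy returns the integer multiple $(2, -2, 1)$ of the kernel vector found above; any nonzero scalar multiple spans the same line.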

Example 3.12: Projection Kernel and Image

For the projection $P: \mathbb{R}^3 \to \mathbb{R}^3$ defined by $P(x, y, z) = (x, y, 0)$:

  • $\ker(P) = \{(0, 0, z) : z \in \mathbb{R}\}$ (the z-axis)
  • $\text{im}(P) = \{(x, y, 0) : x, y \in \mathbb{R}\}$ (the xy-plane)

Note: $\mathbb{R}^3 = \ker(P) \oplus \text{im}(P)$.

Example 3.13: Differentiation Kernel and Image

For differentiation $D: P_3(\mathbb{R}) \to P_2(\mathbb{R})$ defined by $D(p) = p'$:

  • $\ker(D) = \{c : c \in \mathbb{R}\}$ (constant polynomials, dimension 1)
  • $\text{im}(D) = P_2(\mathbb{R})$ ($D$ is surjective)
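Since $D$ acts linearly on coefficient vectors, it can be represented by a small matrix and checked directly. A SymPy sketch (the coefficient-vector convention below is an assumption chosen for illustration):

```python
from sympy import Matrix

# D: P_3 -> P_2 on coefficients:
# a0 + a1 x + a2 x^2 + a3 x^3  |->  a1 + 2 a2 x + 3 a3 x^2
D = Matrix([[0, 1, 0, 0],
            [0, 0, 2, 0],
            [0, 0, 0, 3]])

nullity = len(D.nullspace())   # 1: the constant polynomials
rank = D.rank()                # 3: all of P_2, so D is surjective
```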

5. Rank and Nullity

Definition 3.5: Rank and Nullity

For a linear map $T: V \to W$:

  • The rank of $T$ is $\text{rank}(T) = \dim(\text{im}(T))$
  • The nullity of $T$ is $\text{nullity}(T) = \dim(\ker(T))$
Theorem 3.17: Rank-Nullity Theorem (Preview)

For a linear map $T: V \to W$ with $V$ finite-dimensional:

$$\dim(V) = \text{rank}(T) + \text{nullity}(T) = \dim(\text{im } T) + \dim(\ker T)$$
Remark 3.6: Intuition

The domain dimension "splits" into two parts: what gets collapsed to zero (nullity) and what survives as output (rank). This is the central dimension formula of linear algebra.
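The formula is easy to verify for any concrete matrix map. A minimal SymPy check (the sample matrix is an arbitrary choice):

```python
from sympy import Matrix

# Any matrix map R^3 -> R^2 works as a test case
A = Matrix([[1, 2, 1],
            [2, 4, 2]])

rank = A.rank()
nullity = len(A.nullspace())

# Rank-Nullity: rank + nullity equals the dimension of the domain
assert rank + nullity == A.cols
```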

6. Common Mistakes

Mistake 1: Confusing Kernel and Image Spaces

$\ker(T) \subseteq V$ (in the domain), while $\text{im}(T) \subseteq W$ (in the codomain). They live in different spaces!

Mistake 2: Trivial Kernel Means Trivial Map

$\ker(T) = \{0\}$ means $T$ is injective, not trivial. The trivial (zero) map has $\ker(T) = V$!

Mistake 3: Image Equals Codomain

$\text{im}(T)$ is often a proper subspace of $W$. Only when $T$ is surjective do we have $\text{im}(T) = W$.

Mistake 4: Forgetting 0 is Always in Both

$0 \in \ker(T)$ and $0 \in \text{im}(T)$ always. Neither kernel nor image is ever empty.

7. Key Takeaways

Kernel

$\ker(T) = \{v : T(v) = 0\}$. Subspace of $V$. $\ker(T) = \{0\}$ iff $T$ is injective.

Image

$\text{im}(T) = T(V)$. Subspace of $W$. $\text{im}(T) = W$ iff $T$ is surjective.

Dimension Formula

$\dim V = \text{rank}(T) + \text{nullity}(T)$. The fundamental relationship.

Matrix Connection

For a matrix $A$: $\ker(A)$ = null space, $\text{im}(A)$ = column space.

8. Worked Examples

Example 1: Matrix Transformation

Problem: Let $T: \mathbb{R}^3 \to \mathbb{R}^2$ be defined by $T(x, y, z) = (x + 2y - z, 2x + 4y - 2z)$. Find $\ker(T)$ and $\text{im}(T)$.

Solution:

For the kernel, solve $T(x, y, z) = (0, 0)$:

$$\begin{cases} x + 2y - z = 0 \\ 2x + 4y - 2z = 0 \end{cases}$$

The second equation is twice the first, so we have one independent equation: $x + 2y - z = 0$, i.e., $z = x + 2y$.

$$\ker(T) = \{(x, y, x + 2y) : x, y \in \mathbb{R}\} = \text{span}\{(1, 0, 1), (0, 1, 2)\}$$

For the image, note $T(x,y,z) = (x + 2y - z)(1, 2)$, so $\text{im}(T) = \text{span}\{(1, 2)\}$.

Verification: $\dim(\ker T) + \dim(\text{im } T) = 2 + 1 = 3 = \dim \mathbb{R}^3$.

Example 2: Polynomial Space

Problem: Let $T: P_2 \to P_2$ be defined by $T(p)(x) = p(x) - p(0)$. Find $\ker(T)$ and $\text{im}(T)$.

Solution:

For $p(x) = a + bx + cx^2$: $T(p)(x) = (a + bx + cx^2) - a = bx + cx^2$.

$\ker(T) = \{p : T(p) = 0\} = \{p : p(x) = p(0)\}$ = constant polynomials = $\text{span}\{1\}$

$\text{im}(T) = \{bx + cx^2 : b, c \in \mathbb{R}\} = \text{span}\{x, x^2\}$

Verification: $1 + 2 = 3 = \dim P_2$.

Example 3: Trace Map

Problem: Let $\text{tr}: M_n(\mathbb{R}) \to \mathbb{R}$ be the trace map. Find its kernel and image.

Solution:

$\ker(\text{tr}) = \{A \in M_n : \text{tr}(A) = 0\}$ = traceless matrices

Dimension: $n^2 - 1$ (one linear constraint on $n^2$ entries)

$\text{im}(\text{tr}) = \mathbb{R}$ since $\text{tr}(cI) = cn$ for any $c$.

Verification: $(n^2 - 1) + 1 = n^2 = \dim M_n$.
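The dimension count can be confirmed by writing the trace as a $1 \times n^2$ matrix acting on the flattened matrix $\text{vec}(A)$. A SymPy sketch (the row-major flattening is a convention chosen here):

```python
from sympy import eye

n = 3
# tr(A) is the dot product of vec(I) with vec(A),
# so the trace map's matrix is the row vector vec(I)^T
tr_row = eye(n).reshape(1, n * n)

assert tr_row.rank() == 1                     # im(tr) = R
assert len(tr_row.nullspace()) == n * n - 1   # traceless matrices
```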

Example 4: Determining Injectivity

Problem: Is $T: \mathbb{R}^2 \to \mathbb{R}^3$ defined by $T(x, y) = (x + y, x - y, 2x)$ injective?

Solution: Check whether $\ker(T) = \{0\}$.

Solve $T(x, y) = (0, 0, 0)$:

$$\begin{cases} x + y = 0 \\ x - y = 0 \\ 2x = 0 \end{cases}$$

From equations 1 and 2: $x = 0$ and $y = 0$.

So $\ker(T) = \{(0, 0)\}$, and $T$ is injective.
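The same conclusion falls out of a null space computation. A quick SymPy check:

```python
from sympy import Matrix

# Matrix of T(x, y) = (x + y, x - y, 2x)
A = Matrix([[1, 1],
            [1, -1],
            [2, 0]])

# An empty null space basis means ker(T) = {0}, i.e., T is injective
assert A.nullspace() == []
```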

Example 5: Determining Surjectivity

Problem: Is $T: \mathbb{R}^3 \to \mathbb{R}^2$ defined by $T(x, y, z) = (x - y, y + z)$ surjective?

Solution: Check whether $\text{im}(T) = \mathbb{R}^2$.

Compute $T$ on the standard basis:

  • $T(1,0,0) = (1, 0)$
  • $T(0,1,0) = (-1, 1)$
  • $T(0,0,1) = (0, 1)$

$$\text{im}(T) = \text{span}\{(1,0), (-1,1), (0,1)\}$$

Since $(1,0)$ and $(0,1)$ are in the span, $\text{im}(T) = \mathbb{R}^2$.

$T$ is surjective.
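Equivalently, surjectivity holds exactly when the rank equals the codomain dimension. A one-line SymPy check:

```python
from sympy import Matrix

# Matrix of T(x, y, z) = (x - y, y + z)
A = Matrix([[1, -1, 0],
            [0, 1, 1]])

# rank 2 = dim R^2, so im(T) = R^2 and T is surjective
assert A.rank() == 2
```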

9. Advanced Topics

Theorem 3.18: Preimage of a Subspace

Let $T: V \to W$ be linear and $U \leq W$ be a subspace. Then the preimage $T^{-1}(U) = \{v \in V : T(v) \in U\}$ is a subspace of $V$.

Proof:
  1. $T(0) = 0 \in U$, so $0 \in T^{-1}(U)$
  2. If $v_1, v_2 \in T^{-1}(U)$, then $T(v_1), T(v_2) \in U$, so $T(v_1 + v_2) = T(v_1) + T(v_2) \in U$
  3. If $v \in T^{-1}(U)$, then $T(\alpha v) = \alpha T(v) \in U$
Theorem 3.19: Image of a Subspace

Let $T: V \to W$ be linear and $U \leq V$ be a subspace. Then $T(U) = \{T(u) : u \in U\}$ is a subspace of $W$.

Theorem 3.20: Composition and Kernel/Image

For linear maps $S: W \to U$ and $T: V \to W$:

  • $\ker(T) \subseteq \ker(S \circ T)$
  • $\text{im}(S \circ T) \subseteq \text{im}(S)$
  • If $T$ is surjective, $\text{im}(S \circ T) = \text{im}(S)$
  • If $S$ is injective, $\ker(S \circ T) = \ker(T)$
Remark 3.7: First Isomorphism Theorem

The First Isomorphism Theorem states that for any linear map $T: V \to W$:

$$V / \ker(T) \cong \text{im}(T)$$

This says the quotient of $V$ by its kernel is isomorphic to the image. The map $v + \ker(T) \mapsto T(v)$ is a well-defined isomorphism.

10. Connection to Matrices

When a linear map is represented by a matrix, the kernel and image have concrete interpretations.

Definition 3.6: Null Space and Column Space

For a matrix $A \in M_{m \times n}(F)$:

  • Null space: $\text{null}(A) = \{x \in F^n : Ax = 0\}$
  • Column space: $\text{col}(A) = \{Ax : x \in F^n\}$ = span of the columns of $A$
Remark 3.8: Computing via Row Reduction

To find the null space of $A$:

  1. Row reduce AA to echelon form
  2. Identify free variables (columns without pivots)
  3. Express the general solution parametrically

To find the column space of $A$:

  1. Row reduce AA to echelon form
  2. The pivot columns of the original matrix form a basis
Example 3.14: Matrix Example

Problem: Find the null space and column space of

$$A = \begin{pmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \end{pmatrix}$$

Solution:

Row reduce: $A \to \begin{pmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \end{pmatrix}$

Pivot column: 1. Free variables: $x_2, x_3$.

$$\text{null}(A) = \text{span}\{(-2, 1, 0), (-1, 0, 1)\}$$

$\text{col}(A) = \text{span}\{(1, 2)\}$ (first column of the original $A$)
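The row-reduction recipe from Remark 3.8 can be reproduced in SymPy: `rref` reports the pivot columns, and the null space comes out parametrically. A sketch for this example:

```python
from sympy import Matrix

A = Matrix([[1, 2, 1],
            [2, 4, 2]])

rref_form, pivots = A.rref()
null_basis = A.nullspace()               # (-2, 1, 0) and (-1, 0, 1)
col_basis = [A.col(j) for j in pivots]   # pivot columns of the ORIGINAL A

assert pivots == (0,)                    # pivot in the first column only
assert len(null_basis) == 2              # two free variables
assert col_basis[0] == Matrix([1, 2])
```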

11. Quick Reference Summary

| Concept | Definition | Key Property |
|---|---|---|
| Kernel | $\ker(T) = \{v : T(v) = 0\}$ | Subspace of $V$ |
| Image | $\text{im}(T) = \{T(v) : v \in V\}$ | Subspace of $W$ |
| Rank | $\dim(\text{im } T)$ | = column rank of matrix |
| Nullity | $\dim(\ker T)$ | = number of free variables |
| Injective | $\ker(T) = \{0\}$ | nullity = 0 |
| Surjective | $\text{im}(T) = W$ | rank = $\dim W$ |

12. Additional Practice Problems

Problem 1

Let $T: \mathbb{R}^4 \to \mathbb{R}^3$ be defined by $T(x, y, z, w) = (x + y, y + z, z + w)$. Find bases for $\ker(T)$ and $\text{im}(T)$.

Problem 2

Let $T: P_3 \to P_3$ be defined by $T(p)(x) = xp'(x)$. Determine whether $T$ is injective, surjective, or neither.

Problem 3

Let $T: M_{2 \times 2} \to M_{2 \times 2}$ be defined by $T(A) = A - A^T$. Find $\ker(T)$ and $\text{im}(T)$.

Problem 4

If $T: V \to V$ satisfies $T^2 = 0$ (nilpotent of order 2), prove that $\text{im}(T) \subseteq \ker(T)$.

Problem 5

Let $S, T: V \to V$ be linear operators with $ST = 0$. Prove that $\text{im}(T) \subseteq \ker(S)$.

Problem 6

Let $T: V \to W$ be linear with $\dim V = n$. Prove that if $\dim(\ker T) = k$, then $T$ maps any set of $n - k + 1$ linearly independent vectors to a linearly dependent set.

13. Theoretical Insights

Theorem 3.21: Kernel Chain

For a linear operator $T: V \to V$:

$$\{0\} = \ker(T^0) \subseteq \ker(T) \subseteq \ker(T^2) \subseteq \ker(T^3) \subseteq \cdots$$

This chain eventually stabilizes: there exists $k$ such that $\ker(T^k) = \ker(T^{k+1}) = \cdots$
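The chain is easy to watch for a concrete nilpotent operator. A SymPy sketch using the shift map on $\mathbb{R}^3$ (chosen here purely as an illustration):

```python
from sympy import Matrix

# Shift: e1 -> 0, e2 -> e1, e3 -> e2, so T^3 = 0
T = Matrix([[0, 1, 0],
            [0, 0, 1],
            [0, 0, 0]])

kernel_dims = [len((T**k).nullspace()) for k in range(1, 5)]

# The chain grows 1, 2, 3 and then stabilizes once T^3 = 0
assert kernel_dims == [1, 2, 3, 3]
```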

Theorem 3.22: Image Chain

For a linear operator $T: V \to V$:

$$V = \text{im}(T^0) \supseteq \text{im}(T) \supseteq \text{im}(T^2) \supseteq \text{im}(T^3) \supseteq \cdots$$

This chain also stabilizes eventually.

Theorem 3.23: Idempotent Decomposition

If $P: V \to V$ is idempotent ($P^2 = P$), then:

$$V = \ker(P) \oplus \text{im}(P)$$

Moreover, $\ker(P) = \text{im}(I - P)$ and $\text{im}(P) = \ker(I - P)$.

Proof:

Direct sum: We show $\ker(P) \cap \text{im}(P) = \{0\}$.

If $v \in \ker(P) \cap \text{im}(P)$, then $P(v) = 0$ and $v = P(u)$ for some $u$.

Thus $v = P(u) = P^2(u) = P(P(u)) = P(v) = 0$.

Spanning: Any $v \in V$ can be written as:

$$v = (v - P(v)) + P(v)$$

where $P(v - P(v)) = P(v) - P^2(v) = P(v) - P(v) = 0$, so $v - P(v) \in \ker(P)$.
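Both halves of the proof can be mirrored numerically. A sketch with a small idempotent matrix (the matrix and test vector are arbitrary choices):

```python
from sympy import Matrix, eye

P = Matrix([[1, 1],
            [0, 0]])
assert P * P == P                 # idempotent, so P is a projection

# Any v splits as (v - Pv) + Pv, with the first piece in ker(P)
v = Matrix([3, 5])
assert P * (v - P * v) == Matrix([0, 0])

# ker(P) = im(I - P): both are 1-dimensional here
assert len(P.nullspace()) == len((eye(2) - P).columnspace())
```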

Remark 3.9: Projections and Complements

Idempotent operators are called projections. The decomposition $V = \ker(P) \oplus \text{im}(P)$ shows that projections split the space into two complementary subspaces. This is fundamental in many areas, including functional analysis and quantum mechanics.

Theorem 3.24: Dimension Bounds for Composition

For $T: V \to W$ and $S: W \to U$ linear:

  • $\text{rank}(S \circ T) \leq \min\{\text{rank}(S), \text{rank}(T)\}$
  • $\text{nullity}(S \circ T) \geq \text{nullity}(T)$
  • $\text{rank}(S \circ T) \geq \text{rank}(S) + \text{rank}(T) - \dim W$ (Sylvester)
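All three bounds can be spot-checked on small matrices. A SymPy sketch (the matrices are arbitrary choices with $\dim W = 3$):

```python
from sympy import Matrix

T = Matrix([[1, 2],
            [0, 0],
            [0, 0]])     # T: R^2 -> R^3, rank 1
S = Matrix([[1, 0, 0],
            [0, 1, 0]])  # S: R^3 -> R^2, rank 2

ST = S * T
assert ST.rank() <= min(S.rank(), T.rank())
assert len(ST.nullspace()) >= len(T.nullspace())
assert ST.rank() >= S.rank() + T.rank() - 3   # Sylvester, dim W = 3
```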

14. More Worked Examples

Example: Integration Operator

Problem: Let $T: P_2 \to P_3$ be defined by $T(p)(x) = \int_0^x p(t)\,dt$. Find $\ker(T)$ and $\text{im}(T)$.

Solution:

For $p(x) = a + bx + cx^2$:

$$T(p)(x) = ax + \frac{b}{2}x^2 + \frac{c}{3}x^3$$

$\ker(T) = \{0\}$ since $T(p) = 0$ implies $a = b = c = 0$.

$\text{im}(T) = \{ax + bx^2 + cx^3 : a, b, c \in \mathbb{R}\}$ = polynomials with zero constant term.

Conclusion: $T$ is injective but not surjective.
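On coefficient vectors the integration operator is a $4 \times 3$ matrix, so both claims reduce to a rank computation. A SymPy sketch (the coefficient convention follows the solution above):

```python
from sympy import Matrix, Rational

# a + b x + c x^2  |->  a x + (b/2) x^2 + (c/3) x^3
T = Matrix([[0, 0, 0],
            [1, 0, 0],
            [0, Rational(1, 2), 0],
            [0, 0, Rational(1, 3)]])

assert T.nullspace() == []   # trivial kernel: injective
assert T.rank() == 3         # rank 3 < 4 = dim P_3: not surjective
```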

Example: Transpose Operator

Problem: Let $T: M_n \to M_n$ be defined by $T(A) = A^T$. Find $\ker(T)$.

Solution:

$$\ker(T) = \{A : A^T = 0\} = \{0\}$$

since $A^T = 0$ implies $A = (A^T)^T = 0^T = 0$.

$\text{im}(T) = M_n$ since $T(A^T) = A$ for any $A$.

Conclusion: $T$ is an isomorphism (bijective linear map).

Example: Skew-Symmetric Part

Problem: Let $T: M_n \to M_n$ be defined by $T(A) = \frac{1}{2}(A - A^T)$. Analyze $T$.

Solution:

$\ker(T) = \{A : A = A^T\}$ = symmetric matrices

$\text{im}(T) = \{B : B = -B^T\}$ = skew-symmetric matrices

For $n = 2$: $\dim(\ker T) = 3$, $\dim(\text{im } T) = 1$

Check: $3 + 1 = 4 = \dim M_2$
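The dimension count for $n = 2$ can be verified by building the matrix of $T$ on flattened matrices $\text{vec}(A)$. A SymPy sketch (the row-major vectorization is an illustrative choice):

```python
from sympy import Matrix, zeros

n = 2
cols = []
for k in range(n * n):
    E = zeros(n, n)
    E[k // n, k % n] = 1                         # basis matrix E_ij
    cols.append(((E - E.T) / 2).reshape(n * n, 1))
M = Matrix.hstack(*cols)                         # matrix of T on vec(A)

assert len(M.nullspace()) == 3   # ker(T): symmetric matrices, dim 3
assert M.rank() == 1             # im(T): skew-symmetric matrices, dim 1
```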

Example: Evaluation Map

Problem: Let $T: P_n \to \mathbb{R}^{n+1}$ be defined by $T(p) = (p(0), p(1), \ldots, p(n))$. Analyze $T$.

Solution:

A polynomial $p$ of degree $\leq n$ with $n + 1$ roots must be zero.

So $\ker(T) = \{0\}$, and $T$ is injective.

Since $\dim P_n = n + 1 = \dim \mathbb{R}^{n+1}$, $T$ is also surjective.

Conclusion: $T$ is an isomorphism.
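In the monomial basis the evaluation map is the Vandermonde matrix at the nodes $0, 1, \ldots, n$, which turns the isomorphism claim into a rank check. A SymPy sketch:

```python
from sympy import Matrix

n = 3
# Row i evaluates the monomials 1, x, ..., x^n at x = i
V = Matrix([[i**j for j in range(n + 1)] for i in range(n + 1)])

assert V.nullspace() == []   # injective: n+1 roots force p = 0
assert V.rank() == n + 1     # surjective too: an isomorphism
```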

15. Applications

Solving Linear Systems

For a linear system $Ax = b$:

  • Existence: Solutions exist iff $b \in \text{col}(A)$
  • Uniqueness: The solution is unique iff $\text{null}(A) = \{0\}$
  • General solution: $x = x_p + x_h$ where $x_p$ is a particular solution and $x_h \in \text{null}(A)$
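These criteria can be tested on a deliberately rank-deficient system. A SymPy sketch (the system is an arbitrary consistent example):

```python
from sympy import Matrix

A = Matrix([[1, 2],
            [2, 4]])
b = Matrix([3, 6])

# Existence: b is in col(A) iff augmenting b does not raise the rank
assert A.rank() == A.row_join(b).rank()

# Uniqueness fails here: null(A) is a line, so the solutions
# form a one-parameter family x_p + t * x_h
assert len(A.nullspace()) == 1
```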
Differential Equations

For a linear differential operator $L: C^\infty \to C^\infty$:

  • $\ker(L)$ = solution space of the homogeneous equation $L(y) = 0$
  • For $L(y) = f$: general solution = particular solution + $\ker(L)$
Computer Graphics

Transformations in computer graphics use kernel/image concepts:

  • Projection: 3D to 2D has 1-dimensional kernel (the viewing direction)
  • Shadows: Projection onto a plane, kernel = light ray direction
  • Data compression: Projecting to lower-dimensional subspace
Coding Theory

Linear codes use kernel and image:

  • Code: $C = \ker(H)$ for a parity check matrix $H$
  • Encoding: Maps messages to codewords (the image of the generator matrix)
  • Error detection: $Hy = 0$ iff $y$ is a valid codeword

16. Geometric Interpretation

Understanding kernel and image geometrically provides powerful intuition for linear maps.

Projection onto a Plane

Consider projection from $\mathbb{R}^3$ onto the xy-plane:

  • Kernel: The z-axis (what gets "flattened" to the origin)
  • Image: The xy-plane (all possible outputs)
  • Geometry: Points on a vertical line all map to the same point
Rotation

Consider rotation by angle $\theta$ in $\mathbb{R}^2$:

  • Kernel: $\{0\}$ only (rotation preserves distances)
  • Image: All of $\mathbb{R}^2$ (every point is reachable)
  • Conclusion: Rotation is an isomorphism
Reflection

Consider reflection about the x-axis in $\mathbb{R}^2$:

  • Kernel: $\{0\}$ (reflection is invertible, so only the origin maps to zero)
  • Image: All of $\mathbb{R}^2$
  • Fixed points: The x-axis (where $T(v) = v$)
Shear

Consider the shear $T(x, y) = (x + ky, y)$:

  • Kernel: $\{0\}$ (shear is injective)
  • Image: All of $\mathbb{R}^2$ (shear is surjective)
  • Geometry: Horizontal lines slide along themselves
  • Geometry: Horizontal lines slide along themselves

17. Study Tips

Tip 1: Always Verify with Rank-Nullity

After computing kernel and image, check that $\dim(\ker T) + \dim(\text{im } T) = \dim V$. This catches computational errors.

Tip 2: Start with Standard Basis

When computing the image, apply $T$ to the standard basis vectors first. The image is the span of these outputs.

Tip 3: Kernel = Solve Homogeneous System

Finding the kernel always reduces to solving $T(v) = 0$. For matrix maps, this is $Ax = 0$: use row reduction!

Tip 4: Think Geometrically

Visualize the kernel as what gets "crushed" to zero, and the image as the "shadow" or "footprint" of the domain in the codomain.

What's Next?

Now that you understand kernel and image, the next steps explore:

  • Rank-Nullity Theorem: The fundamental dimension formula connecting kernel and image
  • Isomorphisms: Bijective linear maps and when spaces are "the same"
  • Matrix Representation: Representing linear maps as matrices
  • Change of Basis: How matrix representations change with basis choice
  • Dual Spaces: Linear functionals and the dual perspective

The Rank-Nullity Theorem is the crown jewel of this material—it precisely quantifies the trade-off between kernel and image. Understanding this trade-off is essential for all of linear algebra and its applications.

Kernel & Image Practice (12 Questions)

  1. If $T: \mathbb{R}^3 \to \mathbb{R}^2$ is linear and $\ker(T) = \{0\}$, is $T$ injective? (Hard)
  2. What is $\ker(T)$ where $T(x,y) = (x+y, x+y)$? (Medium)
  3. If $T$ is linear and injective, what is $\dim(\ker T)$? (Easy)
  4. Is the image of a linear map always a subspace? (Easy)
  5. For $T: \mathbb{R}^3 \to \mathbb{R}^3$, $T(x,y,z) = (x,y,0)$, what is $\text{im}(T)$? (Medium)
  6. If $\text{im}(T) = W$, then $T$ is: (Easy)
  7. What is $\ker(I_V)$ where $I_V$ is the identity on $V$? (Easy)
  8. If $\ker(T) \neq \{0\}$, then $T$ is: (Medium)
  9. What is $\ker(0)$ where $0: V \to W$ is the zero map? (Easy)
  10. If $\dim V = 5$ and $\dim(\ker T) = 2$, what is $\dim(\text{im } T)$? (Medium)
  11. For the differentiation map $D: P_3 \to P_2$, what is $\ker(D)$? (Medium)
  12. If $T: V \to V$ satisfies $T^2 = T$, then $V = ?$ (Hard)

Frequently Asked Questions

Why is ker(T) also called the null space?

Because it's the set of vectors T sends to zero (null). The terminology 'null space' is common for matrices, while 'kernel' is used for abstract linear maps. Both refer to the same concept.

How do kernel and image relate to matrices?

For T represented by matrix A: ker(T) = null space of A (solution space of Ax = 0), im(T) = column space of A. This connects abstract linear algebra to matrix computations and row reduction.

What's the geometric meaning of the kernel?

The kernel is what T 'collapses' to zero. For a projection onto a plane, the kernel is the line perpendicular to that plane. For differentiation, the kernel is constant functions. It measures the 'dimension loss' of the map.

Can the kernel and image intersect non-trivially?

For T: V → V, yes! If v is in both ker(T) and im(T), then T(v) = 0 and v = T(u) for some u. This means T²(u) = 0. Nilpotent operators have this property.

What's the rank of a linear map?

rank(T) = dim(im(T)). This equals the column rank of any matrix representing T. The Rank-Nullity theorem says dim(V) = rank(T) + nullity(T), where nullity(T) = dim(ker T).

How do I compute the kernel of a linear map?

Set T(v) = 0 and solve for v. For maps given by matrices, this reduces to solving the homogeneous system Ax = 0 using row reduction.

How do I compute the image of a linear map?

Method 1: Apply T to a basis of V and take the span. Method 2: For matrix A, the image is the column space—find a basis by identifying pivot columns after row reduction.

What happens to kernel and image under composition?

For S ∘ T: ker(T) ⊆ ker(S ∘ T) and im(S ∘ T) ⊆ im(S). The composition can only make the kernel larger and the image smaller.

When is a linear map determined by its kernel?

Not by kernel alone! Knowing ker(T) tells you what T sends to 0, but not where other vectors go. However, if you also know im(T), the Rank-Nullity theorem constrains the possibilities.

What's the relationship between kernel, image, and invertibility?

T is invertible iff ker(T) = {0} AND im(T) = W. For maps between spaces of the same finite dimension, either condition implies both (and invertibility).