MathIsimple
Course 5

Rank-Nullity & Isomorphisms

The Rank-Nullity theorem is the fundamental dimension formula for linear maps. Isomorphisms show when vector spaces are structurally identical. Dual spaces provide the language for coordinates and constraints.

1. Rank-Nullity Theorem

Theorem 5.1: Rank-Nullity Theorem

Let T: V \to W be a linear map between finite-dimensional vector spaces. Then:

\dim(V) = \dim(\ker T) + \dim(\text{im } T)

Equivalently: \dim(V) = \text{nullity}(T) + \text{rank}(T), where \text{nullity}(T) = \dim(\ker T) and \text{rank}(T) = \dim(\text{im } T).

Proof of Theorem 5.1:

Let \dim(\ker T) = k and let \{u_1, \ldots, u_k\} be a basis of \ker T.

Extend this to a basis \{u_1, \ldots, u_k, v_1, \ldots, v_m\} of V.

We claim \{T(v_1), \ldots, T(v_m)\} is a basis of \text{im } T.

Spanning: Any w \in \text{im } T equals T(v) for some:

v = \sum_{i=1}^k \alpha_i u_i + \sum_{j=1}^m \beta_j v_j

Then, since each T(u_i) = 0:

w = T(v) = \sum_{i=1}^k \alpha_i T(u_i) + \sum_{j=1}^m \beta_j T(v_j) = \sum_{j=1}^m \beta_j T(v_j)

Independence: If \sum \beta_j T(v_j) = 0, then:

T\left(\sum \beta_j v_j\right) = 0

So \sum \beta_j v_j \in \ker T, hence \sum \beta_j v_j = \sum \alpha_i u_i for some scalars \alpha_i.

By independence of the full basis of V, all coefficients are zero; in particular every \beta_j = 0.

Therefore \dim(\text{im } T) = m and:

\dim(V) = k + m = \dim(\ker T) + \dim(\text{im } T)
Corollary 5.1: Consequences

For a linear map T: V \to W with \dim V = n:

  1. T is injective iff \dim(\ker T) = 0 iff \dim(\text{im } T) = n
  2. T is surjective iff \dim(\text{im } T) = \dim W
  3. If \dim V = \dim W, then T is injective iff surjective iff bijective
Example 5.1: Applying Rank-Nullity

Let T: \mathbb{R}^5 \to \mathbb{R}^3 with \dim(\ker T) = 2.

By Rank-Nullity: 5 = 2 + \dim(\text{im } T), so \dim(\text{im } T) = 3.

Since \dim(\text{im } T) = \dim(\mathbb{R}^3), T is surjective.
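The arithmetic in this example can be checked numerically. Below is a small sketch (assuming NumPy is available; the random matrix is just a stand-in for a generic T): a generic 3×5 matrix has rank 3 and nullity 2, and the two always sum to \dim(V) = 5.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))      # represents T: R^5 -> R^3

n = A.shape[1]                       # dim(V) = 5
rank = np.linalg.matrix_rank(A)      # dim(im T)

# Kernel dimension via the SVD: count the (numerically) zero singular values.
s = np.linalg.svd(A, compute_uv=False)
nullity = n - int(np.sum(s > 1e-10))

assert rank + nullity == n           # Rank-Nullity: dim V = rank + nullity
```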

Corollary 1.1: Dimension Bounds

For T: V \to W with \dim V = n and \dim W = m:

  • \dim(\ker T) \geq n - m (at least this many dimensions must collapse to zero)
  • \dim(\text{im } T) \leq \min(n, m) (the image's dimension cannot exceed that of the domain or codomain)
Theorem 1.1: Rank-Nullity for Compositions

For T: U \to V and S: V \to W:

\dim(\ker(S \circ T)) = \dim(\ker(T)) + \dim(\ker(S) \cap \text{im}(T))
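This composition formula can be verified numerically. The sketch below (assuming NumPy; `nullity` and `subspace_basis` are helper names introduced here, not standard API) computes \dim(\ker S \cap \text{im } T) via \dim(U \cap W) = \dim U + \dim W - \dim(U + W):

```python
import numpy as np

def nullity(M):
    """dim ker of the map x -> M x."""
    return M.shape[1] - np.linalg.matrix_rank(M)

def subspace_basis(M, kernel=False):
    """Orthonormal basis (as columns) of im(M), or of ker(M) if kernel=True."""
    U, s, Vt = np.linalg.svd(M)
    r = int(np.sum(s > 1e-10))
    return Vt[r:].T if kernel else U[:, :r]

rng = np.random.default_rng(1)
T = rng.standard_normal((4, 6))              # T: R^6 -> R^4, generically surjective
S = np.zeros((3, 4)); S[:2, :2] = np.eye(2)  # S: R^4 -> R^3 with a 2-dim kernel

ker_S = subspace_basis(S, kernel=True)       # 2-dim subspace of R^4
im_T = subspace_basis(T)                     # all of R^4 here

# dim(ker S ∩ im T) = dim(ker S) + dim(im T) - dim(ker S + im T)
dim_sum = np.linalg.matrix_rank(np.hstack([ker_S, im_T]))
dim_int = ker_S.shape[1] + im_T.shape[1] - dim_sum

assert nullity(S @ T) == nullity(T) + dim_int
```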

2. Isomorphisms

Definition 5.2: Isomorphism

A linear map T: V \to W is an isomorphism if it is bijective (both injective and surjective).

If such an isomorphism exists, we say V and W are isomorphic, written V \cong W.

Theorem 5.2: Inverse of Isomorphism is Linear

If T: V \to W is an isomorphism, then T^{-1}: W \to V is also linear.

Proof:

For w_1, w_2 \in W and \alpha, \beta \in F:

T^{-1}(\alpha w_1 + \beta w_2) = T^{-1}(\alpha T(T^{-1}(w_1)) + \beta T(T^{-1}(w_2)))
= T^{-1}(T(\alpha T^{-1}(w_1) + \beta T^{-1}(w_2))) = \alpha T^{-1}(w_1) + \beta T^{-1}(w_2)
Theorem 5.3: Classification of Finite-Dimensional Vector Spaces

Two finite-dimensional vector spaces over the same field F are isomorphic if and only if they have the same dimension.

Proof:

(⇒) If V \cong W via an isomorphism T, then \ker T = \{0\} and \text{im } T = W.

By Rank-Nullity: \dim V = 0 + \dim W, so \dim V = \dim W.

(⇐) If \dim V = \dim W = n, choose bases \{v_1, \ldots, v_n\} of V and \{w_1, \ldots, w_n\} of W.

Define T(v_i) = w_i and extend linearly. Since T maps a basis to a basis, it is bijective, hence an isomorphism.

Corollary 5.2: V ≅ Fⁿ

Every n-dimensional vector space V over F is isomorphic to F^n.

Example 5.2: Isomorphism Examples
  • \mathbb{R}^2 \cong \mathbb{C} as real vector spaces (both 2-dimensional)
  • P_2(\mathbb{R}) \cong \mathbb{R}^3 via a + bx + cx^2 \mapsto (a, b, c)
  • M_{2 \times 3}(\mathbb{R}) \cong \mathbb{R}^6 via flattening the matrix
Example 5.3: More Isomorphism Examples
  • \mathbb{R}^n \cong M_{n \times 1}(\mathbb{R}) (column vectors)
  • M_n(\mathbb{R}) \cong \mathbb{R}^{n^2} via A \mapsto (a_{11}, a_{12}, \ldots, a_{nn})
  • \text{Sym}_n(\mathbb{R}) \cong \mathbb{R}^{n(n+1)/2} (symmetric matrices)
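The flattening isomorphism M_{2 \times 3}(\mathbb{R}) \cong \mathbb{R}^6 is exactly what NumPy's `reshape` performs; a quick sketch (assuming NumPy is available):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.]])

v = A.reshape(-1)        # T: M_{2x3} -> R^6, A -> (a11, a12, a13, a21, a22, a23)
B = v.reshape(2, 3)      # T^{-1}: R^6 -> M_{2x3}

assert np.array_equal(B, A)      # T^{-1} ∘ T = identity, so T is bijective

# Linearity: T(2A + C) = 2 T(A) + T(C)
C = np.ones((2, 3))
assert np.array_equal((2*A + C).reshape(-1), 2*v + C.reshape(-1))
```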
Theorem 2.1: Isomorphism Preserves Structure

If T: V \to W is an isomorphism, then:

  1. \{v_1, \ldots, v_k\} is independent in V iff \{T(v_1), \ldots, T(v_k)\} is independent in W
  2. \{v_1, \ldots, v_k\} spans V iff \{T(v_1), \ldots, T(v_k)\} spans W
  3. \{v_1, \ldots, v_n\} is a basis of V iff \{T(v_1), \ldots, T(v_n)\} is a basis of W
Definition 5.3: Automorphism

An automorphism is an isomorphism from a vector space to itself. The set of all automorphisms of V, denoted \text{Aut}(V) or GL(V), forms a group under composition.

3. Dual Spaces

Definition 5.4: Linear Functional and Dual Space

A linear functional on a vector space V over F is a linear map \phi: V \to F.

The dual space of V, denoted V^*, is the vector space of all linear functionals on V:

V^* = \mathcal{L}(V, F)
Theorem 5.4: Dimension of Dual Space

If V is finite-dimensional, then \dim(V^*) = \dim(V).

Proof:

V^* = \mathcal{L}(V, F), so \dim(V^*) = \dim(V) \cdot \dim(F) = \dim(V) \cdot 1 = \dim(V).

Definition 5.5: Dual Basis

Let \mathcal{B} = \{v_1, \ldots, v_n\} be a basis of V. The dual basis \mathcal{B}^* = \{v_1^*, \ldots, v_n^*\} of V^* is defined by:

v_i^*(v_j) = \delta_{ij} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \neq j \end{cases}
Example 5.4: Dual Basis Example

For the standard basis \{e_1, e_2\} of \mathbb{R}^2:

  • e_1^*(x, y) = x (extracts the first coordinate)
  • e_2^*(x, y) = y (extracts the second coordinate)
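In code, these coordinate functionals are just component extractions. A small sketch (assuming NumPy; names like `e1_star` are illustrative) that also checks the expansion \phi = \phi(e_1) e_1^* + \phi(e_2) e_2^*:

```python
import numpy as np

e1, e2 = np.array([1., 0.]), np.array([0., 1.])

e1_star = lambda v: v[0]   # extracts the first coordinate
e2_star = lambda v: v[1]   # extracts the second coordinate

# Defining property of the dual basis: e_i*(e_j) = delta_ij
assert e1_star(e1) == 1 and e1_star(e2) == 0
assert e2_star(e1) == 0 and e2_star(e2) == 1

# A functional phi(x, y) = 3x - 2y expands as phi(e1) e1* + phi(e2) e2*
phi = lambda v: 3*v[0] - 2*v[1]
v = np.array([5., 7.])
assert phi(v) == phi(e1)*e1_star(v) + phi(e2)*e2_star(v)
```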
Definition 5.6: Dual Map

For a linear map T: V \to W, the dual map (or transpose) T^*: W^* \to V^* is defined by:

T^*(\phi) = \phi \circ T

for all \phi \in W^*.

Theorem 3.1: Dual Basis is a Basis

If \mathcal{B} = \{v_1, \ldots, v_n\} is a basis of V, then \mathcal{B}^* = \{v_1^*, \ldots, v_n^*\} is a basis of V^*.

Proof:

Let \phi \in V^*. For any v \in V, write v = \sum \alpha_i v_i, and note \alpha_i = v_i^*(v). Then:

\phi(v) = \phi\left(\sum \alpha_i v_i\right) = \sum \alpha_i \phi(v_i) = \sum \phi(v_i) v_i^*(v)

So \phi = \sum \phi(v_i) v_i^*, showing \mathcal{B}^* spans V^*. Independence follows by evaluating any vanishing combination \sum c_i v_i^* = 0 at each v_j, which gives c_j = 0.

Remark 5.1: Double Dual

The double dual V^{**} is the dual of V^*. For finite-dimensional V, there is a natural isomorphism V \cong V^{**} given by v \mapsto \text{ev}_v, where \text{ev}_v(\phi) = \phi(v).

4. Applications of Rank-Nullity

The Rank-Nullity Theorem has numerous applications in solving linear systems, understanding matrix properties, and characterizing linear maps.

Theorem 4.1: System of Linear Equations

For a system Ax = b with A \in M_{m \times n}(F):

  • If \text{rank}(A) = m (full row rank), the system is consistent for all b
  • If \text{rank}(A) = n (full column rank), the system has at most one solution
  • If \text{rank}(A) = n = m, the system has a unique solution for all b
Proof:

The system Ax = b is consistent iff b \in \text{im}(T_A), where T_A(x) = Ax.

If \text{rank}(A) = m, then \dim(\text{im}(T_A)) = m = \dim(F^m), so \text{im}(T_A) = F^m (surjective).

If \text{rank}(A) = n, then by Rank-Nullity \dim(\ker(T_A)) = n - n = 0, so T_A is injective (at most one solution).

Example 4.1: Determining Solvability

For A \in M_{5 \times 3}(\mathbb{R}) with \text{rank}(A) = 2:

By Rank-Nullity: \dim(\ker(A)) = 3 - 2 = 1.

The system Ax = b has either no solution (if b \notin \text{im}(A)) or infinitely many solutions (if consistent, since the kernel has dimension 1).
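This dichotomy can be tested by comparing \text{rank}(A) with the rank of the augmented matrix [A \mid b]: the system is consistent iff appending b does not raise the rank. A sketch (assuming NumPy; `consistent` is a helper defined here):

```python
import numpy as np

rng = np.random.default_rng(2)
# A 5x3 matrix of rank 2: two independent columns plus their sum.
c1, c2 = rng.standard_normal(5), rng.standard_normal(5)
A = np.column_stack([c1, c2, c1 + c2])
assert np.linalg.matrix_rank(A) == 2

def consistent(A, b):
    """Ax = b is solvable iff b lies in im(A), i.e. rank([A|b]) == rank(A)."""
    return np.linalg.matrix_rank(np.column_stack([A, b])) == np.linalg.matrix_rank(A)

b_good = 2*c1 - c2                # inside the column space: infinitely many solutions
b_bad = rng.standard_normal(5)    # generically outside a 2-dim subspace of R^5

assert consistent(A, b_good)
assert not consistent(A, b_bad)
```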

Theorem 4.2: Dimension of Solution Space

For a consistent system Ax = b, the solution set is an affine space of dimension \dim(\ker(A)).

Example 4.2: Application to Matrix Factorization

If A \in M_{n \times n}(F) has \text{rank}(A) = r, then A can be factored as A = BC where B \in M_{n \times r}(F) and C \in M_{r \times n}(F).

This follows from \dim(\text{im}(A)) = r: take the columns of B to be a basis of \text{im}(A), and let the columns of C give the coordinates of each column of A in that basis.
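One concrete way to produce such a rank factorization is via the SVD, keeping the top r singular triples; a sketch (assuming NumPy is available):

```python
import numpy as np

rng = np.random.default_rng(3)
# A 4x4 matrix of rank 2, built as a product of 4x2 and 2x4 factors.
A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 4))
r = np.linalg.matrix_rank(A)                 # r == 2

# Rank factorization from the SVD: A = (U_r diag(s_r)) (V_r^T)
U, s, Vt = np.linalg.svd(A)
B = U[:, :r] * s[:r]                         # n x r: columns span im(A)
C = Vt[:r]                                   # r x n: coordinates of A's columns

assert B.shape == (4, r) and C.shape == (r, 4)
assert np.allclose(A, B @ C)
```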

5. Dual Maps and Transpose

Every linear map induces a dual map (transpose) that reverses direction. This fundamental construction connects linear maps with their matrix representations and reveals deep symmetries.

Theorem 5.1: Properties of Dual Map

For linear maps T: V \to W and S: W \to X:

  1. (S \circ T)^* = T^* \circ S^* (composition reverses)
  2. (I_V)^* = I_{V^*} (the dual of the identity is the identity)
  3. If T is an isomorphism, then (T^{-1})^* = (T^*)^{-1}
Proof:

(1) For \phi \in X^*:

(S \circ T)^*(\phi) = \phi \circ (S \circ T) = (\phi \circ S) \circ T = T^*(S^*(\phi)) = (T^* \circ S^*)(\phi)

(3) Since T^* \circ (T^{-1})^* = (T^{-1} \circ T)^* = I_V^* = I_{V^*}, we have (T^*)^{-1} = (T^{-1})^*.

Theorem 5.2: Kernel and Image of Dual Map

For T: V \to W:

  1. \ker(T^*) = (\text{im}(T))^0 (annihilator of the image)
  2. \text{im}(T^*) = (\ker(T))^0 (annihilator of the kernel)
  3. \text{rank}(T^*) = \text{rank}(T)
Proof:

(1) \phi \in \ker(T^*) iff T^*(\phi) = 0 iff \phi \circ T = 0 iff \phi(w) = 0 for all w \in \text{im}(T) iff \phi \in (\text{im}(T))^0.

(3) By (1) and Rank-Nullity applied to T^*: \dim(\text{im}(T^*)) = \dim(W^*) - \dim(\ker(T^*)) = \dim(W) - \dim((\text{im}(T))^0) = \dim(W) - (\dim(W) - \dim(\text{im}(T))) = \dim(\text{im}(T)).

Example 5.1: Dual Map in Coordinates

If T: \mathbb{R}^n \to \mathbb{R}^m is given by T(x) = Ax for a matrix A, then, with respect to the dual bases, T^* corresponds to A^T (the transpose).

This is why the dual map is also called the "transpose" of a linear map.
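The correspondence can be seen directly: a functional on \mathbb{R}^m is a row vector p, and (\phi \circ T)(x) = p \cdot (Ax) = (A^T p) \cdot x. A sketch (assuming NumPy is available):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 5))     # T: R^5 -> R^3, T(x) = A x
p = rng.standard_normal(3)          # a functional phi on R^3: phi(w) = p . w
x = rng.standard_normal(5)

# T*(phi) acts on x as phi(T(x)) = p . (A x) = (A^T p) . x,
# so in coordinates T* is multiplication by A^T.
assert np.isclose(p @ (A @ x), (A.T @ p) @ x)

# rank(T*) = rank(T)
assert np.linalg.matrix_rank(A.T) == np.linalg.matrix_rank(A)
```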

6. Annihilators and Double Dual

Annihilators provide a way to understand subspaces through their duals. The double dual gives a natural way to identify a vector space with its "dual of the dual", revealing a fundamental symmetry.

Definition 6.1: Annihilator

For a subset S \subseteq V, the annihilator of S is:

S^0 = \{\phi \in V^* : \phi(s) = 0 \text{ for all } s \in S\}
Theorem 6.1: Annihilator is a Subspace

For any S \subseteq V, S^0 is a subspace of V^*.

Proof:

The zero functional lies in S^0, so S^0 is nonempty. If \phi, \psi \in S^0 and \alpha \in F, then for any s \in S:

(\alpha \phi + \psi)(s) = \alpha \phi(s) + \psi(s) = 0

So \alpha \phi + \psi \in S^0.

Theorem 6.2: Dimension of Annihilator

If W is a subspace of a finite-dimensional V, then:

\dim(W^0) = \dim(V) - \dim(W)

Proof:

Let \{w_1, \ldots, w_k\} be a basis for W, extended to a basis \{w_1, \ldots, w_k, v_1, \ldots, v_n\} for V.

Then \{v_1^*, \ldots, v_n^*\} (from the dual basis) is a basis for W^0, so \dim(W^0) = n = \dim(V) - k.

Definition 6.2: Double Dual and Evaluation Map

The double dual V^{**} is (V^*)^*. For each v \in V, define the evaluation map:

\text{ev}_v: V^* \to F, \quad \text{ev}_v(\phi) = \phi(v)

This gives a map \text{ev}: V \to V^{**} defined by \text{ev}(v) = \text{ev}_v.

Theorem 6.3: Natural Isomorphism V ≅ V**

For finite-dimensional V, the evaluation map \text{ev}: V \to V^{**} is an isomorphism.

Proof:

\text{ev} is linear: \text{ev}(\alpha v + u)(\phi) = \phi(\alpha v + u) = \alpha \phi(v) + \phi(u) = (\alpha \, \text{ev}(v) + \text{ev}(u))(\phi).

Since \dim(V) = \dim(V^*) = \dim(V^{**}), it suffices to show injectivity.

If \text{ev}(v) = 0, then \phi(v) = 0 for all \phi \in V^*. If v \neq 0, extend \{v\} to a basis of V; the dual-basis functional v^* satisfies v^*(v) = 1 \neq 0, a contradiction. So v = 0.

Example 6.1: Annihilator Example

In \mathbb{R}^3, let W = \{(x, y, 0) : x, y \in \mathbb{R}\} (the xy-plane).

Then W^0 = \{\phi : \phi(x, y, 0) = 0 \text{ for all } x, y\}. Writing \phi(x, y, z) = ax + by + cz, we get \phi \in W^0 iff a = b = 0.

So W^0 = \{\phi : \phi(x, y, z) = cz\} is 1-dimensional, and \dim(W) + \dim(W^0) = 2 + 1 = 3 = \dim(\mathbb{R}^3).
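Identifying (\mathbb{R}^3)^* with row vectors (a, b, c), the annihilator W^0 is the null space of the transpose of a basis matrix for W; a sketch (assuming NumPy is available):

```python
import numpy as np

# Columns of M are a basis of W, the xy-plane in R^3.
M = np.array([[1., 0.],
              [0., 1.],
              [0., 0.]])

# phi = (a, b, c) annihilates W iff (a, b, c) M = 0, i.e. (a, b, c) in ker(M^T).
U, s, Vt = np.linalg.svd(M.T)
r = int(np.sum(s > 1e-10))
W0 = Vt[r:].T                     # columns span W^0 (here: multiples of z -> cz)

assert W0.shape[1] == 3 - np.linalg.matrix_rank(M)   # dim W^0 = dim V - dim W = 1
assert np.allclose(W0.T @ M, 0)                      # each basis functional kills W
```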

Theorem 6.4: Double Annihilator

For a subspace W \subseteq V, (W^0)^0 = W (under the natural identification V \cong V^{**}).

Frequently Asked Questions

Why is the Rank-Nullity theorem so important?

It's the fundamental constraint on linear maps: the dimension collapsed to zero (the kernel) plus the dimension that survives (the image) equals the dimension of the domain. Existence and uniqueness results for linear systems all flow from this.

What does 'isomorphic' really mean?

Isomorphic spaces are 'the same' for all linear algebra purposes. They have identical algebraic structure—same dimension, same types of subspaces, same linear maps. Only the 'names' of elements differ.

Why is V ≅ Fⁿ so important?

It means every finite-dimensional space can be studied using coordinates. Abstract theorems about V translate to concrete matrix computations in Fⁿ. This is the bridge between abstract and computational linear algebra.

What's a linear functional?

A linear map from V to the base field F. It assigns a scalar to each vector, linearly. Examples: evaluation at a point, integration, trace of a matrix.

How does Rank-Nullity connect to matrix rank?

For a matrix A representing T, rank(A) = rank(T) = dim(im T) = column rank = row rank. The nullity equals the number of free variables, which is n - rank(A).

Rank-Nullity & Isomorphisms Practice
  1. If T: \mathbb{R}^5 \to \mathbb{R}^3 and \dim(\ker T) = 2, what is \dim(\text{im } T)? (Easy)
  2. If T: \mathbb{R}^4 \to \mathbb{R}^4 is surjective, is it injective? (Medium)
  3. Can a linear map T: \mathbb{R}^3 \to \mathbb{R}^5 be surjective? (Medium)
  4. If T: V \to W is an isomorphism, what is \dim(\ker T)? (Easy)
  5. Are \mathbb{R}^2 and \mathbb{C} isomorphic as real vector spaces? (Medium)
  6. If \dim V = \dim W, is every linear T: V \to W an isomorphism? (Easy)
  7. What is \dim(V^*) if \dim(V) = n? (Easy)
  8. The dual of a linear map T: V \to W is T^*: ? \to ? (Medium)
  9. If \{e_1, e_2\} is a basis of V, what is e_1^*(e_2)? (Easy)
  10. If \dim V = 5 and \dim(\ker T) = 2, what is \dim(\text{im } T)? (Medium)