The Rank-Nullity theorem is the fundamental dimension formula for linear maps. Isomorphisms show when vector spaces are structurally identical. Dual spaces provide the language for coordinates and constraints.
Let $T: V \to W$ be a linear map between finite-dimensional vector spaces. Then:
$$\dim V = \dim(\ker T) + \dim(\operatorname{im} T).$$
Equivalently: $\dim V = \operatorname{nullity}(T) + \operatorname{rank}(T)$, where $\operatorname{nullity}(T) = \dim(\ker T)$ and $\operatorname{rank}(T) = \dim(\operatorname{im} T)$.
Let $k = \dim(\ker T)$ and let $\{v_1, \ldots, v_k\}$ be a basis of $\ker T$.
Extend this to a basis $\{v_1, \ldots, v_k, v_{k+1}, \ldots, v_n\}$ of $V$.
We claim $\{T(v_{k+1}), \ldots, T(v_n)\}$ is a basis of $\operatorname{im} T$.
Spanning: Any $w \in \operatorname{im} T$ equals $T(v)$ for some $v = \sum_{i=1}^n a_i v_i$.
Then:
$$w = T(v) = \sum_{i=1}^n a_i T(v_i) = \sum_{i=k+1}^n a_i T(v_i),$$
since $T(v_i) = 0$ for $i \le k$.
Independence: If $\sum_{i=k+1}^n c_i T(v_i) = 0$, then:
$$T\Big(\sum_{i=k+1}^n c_i v_i\Big) = 0.$$
So $\sum_{i=k+1}^n c_i v_i \in \ker T$, hence $\sum_{i=k+1}^n c_i v_i = \sum_{j=1}^k b_j v_j$ for some scalars $b_j$.
By independence of the full basis, all coefficients are zero.
Therefore $\dim(\operatorname{im} T) = n - k$ and:
$$\dim V = n = k + (n - k) = \dim(\ker T) + \dim(\operatorname{im} T). \qquad \square$$
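As a sanity check, the theorem can be verified numerically: for any real matrix $A$, viewed as the map $T_A(x) = Ax$, the rank plus the nullity equals the number of columns. A minimal NumPy sketch (the matrix is made up for illustration):

```python
import numpy as np

# A made-up 3x5 matrix, viewed as a linear map T: R^5 -> R^3.
A = np.array([[1., 2., 0., 1., 3.],
              [0., 1., 1., 0., 2.],
              [1., 3., 1., 1., 5.]])   # row 3 = row 1 + row 2, so rank(A) < 3

n = A.shape[1]                          # dimension of the domain
rank = np.linalg.matrix_rank(A)         # dim(im T)
nullity = n - rank                      # dim(ker T), by Rank-Nullity

print(rank, nullity, rank + nullity)    # 2 3 5: rank + nullity == dim of domain
```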
For a linear map $T: V \to W$ with $\dim V = \dim W$: $T$ is injective if and only if $T$ is surjective.
Suppose $T$ is injective, so $\ker T = \{0\}$.
By Rank-Nullity: $\dim V = 0 + \operatorname{rank}(T)$, so $\operatorname{rank}(T) = \dim V = \dim W$.
Since $\operatorname{rank}(T) = \dim W$, $T$ is surjective. Conversely, if $T$ is surjective, then $\operatorname{rank}(T) = \dim W = \dim V$, forcing $\dim(\ker T) = 0$.
For $T: \mathbb{R}^3 \to \mathbb{R}^2$, $T(x, y, z) = (x, y)$, with $\ker T = \{(0, 0, z) : z \in \mathbb{R}\}$ and $\operatorname{im} T = \mathbb{R}^2$: $\operatorname{nullity}(T) + \operatorname{rank}(T) = 1 + 2 = 3 = \dim \mathbb{R}^3$. ✓
For the differentiation map $D: P_3(\mathbb{R}) \to P_3(\mathbb{R})$, $D(p) = p'$, with $\ker D$ the constant polynomials: $\operatorname{nullity}(D) + \operatorname{rank}(D) = 1 + 3 = 4 = \dim P_3(\mathbb{R})$. ✓
A linear map $T: V \to W$ is an isomorphism if it is bijective (both injective and surjective).
If such an isomorphism exists, we say $V$ and $W$ are isomorphic, written $V \cong W$.
If $T: V \to W$ is an isomorphism, then $T^{-1}: W \to V$ is also linear.
For $w_1, w_2 \in W$ and scalars $a, b$:
$$T^{-1}(a w_1 + b w_2) = a\, T^{-1}(w_1) + b\, T^{-1}(w_2),$$
since applying the injective map $T$ to both sides yields $a w_1 + b w_2$.
Two finite-dimensional vector spaces over the same field are isomorphic if and only if they have the same dimension.
(⇒) If $V \cong W$ via an isomorphism $T$, then $\ker T = \{0\}$ and $\operatorname{im} T = W$.
By Rank-Nullity: $\dim V = 0 + \dim W$, so $\dim V = \dim W$.
(⇐) If $\dim V = \dim W = n$, choose bases $\{v_1, \ldots, v_n\}$ of $V$ and $\{w_1, \ldots, w_n\}$ of $W$.
Define $T(v_i) = w_i$ and extend linearly. This gives an isomorphism.
Every $n$-dimensional vector space over $F$ is isomorphic to $F^n$.
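Concretely, choosing a basis turns abstract vectors into column vectors, and the coordinate map respects the linear operations. A small Python sketch (the dict representation of polynomials is made up for illustration) of the coordinate map for polynomials of degree at most 2:

```python
import numpy as np

# A hypothetical coordinate map P_2 -> R^3 (polynomials of degree <= 2),
# sending a0 + a1*x + a2*x^2 to its coefficient vector (a0, a1, a2).
def coords(poly):                      # poly: dict mapping power -> coefficient
    return np.array([poly.get(k, 0.0) for k in range(3)])

p = {0: 1.0, 1: 2.0}                   # 1 + 2x
q = {1: -1.0, 2: 3.0}                  # -x + 3x^2
p_plus_q = {k: p.get(k, 0.0) + q.get(k, 0.0) for k in range(3)}

# Linearity: addition in P_2 matches vector addition in R^3.
print(np.allclose(coords(p_plus_q), coords(p) + coords(q)))  # True
```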
If $T: V \to W$ is an isomorphism, then $T$ carries bases of $V$ to bases of $W$ and subspaces of $V$ to subspaces of $W$ of the same dimension; in particular, $\dim V = \dim W$.
An automorphism is an isomorphism from a vector space to itself. The set of all automorphisms of $V$, denoted $\operatorname{Aut}(V)$ or $\operatorname{GL}(V)$, forms a group under composition.
A linear functional on a vector space $V$ over $F$ is a linear map $\varphi: V \to F$.
The dual space of $V$, denoted $V^*$, is the vector space of all linear functionals on $V$:
$$V^* = \mathcal{L}(V, F) = \{\varphi: V \to F \mid \varphi \text{ is linear}\}.$$
If $V$ is finite-dimensional, then $\dim V^* = \dim V$.
$\dim \mathcal{L}(V, F) = (\dim V)(\dim F) = \dim V$, so $\dim V^* = \dim V$.
Let $\{v_1, \ldots, v_n\}$ be a basis of $V$. The dual basis $\{\varphi_1, \ldots, \varphi_n\}$ of $V^*$ is defined by:
$$\varphi_i(v_j) = \delta_{ij} = \begin{cases} 1 & i = j \\ 0 & i \ne j. \end{cases}$$
For the standard basis $\{e_1, \ldots, e_n\}$ of $\mathbb{R}^n$: the dual basis consists of the coordinate functionals $\varphi_i(x_1, \ldots, x_n) = x_i$.
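In coordinates, a dual basis can be computed by matrix inversion: if the basis vectors are the columns of an invertible matrix $B$, then the rows of $B^{-1}$ are the dual functionals, since $B^{-1}B = I$ says exactly that row $i$ applied to column $j$ gives $\delta_{ij}$. A NumPy sketch with a made-up basis of $\mathbb{R}^3$:

```python
import numpy as np

# A hypothetical basis of R^3, stored as the columns of B.
B = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [0., 0., 1.]])

# The dual basis functionals are the rows of B^{-1}:
# (B^{-1} @ B)[i, j] = phi_i(v_j) = delta_ij.
Phi = np.linalg.inv(B)

print(np.allclose(Phi @ B, np.eye(3)))   # True
```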
For a linear map $T: V \to W$, the dual map (or transpose) $T^*: W^* \to V^*$ is defined by:
$$T^*(\varphi) = \varphi \circ T \quad \text{for all } \varphi \in W^*.$$
If $\{v_1, \ldots, v_n\}$ is a basis of $V$, then the dual basis $\{\varphi_1, \ldots, \varphi_n\}$ is a basis of $V^*$.
For any $\psi \in V^*$, write $a_i = \psi(v_i)$. Then:
$$\Big(\sum_{i=1}^n a_i \varphi_i\Big)(v_j) = a_j = \psi(v_j) \quad \text{for each } j.$$
So $\psi = \sum_{i=1}^n \psi(v_i)\, \varphi_i$, showing $\{\varphi_i\}$ spans $V^*$. Independence follows from evaluating a relation $\sum_i c_i \varphi_i = 0$ on the $v_j$.
The double dual $V^{**}$ is the dual of $V^*$. For finite-dimensional $V$, there is a natural isomorphism $\Phi: V \to V^{**}$ given by $\Phi(v) = \operatorname{ev}_v$, where $\operatorname{ev}_v(\varphi) = \varphi(v)$.
The Rank-Nullity Theorem has numerous applications in solving linear systems, understanding matrix properties, and characterizing linear maps.
For a system $Ax = b$ with $A$ an $m \times n$ matrix over $F$:
The system is consistent iff $b \in \operatorname{im} T_A$, where $T_A: F^n \to F^m$ is the map $T_A(x) = Ax$.
If $\operatorname{rank}(A) = m$, then $\dim(\operatorname{im} T_A) = m$, so $\operatorname{im} T_A = F^m$ ($T_A$ is surjective): every right-hand side $b$ is attainable.
If $\operatorname{rank}(A) = n$, then $\dim(\ker T_A) = n - n = 0$, so $T_A$ is injective (at most one solution).
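These rank conditions can be tested numerically. A sketch with NumPy, using a made-up overdetermined system and the standard rank test (a system is solvable iff the coefficient matrix and the augmented matrix have equal rank):

```python
import numpy as np

# A made-up system with m = 3 equations and n = 2 unknowns.
A = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])
b_good = np.array([1., 2., 3.])   # equals 1*col1 + 2*col2: in the column space
b_bad  = np.array([1., 2., 0.])   # not in the column space

def consistent(A, b):
    # Rank test: Ax = b is solvable iff rank(A) == rank([A | b]).
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(np.column_stack([A, b]))

# Here rank(A) = n = 2, so any solution that exists is unique.
print(consistent(A, b_good), consistent(A, b_bad))   # True False
```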
For a $3 \times 3$ matrix $A$ with $\operatorname{rank}(A) = 2$:
By Rank-Nullity: $\dim(\ker T_A) = 3 - 2 = 1$.
The system $Ax = b$ has either no solution (if $b \notin \operatorname{im} T_A$) or infinitely many solutions (if consistent, since the kernel has dimension 1).
For a consistent system $Ax = b$, the solution set is an affine space of dimension $n - \operatorname{rank}(A)$: it equals $x_0 + \ker T_A$ for any particular solution $x_0$.
If an $m \times n$ matrix $A$ has $\operatorname{rank}(A) = r$, then $A$ can be factored as $A = BC$ where $B$ is $m \times r$ and $C$ is $r \times n$.
This follows from $\dim(\operatorname{im} T_A) = r$: choose $r$ linearly independent columns of $A$ to form $B$; every column of $A$ is then a linear combination of the columns of $B$, and those coefficients form $C$.
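A rank factorization can be sketched in NumPy (the matrix is made up; for this example its first $r$ columns happen to be independent, so they can serve as $B$):

```python
import numpy as np

# A made-up 3x4 matrix of rank 2 (row 3 = row 1 + row 2).
A = np.array([[1., 2., 3., 4.],
              [0., 1., 1., 2.],
              [1., 3., 4., 6.]])

r = np.linalg.matrix_rank(A)               # r = 2
B = A[:, :r]                               # first r columns are independent here
C = np.linalg.lstsq(B, A, rcond=None)[0]   # solve B @ C = A column by column

print(B.shape, C.shape, np.allclose(A, B @ C))   # (3, 2) (2, 4) True
```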
Every linear map induces a dual map (transpose) that reverses direction. This fundamental construction connects linear maps with their matrix representations and reveals deep symmetries.
For linear maps $S, T: V \to W$, a scalar $\lambda$, and a linear map $R: U \to V$: (1) $(S + T)^* = S^* + T^*$; (2) $(\lambda T)^* = \lambda T^*$; (3) $(T \circ R)^* = R^* \circ T^*$.
(1) For $\varphi \in W^*$: $(S + T)^*(\varphi) = \varphi \circ (S + T) = \varphi \circ S + \varphi \circ T = S^*(\varphi) + T^*(\varphi)$.
(3) Since $(T \circ R)^*(\varphi) = \varphi \circ T \circ R = T^*(\varphi) \circ R = R^*(T^*(\varphi))$, we have $(T \circ R)^* = R^* \circ T^*$.
For $T: V \to W$ with $V, W$ finite-dimensional: (1) $\ker T^* = (\operatorname{im} T)^0$; (2) $\operatorname{im} T^* = (\ker T)^0$; (3) $\operatorname{rank}(T^*) = \operatorname{rank}(T)$. (Here $U^0$ denotes the annihilator of $U$, defined below.)
(1) $\varphi \in \ker T^*$ iff $T^*(\varphi) = 0$ iff $\varphi \circ T = 0$ iff $\varphi(T(v)) = 0$ for all $v \in V$ iff $\varphi \in (\operatorname{im} T)^0$.
(3) By (1) and the dimension formula for annihilators: $\operatorname{rank}(T^*) = \dim W^* - \dim(\ker T^*) = \dim W - \dim(\operatorname{im} T)^0 = \dim(\operatorname{im} T) = \operatorname{rank}(T)$.
If $T: F^n \to F^m$ is given by $T(x) = Ax$ for an $m \times n$ matrix $A$, then with respect to the dual bases, $T^*$ corresponds to $A^T$ (the transpose).
This is why the dual map is also called the "transpose" of a linear map.
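The correspondence is easy to verify numerically: for a matrix $A$, a vector $x$, and a functional $\varphi$ (written as a coefficient vector), $\varphi(Ax)$ equals $(A^T\varphi)(x)$. A sketch with NumPy on random data:

```python
import numpy as np

# Hypothetical data: T: R^4 -> R^3 given by T(x) = A x, plus a functional phi.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
x = rng.standard_normal(4)
phi = rng.standard_normal(3)     # coefficient vector of a functional on R^3

lhs = phi @ (A @ x)              # phi(T(x))
rhs = (A.T @ phi) @ x            # (T* phi)(x), with T* phi given by A^T phi

print(np.isclose(lhs, rhs))      # True
```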
Annihilators provide a way to understand subspaces through their duals. The double dual gives a natural way to identify a vector space with its "dual of the dual", revealing a fundamental symmetry.
For a subset $S \subseteq V$, the annihilator of $S$ is:
$$S^0 = \{\varphi \in V^* : \varphi(s) = 0 \text{ for all } s \in S\}.$$
For any $S \subseteq V$, $S^0$ is a subspace of $V^*$.
If $\varphi, \psi \in S^0$ and $a, b \in F$, then for any $s \in S$:
$$(a\varphi + b\psi)(s) = a\,\varphi(s) + b\,\psi(s) = a \cdot 0 + b \cdot 0 = 0.$$
So $a\varphi + b\psi \in S^0$.
If $U$ is a subspace of a finite-dimensional $V$, then:
$$\dim U + \dim U^0 = \dim V.$$
Let $\{u_1, \ldots, u_k\}$ be a basis for $U$, extended to a basis $\{u_1, \ldots, u_k, u_{k+1}, \ldots, u_n\}$ for $V$.
Then $\{\varphi_{k+1}, \ldots, \varphi_n\}$ (from the dual basis) is a basis for $U^0$, so $\dim U^0 = n - k = \dim V - \dim U$.
The double dual is $V^{**} = (V^*)^*$. For each $v \in V$, define the evaluation map:
$$\operatorname{ev}_v: V^* \to F, \qquad \operatorname{ev}_v(\varphi) = \varphi(v).$$
This gives a map $\Phi: V \to V^{**}$ defined by $\Phi(v) = \operatorname{ev}_v$.
For finite-dimensional $V$, the evaluation map $\Phi$ is an isomorphism.
$\Phi$ is linear: $\Phi(av + bw)(\varphi) = \varphi(av + bw) = a\,\varphi(v) + b\,\varphi(w) = \big(a\Phi(v) + b\Phi(w)\big)(\varphi)$.
Since $\dim V^{**} = \dim V^* = \dim V$, it suffices to show injectivity.
If $\Phi(v) = 0$, then $\varphi(v) = 0$ for all $\varphi \in V^*$. If $v \ne 0$, extend $\{v\}$ to a basis and use the dual basis to produce a functional with $\varphi(v) = 1$, a contradiction. So $\ker \Phi = \{0\}$.
In $\mathbb{R}^3$, let $U = \{(x, y, 0) : x, y \in \mathbb{R}\}$ (the xy-plane).
Then $\dim U^0 = 3 - 2 = 1$. If $\varphi(x, y, z) = ax + by + cz$, then $\varphi \in U^0$ iff $a = b = 0$.
So $U^0 = \{\varphi : \varphi(x, y, z) = cz\}$ is 1-dimensional, and $\dim U + \dim U^0 = 2 + 1 = 3$. ✓
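The xy-plane example can be checked computationally: if the rows of a matrix $M$ span $U$, then a functional $\varphi$ (as a coefficient vector) lies in $U^0$ exactly when $M\varphi = 0$, so $\dim U^0$ is the nullity of $M$. A NumPy sketch:

```python
import numpy as np

# The xy-plane U in R^3, spanned by e1 and e2 (the rows of M).
M = np.array([[1., 0., 0.],
              [0., 1., 0.]])

# phi in U^0  <=>  M @ phi = 0, so dim(U^0) is the nullity of M.
dim_U = np.linalg.matrix_rank(M)
dim_U0 = M.shape[1] - dim_U             # Rank-Nullity applied to M

print(dim_U, dim_U0, dim_U + dim_U0)    # 2 1 3
```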
For a subspace $U \subseteq V$ with $V$ finite-dimensional, $(U^0)^0 = U$ (under the natural identification $V^{**} \cong V$).
It's the fundamental constraint on linear maps. It tells us that 'dimension lost' (kernel) plus 'dimension gained' (image) equals the starting dimension. Everything about existence and uniqueness of solutions follows from this.
Isomorphic spaces are 'the same' for all linear algebra purposes. They have identical algebraic structure—same dimension, same types of subspaces, same linear maps. Only the 'names' of elements differ.
It means every finite-dimensional space can be studied using coordinates. Abstract theorems about V translate to concrete matrix computations in Fⁿ. This is the bridge between abstract and computational linear algebra.
A linear map from V to the base field F. It assigns a scalar to each vector, linearly. Examples: evaluation at a point, integration, trace of a matrix.
For a matrix A representing T, rank(A) = rank(T) = dim(im T) = column rank = row rank. The nullity equals the number of free variables, which is n - rank(A).
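A quick NumPy check of these equalities on a made-up low-rank matrix:

```python
import numpy as np

# A made-up 5x7 matrix of rank at most 3 (product of 5x3 and 3x7 factors).
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 7))

col_rank = np.linalg.matrix_rank(A)      # rank of T_A (column rank)
row_rank = np.linalg.matrix_rank(A.T)    # rank of the transpose (row rank)
free_vars = A.shape[1] - col_rank        # nullity = n - rank(A)

print(col_rank == row_rank, free_vars)   # column rank equals row rank
```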