Subspaces are vector spaces contained within larger vector spaces. They inherit the same operations and satisfy the same axioms—automatically! Understanding subspaces is essential for analyzing linear systems, linear transformations, and the structure of solution sets.
The concept of a subspace emerged naturally from the study of linear equations. When mathematicians noticed that solution sets of homogeneous systems had special properties—closure under addition and scalar multiplication—the abstract notion of subspace crystallized.
Hermann Grassmann (1844) was among the first to recognize that certain subsets of vector spaces formed vector spaces in their own right. His work on "extensions" laid groundwork for modern subspace theory.
Today, subspaces are central to linear algebra: null spaces, column spaces, eigenspaces, and solution spaces are all subspaces. The theory of subspaces provides the language for analyzing linear systems and transformations.
A subspace is a subset of a vector space that is itself a vector space under the inherited operations. The key insight: we don't need to check all eight axioms!
A subset W ⊆ V is a subspace of V if W is itself a vector space under the same operations of addition and scalar multiplication.
We write W ≤ V or W ⊆ V to indicate that W is a subspace of V. Some texts use W < V for proper subspaces (where W ≠ V).
A non-empty subset W ⊆ V is a subspace if and only if: (1) u + v ∈ W for all u, v ∈ W (closure under addition), and (2) αu ∈ W for all α ∈ F and u ∈ W (closure under scalar multiplication).
(⇒) If W is a subspace (a vector space), closure under both operations is immediate from the definition.
(⇐) Assume W is non-empty and closed under both operations. We verify the vector space axioms: associativity, commutativity, and the distributive laws hold in W because they already hold in V; picking any w ∈ W, closure gives 0 = 0·w ∈ W and −w = (−1)·w ∈ W; and closure guarantees the operations never leave W.
If W is a subspace, then 0 ∈ W. Equivalently, if 0 ∉ W, then W is NOT a subspace.
A non-empty subset W ⊆ V is a subspace if and only if for all u, v ∈ W and α ∈ F: αu + v ∈ W.
This single condition combines closure under addition and scalar multiplication. Setting α = −1 and v = u gives 0 ∈ W.
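As a sanity check (not a proof), here is a small numeric sketch of the one-step test applied to the plane W = {(x, y, z) : x + y + z = 0} in ℝ³; the helper names in_W and random_W are illustrative, not from the text.

```python
# Numeric sanity check of the one-step test on W = {(x,y,z) : x+y+z = 0}.
import numpy as np

rng = np.random.default_rng(0)

def in_W(v, tol=1e-9):
    """Membership test for the plane x + y + z = 0."""
    return abs(v.sum()) < tol

def random_W():
    """Sample a vector in W: pick x, y freely, set z = -x - y."""
    x, y = rng.normal(size=2)
    return np.array([x, y, -x - y])

# Check that alpha*u + v stays in W for many random samples.
for _ in range(1000):
    u, v = random_W(), random_W()
    alpha = rng.normal()
    assert in_W(alpha * u + v)
print("one-step test held on all samples")
```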
Every vector space V has exactly two trivial subspaces: the zero subspace {0} and V itself.
All other subspaces are called proper or non-trivial.
In ℝ², any line through the origin is a subspace: L = {(x, y) : y = mx} = span{(1, m)} (or the vertical line span{(0, 1)}).
Verification: if (x₁, mx₁) and (x₂, mx₂) are in L, their sum (x₁ + x₂, m(x₁ + x₂)) is in L, and α(x, mx) = (αx, m(αx)) ∈ L.
Note: A line NOT through the origin (like y = mx + b with b ≠ 0) is NOT a subspace.
In ℝ³, a plane through the origin is defined by: P = {(x, y, z) : ax + by + cz = 0}.
This is a subspace (it's the null space of the 1×3 matrix [a b c]).
Verification: if (x₁, y₁, z₁) and (x₂, y₂, z₂) both satisfy the equation, so do their sum and any scalar multiple, because the defining equation is linear and homogeneous.
For any m×n matrix A, the null space (kernel) is: ker(A) = {x ∈ ℝⁿ : Ax = 0}.
This is always a subspace of ℝⁿ: if Ax = 0 and Ay = 0, then A(αx + y) = αAx + Ay = 0.
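For computation, sympy can produce a basis of the null space directly; the matrix below is an arbitrary illustrative choice.

```python
# Computing ker(A) with sympy.
from sympy import Matrix

A = Matrix([[1, 2, 1],
            [2, 4, 2]])        # rank 1, so the kernel is 2-dimensional

basis = A.nullspace()           # list of basis vectors for ker(A)
for b in basis:
    print(b.T)                  # each satisfies A*b = 0
    assert A * b == Matrix.zeros(2, 1)
```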
Let Pₙ be the set of polynomials of degree at most n. This is a subspace of the space P of all polynomials: sums and scalar multiples of polynomials of degree ≤ n again have degree ≤ n.
Dimension: n + 1 (basis: 1, x, x², …, xⁿ).
The set of symmetric n×n matrices: {A : Aᵀ = A}
is a subspace of the space of all n×n matrices: (A + B)ᵀ = Aᵀ + Bᵀ = A + B and (αA)ᵀ = αAᵀ = αA.
Dimension: n(n + 1)/2 (the entries on and above the diagonal determine the matrix).
The following are NOT subspaces: a line not through the origin, the first quadrant {(x, y) : x ≥ 0, y ≥ 0} in ℝ², the solution set of Ax = b with b ≠ 0, and the set of invertible matrices (it does not contain 0).
In ℝ³, the subspaces are: {0}, lines through the origin, planes through the origin, and ℝ³ itself.
All must pass through the origin!
The set of upper triangular n×n matrices: {A : Aᵢⱼ = 0 for i > j}
is a subspace of the space of n×n matrices: sums and scalar multiples of upper triangular matrices are upper triangular.
Dimension: n(n + 1)/2 (diagonal + upper triangle).
Matrices with trace zero: {A : tr(A) = 0}
is a subspace (this is the Lie algebra 𝔰𝔩(n) of the group SL(n)): tr(A + B) = tr(A) + tr(B) and tr(αA) = α·tr(A), so the trace-zero condition is preserved.
Dimension: n² − 1 (one linear constraint on n² entries).
Diagonal matrices form a subspace: {A : Aᵢⱼ = 0 for i ≠ j}.
Dimension: n (only the n diagonal entries can be non-zero).
Note: diagonal = symmetric ∩ upper triangular (diagonal matrices are both symmetric and upper triangular).
The span of a set of vectors is the set of all linear combinations. It's the smallest subspace containing those vectors—and every subspace can be described as a span of some generating set.
The linear span (or simply span) of a set S ⊆ V is:
span(S) = {α₁v₁ + ⋯ + αₖvₖ : k ≥ 0, vᵢ ∈ S, αᵢ ∈ F},
i.e., the set of all finite linear combinations of vectors in S.
By convention, span(∅) = {0}. The empty sum is the zero vector.
For any subset S ⊆ V, span(S) is a subspace of V.
We verify the subspace criterion: the sum of two linear combinations of vectors in S is again a linear combination, and scaling a linear combination just scales its coefficients.
span(S) is the smallest subspace containing S. That is:
(1) For any v ∈ S, we have v ∈ span(S), so S ⊆ span(S).
(2) Let W be a subspace with S ⊆ W. Any linear combination α₁v₁ + ⋯ + αₖvₖ with vᵢ ∈ S is in W by closure. So span(S) ⊆ W.
A set S generates (or spans) a vector space V if span(S) = V. We also say S is a generating set for V.
In ℝ³: span{e₁, e₂, e₃} = ℝ³, while span{e₁, e₂} is the xy-plane.
Many different sets can span the same subspace. For example: span{(1, 0), (2, 0)} = span{(1, 0)} = the x-axis in ℝ².
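This redundancy is easy to detect computationally: the rank of the matrix whose columns are the generators equals the dimension of the span. A minimal sketch using the example above:

```python
# Redundant generators don't change the span: rank detects this.
from sympy import Matrix

S = Matrix([[1, 2],
            [0, 0]])        # columns (1,0) and (2,0)
print(S.rank())             # 1: the two columns span only a line
```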
The minimal spanning set (with no redundant vectors) is called a basis—covered in Chapter 2.4.
A vector v is in span(S) if and only if adding v to S does not increase the dimension of the span.
Is (1, 1, 2) ∈ span{(1, 0, 1), (0, 1, 1)}?
We need to check if c₁(1, 0, 1) + c₂(0, 1, 1) = (1, 1, 2) has a solution.
This gives: c₁ = 1, c₂ = 1, c₁ + c₂ = 2.
Check: 1 + 1 = 2 ✓. Yes, (1, 1, 2) is in the span.
Is (2, −1, 1) ∈ span{(1, 0, 1), (0, 1, 1)}?
Check: c₁(1, 0, 1) + c₂(0, 1, 1) = (2, −1, 1).
This gives: c₁ = 2, c₂ = −1, c₁ + c₂ = 1.
Third component: 2 + (−1) = 1 ✓. So (2, −1, 1) IS in the span!
To find span(S) and test membership, form a matrix with vectors in S as columns and row reduce. The pivot columns give a basis; the dimension equals the rank.
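A sketch of that recipe in sympy: v ∈ span(S) exactly when appending v as an extra column leaves the rank unchanged. The vectors here are the illustrative ones from the examples above.

```python
# Membership test via rank: v is in span(S) iff appending v as an
# extra column does not raise the rank.
from sympy import Matrix

def in_span(vectors, v):
    M = Matrix.hstack(*vectors)
    return Matrix.hstack(M, v).rank() == M.rank()

v1 = Matrix([1, 0, 1])
v2 = Matrix([0, 1, 1])
print(in_span([v1, v2], Matrix([1, 1, 2])))   # True: equals v1 + v2
print(in_span([v1, v2], Matrix([0, 0, 1])))   # False: not in the plane
```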
Given two subspaces, we can combine them in two natural ways: intersection and sum. The intersection is always a subspace (but the union usually isn't!).
If U and W are subspaces of V, then U ∩ W is a subspace of V.
Check the subspace criterion: if u, v ∈ U ∩ W and α ∈ F, then αu + v lies in U (U is a subspace) and in W (W is a subspace), hence in U ∩ W; and 0 belongs to both, so the intersection is non-empty.
If {Wᵢ}ᵢ∈I is any collection of subspaces of V, then ⋂ᵢ Wᵢ is also a subspace of V.
Let U and W be subspaces of V. Then U ∪ W is a subspace if and only if U ⊆ W or W ⊆ U.
(⇐) If U ⊆ W, then U ∪ W = W, a subspace (similarly if W ⊆ U).
(⇒) Suppose U ∪ W is a subspace but neither U ⊆ W nor W ⊆ U.
Then there exist u ∈ U \ W and w ∈ W \ U.
Since U ∪ W is a subspace, u + w ∈ U ∪ W.
Case 1: u + w ∈ U. Then w = (u + w) − u ∈ U. Contradiction!
Case 2: u + w ∈ W. Then u = (u + w) − w ∈ W. Contradiction!
The sum of subspaces U and W is: U + W = {u + w : u ∈ U, w ∈ W}.
If U and W are subspaces of V, then U + W is a subspace of V.
Check the subspace criterion: (u₁ + w₁) + (u₂ + w₂) = (u₁ + u₂) + (w₁ + w₂) ∈ U + W, and α(u + w) = αu + αw ∈ U + W.
U + W = span(U ∪ W). The sum is the smallest subspace containing both U and W.
For finite-dimensional subspaces U, W ⊆ V: dim(U + W) = dim(U) + dim(W) − dim(U ∩ W).
Let {v₁, …, vₖ} be a basis for U ∩ W.
Extend it to a basis {v₁, …, vₖ, u₁, …, uₚ} for U.
Extend it to a basis {v₁, …, vₖ, w₁, …, wₘ} for W.
Claim: {v₁, …, vₖ, u₁, …, uₚ, w₁, …, wₘ} is a basis for U + W.
It spans U + W since any u + w is a combination of these vectors.
Linear independence can be verified by showing that if ∑aᵢvᵢ + ∑bⱼuⱼ + ∑cₗwₗ = 0, then all coefficients are zero: the term ∑cₗwₗ must lie in U ∩ W, which forces the cₗ to vanish, and independence of the basis of U then forces the remaining coefficients to vanish.
Therefore: dim(U + W) = k + p + m = (k + p) + (k + m) − k = dim(U) + dim(W) − dim(U ∩ W).
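The formula can also be checked computationally. In the sketch below (with illustrative spanning matrices), dim(U + W) is the rank of the stacked columns, and dim(U ∩ W) is read off the nullspace of [U | −W]; this last step assumes each matrix's columns are independent.

```python
# Verifying dim(U + W) = dim U + dim W - dim(U ∩ W) for column spans.
from sympy import Matrix

U = Matrix([[1, 0], [0, 1], [0, 0]])   # columns span the xy-plane
W = Matrix([[0, 0], [1, 0], [0, 1]])   # columns span the yz-plane

dim_U, dim_W = U.rank(), W.rank()
dim_sum = Matrix.hstack(U, W).rank()
# A kernel vector (a; b) of [U | -W] satisfies U*a = W*b, a point of U ∩ W.
dim_int = len(Matrix.hstack(U, -W).nullspace())

assert dim_sum == dim_U + dim_W - dim_int   # 3 == 2 + 2 - 1
print(dim_U, dim_W, dim_sum, dim_int)
```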
Let U = {(x, y, 0)} (xy-plane) and W = {(0, y, z)} (yz-plane) in ℝ³. Then U ∩ W is the y-axis and U + W = ℝ³: dim(U + W) = 2 + 2 − 1 = 3.
Let U and W be two distinct lines through the origin in ℝ³. Then U ∩ W = {0}, so dim(U + W) = 1 + 1 − 0 = 2: the sum is the plane containing both lines.
The sum U + W is called a direct sum, written U ⊕ W, if U ∩ W = {0}.
V = U ⊕ W if and only if every v ∈ V can be written uniquely as v = u + w with u ∈ U and w ∈ W.
(⇒) Suppose U ∩ W = {0} and u₁ + w₁ = u₂ + w₂ with u₁, u₂ ∈ U and w₁, w₂ ∈ W.
Then u₁ − u₂ = w₂ − w₁ ∈ U ∩ W = {0}.
So u₁ = u₂ and w₁ = w₂. Uniqueness!
(⇐) If the decomposition is unique, suppose v ∈ U ∩ W.
Then v = v + 0 = 0 + v are two decompositions. Uniqueness implies v = 0.
If V = U ⊕ W, then dim(V) = dim(U) + dim(W).
Let U = {(x, 0) : x ∈ ℝ} and W = {(0, y) : y ∈ ℝ} in ℝ².
Then U ∩ W = {(0, 0)} (no overlap in coordinates).
So ℝ² = U ⊕ W (direct sum).
Uniqueness: (x, y) = (x, 0) + (0, y) is the unique decomposition.
Let U = {(x, y, 0)} (xy-plane) and W = {(x, 0, z)} (xz-plane) in ℝ³.
Then U ∩ W = {(x, 0, 0)} = the x-axis, so U ∩ W ≠ {0}.
This is NOT a direct sum: (1, 1, 1) = (1, 1, 0) + (0, 0, 1) = (0, 1, 0) + (1, 0, 1) has multiple decompositions.
If V = U ⊕ W, we say W is a complement of U in V. Complements are not unique! For example, in ℝ², any two distinct lines through the origin are complements of each other.
Every subspace U of a finite-dimensional space V has a complement: there exists a subspace W ⊆ V such that V = U ⊕ W.
Let {u₁, …, uₖ} be a basis for U.
Extend it to a basis {u₁, …, uₖ, wₖ₊₁, …, wₙ} for V.
Let W = span{wₖ₊₁, …, wₙ}.
Then U + W = V (the combined basis spans all of V) and U ∩ W = {0} (the basis is linearly independent).
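This proof is constructive, and sympy can carry it out: append the standard basis to a basis of U, row reduce, and keep the pivot columns. The specific plane below is illustrative.

```python
# Constructing a complement by basis extension: append the standard
# basis to a basis of U and keep the pivot columns.
from sympy import Matrix, eye

U_basis = Matrix([[1, 0], [0, 1], [1, 1]])   # basis of a plane U in R^3
M = Matrix.hstack(U_basis, eye(3))
pivots = M.rref()[1]                          # pivot column indices
full_basis = [M.col(j) for j in pivots]       # extends U's basis to R^3
W_basis = [M.col(j) for j in pivots if j >= U_basis.cols]
print(W_basis)                                # basis of a complement W
```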
The dimension formula does not generalize naively. For three subspaces one might guess the inclusion-exclusion formula
dim(U + V + W) = dim U + dim V + dim W − dim(U ∩ V) − dim(U ∩ W) − dim(V ∩ W) + dim(U ∩ V ∩ W),
but it fails in general (try three distinct lines through the origin in ℝ²: the left side is 2, the right side is 3). For a direct sum of several subspaces, pairwise zero intersections are not enough either: each summand must intersect the sum of the others in {0}.
Problem: Is W = {(x, y, z) ∈ ℝ³ : x + y + z = 0} a subspace?
Solution: Yes! This is the null space of the 1×3 matrix [1 1 1].
Dimension: 2 (one constraint in 3D).
Problem: Is W = {(x, y, z) ∈ ℝ³ : x + y + z = 1} a subspace of ℝ³?
Solution: No!
Quick check: 0 + 0 + 0 = 0 ≠ 1, so 0 ∉ W.
This is an affine subspace (a translated plane), not a linear subspace.
Problem: Find a basis for W = {(x, y, z, w) ∈ ℝ⁴ : x + y = 0 and z − w = 0}.
Solution:
Set up the equations: x + y = 0 and z − w = 0.
Parametrize: y = s, w = t gives (x, y, z, w) = (−s, s, t, t) = s(−1, 1, 0, 0) + t(0, 0, 1, 1).
Basis: {(−1, 1, 0, 0), (0, 0, 1, 1)}.
Dimension: 2.
Problem: Find U ∩ W where: U = {(x, y, z) : x + y + z = 0} and W = {(x, y, z) : x − y = 0}.
Solution: We need both: x + y + z = 0 and x − y = 0.
From the second: y = x. Substituting into the first: 2x + z = 0, so z = −2x.
U ∩ W = {(x, x, −2x)} = span{(1, 1, −2)}.
Dimension: 1 (a line).
Problem: Find U + W where U = span{u} and W = span{w} for two linearly independent vectors u, w ∈ ℝ³.
Solution:
U + W = span{u, w}.
Since u and w are linearly independent, this is a 2-dimensional subspace (a plane).
Check: U ∩ W = {0} (the vectors are not parallel), so U + W = U ⊕ W.
Problem: Is the set of invertible upper triangular matrices a subspace of the space of n×n matrices?
Solution: No! The zero matrix is not invertible, so 0 is not in the set; moreover I + (−I) = 0 shows the set is not closed under addition.
Note: The set of ALL upper triangular matrices IS a subspace.
Problem: Show that the even functions form a subspace of the space of all functions f : ℝ → ℝ.
Solution: Let E = {f : f(−x) = f(x) for all x}. For f, g ∈ E and α ∈ ℝ: (f + g)(−x) = f(−x) + g(−x) = (f + g)(x) and (αf)(−x) = αf(−x) = (αf)(x), and the zero function is even, so E is a subspace.
Similarly, the odd functions form a subspace O, and the whole function space equals E ⊕ O: every f splits as f(x) = (f(x) + f(−x))/2 + (f(x) − f(−x))/2, an even part plus an odd part.
Problem: In ℝ⁴, let dim(U) = 3 and dim(W) = 3. What are the possible values of dim(U ∩ W)?
Solution: By the dimension formula: dim(U ∩ W) = dim(U) + dim(W) − dim(U + W) = 6 − dim(U + W).
Since U + W ⊆ ℝ⁴, we have dim(U + W) ≤ 4.
So dim(U + W) ∈ {3, 4} (it is at least dim U = 3), meaning dim(U ∩ W) ∈ {2, 3}.
Also, dim(U ∩ W) ≤ min(dim U, dim W) = 3, consistent with both values.
Possible values: 2, 3.
Problem: Show that the space of n×n matrices equals Sym ⊕ Skew, where Sym is the symmetric and Skew the skew-symmetric matrices.
Solution:
For any matrix A, we can write: A = (A + Aᵀ)/2 + (A − Aᵀ)/2,
where (A + Aᵀ)/2 is symmetric and (A − Aᵀ)/2 is skew-symmetric.
Check Sym ∩ Skew = {0}: If Aᵀ = A and Aᵀ = −A, then A = −A, so A = 0.
Dimension check: n(n + 1)/2 + n(n − 1)/2 = n² ✓
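The decomposition is easy to compute; a minimal numpy sketch with an arbitrary 2×2 matrix:

```python
# The decomposition A = (A + A^T)/2 + (A - A^T)/2 in numpy.
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
S = (A + A.T) / 2           # symmetric part
K = (A - A.T) / 2           # skew-symmetric part

assert np.allclose(S, S.T)      # S is symmetric
assert np.allclose(K, -K.T)     # K is skew-symmetric
assert np.allclose(S + K, A)    # they recover A
print(S, K, sep="\n")
```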
Problem: For A = [[1, 2, 3], [2, 4, 6], [3, 6, 9]], find the column space and row space.
Solution:
Column space: span of the columns = span{(1, 2, 3), (2, 4, 6), (3, 6, 9)}.
Since all columns are multiples of (1, 2, 3):
Col(A) = span{(1, 2, 3)}, dimension 1.
Row space: span of the rows = span{(1, 2, 3), (2, 4, 6), (3, 6, 9)}.
Since every row is a multiple of (1, 2, 3):
Row(A) = span{(1, 2, 3)}, dimension 1.
Note: dim(Col(A)) = dim(Row(A)) = rank(A). Always!
Problem: Find ker(A) for the same A = [[1, 2, 3], [2, 4, 6], [3, 6, 9]].
Solution: Solve Ax = 0.
RREF: [[1, 2, 3], [0, 0, 0], [0, 0, 0]].
Equation: x + 2y + 3z = 0, so x = −2y − 3z.
Parametrize: (x, y, z) = y(−2, 1, 0) + z(−3, 0, 1).
ker(A) = span{(−2, 1, 0), (−3, 0, 1)}, dimension 2.
Rank-Nullity: rank(A) + nullity(A) = 1 + 2 = 3 = number of columns ✓
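A quick sympy confirmation of the rank and nullity for this representative matrix:

```python
# Rank-nullity check for the matrix used above.
from sympy import Matrix

A = Matrix([[1, 2, 3], [2, 4, 6], [3, 6, 9]])
rank = A.rank()                     # 1
nullity = len(A.nullspace())        # 2
assert rank + nullity == A.cols     # 1 + 2 == 3
print(A.nullspace())                # basis: (-2, 1, 0), (-3, 0, 1)
```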
Problem: Let W = {(x, y, z) : x + y + z = 0} in ℝ³. Find a subspace U such that ℝ³ = W ⊕ U.
Solution:
W has dimension 2 (one constraint). We need U with dimension 1 and W ∩ U = {0}.
Find a vector not in W: (1, 0, 0) works, since 1 + 0 + 0 = 1 ≠ 0.
Let U = span{(1, 0, 0)}.
Check W ∩ U = {0}: If (t, 0, 0) ∈ W, then t = 0, so the intersection is {0}. ✓
Verify: dim(W) + dim(U) = 2 + 1 = 3 = dim(ℝ³). ✓
Problem: Is span{(1, 0, 1), (0, 1, 1)} = span{(1, 1, 2), (1, −1, 0)}?
Solution: Check whether each set's vectors lie in the other set's span.
Can we write (1, 1, 2) = c₁(1, 0, 1) + c₂(0, 1, 1)?
System: c₁ = 1, c₂ = 1, c₁ + c₂ = 2.
From the first two: c₁ = 1, c₂ = 1. Check the third: 1 + 1 = 2 ✓
Similarly check (1, −1, 0): c₁ = 1, c₂ = −1 works (third component: 1 − 1 = 0 ✓).
Answer: Yes, the spans are equal (both pairs are linearly independent, so both spans are the same 2D subspace).
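Span equality can also be settled mechanically: span(S₁) = span(S₂) iff rank(S₁) = rank(S₂) = rank([S₁ | S₂]). A sketch using the vectors above:

```python
# Testing span equality via ranks.
from sympy import Matrix

S1 = Matrix([[1, 0], [0, 1], [1, 1]])      # columns (1,0,1), (0,1,1)
S2 = Matrix([[1, 1], [1, -1], [2, 0]])     # columns (1,1,2), (1,-1,0)

same = S1.rank() == S2.rank() == Matrix.hstack(S1, S2).rank()
print(same)    # True: both span the plane z = x + y
```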
If a subset doesn't contain , it's not a subspace. Always check this first!
Example: {(x, y) : x + y = 1} — the origin isn't in this set.
is rarely a subspace (only when one contains the other). is always a subspace. Use sum, not union!
In ℝ², the line y = mx (through the origin) is a subspace. But y = mx + b with b ≠ 0 is NOT — it doesn't pass through the origin.
Solutions to Ax = 0 form a subspace (the null space). Solutions to Ax = b (with b ≠ 0) do NOT — they form an affine subspace.
A set can be closed under addition but not scalar multiplication.
Example: the first quadrant {(x, y) : x ≥ 0, y ≥ 0} in ℝ² — closed under addition, but (−1)·(1, 1) = (−1, −1) is not in the set.
U + W is NOT automatically a direct sum. You must verify U ∩ W = {0}. If not, decompositions are not unique.
Over ℝ, you need closure under all real scalars, including negatives and fractions. Over ℂ, you need closure under complex scalars too.
Remember: dim(U + W) = dim(U) + dim(W) - dim(U ∩ W). Don't forget to subtract the intersection! And dim(U + W) ≤ dim(V), always.
Given a subset W ⊆ V, verify it's a subspace:
Quick test: If W is non-empty and αu + v ∈ W for all u, v ∈ W and α ∈ F, then W is a subspace.
To check if W is a subspace: (1) 0 ∈ W, (2) closed under +, (3) closed under scalar mult. If these hold, all axioms are automatic.
span(S) = all linear combinations of vectors in S. It's the smallest subspace containing S. Every subspace is a span of some set.
Intersection of subspaces is always a subspace. Union is almost never a subspace (only if one contains the other).
U + W = span(U ∪ W) is the smallest subspace containing both. Direct sum U ⊕ W requires U ∩ W = {0}.
dim(U + W) = dim U + dim W - dim(U ∩ W). Like inclusion-exclusion for counting.
ker(A) = {x : Ax = 0} is always a subspace. This is why homogeneous systems have nice solution structure.
| Operation | Subspace? | Notes |
|---|---|---|
| U ∩ W | Always | Can take arbitrary intersections |
| U ∪ W | Rarely | Only if one contains the other |
| U + W | Always | = span(U ∪ W) |
| span(S) | Always | Smallest subspace containing S |
| ker(A) | Always | Solutions to Ax = 0 |
For a Matrix A
The column space Col(A), row space Row(A), null space ker(A), and left null space ker(Aᵀ) are all subspaces.
For a Linear Map T
The kernel ker(T) is a subspace of the domain; the image im(T) is a subspace of the codomain.
Subspaces are the building blocks for understanding linear maps. In the next chapters:
In Solving Systems
The solution set of Ax = b, when non-empty, is a translate of the subspace ker(A).
In Eigenvalue Theory
Each eigenvalue λ of A has an eigenspace ker(A − λI), a subspace of eigenvectors (together with 0).
In Differential Equations
The solutions of a homogeneous linear differential equation form a subspace of the function space.
In Data Science
Methods like PCA seek low-dimensional subspaces that capture most of the variation in the data.
Thinking in terms of subspaces is powerful: instead of individual vectors, consider the spaces they generate. This shift from "elements" to "structures" is central to abstract linear algebra and leads to cleaner theorems and deeper understanding.
| Dimension | Type | Example |
|---|---|---|
| 0 | Point | {(0, 0, 0)} |
| 1 | Line (through origin) | span{(1, 2, 3)} |
| 2 | Plane (through origin) | {(x,y,z) : x + y + z = 0} |
| 3 | Whole space | ℝ³ |
Use the one-step subspace test: W is a subspace iff W is non-empty and for all u, v ∈ W and α ∈ F, we have αu + v ∈ W. This combines closure under addition and scalar multiplication, and automatically includes 0 (take v = u and α = −1).
Every span is a subspace, but not every subspace is described as a span initially. span(S) is the smallest subspace containing S—it's the set of all linear combinations of vectors in S.
Yes! For example, span{(1,0), (2,0)} = span{(1,0)} = the x-axis in ℝ². Removing redundant (linearly dependent) vectors doesn't change the span.
The null space (kernel) of a matrix A is {x : Ax = 0}. It's always a subspace of ℝⁿ. This is fundamental: the solution set of a homogeneous linear system is always a subspace.
U + W contains all points reachable by first moving in U, then in W. If U and W are lines through the origin, U + W is either a line (if U = W) or the plane containing both lines.
U + W is always a subspace. U ⊕ W (direct sum) requires additionally that U ∩ W = {0}. In a direct sum, every vector in U + W can be written uniquely as u + w.
Union U ∪ W is rarely a subspace because if u ∈ U \ W and w ∈ W \ U, then u + w is in neither U nor W (but it's in U + W). The exception: one subspace contains the other.
If W = {x : Ax = 0}, solve the homogeneous system Ax = 0 using Gaussian elimination. Write the solution in parametric form; the coefficient vectors of the free variables form a basis.
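A sketch of this recipe with sympy's linsolve, using the plane x + y + z = 0 as an illustrative system; the free variables y and z become the parameters:

```python
# Solving Ax = 0 in parametric form; the coefficient vectors of the
# free variables form a basis of the null space.
from sympy import Matrix, linsolve, symbols

x, y, z = symbols("x y z")
A, b = Matrix([[1, 1, 1]]), Matrix([0])   # the system x + y + z = 0

sol = linsolve((A, b), [x, y, z])
print(sol)    # {(-y - z, y, z)}: y and z are free
# Setting (y, z) = (1, 0) and (0, 1) yields the basis (-1, 1, 0), (-1, 0, 1).
```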
Yes! Every subspace W ⊆ ℝⁿ equals ker(A) for some matrix A. If W has dimension k, you can find (n - k) linear equations defining W, and A is the coefficient matrix.
dim(U ∩ W) ≤ min(dim U, dim W). The dimension formula gives: dim(U ∩ W) = dim U + dim W - dim(U + W). The intersection can be just {0} even if both spaces are large.