MathIsimple
LA-2.2

Subspaces

Subspaces are vector spaces contained within larger vector spaces. They inherit the same operations and satisfy the same axioms—automatically! Understanding subspaces is essential for analyzing linear systems, transformations, and more.

3-4 hours · Core Level · 10 Objectives
Learning Objectives
  • Apply the subspace criterion to verify subspaces
  • Compute intersection and sum of subspaces
  • Understand the concept of linear span
  • Determine if a set generates a vector space
  • Recognize common subspaces (null space, column space)
  • Prove the dimension formula for sum of subspaces
  • Understand direct sums and their characterization
  • Compute bases for intersection and sum
  • Recognize when union of subspaces is a subspace
  • Apply subspace concepts to solve systems of equations
Prerequisites
  • LA-2.1: Vector Space Definition
  • Set theory basics
  • Gaussian elimination (LA-1.4)
  • Familiarity with matrix operations
  • Mathematical proof techniques
Historical Context

The concept of a subspace emerged naturally from the study of linear equations. When mathematicians noticed that solution sets of homogeneous systems had special properties—closure under addition and scalar multiplication—the abstract notion of subspace crystallized.

Hermann Grassmann (1844) was among the first to recognize that certain subsets of vector spaces formed vector spaces in their own right. His work on "extensions" laid groundwork for modern subspace theory.

Today, subspaces are central to linear algebra: null spaces, column spaces, eigenspaces, and solution spaces are all subspaces. The theory of subspaces provides the language for analyzing linear systems and transformations.

1. Definition and Criterion

A subspace is a subset of a vector space that is itself a vector space under the inherited operations. The key insight: we don't need to check all eight axioms!

Definition 2.2: Subspace

A subset $W \subseteq V$ is a subspace of $V$ if $W$ is itself a vector space under the same operations of addition and scalar multiplication.

Remark 2.1: Notation

We write $W \leq V$ or $W \subseteq V$ to indicate that $W$ is a subspace of $V$. Some texts use $W < V$ for proper subspaces (where $W \neq V$).

Theorem 2.2: Subspace Criterion

A non-empty subset $W \subseteq V$ is a subspace if and only if:

  1. Closed under addition: $u, v \in W \implies u + v \in W$
  2. Closed under scalar multiplication: $\alpha \in F,\ v \in W \implies \alpha v \in W$
Proof of Theorem 2.2:

(⇒) If $W$ is a subspace (a vector space), closure under both operations is immediate from the definition.

(⇐) Assume $W$ is non-empty and closed under both operations. We verify the vector space axioms:

  • Zero vector: Since $W$ is non-empty, pick any $w \in W$. By closure under scalar multiplication, $0 \cdot w = 0 \in W$.
  • Additive inverses: For $w \in W$, we have $(-1) \cdot w = -w \in W$ by closure under scalar multiplication.
  • Commutativity, associativity, distributivity: These identities hold for all vectors of $V$, so they are inherited by $W \subseteq V$.
  • Multiplicative identity: $1 \cdot w = w \in W$ automatically for $w \in W$.
Corollary 2.1: Zero Vector Test

If $W \subseteq V$ is a subspace, then $0 \in W$. Equivalently, if $0 \notin W$, then $W$ is NOT a subspace.

Remark 2.2: One-Step Subspace Test

A non-empty subset $W \subseteq V$ is a subspace if and only if for all $u, v \in W$ and $\alpha \in F$:

$$\alpha u + v \in W$$

This single condition combines closure under addition and scalar multiplication. Taking $\alpha = -1$ and $v = u$ gives $0 \in W$.
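As a sanity check (not a proof), the one-step test can be spot-checked numerically on randomly sampled members of a candidate set. The sketch below is illustrative: the predicates, samplers, and tolerance are my own choices, not part of the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def passes_one_step_test(contains, sample, trials=200):
    """Spot-check the one-step criterion alpha*u + v on sampled members."""
    for _ in range(trials):
        u, v = sample(), sample()
        alpha = rng.normal()
        if not contains(alpha * u + v):
            return False
    return True

# The plane x + y + z = 0 (homogeneous condition): a subspace.
plane = lambda p: abs(p.sum()) < 1e-9
sample_plane = lambda: (lambda w: w - w.mean())(rng.normal(size=3))

# The affine plane x + y + z = 1: fails the test (it misses the origin).
affine = lambda p: abs(p.sum() - 1) < 1e-9
sample_affine = lambda: sample_plane() + np.array([1.0, 0.0, 0.0])

print(passes_one_step_test(plane, sample_plane))    # True
print(passes_one_step_test(affine, sample_affine))  # False
```

A passing spot-check is only evidence, of course; the algebraic verification above is what proves closure.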

Example 2.2: Trivial Subspaces

Every vector space $V$ has exactly two trivial subspaces:

  • $\{0\}$ — the zero subspace (dimension 0)
  • $V$ itself — the improper subspace (dimension $= \dim V$)

All other subspaces are called proper or non-trivial.

Example 2.3: Lines Through the Origin

In $\mathbb{R}^2$, any line through the origin is a subspace:

$$L = \{t(a, b) : t \in \mathbb{R}\}$$

Verification:

  • Zero: $0 \cdot (a, b) = (0, 0) \in L$
  • Closed under +: $t_1(a,b) + t_2(a,b) = (t_1 + t_2)(a,b) \in L$
  • Closed under scalar mult: $c \cdot t(a,b) = (ct)(a,b) \in L$

Note: A line NOT through the origin (like $y = x + 1$) is NOT a subspace.

Example 2.4: Planes Through the Origin

In $\mathbb{R}^3$, a plane through the origin is defined by:

$$P = \{(x, y, z) : ax + by + cz = 0\}$$

This is a subspace (it is the null space of the $1 \times 3$ matrix $[a\ b\ c]$).

Verification:

  • Zero: $a(0) + b(0) + c(0) = 0$
  • Closed under +: If $ax_1 + by_1 + cz_1 = 0$ and $ax_2 + by_2 + cz_2 = 0$, then $a(x_1+x_2) + b(y_1+y_2) + c(z_1+z_2) = 0$
  • Closed under scalar mult: If $ax + by + cz = 0$, then $a(\alpha x) + b(\alpha y) + c(\alpha z) = \alpha \cdot 0 = 0$
Example 2.5: The Null Space

For any matrix $A \in M_{m \times n}(F)$, the null space (kernel) is:

$$\ker(A) = \{x \in F^n : Ax = 0\}$$

This is always a subspace of $F^n$:

  • Zero: $A \cdot 0 = 0$
  • Closed under +: If $Ax = 0$ and $Ay = 0$, then $A(x+y) = Ax + Ay = 0$
  • Closed under scalar mult: If $Ax = 0$, then $A(\alpha x) = \alpha Ax = 0$
Example 2.6: Polynomial Subspaces

Let $P_n(F)$ be the polynomials of degree at most $n$. This is a subspace of $F[x]$:

  • Zero: The zero polynomial has degree $-\infty \leq n$
  • Closed under +: $\deg(p + q) \leq \max(\deg p, \deg q) \leq n$
  • Closed under scalar mult: $\deg(\alpha p) = \deg(p) \leq n$ for $\alpha \neq 0$ (and $\alpha p = 0$ for $\alpha = 0$)

Dimension: $\dim P_n(F) = n + 1$ (basis: $\{1, x, x^2, \ldots, x^n\}$).

Example 2.7: Symmetric Matrices

The set of symmetric $n \times n$ matrices:

$$S_n = \{A \in M_n(\mathbb{R}) : A^T = A\}$$

is a subspace of $M_n(\mathbb{R})$:

  • Zero: $O^T = O$
  • Closed under +: $(A + B)^T = A^T + B^T = A + B$
  • Closed under scalar mult: $(\alpha A)^T = \alpha A^T = \alpha A$

Dimension: $\frac{n(n+1)}{2}$.
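The dimension count can be spot-checked numerically: the obvious spanning set (one matrix per diagonal entry, one symmetrized matrix per off-diagonal pair) has rank $n(n+1)/2$. An illustrative NumPy sketch, with my own helper name `sym_dim`:

```python
import numpy as np

def sym_dim(n):
    # Closed-form count: n diagonal units + n(n-1)/2 off-diagonal pairs.
    return n * (n + 1) // 2

# Brute-force check for n = 3: flatten the spanning set and take its rank.
n = 3
basis = []
for i in range(n):
    for j in range(i, n):
        E = np.zeros((n, n))
        E[i, j] = E[j, i] = 1.0   # symmetrized unit matrix
        basis.append(E.ravel())
print(np.linalg.matrix_rank(np.array(basis)), sym_dim(n))  # 6 6
```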

Example 2.8: Non-Subspace Examples

The following are NOT subspaces:

  • First quadrant in $\mathbb{R}^2$: $\{(x, y) : x \geq 0, y \geq 0\}$
    No additive inverses: $-(1, 1) = (-1, -1)$ is not in the set.
  • Unit circle: $\{(x, y) : x^2 + y^2 = 1\}$
    Doesn't contain zero: $0^2 + 0^2 = 0 \neq 1$.
  • Polynomials of exact degree $n$: Not closed under addition:
    $x^2 + (-x^2 + 1) = 1$ has degree 0, not 2.
  • Invertible matrices: The zero matrix is not invertible; also $I + (-I) = 0$ is not invertible.
  • Integers in $\mathbb{R}$: $\mathbb{Z} \subseteq \mathbb{R}$
    Not closed under scalar mult: $\frac{1}{2} \cdot 1 = \frac{1}{2} \notin \mathbb{Z}$.
Remark 2.3: Geometric Interpretation

In $\mathbb{R}^n$, the subspaces are:

  • $\{0\}$ (dimension 0) — just the origin
  • Lines through the origin (dimension 1)
  • Planes through the origin (dimension 2)
  • ... and so on for every intermediate dimension, up to hyperplanes through the origin (dimension $n-1$)
  • The whole space $\mathbb{R}^n$ (dimension $n$)

All must pass through the origin!

Example 2.9: Upper Triangular Matrices

The set of upper triangular $n \times n$ matrices:

$$U_n = \{A \in M_n(F) : a_{ij} = 0 \text{ for } i > j\}$$

is a subspace of $M_n(F)$:

  • Zero: The zero matrix is upper triangular ✓
  • Closed under +: The sum of upper triangular matrices is upper triangular ✓
  • Closed under scalar mult: A scalar multiple preserves the zero pattern ✓

Dimension: $\frac{n(n+1)}{2}$ (diagonal + upper triangle).

Example 2.10: Trace Zero Matrices

The matrices with trace zero:

$$\text{sl}_n(F) = \{A \in M_n(F) : \text{tr}(A) = 0\}$$

form a subspace (this is the Lie algebra of $SL_n(F)$):

  • Zero: $\text{tr}(0) = 0$ ✓
  • Closed under +: $\text{tr}(A + B) = \text{tr}(A) + \text{tr}(B) = 0 + 0 = 0$ ✓
  • Closed under scalar mult: $\text{tr}(\alpha A) = \alpha \cdot \text{tr}(A) = \alpha \cdot 0 = 0$ ✓

Dimension: $n^2 - 1$.

Example 2.11: Diagonal Matrices

Diagonal matrices form a subspace:

$$D_n = \{A \in M_n(F) : a_{ij} = 0 \text{ for } i \neq j\}$$

Dimension: $n$ (only the $n$ diagonal entries can be non-zero).

Note: $D_n = S_n \cap U_n$ — a matrix that is both symmetric and upper triangular must be diagonal.

2. Linear Span

The span of a set of vectors is the set of all linear combinations. It's the smallest subspace containing those vectors—and every subspace can be described as a span of some generating set.

Definition 2.3: Linear Span

The linear span (or simply span) of a set $S \subseteq V$ is:

$$\text{span}(S) = \left\{\sum_{i=1}^{k} \alpha_i v_i : k \in \mathbb{N},\ \alpha_i \in F,\ v_i \in S\right\}$$

i.e., the set of all finite linear combinations of vectors in $S$.

Remark 2.3: Convention

By convention, $\text{span}(\emptyset) = \{0\}$: the empty sum is the zero vector.

Theorem 2.3: Span is a Subspace

For any subset $S \subseteq V$, $\text{span}(S)$ is a subspace of $V$.

Proof of Theorem 2.3:

We verify the subspace criterion:

  • Zero: The empty sum (or $0 \cdot v$ for any $v \in S$) gives $0 \in \text{span}(S)$.
  • Closed under +: If $u = \sum \alpha_i v_i$ and $w = \sum \beta_j v_j$, then $u + w = \sum \alpha_i v_i + \sum \beta_j v_j$ is again a linear combination of vectors in $S$.
  • Closed under scalar mult: If $u = \sum \alpha_i v_i$, then $cu = \sum (c\alpha_i) v_i \in \text{span}(S)$.
Theorem 2.4: Span is the Smallest Subspace

$\text{span}(S)$ is the smallest subspace containing $S$. That is:

  1. $S \subseteq \text{span}(S)$
  2. If $W$ is any subspace with $S \subseteq W$, then $\text{span}(S) \subseteq W$

Proof of Theorem 2.4:

(1) For any $v \in S$, we have $v = 1 \cdot v \in \text{span}(S)$.

(2) Let $W$ be a subspace with $S \subseteq W$. Any linear combination $\sum \alpha_i v_i$ with $v_i \in S \subseteq W$ is in $W$ by closure. So $\text{span}(S) \subseteq W$.

Definition 2.4: Generating Set

A set $S$ generates (or spans) a vector space $V$ if $\text{span}(S) = V$. We also say $S$ is a generating set for $V$.

Example 2.8: Computing Spans in ℝ²
  • $\text{span}\{(1, 0)\} = \{(t, 0) : t \in \mathbb{R}\}$ — the x-axis (1-dimensional)
  • $\text{span}\{(1, 0), (0, 1)\} = \mathbb{R}^2$ — the standard basis spans all of $\mathbb{R}^2$
  • $\text{span}\{(1, 2), (2, 4)\} = \{(t, 2t) : t \in \mathbb{R}\}$ — a line (since $(2, 4) = 2(1, 2)$)
  • $\text{span}\{(1, 1), (1, -1)\} = \mathbb{R}^2$ — these two vectors are not parallel
Example 2.9: Computing Spans in ℝ³
  • $\text{span}\{(1, 0, 0)\}$ — the x-axis (a line)
  • $\text{span}\{(1, 0, 0), (0, 1, 0)\}$ — the xy-plane
  • $\text{span}\{(1, 1, 0), (0, 1, 1), (1, 2, 1)\}$ — since $(1, 2, 1) = (1, 1, 0) + (0, 1, 1)$, this is a plane (2-dimensional)
  • $\text{span}\{(1, 0, 0), (0, 1, 0), (0, 0, 1)\} = \mathbb{R}^3$
Example 2.10: Span of Polynomials

In $\mathbb{R}[x]$:

  • $\text{span}\{1, x, x^2\} = P_2(\mathbb{R})$ (polynomials of degree $\leq 2$)
  • $\text{span}\{1, 1+x, 1+x+x^2\} = P_2(\mathbb{R})$ (same space, different generators)
  • $\text{span}\{x, x^2, x^3, \ldots\}$ = the polynomials with zero constant term
Remark 2.4: Same Span, Different Sets

Many different sets can span the same subspace. For example:

$$\text{span}\{(1, 0), (2, 0)\} = \text{span}\{(1, 0)\} = \text{span}\{(3, 0), (-5, 0)\}$$

A minimal spanning set (one with no redundant vectors) is called a basis — covered in Chapter 2.4.

Theorem 2.4.5: Membership Test

A vector $v$ is in $\text{span}(S)$ if and only if adding $v$ to $S$ does not change the span; equivalently (for finite-dimensional spans), adding $v$ does not increase the dimension of the span.

Example 2.11: Testing Membership

Is $(1, 2, 3) \in \text{span}\{(1, 0, 1), (0, 1, 1)\}$?

We need to check whether $(1, 2, 3) = a(1, 0, 1) + b(0, 1, 1)$ has a solution.

Componentwise: $a = 1$, $b = 2$, $a + b = 3$.

Check: $1 + 2 = 3$ ✓. Yes, $(1, 2, 3)$ is in the span.

Example 2.12: Testing Non-Membership

Is $(1, 1, 2) \in \text{span}\{(1, 0, 1), (0, 1, 0)\}$?

Check: $(1, 1, 2) = a(1, 0, 1) + b(0, 1, 0)$.

The first two components give $a = 1$ and $b = 1$, but the third component requires $a \cdot 1 + b \cdot 0 = a = 2$.

Since $a$ cannot equal both 1 and 2, the system is inconsistent: $(1, 1, 2)$ is NOT in the span.

Remark 2.5: Computing Spans with Matrices

To find span(S) and test membership, form a matrix whose columns are the vectors of S and row reduce. The pivot columns identify a basis for the span, and its dimension equals the rank.
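Since row reduction is only needed for its rank, the membership test in the remark above reduces to a rank comparison: $v \in \text{span}(S)$ iff appending $v$ as an extra column leaves the rank unchanged. An illustrative NumPy sketch (the helper name `in_span` and the tolerance are my own choices):

```python
import numpy as np

def in_span(vectors, v, tol=1e-10):
    """v is in span(vectors) iff appending v doesn't raise the column rank."""
    S = np.column_stack(vectors).astype(float)
    Sv = np.column_stack([S, v]).astype(float)
    return np.linalg.matrix_rank(S, tol) == np.linalg.matrix_rank(Sv, tol)

print(in_span([(1, 0, 1), (0, 1, 1)], (1, 2, 3)))  # True
print(in_span([(1, 0, 1), (0, 1, 0)], (1, 1, 2)))  # False: a = 1 and a = 2 conflict
```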

3. Intersection and Sum

Given two subspaces, we can combine them in two natural ways: intersection and sum. The intersection is always a subspace (but the union usually isn't!).

Theorem 2.5: Intersection of Subspaces

If $U, W$ are subspaces of $V$, then $U \cap W$ is a subspace of $V$.

Proof of Theorem 2.5:

Check the subspace criterion:

  • Zero: $0 \in U$ and $0 \in W$, so $0 \in U \cap W$.
  • Closed under +: If $v_1, v_2 \in U \cap W$, then $v_1 + v_2 \in U$ (since $U$ is a subspace) and $v_1 + v_2 \in W$ (since $W$ is a subspace). So $v_1 + v_2 \in U \cap W$.
  • Closed under scalar mult: If $v \in U \cap W$, then $\alpha v \in U$ and $\alpha v \in W$, so $\alpha v \in U \cap W$.

Corollary 2.2: Arbitrary Intersections

If $\{W_i\}_{i \in I}$ is any collection of subspaces of $V$, then $\bigcap_{i \in I} W_i$ is also a subspace of $V$.

Theorem 2.6: Union is Rarely a Subspace

Let $U, W$ be subspaces of $V$. Then $U \cup W$ is a subspace if and only if $U \subseteq W$ or $W \subseteq U$.

Proof of Theorem 2.6:

(⇐) If $U \subseteq W$, then $U \cup W = W$, a subspace (and symmetrically if $W \subseteq U$).

(⇒) Suppose $U \cup W$ is a subspace but neither $U \subseteq W$ nor $W \subseteq U$.

Then there exist $u \in U \setminus W$ and $w \in W \setminus U$.

Since $U \cup W$ is a subspace, $u + w \in U \cup W$.

Case 1: $u + w \in U$. Then $w = (u + w) - u \in U$. Contradiction!

Case 2: $u + w \in W$. Then $u = (u + w) - w \in W$. Contradiction!

Definition 2.5: Sum of Subspaces

The sum of subspaces $U$ and $W$ is:

$$U + W = \{u + w : u \in U,\ w \in W\}$$

Theorem 2.7: Sum is a Subspace

If $U, W$ are subspaces of $V$, then $U + W$ is a subspace of $V$.

Proof of Theorem 2.7:

Check the subspace criterion:

  • Zero: $0 = 0 + 0 \in U + W$ (with $0 \in U$ and $0 \in W$).
  • Closed under +: If $u_1 + w_1, u_2 + w_2 \in U + W$, then $(u_1 + w_1) + (u_2 + w_2) = (u_1 + u_2) + (w_1 + w_2) \in U + W$.
  • Closed under scalar mult: $\alpha(u + w) = \alpha u + \alpha w \in U + W$.

Remark 2.5: Sum as Span

$U + W = \text{span}(U \cup W)$. The sum is the smallest subspace containing both $U$ and $W$.

Theorem 2.8: Dimension Formula

For finite-dimensional subspaces $U, W$:

$$\dim(U + W) = \dim(U) + \dim(W) - \dim(U \cap W)$$

Proof of Theorem 2.8:

Let $\{v_1, \ldots, v_k\}$ be a basis for $U \cap W$.

Extend it to a basis $\{v_1, \ldots, v_k, u_1, \ldots, u_m\}$ for $U$.

Extend it (separately) to a basis $\{v_1, \ldots, v_k, w_1, \ldots, w_n\}$ for $W$.

Claim: $\{v_1, \ldots, v_k, u_1, \ldots, u_m, w_1, \ldots, w_n\}$ is a basis for $U + W$.

It spans $U + W$, since any $u + w$ is a combination of these vectors.

For linear independence, suppose $\sum \alpha_i v_i + \sum \beta_j u_j + \sum \gamma_l w_l = 0$. Set $x = \sum \gamma_l w_l$. Then $x = -\sum \alpha_i v_i - \sum \beta_j u_j \in U$, and also $x \in W$, so $x \in U \cap W$. Writing $x = \sum \delta_i v_i$ gives $\sum \delta_i v_i - \sum \gamma_l w_l = 0$, a relation among the basis of $W$, which forces all $\gamma_l = 0$. The remaining relation $\sum \alpha_i v_i + \sum \beta_j u_j = 0$ then forces all $\alpha_i = \beta_j = 0$, since these vectors form a basis of $U$.

Therefore: $\dim(U + W) = k + m + n = (k + m) + (k + n) - k = \dim U + \dim W - \dim(U \cap W)$.

Example 2.11: Intersection and Sum in ℝ³

Let $U = \text{span}\{(1, 0, 0), (0, 1, 0)\}$ (the xy-plane) and $W = \text{span}\{(0, 1, 0), (0, 0, 1)\}$ (the yz-plane).

  • $U \cap W = \text{span}\{(0, 1, 0)\}$ (the y-axis)
  • $U + W = \mathbb{R}^3$
  • Check the dimension formula: $\dim(U + W) = 2 + 2 - 1 = 3$ ✓
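This check can be reproduced numerically: dimensions of spans are matrix ranks, and $\dim(U \cap W)$ can be recovered by rearranging the dimension formula. An illustrative NumPy sketch:

```python
import numpy as np

# Columns of each matrix span the subspace.
U = np.column_stack([(1, 0, 0), (0, 1, 0)]).astype(float)  # xy-plane
W = np.column_stack([(0, 1, 0), (0, 0, 1)]).astype(float)  # yz-plane

dim_U = np.linalg.matrix_rank(U)
dim_W = np.linalg.matrix_rank(W)
dim_sum = np.linalg.matrix_rank(np.hstack([U, W]))  # dim(U + W)
dim_cap = dim_U + dim_W - dim_sum                   # rearranged dimension formula
print(dim_sum, dim_cap)  # 3 1
```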
Example 2.12: Lines in ℝ²

Let $U = \text{span}\{(1, 1)\}$ and $W = \text{span}\{(1, -1)\}$.

  • $U \cap W = \{0\}$ (distinct lines through the origin intersect only at the origin)
  • $U + W = \mathbb{R}^2$
  • Check: $\dim(U + W) = 1 + 1 - 0 = 2$ ✓
Definition 2.6: Direct Sum

The sum $U + W$ is called a direct sum, written $U \oplus W$, if $U \cap W = \{0\}$.

Theorem 2.9: Characterization of Direct Sum

The sum $U + W$ is direct ($U + W = U \oplus W$) if and only if every $v \in U + W$ can be written uniquely as $v = u + w$ with $u \in U,\ w \in W$.

Proof of Theorem 2.9:

(⇒) Suppose $U \cap W = \{0\}$ and $v = u_1 + w_1 = u_2 + w_2$.

Then $u_1 - u_2 = w_2 - w_1 \in U \cap W = \{0\}$.

So $u_1 = u_2$ and $w_1 = w_2$. Uniqueness!

(⇐) If the decomposition is unique, suppose $v \in U \cap W$.

Then $v = v + 0 = 0 + v$ gives two decompositions. Uniqueness implies $v = 0$.

Corollary 2.3: Dimension of Direct Sum

If $V = U \oplus W$, then $\dim V = \dim U + \dim W$.

Example 3.1: Direct Sum in ℝ⁴

Let $U = \text{span}\{(1, 0, 0, 0), (0, 1, 0, 0)\}$ and $W = \text{span}\{(0, 0, 1, 0), (0, 0, 0, 1)\}$.

Then $U \cap W = \{0\}$ (no overlap in coordinates).

So $\mathbb{R}^4 = U \oplus W$ (a direct sum).

Uniqueness: $(a, b, c, d) = (a, b, 0, 0) + (0, 0, c, d)$ is the unique decomposition.

Example 3.2: Non-Direct Sum

Let $U = \text{span}\{(1, 1, 0)\}$ and $W = \text{span}\{(1, 1, 0), (0, 0, 1)\}$.

Then $U \subseteq W$, so $U \cap W = U \neq \{0\}$.

This is NOT a direct sum: $(1, 1, 0) = (1, 1, 0) + 0 = 0 + (1, 1, 0)$ has multiple decompositions.

Remark 3.1: Complementary Subspaces

If $V = U \oplus W$, we say $W$ is a complement of $U$ in $V$. Complements are not unique! For example, in $\mathbb{R}^2$, a line $U$ through the origin has every other line through the origin as a complement.

Theorem 2.10: Existence of Complements

Every subspace $U$ of a finite-dimensional space $V$ has a complement: there exists $W$ such that $V = U \oplus W$.

Proof of Theorem 2.10:

Let $\{u_1, \ldots, u_k\}$ be a basis for $U$.

Extend it to a basis $\{u_1, \ldots, u_k, w_1, \ldots, w_m\}$ for $V$.

Let $W = \text{span}\{w_1, \ldots, w_m\}$.

Then $V = U + W$ (the combined set spans all of $V$) and $U \cap W = \{0\}$ (a common non-zero vector would give a linear dependence among the basis vectors).

Example 3.3: Multiple Subspaces

The dimension formula generalizes only partially. For three subspaces:

$$\dim(U + W + X) \leq \dim U + \dim W + \dim X$$

but there is no clean inclusion-exclusion formula, because of triple overlaps. Caution: for a sum $U \oplus W \oplus X$ to be direct, pairwise-zero intersections are NOT enough; each summand must intersect the sum of the others trivially, e.g. $U \cap (W + X) = \{0\}$. (Three distinct lines through the origin in $\mathbb{R}^2$ have pairwise intersection $\{0\}$, yet their sum is not direct.)

4. Worked Examples

Example 4.1: Verifying a Subspace from Equations

Problem: Is $W = \{(x, y, z) \in \mathbb{R}^3 : 2x - y + 3z = 0\}$ a subspace?

Solution: Yes! This is the null space of $[2\ {-}1\ 3]$.

  • Zero: $2(0) - 0 + 3(0) = 0$
  • Closed under +: If $2x_1 - y_1 + 3z_1 = 0$ and $2x_2 - y_2 + 3z_2 = 0$, then $2(x_1+x_2) - (y_1+y_2) + 3(z_1+z_2) = 0$
  • Closed under scalar mult: If $2x - y + 3z = 0$, then $2(\alpha x) - \alpha y + 3(\alpha z) = \alpha(2x - y + 3z) = 0$

Dimension: 2 (one independent constraint in 3 dimensions).

Example 4.2: Non-Subspace: Affine Plane

Problem: Is $W = \{(x, y, z) : x + y + z = 1\}$ a subspace of $\mathbb{R}^3$?

Solution: No!

Quick check: $0 + 0 + 0 = 0 \neq 1$, so $(0, 0, 0) \notin W$.

This is an affine subspace (a translated plane), not a linear subspace.

Example 4.3: Finding a Basis for a Subspace

Problem: Find a basis for $W = \{(x, y, z, w) : x + y = 0,\ z - w = 0\}$.

Solution:

The equations give $x = -y$ and $z = w$.

Parametrize: $(x, y, z, w) = (-y, y, w, w) = y(-1, 1, 0, 0) + w(0, 0, 1, 1)$.

Basis: $\{(-1, 1, 0, 0), (0, 0, 1, 1)\}$.

Dimension: 2.

Example 4.4: Computing Intersection

Problem: Find $U \cap W$ where:

  • $U = \{(x, y, z) : x + y = 0\}$
  • $W = \{(x, y, z) : y + z = 0\}$

Solution: We need both $x + y = 0$ and $y + z = 0$.

From the first: $x = -y$. From the second: $z = -y$.

$U \cap W = \{(-y, y, -y) : y \in \mathbb{R}\} = \text{span}\{(-1, 1, -1)\}$.

Dimension: 1 (a line).
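The same intersection can be computed mechanically as the null space of the stacked constraint rows, here via the SVD. An illustrative NumPy sketch (the tolerance is an arbitrary choice):

```python
import numpy as np

A = np.array([[1., 1., 0.],   # x + y = 0
              [0., 1., 1.]])  # y + z = 0
_, s, vh = np.linalg.svd(A)
rank = int((s > 1e-10).sum())
basis = vh[rank:].T           # columns: orthonormal basis of U ∩ W
print(basis.shape[1])         # 1 — the intersection is a line
print(np.allclose(A @ basis, 0))
```

The single basis column is a unit vector proportional to $(-1, 1, -1)$, matching the hand computation.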

Example 4.5: Computing Sum

Problem: Find $U + W$ where $U = \text{span}\{(1, 0, 1)\}$ and $W = \text{span}\{(0, 1, 1)\}$.

Solution:

$U + W = \text{span}\{(1, 0, 1), (0, 1, 1)\}$.

Since $(1, 0, 1)$ and $(0, 1, 1)$ are linearly independent, this is a 2-dimensional subspace (a plane).

Check: $U \cap W = \{0\}$ (the vectors are not parallel), so $U + W = U \oplus W$.

Example 4.6: Non-Subspace: Upper Triangular Non-Singular

Problem: Is the set of invertible upper triangular matrices a subspace of $M_n(\mathbb{R})$?

Solution: No!

  • The zero matrix is not invertible, so $0 \notin W$.
  • Also, $I + (-I) = 0$ is not invertible, so the set is not closed under addition.

Note: The set of ALL upper triangular matrices IS a subspace.

Example 4.7: Subspace of Functions

Problem: Show that the even functions form a subspace of $\mathcal{F}(\mathbb{R}, \mathbb{R})$.

Solution: Let $E = \{f : f(-x) = f(x) \text{ for all } x\}$.

  • Zero: $0(-x) = 0 = 0(x)$
  • Closed under +: If $f(-x) = f(x)$ and $g(-x) = g(x)$, then $(f+g)(-x) = f(-x) + g(-x) = f(x) + g(x) = (f+g)(x)$
  • Closed under scalar mult: $(\alpha f)(-x) = \alpha f(-x) = \alpha f(x) = (\alpha f)(x)$

Similarly, the odd functions form a subspace $O$, and $\mathcal{F} = E \oplus O$: split $f$ as $\frac{f(x)+f(-x)}{2} + \frac{f(x)-f(-x)}{2}$.

Example 4.8: Dimension Formula Application

Problem: In $\mathbb{R}^5$, let $\dim U = 3$ and $\dim W = 4$. What are the possible values of $\dim(U \cap W)$?

Solution: By the dimension formula:

$$\dim(U + W) = 3 + 4 - \dim(U \cap W)$$

Since $U + W \subseteq \mathbb{R}^5$, we have $\dim(U + W) \leq 5$.

So $7 - \dim(U \cap W) \leq 5$, meaning $\dim(U \cap W) \geq 2$.

Also, $U \cap W \subseteq U$, so $\dim(U \cap W) \leq \min(3, 4) = 3$.

Possible values: 2 and 3.

Example 4.9: Direct Sum Decomposition

Problem: Show that $M_2(\mathbb{R}) = S_2 \oplus A_2$, where $S_2$ is the symmetric and $A_2$ the skew-symmetric $2 \times 2$ matrices.

Solution:

Any matrix $A$ can be written as:

$$A = \frac{A + A^T}{2} + \frac{A - A^T}{2}$$

where $\frac{A + A^T}{2}$ is symmetric and $\frac{A - A^T}{2}$ is skew-symmetric.

Check $S_2 \cap A_2 = \{0\}$: If $A^T = A$ and $A^T = -A$, then $A = -A$, so $A = 0$.

Dimension check: $\dim S_2 = 3$, $\dim A_2 = 1$, $\dim M_2 = 4 = 3 + 1$ ✓
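The decomposition is a one-liner in code. A minimal NumPy sketch with an arbitrary sample matrix:

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 4.]])       # arbitrary sample matrix
S = (A + A.T) / 2              # symmetric part
K = (A - A.T) / 2              # skew-symmetric part
print(np.allclose(S, S.T), np.allclose(K, -K.T), np.allclose(S + K, A))
# True True True
```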

Example 4.10: Column Space and Row Space

Problem: For $A = \begin{pmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \end{pmatrix}$, find the column space and row space.

Solution:

Column space: the span of the columns $= \text{span}\{(1, 2), (2, 4), (1, 2)\}$.

Since all columns are multiples of $(1, 2)$:

$\text{Col}(A) = \text{span}\{(1, 2)\}$, dimension 1.

Row space: the span of the rows $= \text{span}\{(1, 2, 1), (2, 4, 2)\}$.

Since $(2, 4, 2) = 2(1, 2, 1)$:

$\text{Row}(A) = \text{span}\{(1, 2, 1)\}$, dimension 1.

Note: $\dim \text{Col}(A) = \dim \text{Row}(A) = \text{rank}(A)$. Always!

Example 4.11: Null Space Calculation

Problem: Find $\ker(A)$ for $A = \begin{pmatrix} 1 & 2 & -1 \\ 2 & 4 & -2 \end{pmatrix}$.

Solution: Solve $Ax = 0$.

RREF: $\begin{pmatrix} 1 & 2 & -1 \\ 0 & 0 & 0 \end{pmatrix}$

Equation: $x_1 + 2x_2 - x_3 = 0$, so $x_1 = -2x_2 + x_3$.

Parametrize: $(x_1, x_2, x_3) = s(-2, 1, 0) + t(1, 0, 1)$.

$\ker(A) = \text{span}\{(-2, 1, 0), (1, 0, 1)\}$, dimension 2.

Rank-Nullity: $\text{rank}(A) + \text{nullity}(A) = 1 + 2 = 3 =$ number of columns ✓
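The kernel computation can be reproduced numerically: the rows of $V^T$ beyond the numerical rank of $A$ in the SVD give an orthonormal basis of $\ker(A)$. An illustrative NumPy sketch (the rank tolerance is an arbitrary choice):

```python
import numpy as np

A = np.array([[1., 2., -1.],
              [2., 4., -2.]])
_, s, vh = np.linalg.svd(A)
rank = int((s > 1e-10).sum())
N = vh[rank:].T               # columns span ker(A)
print(rank, N.shape[1])       # 1 2 — rank + nullity = 3 columns
print(np.allclose(A @ N, 0))  # every basis column solves Ax = 0
```

The two columns of `N` span the same plane as $(-2, 1, 0)$ and $(1, 0, 1)$, just expressed in an orthonormal basis.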

Example 4.12: Complementary Subspaces

Problem: Let $U = \{(x, y, z) : x + y = 0\}$ in $\mathbb{R}^3$. Find a subspace $W$ such that $\mathbb{R}^3 = U \oplus W$.

Solution:

$U$ has dimension 2 (one constraint). We need $W$ of dimension 1 with $U \cap W = \{0\}$.

Find a vector not in $U$: $(1, 0, 0) \notin U$ since $1 + 0 \neq 0$.

Let $W = \text{span}\{(1, 0, 0)\}$.

Check $U \cap W = \{0\}$: If $t(1, 0, 0) \in U$, then $t + 0 = 0$, so $t = 0$. ✓

Verify: $\dim U + \dim W = 2 + 1 = 3 = \dim \mathbb{R}^3$. ✓
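This construction can be automated: greedily append standard basis vectors that raise the rank; the appended vectors span a complement. An illustrative NumPy sketch (the helper name `complement_basis` is my own):

```python
import numpy as np

def complement_basis(U_cols):
    """Greedily extend a basis of U by standard basis vectors of R^n;
    the appended vectors span a complement of U."""
    M = np.column_stack(U_cols).astype(float)
    n, comp = M.shape[0], []
    for i in range(n):
        e = np.zeros(n); e[i] = 1.0
        if np.linalg.matrix_rank(np.column_stack([M, e])) > np.linalg.matrix_rank(M):
            M = np.column_stack([M, e])
            comp.append(e)
    return comp

# U = {(x, y, z) : x + y = 0}, with basis (-1, 1, 0), (0, 0, 1).
W = complement_basis([(-1, 1, 0), (0, 0, 1)])
print(len(W), W[0])  # 1 [1. 0. 0.]
```

It recovers the same complement as the hand computation: the single vector $(1, 0, 0)$.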

Example 4.13: Span Equality Test

Problem: Is $\text{span}\{(1, 2, 3), (4, 5, 6)\} = \text{span}\{(1, 1, 1), (2, 3, 4)\}$?

Solution: Check whether each vector of the first set lies in the span of the second.

Can we write $(1, 2, 3) = a(1, 1, 1) + b(2, 3, 4)$?

System: $a + 2b = 1$, $a + 3b = 2$, $a + 4b = 3$.

From the first two: $b = 1$, $a = -1$. Check the third: $-1 + 4 = 3$ ✓

Similarly $(4, 5, 6) = c(1, 1, 1) + d(2, 3, 4)$: $c = -2$, $d = 3$ works.

This gives $\text{span}\{(1,2,3),(4,5,6)\} \subseteq \text{span}\{(1,1,1),(2,3,4)\}$; since both spans are 2-dimensional (each pair is linearly independent), the inclusion is an equality.

Answer: Yes, the spans are equal (both are the same 2D subspace).
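The span-equality check generalizes to a rank test: two column sets span the same subspace iff stacking them together raises neither rank. An illustrative NumPy sketch (the helper name `same_span` is my own):

```python
import numpy as np

def same_span(A_cols, B_cols, tol=1e-10):
    """Two column sets span the same subspace iff the stacked matrix
    has the same rank as each set alone."""
    A = np.column_stack(A_cols).astype(float)
    B = np.column_stack(B_cols).astype(float)
    r = np.linalg.matrix_rank(np.hstack([A, B]), tol)
    return r == np.linalg.matrix_rank(A, tol) == np.linalg.matrix_rank(B, tol)

print(same_span([(1, 2, 3), (4, 5, 6)], [(1, 1, 1), (2, 3, 4)]))  # True
print(same_span([(1, 0)], [(0, 1)]))                              # False
```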

5. Common Mistakes

Mistake 1: Forgetting to check zero

If a subset doesn't contain $0$, it's not a subspace. Always check this first!
Example: $\{(x, y) : x + y = 1\}$ — the origin isn't in this set.

Mistake 2: Confusing union and sum

$U \cup W$ is rarely a subspace (only when one contains the other). $U + W$ is always a subspace. Use the sum, not the union!

Mistake 3: Lines not through origin

In $\mathbb{R}^2$, the line $y = 2x$ (through the origin) is a subspace. But $y = 2x + 1$ is NOT — it doesn't pass through the origin.

Mistake 4: Solution sets of non-homogeneous systems

Solutions to $Ax = 0$ form a subspace (the null space). Solutions to $Ax = b$ (with $b \neq 0$) do NOT — they form an affine subspace.

Mistake 5: Only checking addition

A set can be closed under addition but not scalar multiplication.
Example: $\mathbb{Z}^2$ in $\mathbb{R}^2$ — closed under addition, but $\frac{1}{2}(1, 1) \notin \mathbb{Z}^2$.

Mistake 6: Assuming direct sum

$U + W$ is NOT automatically a direct sum. You must verify $U \cap W = \{0\}$. If that fails, decompositions are not unique.

Mistake 7: Forgetting closure under ALL scalars

Over $\mathbb{R}$, you need closure under all real scalars, including negatives and fractions. Over $\mathbb{C}$, you need closure under complex scalars too.

Mistake 8: Dimension calculation errors

Remember: dim(U + W) = dim(U) + dim(W) - dim(U ∩ W). Don't forget to subtract the intersection! And dim(U + W) ≤ dim(V), always.

✓ How to Check: Subspace Checklist

Given a subset $W \subseteq V$, verify it's a subspace:

  1. Is $W$ non-empty? (Usually: check $0 \in W$)
  2. Is $W$ closed under addition? (If $u, v \in W$, is $u + v \in W$?)
  3. Is $W$ closed under scalar multiplication? (If $v \in W$, is $\alpha v \in W$?)

Quick test: If $\alpha u + v \in W$ for all $u, v \in W$ and $\alpha \in F$, then $W$ is a subspace.

6. Key Takeaways

Subspace Criterion

To check if $W$ is a subspace: (1) $0 \in W$, (2) closed under +, (3) closed under scalar mult. If these hold, all remaining axioms are automatic.

Span

span(S) = all linear combinations of vectors in S. It's the smallest subspace containing S. Every subspace is a span of some set.

Intersection vs Union

Intersection of subspaces is always a subspace. Union is almost never a subspace (only if one contains the other).

Sum of Subspaces

U + W = span(U ∪ W) is the smallest subspace containing both. Direct sum U ⊕ W requires U ∩ W = {0}.

Dimension Formula

dim(U + W) = dim U + dim W - dim(U ∩ W). Like inclusion-exclusion for counting.

Null Space

ker(A) = {x : Ax = 0} is always a subspace. This is why homogeneous systems have nice solution structure.

Summary Table

Operation | Subspace? | Notes
U ∩ W | Always | Can take arbitrary intersections
U ∪ W | Rarely | Only if one contains the other
U + W | Always | = span(U ∪ W)
span(S) | Always | Smallest subspace containing S
ker(A) | Always | Solutions to Ax = 0

Important Subspaces in Linear Algebra

For a Matrix A

  • Null space: ker(A) = {x : Ax = 0}
  • Column space: Col(A) = span of columns
  • Row space: Row(A) = span of rows
  • Left null space: ker(Aᵀ)

For a Linear Map T

  • Kernel: ker(T) = {v : T(v) = 0}
  • Image: im(T) = {T(v) : v ∈ V}
  • Eigenspaces: E_λ = ker(T - λI)
  • Invariant subspaces: T(U) ⊆ U
Remark 6.1: Looking Ahead

Subspaces are the building blocks for understanding linear maps. In the next chapters:

  • Linear Independence: When is a spanning set minimal?
  • Basis: The right balance of spanning and independence
  • Dimension: The invariant size of a subspace
  • Linear Maps: How subspaces transform

Applications of Subspaces

In Solving Systems

  • Solution set of Ax = 0 is a subspace
  • General solution = particular + null space
  • Rank-nullity theorem connects dimensions

In Eigenvalue Theory

  • Eigenspaces are subspaces
  • Diagonalization uses direct sums
  • Invariant subspaces key to structure

In Differential Equations

  • Solutions to linear ODEs form subspace
  • General solution = span of basic solutions
  • Dimension = order of equation

In Data Science

  • PCA projects onto subspaces
  • Feature spaces are subspaces
  • Low-rank approximation via subspaces
Remark 6.2: The Subspace Perspective

Thinking in terms of subspaces is powerful: instead of individual vectors, consider the spaces they generate. This shift from "elements" to "structures" is central to abstract linear algebra and leads to cleaner theorems and deeper understanding.

7. Quick Reference

Three Conditions for Subspace

  1. Non-empty (usually: 0 ∈ W)
  2. Closed under +: u, v ∈ W ⟹ u + v ∈ W
  3. Closed under scalar: v ∈ W ⟹ αv ∈ W

Key Dimension Results

  • dim(U ∩ W) ≤ min(dim U, dim W)
  • dim(U + W) = dim U + dim W - dim(U ∩ W)
  • dim(U ⊕ W) = dim U + dim W
  • dim(ker A) + rank(A) = n (columns)

Common Subspaces

  • Null space: ker(A) = {x : Ax = 0}
  • Column space: Col(A) = span(columns)
  • Row space: Row(A) = span(rows)
  • Span: span(S) = all linear combos

Quick Tests

  • 0 ∉ W ⟹ not a subspace
  • Homogeneous equation ⟹ subspace
  • Inhomogeneous ⟹ NOT subspace
  • Union ⟹ rarely a subspace

Subspaces in ℝ³

Dimension | Type | Example
0 | Point | {(0, 0, 0)}
1 | Line (through origin) | span{(1, 2, 3)}
2 | Plane (through origin) | {(x, y, z) : x + y + z = 0}
3 | Whole space | ℝ³
Subspaces Practice (12 Questions)

  1. (Easy) Is the set $\{(x, y) \in \mathbb{R}^2 : x = 2y\}$ a subspace of $\mathbb{R}^2$?
  2. (Easy) Is $\{(x, y) \in \mathbb{R}^2 : x + y = 1\}$ a subspace of $\mathbb{R}^2$?
  3. (Easy) What is $\text{span}\{(1, 0), (0, 1)\}$ in $\mathbb{R}^2$?
  4. (Medium) If $U$ and $W$ are subspaces of $V$, is $U \cup W$ always a subspace?
  5. (Medium) What is $\dim(U \cap W)$ if $U, W$ are planes through the origin in $\mathbb{R}^3$ and $U \neq W$?
  6. (Medium) What is $\text{span}\{(1,1,0), (0,1,1), (1,2,1)\}$ in $\mathbb{R}^3$?
  7. (Hard) If $\dim(U) = 3$ and $\dim(W) = 4$ in a 5-dimensional space, what is the minimum possible $\dim(U \cap W)$?
  8. (Hard) Is the set of invertible $n \times n$ matrices a subspace of $M_n(\mathbb{R})$?
  9. (Medium) Is $\{(x, y, z) : x^2 + y^2 + z^2 = 0\}$ a subspace of $\mathbb{R}^3$?
  10. (Medium) If $U \oplus W = V$ (direct sum), what is $\dim(U \cap W)$?
  11. (Easy) Is the set of symmetric $n \times n$ matrices a subspace of $M_n(\mathbb{R})$?
  12. (Easy) What is $\text{span}\{0\}$?

Frequently Asked Questions

What's the quickest way to check if a set is a subspace?

Use the one-step subspace test: W is a (non-empty) subspace iff for all u, v ∈ W and α ∈ F, we have αu + v ∈ W. This combines closure under addition and scalar multiplication, and it yields 0 ∈ W (take α = -1 and v = u).

What's the difference between span and subspace?

Every span is a subspace, but not every subspace is initially described as a span. span(S) is the smallest subspace containing S — it's the set of all linear combinations of vectors in S.

Can two different sets have the same span?

Yes! For example, span{(1,0), (2,0)} = span{(1,0)} = the x-axis in ℝ². Removing redundant (linearly dependent) vectors doesn't change the span.

How is the null space related to subspaces?

The null space (kernel) of a matrix A is {x : Ax = 0}. It's always a subspace of ℝⁿ. This is fundamental: the solution set of a homogeneous linear system is always a subspace.

What's the geometric intuition for sum of subspaces?

U + W contains all points reachable by first moving in U, then in W. If U and W are lines through the origin, U + W is either a line (if U = W) or the plane containing both lines.

What's the difference between direct sum and regular sum?

U + W is always a subspace. U ⊕ W (direct sum) requires additionally that U ∩ W = {0}. In a direct sum, every vector in U + W can be written uniquely as u + w.

Why doesn't union of subspaces work?

Union U ∪ W is rarely a subspace because if u ∈ U \ W and w ∈ W \ U, then u + w is in neither U nor W (but it's in U + W). The exception: one subspace contains the other.

How do I find a basis for a subspace defined by equations?

If W = {x : Ax = 0}, solve the homogeneous system Ax = 0 using Gaussian elimination. Write the solution in parametric form; the coefficient vectors of the free variables form a basis.

Is every subspace a null space of some matrix?

Yes! Every subspace W ⊆ ℝⁿ equals ker(A) for some matrix A. If W has dimension k, you can find (n - k) linear equations defining W, and A is the coefficient matrix.

What happens to dimension when taking intersection?

dim(U ∩ W) ≤ min(dim U, dim W). The dimension formula gives: dim(U ∩ W) = dim U + dim W - dim(U + W). The intersection can be just {0} even if both spaces are large.