LA-2.4

Basis & Dimension

A basis is a minimal spanning set and a maximal independent set. The number of elements in any basis is the dimension—the most important invariant of a vector space.

Learning Objectives
  • Define basis as a linearly independent spanning set
  • Prove that all bases of a space have the same cardinality (dimension theorem)
  • Compute coordinates with respect to a given basis
  • Understand the coordinate isomorphism between V and F^n
  • Distinguish finite and infinite dimensional spaces
  • Find bases for common vector spaces (R^n, polynomials, matrices)
  • Apply the dimension formula for subspaces
  • Perform change of basis calculations
  • Use dimension arguments to solve problems
  • Connect dimension to rank of matrices
Prerequisites
  • Linear independence and dependence (LA-2.3)
  • Spanning sets and linear span (LA-2.2)
  • Vector space definition and axioms (LA-2.1)
  • Matrix row reduction and rank (LA-1.4)
  • Basic set theory notation

Historical Context

The concept of dimension evolved gradually through the 19th century. While geometric intuition suggested that lines are 1-dimensional, planes are 2-dimensional, and space is 3-dimensional, the abstract notion of dimension for arbitrary vector spaces was formalized by Hermann Grassmann (1844) and Giuseppe Peano (1888). The proof that dimension is well-defined—that all bases have the same size—relies on the Steinitz Exchange Lemma (1913), which Ernst Steinitz proved in his foundational work on field theory. This result is sometimes called the "Replacement Theorem" and is one of the most important theorems in linear algebra.

1. Basis

A basis is the "perfect" set of vectors for a vector space: it has exactly the right number of vectors to span the entire space without any redundancy. Think of it as a minimal set of building blocks from which every vector in the space can be uniquely constructed.

Definition 1.1: Basis

A basis of a vector space $V$ over a field $F$ is a set $\mathcal{B} \subseteq V$ that satisfies:

  1. Linear independence: No vector in $\mathcal{B}$ is a linear combination of the others
  2. Spanning: $\text{span}(\mathcal{B}) = V$
Remark 1.1: Intuition

A basis gives you:

  • Completeness: Every vector can be expressed using basis vectors
  • Non-redundancy: No basis vector is "extra"—removing any would lose spanning
  • Uniqueness: Each vector has exactly one representation in terms of the basis
Theorem 1.1: Basis Equivalences (TFAE)

For a finite set $\mathcal{B} = \{v_1, \ldots, v_n\}$ in a vector space $V$, the following are equivalent:

  1. $\mathcal{B}$ is a basis for $V$
  2. Every $v \in V$ can be written uniquely as $v = \alpha_1 v_1 + \cdots + \alpha_n v_n$
  3. $\mathcal{B}$ is a maximal linearly independent set
  4. $\mathcal{B}$ is a minimal spanning set
Proof of Theorem 1.1:

(1) ⇒ (2): Since $\mathcal{B}$ spans, every $v$ can be written as $v = \sum \alpha_i v_i$. If also $v = \sum \beta_i v_i$, then $\sum (\alpha_i - \beta_i) v_i = 0$. By independence, $\alpha_i = \beta_i$ for all $i$.

(2) ⇒ (3): Unique representation implies independence (the zero vector has only the trivial representation). Adding any $w \notin \mathcal{B}$ creates dependence, since $w$ already has a representation.

(3) ⇒ (4): If $\mathcal{B}$ is maximal independent but doesn't span, there exists $w \notin \text{span}(\mathcal{B})$. Then $\mathcal{B} \cup \{w\}$ is independent, contradicting maximality. So $\mathcal{B}$ spans; it is a minimal spanning set because removing any $v_i$ loses spanning (by independence, $v_i$ is not a combination of the rest).

(4) ⇒ (1): Minimal spanning implies independence (if the set were dependent, we could remove a redundant vector and still span, contradicting minimality).

Example 1.1: Standard Basis of ℝⁿ

The standard basis of $\mathbb{R}^n$ is:

$$\mathcal{E} = \{e_1, e_2, \ldots, e_n\}$$

where $e_i = (0, \ldots, 0, 1, 0, \ldots, 0)$ has 1 in position $i$ and 0 elsewhere.

For example, in $\mathbb{R}^3$:

$$e_1 = (1, 0, 0), \quad e_2 = (0, 1, 0), \quad e_3 = (0, 0, 1)$$

Verification:

  • Independent: $\sum \alpha_i e_i = 0$ implies $(\alpha_1, \ldots, \alpha_n) = (0, \ldots, 0)$
  • Spanning: Any $(x_1, \ldots, x_n) = x_1 e_1 + \cdots + x_n e_n$
Example 1.2: Standard Basis of Polynomials

For $P_n(F)$, the space of polynomials of degree ≤ $n$, the standard basis is:

$$\{1, x, x^2, \ldots, x^n\}$$

This has $n + 1$ elements.

Example: $3x^2 - 2x + 5 = 5 \cdot 1 + (-2) \cdot x + 3 \cdot x^2$

Example 1.3: Standard Basis of Matrices

For $M_{m \times n}(F)$, the standard basis consists of the matrices $E_{ij}$ with 1 in position $(i, j)$ and 0 elsewhere.

For $M_{2 \times 2}(\mathbb{R})$:

$$E_{11} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \quad E_{12} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad E_{21} = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \quad E_{22} = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}$$

Any matrix $A = (a_{ij})$ can be written as $A = \sum_{i,j} a_{ij} E_{ij}$.

Example 1.4: Non-Standard Bases

In $\mathbb{R}^2$, besides the standard basis $\{(1,0), (0,1)\}$, other bases include:

  • $\{(1, 1), (1, -1)\}$ — diagonal directions
  • $\{(1, 0), (1, 1)\}$ — one standard vector, one diagonal
  • $\{(2, 3), (1, 2)\}$ — any two non-parallel vectors

Key insight: Any two linearly independent vectors in $\mathbb{R}^2$ form a basis.

Theorem 1.2: Extension Theorem

Every linearly independent set in a finite-dimensional vector space $V$ can be extended to a basis of $V$.

Proof of Theorem 1.2:

Let $S$ be linearly independent in $V$. If $\text{span}(S) = V$, then $S$ is already a basis.

Otherwise, there exists $v \notin \text{span}(S)$. Then $S \cup \{v\}$ is still independent (by the extension criterion).

Repeat this process. Since $V$ is finite-dimensional, the size of any independent set is bounded by the size of a finite spanning set, so the process terminates with a basis.

Theorem 1.3: Reduction Theorem

Every spanning set of a finite-dimensional vector space $V$ contains a basis.

Proof of Theorem 1.3:

Let $S$ span $V$. If $S$ is independent, it's a basis.

Otherwise, some $v \in S$ is a linear combination of the others: $v = \sum_{w \neq v} \alpha_w w$.

Then $S \setminus \{v\}$ still spans $V$. Repeat until the remaining set is independent.

Corollary 1.1: Existence of Basis

Every finite-dimensional vector space has a basis.

Remark 1.2: The Empty Basis

The zero space $\{0\}$ has the empty set as its (unique) basis. This is consistent: the empty set is vacuously independent, and its span is $\{0\}$ (by convention, the empty sum equals the zero vector).

Example 1.5: Basis for Upper Triangular Matrices

For 2×2 upper triangular matrices over $\mathbb{R}$:

$$U_2 = \left\{ \begin{pmatrix} a & b \\ 0 & c \end{pmatrix} : a, b, c \in \mathbb{R} \right\}$$

Standard basis:

$$\left\{ \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} \right\}$$

$\dim(U_2) = 3$.

Example 1.6: Basis for Trace-Zero Matrices

For 2×2 matrices with trace 0:

$$\text{tr}(A) = a + d = 0 \implies d = -a$$

General element:

$$\begin{pmatrix} a & b \\ c & -a \end{pmatrix} = a\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} + b\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} + c\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}$$

$\dim(\mathfrak{sl}_2) = 3$ (this space is the Lie algebra $\mathfrak{sl}_2$).

Theorem 1.4: Basis from Row Reduction

If $v_1, \ldots, v_m$ are vectors in $F^n$, form the matrix with these as rows and row reduce. The original vectors corresponding to pivot rows form a basis for $\text{span}\{v_1, \ldots, v_m\}$.

Example 1.7: Basis via Row Reduction

Problem: Find a basis for $\text{span}\{(1, 2, 1), (2, 4, 3), (1, 2, 2)\}$.

Solution:

$$\begin{pmatrix} 1 & 2 & 1 \\ 2 & 4 & 3 \\ 1 & 2 & 2 \end{pmatrix} \to \begin{pmatrix} 1 & 2 & 1 \\ 0 & 0 & 1 \\ 0 & 0 & 1 \end{pmatrix} \to \begin{pmatrix} 1 & 2 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}$$

Pivots in rows 1 and 2. Basis: $\{(1, 2, 1), (2, 4, 3)\}$. Dimension = 2.
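
This computation is easy to automate. Below is a minimal sketch of Example 1.7 using SymPy (a tooling choice of ours, not something the text prescribes): the number of pivots is the dimension of the span.

```python
# Minimal sketch of Example 1.7 with SymPy (an assumed tool choice).
import sympy as sp

A = sp.Matrix([[1, 2, 1],
               [2, 4, 3],
               [1, 2, 2]])         # the three vectors as rows

R, pivots = A.rref()               # reduced row echelon form and pivot columns
print(R)                           # rows (1, 2, 0), (0, 0, 1), (0, 0, 0)
print(len(pivots))                 # 2 pivots, so the span is 2-dimensional
```

The nonzero rows $(1, 2, 0)$ and $(0, 0, 1)$ are themselves a basis of the same row space; keeping the original vectors $v_1, v_2$, as the text does, is equally valid.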

2. Dimension

The dimension of a vector space is the number of vectors in any basis. The remarkable fact is that this number is the same for all bases—a consequence of the Steinitz Exchange Lemma. Dimension is the most important invariant of a vector space.

Theorem 2.1: Dimension is Well-Defined

Any two bases of a finite-dimensional vector space have the same number of elements.

Proof of Theorem 2.1:

Let $\mathcal{B}_1$ and $\mathcal{B}_2$ be two bases with $m$ and $n$ elements respectively.

$\mathcal{B}_1$ is independent and $\mathcal{B}_2$ spans, so by the Fundamental Inequality: $m \leq n$.

$\mathcal{B}_2$ is independent and $\mathcal{B}_1$ spans, so: $n \leq m$.

Therefore $m = n$.

Definition 2.1: Dimension

The dimension of a finite-dimensional vector space $V$, denoted $\dim(V)$ or $\dim_F(V)$, is the number of elements in any basis of $V$.

By convention: $\dim(\{0\}) = 0$.

Example 2.1: Standard Dimensions
Vector Space | Standard Basis | Dimension
$\mathbb{R}^n$ | $\{e_1, \ldots, e_n\}$ | $n$
$P_n(F)$ | $\{1, x, \ldots, x^n\}$ | $n + 1$
$M_{m \times n}(F)$ | $\{E_{ij}\}$ | $mn$
$\mathbb{C}$ over $\mathbb{R}$ | $\{1, i\}$ | $2$
$\{0\}$ | $\emptyset$ | $0$
Theorem 2.2: Dimension Criteria

Let $\dim(V) = n$. Then:

  1. Any $n$ linearly independent vectors form a basis
  2. Any $n$ spanning vectors form a basis
  3. Any set of more than $n$ vectors is linearly dependent
  4. Any set of fewer than $n$ vectors cannot span $V$
Proof of Theorem 2.2:

(1): If $S$ has $n$ independent vectors but doesn't span, extend it to a basis of size $> n$. Contradiction.

(2): If $S$ spans with $n$ vectors but is dependent, reduce it to a basis of size $< n$. Contradiction.

(3): By the Fundamental Inequality: |independent| ≤ |spanning| = $n$.

(4): By the Fundamental Inequality: |spanning| ≥ |independent| = $n$.

Corollary 2.1: Subspace Dimension

If $W$ is a subspace of a finite-dimensional space $V$, then:

  1. $W$ is finite-dimensional
  2. $\dim(W) \leq \dim(V)$
  3. $\dim(W) = \dim(V) \iff W = V$
Proof of Corollary 2.1:

(1), (2): Any independent set in $W$ is independent in $V$, so it has size ≤ $\dim(V)$. Thus $W$ has a finite basis, of size at most $\dim(V)$.

(3): If $\dim(W) = \dim(V) = n$, a basis of $W$ consists of $n$ independent vectors in $V$, hence is a basis of $V$. So $W = \text{span} = V$.

Theorem 2.3: Dimension Formula for Sums

For subspaces $U$ and $W$ of $V$:

$$\dim(U + W) = \dim(U) + \dim(W) - \dim(U \cap W)$$
Proof of Theorem 2.3:

Let $\{u_1, \ldots, u_k\}$ be a basis of $U \cap W$.

Extend it to bases $\{u_1, \ldots, u_k, v_1, \ldots, v_m\}$ of $U$ and $\{u_1, \ldots, u_k, w_1, \ldots, w_n\}$ of $W$.

One can show $\{u_1, \ldots, u_k, v_1, \ldots, v_m, w_1, \ldots, w_n\}$ is a basis for $U + W$.

Count: $k + m + n = (k + m) + (k + n) - k = \dim(U) + \dim(W) - \dim(U \cap W)$.

Example 2.2: Dimension of Sum

In $\mathbb{R}^3$, let $U = \text{span}\{(1,0,0), (0,1,0)\}$ (the xy-plane) and $W = \text{span}\{(0,1,0), (0,0,1)\}$ (the yz-plane).

  • $\dim(U) = 2$, $\dim(W) = 2$
  • $U \cap W = \text{span}\{(0,1,0)\}$ (the y-axis), so $\dim(U \cap W) = 1$
  • $\dim(U + W) = 2 + 2 - 1 = 3$

Indeed, $U + W = \mathbb{R}^3$.
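
The formula can be checked numerically: encode each span as a matrix of row vectors, and each dimension becomes a matrix rank. A minimal sketch, assuming NumPy:

```python
# Sketch: check dim(U + W) = dim U + dim W - dim(U ∩ W) for the
# xy-plane and yz-plane in R^3, using matrix ranks.
import numpy as np

U = np.array([[1, 0, 0], [0, 1, 0]])   # spanning rows for U
W = np.array([[0, 1, 0], [0, 0, 1]])   # spanning rows for W

dim_U = np.linalg.matrix_rank(U)
dim_W = np.linalg.matrix_rank(W)
dim_sum = np.linalg.matrix_rank(np.vstack([U, W]))  # span of U ∪ W is U + W

# dim(U ∩ W) falls out of the formula rather than being computed directly.
dim_cap = dim_U + dim_W - dim_sum
print(dim_U, dim_W, dim_sum, dim_cap)  # 2 2 3 1
```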

Remark 2.1: Finite vs Infinite Dimensional

A vector space is:

  • Finite-dimensional: has a finite spanning set (equivalently, a finite basis)
  • Infinite-dimensional: not finite-dimensional

Examples of infinite-dimensional spaces:

  • $F[x]$ — all polynomials (basis: $1, x, x^2, x^3, \ldots$)
  • $C[0,1]$ — continuous functions on $[0,1]$
  • $\ell^2$ — square-summable sequences
Example 2.3: Dimension of Kernel

For a linear map $T: V \to W$, the nullity is $\dim(\ker T)$.

If $T$ is represented by a matrix $A$ (so $n = \dim V$ is the number of columns), then:

$$\text{nullity}(A) = n - \text{rank}(A)$$

This is the rank-nullity theorem (covered in Part III).

Example 2.4: Dimension of Image

For $T: V \to W$, the rank is $\dim(\text{im}\, T)$.

Key relationship:

$$\dim(V) = \dim(\ker T) + \dim(\text{im}\, T)$$
Theorem 2.4: Dimension and Isomorphism

Two finite-dimensional vector spaces over the same field are isomorphic if and only if they have the same dimension.

Proof of Theorem 2.4:

(⇒) If $T: V \to W$ is an isomorphism and $\mathcal{B}$ is a basis of $V$, then $T(\mathcal{B})$ is a basis of $W$ (a bijection preserves size).

(⇐) If $\dim(V) = \dim(W) = n$, choose bases $\mathcal{B}_V = (v_1, \ldots, v_n)$ and $\mathcal{B}_W = (w_1, \ldots, w_n)$. Define $T(v_i) = w_i$ and extend linearly. This is an isomorphism.

Corollary 2.2: Classification

Every $n$-dimensional vector space over $F$ is isomorphic to $F^n$. Thus, up to isomorphism, there is exactly one $n$-dimensional space over $F$ for each $n \geq 0$.

Remark 2.2: Dimension as Complete Invariant

For finite-dimensional spaces, dimension completely characterizes the isomorphism class. Two spaces are "the same" (structurally) iff they have the same dimension over the same field.

3. Coordinates

Once we fix a basis, every vector gets a unique set of coordinates—the coefficients in its representation as a linear combination of basis vectors. This gives us a powerful correspondence between abstract vectors and concrete column vectors in $F^n$.

Definition 3.1: Ordered Basis

An ordered basis is a basis together with a specific ordering of its elements:

$$\mathcal{B} = (v_1, v_2, \ldots, v_n)$$

The ordering matters for defining coordinates.

Definition 3.2: Coordinate Vector

Let $\mathcal{B} = (v_1, \ldots, v_n)$ be an ordered basis of $V$. For any $v \in V$, write:

$$v = \alpha_1 v_1 + \alpha_2 v_2 + \cdots + \alpha_n v_n$$

The coordinate vector of $v$ with respect to $\mathcal{B}$ is:

$$[v]_{\mathcal{B}} = \begin{pmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n \end{pmatrix} \in F^n$$
Remark 3.1: Uniqueness of Coordinates

Since $\mathcal{B}$ is a basis, the representation $v = \sum \alpha_i v_i$ is unique. So the coordinate vector $[v]_{\mathcal{B}}$ is well-defined.

Example 3.1: Standard Coordinates

In $\mathbb{R}^3$ with the standard basis $\mathcal{E} = (e_1, e_2, e_3)$:

$$[(2, -3, 5)]_{\mathcal{E}} = \begin{pmatrix} 2 \\ -3 \\ 5 \end{pmatrix}$$

For the standard basis, coordinates equal components.

Example 3.2: Non-Standard Coordinates

In $\mathbb{R}^2$, let $\mathcal{B} = ((1, 1), (1, -1))$.

Find $[(3, 1)]_{\mathcal{B}}$.

Solution: Solve $(3, 1) = \alpha(1, 1) + \beta(1, -1)$:

$$\begin{cases} \alpha + \beta = 3 \\ \alpha - \beta = 1 \end{cases} \implies \alpha = 2, \ \beta = 1$$

So $[(3, 1)]_{\mathcal{B}} = \begin{pmatrix} 2 \\ 1 \end{pmatrix}$.
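
Finding coordinates is just solving a linear system whose coefficient matrix has the basis vectors as columns. A minimal sketch with NumPy (our tool choice):

```python
# Sketch: coordinates of v = (3, 1) in the basis B = ((1,1), (1,-1)).
import numpy as np

B = np.array([[1.0,  1.0],
              [1.0, -1.0]])       # columns are the basis vectors
v = np.array([3.0, 1.0])

coords = np.linalg.solve(B, v)    # solve B x = v
print(coords)                     # [2. 1.]  ->  [v]_B = (2, 1)^T
```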

Theorem 3.1: Coordinate Isomorphism

Let $\mathcal{B}$ be an ordered basis of an $n$-dimensional space $V$. The map

$$\phi_{\mathcal{B}}: V \to F^n, \quad v \mapsto [v]_{\mathcal{B}}$$

is a linear isomorphism.

Proof of Theorem 3.1:

Linear: If $v = \sum \alpha_i v_i$ and $w = \sum \beta_i v_i$, then:

$$v + w = \sum (\alpha_i + \beta_i) v_i \implies [v + w]_{\mathcal{B}} = [v]_{\mathcal{B}} + [w]_{\mathcal{B}}$$

Similarly for scalar multiplication.

Bijective:

  • Injective: $[v]_{\mathcal{B}} = 0 \implies v = 0$
  • Surjective: For any $(\alpha_1, \ldots, \alpha_n) \in F^n$, the vector $v = \sum \alpha_i v_i$ maps to it
Corollary 3.1: Finite-Dimensional Spaces are Isomorphic to Fⁿ

Every $n$-dimensional vector space over $F$ is isomorphic to $F^n$.

Remark 3.2: Why Coordinates Matter

The coordinate isomorphism lets us:

  • Reduce abstract problems to concrete matrix calculations
  • Represent linear maps as matrices
  • Use computers to solve linear algebra problems
  • Transfer results from $F^n$ to any $n$-dimensional space
Example 3.3: Polynomial Coordinates

In $P_2(\mathbb{R})$ with basis $\mathcal{B} = (1, x, x^2)$:

$$[3x^2 - 2x + 5]_{\mathcal{B}} = \begin{pmatrix} 5 \\ -2 \\ 3 \end{pmatrix}$$

The coefficients of the polynomial become coordinates.

Example 3.4: Matrix Coordinates

In $M_{2 \times 2}(\mathbb{R})$ with standard basis $(E_{11}, E_{12}, E_{21}, E_{22})$:

$$\left[ \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \right]_{\mathcal{E}} = \begin{pmatrix} 1 \\ 2 \\ 3 \\ 4 \end{pmatrix}$$
Theorem 3.2: Change of Basis

Let $\mathcal{B}$ and $\mathcal{B}'$ be two ordered bases of $V$. There exists an invertible matrix $P$ (the change of basis matrix) such that:

$$[v]_{\mathcal{B}'} = P^{-1} [v]_{\mathcal{B}}$$

The columns of $P$ are $[v_1']_{\mathcal{B}}, \ldots, [v_n']_{\mathcal{B}}$.

Example 3.5: Change of Basis Calculation

Let $\mathcal{B} = ((1, 0), (0, 1))$ and $\mathcal{B}' = ((1, 1), (1, -1))$ in $\mathbb{R}^2$.

Change of basis matrix:

$$P = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \quad P^{-1} = \frac{1}{2}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$$

Example: For $v = (3, 1)$:

$$[v]_{\mathcal{B}} = \begin{pmatrix} 3 \\ 1 \end{pmatrix}, \quad [v]_{\mathcal{B}'} = P^{-1} [v]_{\mathcal{B}} = \frac{1}{2}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} \begin{pmatrix} 3 \\ 1 \end{pmatrix} = \begin{pmatrix} 2 \\ 1 \end{pmatrix}$$
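
In code, multiplying by $P^{-1}$ is usually replaced by solving a linear system with $P$, which avoids forming the inverse. A minimal sketch of Example 3.5, assuming NumPy:

```python
# Sketch: converting coordinates between the bases of Example 3.5.
import numpy as np

P = np.array([[1.0,  1.0],
              [1.0, -1.0]])       # columns = B'-vectors in B-coordinates

v_B = np.array([3.0, 1.0])        # [v]_B
v_Bp = np.linalg.solve(P, v_B)    # [v]_B' = P^{-1} [v]_B, without forming P^{-1}
print(v_Bp)                       # [2. 1.]

print(P @ v_Bp)                   # round trip back to [v]_B: [3. 1.]
```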
Theorem 3.3: Coordinates and Linear Operations

For any ordered basis $\mathcal{B}$, vectors $u, v \in V$, and scalar $\alpha \in F$:

  1. $[u + v]_{\mathcal{B}} = [u]_{\mathcal{B}} + [v]_{\mathcal{B}}$
  2. $[\alpha v]_{\mathcal{B}} = \alpha [v]_{\mathcal{B}}$
  3. $[0]_{\mathcal{B}} = 0$ (the zero vector maps to the zero coordinate vector)

This is why coordinates give an isomorphism—they preserve all vector space operations.

Example 3.6: Operations via Coordinates

In $\mathbb{R}^2$ with $\mathcal{B} = ((1, 1), (1, -1))$:

Let $u = (4, 2)$ and $v = (1, 3)$.

Find coordinates:

  • $[u]_{\mathcal{B}} = (3, 1)^T$ (since $u = 3(1,1) + 1(1,-1)$)
  • $[v]_{\mathcal{B}} = (2, -1)^T$ (since $v = 2(1,1) - 1(1,-1)$)

Add in coordinates: $[u]_{\mathcal{B}} + [v]_{\mathcal{B}} = (5, 0)^T$.

Verify: $u + v = (5, 5) = 5(1,1) + 0(1,-1)$. ✓

Remark 3.3: Matrix of Linear Map

If $T: V \to W$ is linear and we fix bases $\mathcal{B}_V$, $\mathcal{B}_W$, then $T$ can be represented by a matrix $A$ such that:

$$[T(v)]_{\mathcal{B}_W} = A [v]_{\mathcal{B}_V}$$

This is covered in detail in Part III: Linear Mappings.

Example 3.7: Coordinates in Function Spaces

In $P_2(\mathbb{R})$ with the Lagrange interpolation basis at the points $0, 1, 2$:

$$L_0(x) = \frac{(x-1)(x-2)}{2}, \quad L_1(x) = -x(x-2), \quad L_2(x) = \frac{x(x-1)}{2}$$

For any polynomial $p(x)$ of degree ≤ 2:

$$[p]_{\mathcal{L}} = (p(0), p(1), p(2))^T$$

The coordinates are simply the function values! This is a key idea in interpolation.
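
This claim can be checked numerically. The sketch below (plain Python, helper names of our choosing) takes the claimed coordinates $(p(0), p(1), p(2))$ for $p(x) = 3x^2 - 2x + 5$ and verifies that recombining them with the Lagrange basis reproduces $p$ at a few test points:

```python
# Sketch: Lagrange coordinates are function values (checked numerically).
def p(x):
    return 3 * x**2 - 2 * x + 5

def L0(x): return (x - 1) * (x - 2) / 2
def L1(x): return -x * (x - 2)
def L2(x): return x * (x - 1) / 2

coords = [p(0), p(1), p(2)]        # claimed coordinate vector [p]_L
for x in (0.5, 1.7, 3.0):          # recombine and compare at test points
    recombined = coords[0] * L0(x) + coords[1] * L1(x) + coords[2] * L2(x)
    assert abs(recombined - p(x)) < 1e-12
print(coords)                      # [5, 6, 13]
```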

4. Worked Examples

Example 4.1: Finding a Basis for a Subspace

Problem: Find a basis for $W = \{(x, y, z) \in \mathbb{R}^3 : x + y - z = 0\}$.

Solution: Solve $z = x + y$, so:

$$W = \{(x, y, x + y) : x, y \in \mathbb{R}\} = \{x(1, 0, 1) + y(0, 1, 1)\}$$

Basis: $\{(1, 0, 1), (0, 1, 1)\}$.

Dimension: $\dim(W) = 2$.

Example 4.2: Extending to a Basis

Problem: Extend $\{(1, 1, 0)\}$ to a basis of $\mathbb{R}^3$.

Solution: We need 2 more independent vectors.

Try $e_1 = (1, 0, 0)$: Independent from $(1, 1, 0)$? Yes (not a scalar multiple).

Try $e_3 = (0, 0, 1)$: Independent from both? Yes (its third component is non-zero, while every combination of the first two has third component zero).

Basis: $\{(1, 1, 0), (1, 0, 0), (0, 0, 1)\}$.

Example 4.3: Dimension of Solution Space

Problem: Find $\dim(\ker A)$ for:

$$A = \begin{pmatrix} 1 & 2 & 1 & 0 \\ 2 & 4 & 3 & 1 \\ 1 & 2 & 2 & 1 \end{pmatrix}$$

Solution: Row reduce:

$$\begin{pmatrix} 1 & 2 & 1 & 0 \\ 2 & 4 & 3 & 1 \\ 1 & 2 & 2 & 1 \end{pmatrix} \to \begin{pmatrix} 1 & 2 & 0 & -1 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix}$$

Rank = 2 (two pivots). By rank-nullity:

$$\dim(\ker A) = 4 - 2 = 2$$
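
The same count can be obtained mechanically; a minimal NumPy sketch (tool choice assumed):

```python
# Sketch: rank and nullity of the matrix in Example 4.3.
import numpy as np

A = np.array([[1, 2, 1, 0],
              [2, 4, 3, 1],
              [1, 2, 2, 1]])

rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank        # rank-nullity: nullity = n - rank
print(rank, nullity)               # 2 2
```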
Example 4.4: Dimension Argument

Problem: If $\{v_1, v_2, v_3\}$ spans $\mathbb{R}^3$, prove it's a basis.

Solution: We know $\dim(\mathbb{R}^3) = 3$.

Any 3 spanning vectors in a 3-dimensional space form a basis (by the dimension criteria, Theorem 2.2).

Alternatively: if the set were dependent, we could reduce it to a smaller spanning set, contradicting the fact that no fewer than 3 vectors can span $\mathbb{R}^3$.

Example 4.5: Intersection Dimension

Problem: In $\mathbb{R}^4$, let $U$ and $W$ be 3-dimensional subspaces. What are the possible values of $\dim(U \cap W)$?

Solution: By the dimension formula:

$$\dim(U + W) = \dim(U) + \dim(W) - \dim(U \cap W) = 6 - \dim(U \cap W)$$

Since $U + W \subseteq \mathbb{R}^4$, we have $\dim(U + W) \leq 4$.

So $6 - \dim(U \cap W) \leq 4$, giving $\dim(U \cap W) \geq 2$.

Also $\dim(U \cap W) \leq 3$ (it is a subspace of $U$).

Answer: $\dim(U \cap W) \in \{2, 3\}$.

Example 4.6: Basis for Symmetric Matrices

Problem: Find a basis for symmetric 2×2 matrices.

Solution: Symmetric matrices have the form $\begin{pmatrix} a & b \\ b & c \end{pmatrix}$.

Basis:

$$\left\{ \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} \right\}$$

Dimension: 3.

Example 4.7: Coordinates in Polynomial Space

Problem: Find $[p(x)]_{\mathcal{B}}$ where $p(x) = x^2 + 2x + 3$ and $\mathcal{B} = (1, 1+x, 1+x+x^2)$.

Solution: Find $a, b, c$ such that:

$$x^2 + 2x + 3 = a \cdot 1 + b(1+x) + c(1+x+x^2)$$

Comparing coefficients:

  • Constant: $a + b + c = 3$
  • $x$: $b + c = 2$
  • $x^2$: $c = 1$

So $c = 1$, $b = 1$, $a = 1$.

Answer: $[p(x)]_{\mathcal{B}} = (1, 1, 1)^T$.

Example 4.8: Basis for Row Space

Problem: Find a basis for the row space of:

$$A = \begin{pmatrix} 1 & 2 & 1 \\ 2 & 4 & 0 \\ 1 & 2 & -1 \end{pmatrix}$$

Solution: Row reduce:

$$\begin{pmatrix} 1 & 2 & 1 \\ 2 & 4 & 0 \\ 1 & 2 & -1 \end{pmatrix} \to \begin{pmatrix} 1 & 2 & 1 \\ 0 & 0 & -2 \\ 0 & 0 & -2 \end{pmatrix} \to \begin{pmatrix} 1 & 2 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}$$

Basis: $\{(1, 2, 0), (0, 0, 1)\}$. Dimension = 2.

Example 4.9: Dimension via Parametrization

Problem: Find the dimension of $W = \{(x, y, z, w) : x - y + z = 0, \ y - w = 0\}$.

Solution: From the equations: $y = w$, $x = y - z = w - z$.

Parametrize with free variables $z, w$:

$$(x, y, z, w) = (w - z, w, z, w) = z(-1, 0, 1, 0) + w(1, 1, 0, 1)$$

Basis: $\{(-1, 0, 1, 0), (1, 1, 0, 1)\}$. $\dim(W) = 2$.
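
Equivalently, one can hand the coefficient matrix of the two constraints to a nullspace routine. A sketch using SymPy's nullspace (tool choice assumed):

```python
# Sketch: Example 4.9 via an exact nullspace computation.
import sympy as sp

A = sp.Matrix([[1, -1, 1,  0],     # x - y + z     = 0
               [0,  1, 0, -1]])    #     y     - w = 0

basis = A.nullspace()              # column vectors spanning ker A = W
print(len(basis))                  # 2, so dim(W) = 2
for b in basis:
    print(list(b))                 # [-1, 0, 1, 0] and [1, 1, 0, 1]
```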

Example 4.10: Proving Equality of Subspaces

Problem: Let $U = \text{span}\{(1, 1, 0), (0, 1, 1)\}$ and $W = \{(x, y, z) : x - y + z = 0\}$. Show $U = W$.

Solution:

Step 1: Check $U \subseteq W$:

  • $(1, 1, 0)$: $1 - 1 + 0 = 0$ ✓
  • $(0, 1, 1)$: $0 - 1 + 1 = 0$ ✓

Step 2: Compare dimensions:

  • $\dim(U) = 2$ (two independent vectors)
  • $\dim(W) = 3 - 1 = 2$ (one constraint in $\mathbb{R}^3$)

Since $U \subseteq W$ and $\dim(U) = \dim(W)$, we have $U = W$.

Example 4.11: Direct Sum Dimension

Problem: If $V = U \oplus W$ (direct sum), prove $\dim(V) = \dim(U) + \dim(W)$.

Solution: A direct sum means $U \cap W = \{0\}$ and $U + W = V$.

By the dimension formula:

$$\dim(V) = \dim(U + W) = \dim(U) + \dim(W) - \dim(U \cap W) = \dim(U) + \dim(W) - 0$$
Example 4.12: Finding Change of Basis Matrix

Problem: Find the change of basis matrix from $\mathcal{B} = ((1, 0), (0, 1))$ to $\mathcal{B}' = ((1, 2), (3, 4))$.

Solution: Express the $\mathcal{B}'$ vectors in terms of $\mathcal{B}$:

  • $(1, 2) = 1 \cdot (1, 0) + 2 \cdot (0, 1)$, so $[(1, 2)]_{\mathcal{B}} = (1, 2)^T$
  • $(3, 4) = 3 \cdot (1, 0) + 4 \cdot (0, 1)$, so $[(3, 4)]_{\mathcal{B}} = (3, 4)^T$

$$P = \begin{pmatrix} 1 & 3 \\ 2 & 4 \end{pmatrix}$$

To convert coordinates: $[v]_{\mathcal{B}'} = P^{-1} [v]_{\mathcal{B}}$.

5. Common Mistakes

Mistake 1: Confusing spanning set with basis

A spanning set may be larger than a basis. Example: $\{e_1, e_2, e_1 + e_2\}$ spans $\mathbb{R}^2$ but is NOT a basis (it is dependent: 3 vectors in a 2-dimensional space).

Mistake 2: Forgetting dimension of polynomial spaces

$P_n(F)$ has dimension $n + 1$, not $n$! The polynomials of degree ≤ $n$ include the constants, so the basis $\{1, x, \ldots, x^n\}$ has $n + 1$ elements.

Mistake 3: Assuming coordinates are unique without a basis

Coordinates are only unique when the representing set is a basis. For a dependent spanning set, the same vector can have multiple representations.

Mistake 4: Ignoring the ordering of basis vectors

Coordinates depend on the ORDER of the basis vectors. If $\mathcal{B} = (v_1, v_2)$, then $[v_1 + v_2]_{\mathcal{B}} = (1, 1)^T$. Swap the order and coordinates change: compare $[2v_1 + v_2]_{\mathcal{B}} = (2, 1)^T$ with $(1, 2)^T$ after swapping.

Mistake 5: dim(U ∩ W) = dim(U) ∩ dim(W)?

NO! $\dim(U \cap W)$ is NOT $\min(\dim U, \dim W)$. Use the dimension formula: $\dim(U \cap W) = \dim U + \dim W - \dim(U + W)$.

Mistake 6: Thinking the zero vector can be a basis element

A basis must be linearly independent. Any set containing the zero vector is dependent, so $0$ can never be part of a basis.

Mistake 7: Wrong dimension after change of basis

Change of basis does NOT change dimension! The same space can be described with different bases, but the number of basis vectors is always the same (the dimension).

Mistake 8: Forgetting dimension depends on the field

$\mathbb{C}$ over $\mathbb{C}$ is 1-dimensional, but $\mathbb{C}$ over $\mathbb{R}$ is 2-dimensional. Always specify the field!

✓ Basis Verification Checklist

To verify that $\{v_1, \ldots, v_n\}$ is a basis for $V$:

  1. Check independence: Solve $\sum \alpha_i v_i = 0$. Only the trivial solution?
  2. Check spanning: Can every $v \in V$ be written as $\sum \alpha_i v_i$?
  3. OR check the count: If the number of vectors equals $\dim(V)$, only one of the above is needed!

Shortcut for $n$ vectors in an $n$-dimensional space (see the sketch below):

  • If independent → automatically spans → basis
  • If spans → automatically independent → basis
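
The shortcut translates directly into a rank test; a minimal sketch assuming NumPy, with `is_basis` a helper name of our choosing:

```python
# Sketch: checking a candidate basis by counting + rank (the shortcut).
import numpy as np

def is_basis(vectors, dim):
    """n vectors in an n-dim space form a basis iff they are independent,
    i.e. iff the matrix they form has full rank."""
    M = np.array(vectors, dtype=float)
    return len(vectors) == dim and np.linalg.matrix_rank(M) == dim

print(is_basis([(1, 1), (1, -1)], 2))         # True
print(is_basis([(1, 0), (0, 1), (1, 1)], 2))  # False: too many vectors
print(is_basis([(1, 2), (2, 4)], 2))          # False: dependent
```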

6. Key Takeaways

Basis = Independent + Spanning

A basis is the "perfect size": big enough to span, small enough to be independent.

Dimension is Well-Defined

All bases have the same size—this is not obvious but follows from the Steinitz Exchange Lemma.

n Vectors in n-dim Space

In an $n$-dimensional space: $n$ independent vectors = basis = $n$ spanning vectors.

Coordinates = Unique Representation

With an ordered basis, every vector has unique coordinates—this is the key to computation.

Coordinate Isomorphism

Every $n$-dimensional space is isomorphic to $F^n$—abstract spaces become concrete!

Dimension Formula

$$\dim(U + W) = \dim U + \dim W - \dim(U \cap W)$$

Quick Reference Table

Vector Space | Dimension | Standard Basis
$\mathbb{R}^n$ | $n$ | $e_1, \ldots, e_n$
$\mathbb{C}$ over $\mathbb{R}$ | $2$ | $1, i$
$P_n(F)$ | $n + 1$ | $1, x, x^2, \ldots, x^n$
$M_{m \times n}(F)$ | $mn$ | $E_{ij}$
Symmetric $n \times n$ | $n(n+1)/2$ | $E_{ii}$; $E_{ij} + E_{ji}$ ($i < j$)
Skew-symmetric $n \times n$ | $n(n-1)/2$ | $E_{ij} - E_{ji}$ ($i < j$)

7. Applications

Computer Graphics

3D coordinates use basis vectors. Change of basis rotates/transforms scenes. Homogeneous coordinates add a dimension for projective transformations.

Data Compression

SVD/PCA find low-dimensional subspaces capturing most variance. Dimension reduction = finding a good low-rank basis.

Differential Equations

Solution spaces of linear ODEs are vector spaces. Dimension = order of equation. Finding basis solutions gives the general solution.

Quantum Mechanics

Quantum states live in Hilbert spaces. Observables have eigenbases. Measurements correspond to basis projections.

Example 7.1: Rank-Nullity in Data Science

For a data matrix $A \in \mathbb{R}^{m \times n}$ ($m$ samples, $n$ features):

  • $\text{rank}(A)$ = dimension of the column space = number of "effective" features
  • $\dim(\ker A)$ = number of linear dependencies among the features
  • If rank < $n$, there's redundancy in the features (multicollinearity)
Example 7.2: Network Analysis

In a network with $n$ nodes and $m$ edges:

  • The cycle space has dimension $m - n + c$ (where $c$ is the number of connected components)
  • This counts the independent cycles in the network
  • Used in electrical circuit analysis and graph theory
Remark 7.1: The Power of Dimension

Dimension is a complete invariant for finite-dimensional vector spaces: two spaces are isomorphic if and only if they have the same dimension. This is why dimension is often the first thing we compute about a vector space.

Example 7.3: Feature Engineering in Machine Learning

In machine learning, consider a dataset with 100 features:

  • If rank of feature matrix is only 10, the "effective dimension" is 10
  • 90 features are redundant (linearly dependent on others)
  • PCA finds an optimal 10-dimensional basis

Dimension reduction = finding a low-dimensional basis that captures most information.

Example 7.4: Coding Theory

A linear code $C$ is a subspace of $F_q^n$:

  • $\dim(C) = k$ means $q^k$ codewords ($2^k$ for binary codes); see the sketch below
  • An $[n, k, d]$ code has block length $n$, dimension $k$, and minimum distance $d$
  • The generator matrix has $k$ rows (a basis of the code)
  • The parity check matrix has $n - k$ rows (a basis of the orthogonal complement)
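
To make the codeword count concrete, the sketch below enumerates a tiny binary code; the generator matrix `G` is an illustrative choice of ours, not taken from the text:

```python
# Sketch: a tiny [4, 2] binary code (G is a hypothetical example).
import itertools
import numpy as np

G = np.array([[1, 0, 1, 1],
              [0, 1, 0, 1]])       # k = 2 rows form a basis of the code

# All 2^k = 4 codewords: every F_2-linear combination of the rows of G.
codewords = [tuple((np.array(m) @ G) % 2)
             for m in itertools.product([0, 1], repeat=2)]
print(codewords)   # [(0,0,0,0), (0,1,0,1), (1,0,1,1), (1,1,1,0)]
```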
Example 7.5: Structural Engineering

In structural analysis, forces and displacements live in vector spaces:

  • Degrees of freedom = dimension of displacement space
  • Constraints reduce dimension (fixing a point removes 3 DOF in 3D)
  • Stiffness matrix relates force and displacement bases

Why Bases Matter in Practice

  • Computation: Algorithms work with coordinates, not abstract vectors
  • Compression: Good basis → sparse coordinates → efficient storage
  • Analysis: Eigenbasis simplifies linear transformations
  • Visualization: Project to 2D/3D basis for plotting high-dim data
  • Stability: Orthonormal bases minimize numerical errors

8. Quick Reference

Finding a Basis

  1. Write spanning set
  2. Form matrix with vectors as rows/columns
  3. Row reduce
  4. Pivot positions give basis vectors

Finding Coordinates

  1. Set up $v = \sum \alpha_i b_i$
  2. Create system of equations
  3. Solve for coefficients
  4. Coordinates = coefficient vector

Checking if Basis

  1. Count vectors: should equal dim(V)
  2. Check independence (or spanning)
  3. Either condition + right count = basis

Key Formulas

  • dim(Fⁿ) = n
  • dim(Pₙ) = n + 1
  • dim(Mₘₓₙ) = mn
  • dim(U + W) = dim(U) + dim(W) - dim(U ∩ W)
Remark 8.1: Looking Ahead

With basis and dimension established, we can now represent linear maps as matrices. The choice of bases for domain and codomain determines the matrix representation. This connection between abstract linear maps and concrete matrices is one of the most powerful ideas in linear algebra.

Algorithm: Finding Basis from Spanning Set

Input: Spanning set $S = \{v_1, \ldots, v_m\}$

  1. Form the matrix $A$ with the $v_i$ as columns
  2. Row reduce $A$ to echelon form
  3. Identify the pivot columns
  4. Output: The original vectors corresponding to pivot columns form a basis

Complexity: O(mn²) for m vectors in n dimensions
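
A sketch of this algorithm as a reusable function, assuming SymPy for exact row reduction (`basis_of_span` is our name for it, not the text's):

```python
# Sketch: basis-from-spanning-set via pivot columns (SymPy assumed).
import sympy as sp

def basis_of_span(vectors):
    """Return the sublist of `vectors` sitting at the pivot columns of
    the matrix whose columns are the given vectors."""
    A = sp.Matrix([list(v) for v in vectors]).T   # vectors as columns
    _, pivots = A.rref()
    return [vectors[j] for j in pivots]

print(basis_of_span([(1, 2, 1), (2, 4, 3), (1, 2, 2)]))
# [(1, 2, 1), (2, 4, 3)] -- matches Example 1.7
```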

Algorithm: Extending to Basis

Input: Independent set $S = \{v_1, \ldots, v_k\}$ in $F^n$

  1. Form the matrix with the $v_i$ as columns
  2. Append the standard basis vectors $e_1, \ldots, e_n$ as further columns
  3. Row reduce the augmented matrix
  4. Take the vectors corresponding to the first $n$ pivot columns

Result: Basis containing original vectors plus some standard vectors
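
A sketch of this algorithm, again assuming SymPy; `extend_to_basis` is a helper name of our choosing:

```python
# Sketch: extending an independent set to a basis of F^n (SymPy assumed).
import sympy as sp

def extend_to_basis(independent, n):
    # Candidates: the given vectors followed by the standard basis of F^n.
    candidates = [list(v) for v in independent] + \
                 [[1 if i == j else 0 for i in range(n)] for j in range(n)]
    A = sp.Matrix(candidates).T        # candidates as columns
    _, pivots = A.rref()
    return [tuple(candidates[j]) for j in pivots]

print(extend_to_basis([(1, 1, 0)], 3))
# [(1, 1, 0), (1, 0, 0), (0, 0, 1)] -- matches Example 4.2
```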

Dimension Checklist

To find dimension of a subspace:

  1. Find a spanning set (parametrize or from generators)
  2. Reduce to basis (row reduce)
  3. Count basis vectors = dimension

To show dim(U) = dim(W):

  • Find bases and count, OR
  • Show U ⊆ W and exhibit dim(W) independent vectors inside U

To show U = W (subspaces):

  • Show U ⊆ W and W ⊆ U, OR
  • Show U ⊆ W and dim(U) = dim(W)

9. Chapter Summary

Core Definitions

  • Basis: Linearly independent spanning set
  • Dimension: Number of vectors in any basis
  • Coordinate vector: Coefficients in basis representation

Key Theorems

  • All bases of a space have the same size (dimension is well-defined)
  • n independent vectors in n-dim space ⟺ basis ⟺ n spanning vectors
  • Coordinate map is a linear isomorphism: V ≅ Fⁿ
  • dim(U + W) = dim(U) + dim(W) - dim(U ∩ W)

Key Skills

  • Find bases via row reduction
  • Extend independent sets to bases
  • Compute coordinates with respect to any basis
  • Calculate change of basis matrices
  • Use dimension arguments to prove subspace equality

Standard Dimensions

ℝⁿ: n | Pₙ(F): n+1 | Mₘₓₙ(F): mn | Symmetric n×n: n(n+1)/2

Remark 9.1: What's Next

With basis and dimension established, we're ready for:

  • Direct sums: Decomposing spaces into "independent" subspaces
  • Linear maps: Functions that preserve vector space structure
  • Matrix representation: Representing linear maps using coordinates

The dimension of domain and codomain determines the size of representing matrices!

Basis & Dimension Practice (12 questions)

  1. What is $\dim(\mathbb{R}^5)$? (Easy)
  2. What is $\dim(M_{2 \times 3}(\mathbb{R}))$? (Easy)
  3. What is $\dim(\mathbb{R}_3[x])$ (polynomials of degree ≤ 3)? (Easy)
  4. If $\dim(V) = n$, how many vectors are in any basis of $V$? (Medium)
  5. In $\mathbb{R}^3$, if $W$ is a plane through the origin, what is $\dim(W)$? (Medium)
  6. Can 3 vectors form a basis for $\mathbb{R}^4$? (Medium)
  7. What is $[v]_{\mathcal{B}}$ if $v = (3, 4)$ and $\mathcal{B} = ((1, 0), (0, 1))$? (Medium)
  8. Is $\{(1,1), (1,-1)\}$ a basis for $\mathbb{R}^2$? (Hard)
  9. If $W$ is a subspace of $V$ and $\dim(W) = \dim(V)$, what can you conclude? (Hard)
  10. What is the dimension of the solution space of $Ax = 0$, where $A$ is $3 \times 5$ with rank 2? (Hard)
  11. If $(v_1, v_2)$ is a basis of $V$ and $w = 3v_1 - 2v_2$, what is $[w]_{\mathcal{B}}$? (Medium)
  12. What is $\dim(\mathbb{R}[x])$ (all polynomials)? (Hard)

Frequently Asked Questions

Why is dimension well-defined?

The Steinitz exchange lemma shows that if you have a spanning set of size n and an independent set of size m, then m ≤ n. Applying this both ways to two bases shows they have the same size.

What's the difference between a basis and a spanning set?

A spanning set may have redundant vectors. A basis is a minimal spanning set—remove any vector and it no longer spans. Equivalently, it's a maximal independent set.

How do I find the dimension of a subspace?

Find a basis (e.g., by row reducing the matrix whose rows generate the subspace) and count the basis vectors. Or use the rank of the associated matrix.

Can a space have multiple different bases?

Yes! There are infinitely many bases for any space of dimension ≥ 1. All bases have the same size but contain different vectors. Coordinates depend on basis choice.

What about infinite-dimensional spaces?

Spaces like F[x] (all polynomials) or C[0,1] (continuous functions) are infinite-dimensional—no finite set can span them. They still have bases (by Zorn's lemma) but of infinite cardinality.

How do coordinates change when I change the basis?

If you switch from basis $\mathcal{B}$ to basis $\mathcal{B}'$, there's a change of basis matrix $P$ such that $[v]_{\mathcal{B}'} = P^{-1}[v]_{\mathcal{B}}$. The columns of $P$ are the $\mathcal{B}$-coordinates of the $\mathcal{B}'$ basis vectors.

What is the relationship between dimension and rank?

The rank of a matrix equals the dimension of its column space (or row space). For a linear map T: V → W, rank(T) = dim(im(T)) and the rank-nullity theorem says dim(V) = dim(ker T) + dim(im T).

Why is the zero space 0-dimensional?

The zero space {0} has no basis vectors (the empty set is its basis), so dim({0}) = 0. This is consistent: any set of vectors containing 0 is dependent, so {0} has no non-empty independent set.

Can I always extend an independent set to a basis?

Yes, in finite-dimensional spaces. Start with an independent set, and if it doesn't span, add vectors one by one that aren't in the current span. This process terminates with a basis.

What's the dimension of a sum of subspaces?

For subspaces U and W: dim(U + W) = dim(U) + dim(W) - dim(U ∩ W). This is the dimension formula for sums, analogous to inclusion-exclusion in counting.