The definition of a vector space is one of the most important in mathematics. By abstracting the essential properties of ℝⁿ, we obtain a framework that applies to functions, polynomials, matrices, and countless other mathematical objects.
The concept of a vector space emerged gradually during the 19th century. Hermann Grassmann (1809–1877) introduced many of the key ideas in his *Ausdehnungslehre* (1844), though his work was largely ignored during his lifetime.
Giuseppe Peano (1858–1932) gave the first axiomatic definition of a vector space in 1888, crystallizing the abstract structure we study today. His approach emphasized that vectors need not be "arrows"—they can be any objects satisfying the axioms.
The abstraction was revolutionary: the same theory applies to ℝⁿ, polynomials, functions, and matrices. This unified perspective is central to modern mathematics.
Today, vector spaces are foundational in physics (quantum mechanics), engineering (signal processing), computer science (machine learning), and pure mathematics (functional analysis, algebraic geometry).
A vector space combines two types of objects: vectors (elements of the space) and scalars (elements of a field). The definition specifies how these interact.
A **vector space** over a field $F$ is a set $V$ together with two operations:

- **vector addition** $+ : V \times V \to V$, written $u + v$;
- **scalar multiplication** $\cdot : F \times V \to V$, written $\alpha v$;

satisfying the following eight axioms for all $u, v, w \in V$ and $\alpha, \beta \in F$:

- **V1 (Commutativity):** $u + v = v + u$.
- **V2 (Associativity):** $(u + v) + w = u + (v + w)$.
- **V3 (Zero vector):** There exists $0 \in V$ such that $v + 0 = v$ for all $v \in V$.
- **V4 (Additive inverses):** For each $v \in V$, there exists $-v \in V$ such that $v + (-v) = 0$.
- **V5 (Scalar identity):** $1 \cdot v = v$, where $1$ is the multiplicative identity of $F$.
- **V6 (Scalar associativity):** $\alpha(\beta v) = (\alpha\beta)v$.
- **V7 (Distributivity over vector addition):** $\alpha(u + v) = \alpha u + \alpha v$.
- **V8 (Distributivity over scalar addition):** $(\alpha + \beta)v = \alpha v + \beta v$.
For any field $F$ and positive integer $n$, the set $F^n = \{(x_1, \ldots, x_n) : x_i \in F\}$ is a vector space with:

$$(x_1, \ldots, x_n) + (y_1, \ldots, y_n) = (x_1 + y_1, \ldots, x_n + y_n), \qquad \alpha(x_1, \ldots, x_n) = (\alpha x_1, \ldots, \alpha x_n).$$

- Zero vector: $(0, 0, \ldots, 0)$
- Additive inverse: $-(x_1, \ldots, x_n) = (-x_1, \ldots, -x_n)$
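The component-wise operations are easy to model directly. Below is a minimal sketch in Python (the helper names `vec_add` and `scalar_mul` are ours, not from any library), with vectors in ℝⁿ represented as tuples:

```python
# Component-wise operations in R^n, with vectors modeled as Python tuples.
def vec_add(u, v):
    """Vector addition: (u + v)_i = u_i + v_i."""
    return tuple(ui + vi for ui, vi in zip(u, v))

def scalar_mul(alpha, v):
    """Scalar multiplication: (alpha * v)_i = alpha * v_i."""
    return tuple(alpha * vi for vi in v)

u, v = (1.0, 2.0, 3.0), (4.0, -1.0, 0.0)
print(vec_add(u, v))                      # (5.0, 1.0, 3.0)
print(scalar_mul(2.0, u))                 # (2.0, 4.0, 6.0)
print(vec_add(u, scalar_mul(-1.0, u)))    # (0.0, 0.0, 0.0) -- the zero vector
```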
The set $F[x]$ of all polynomials with coefficients in $F$ is a vector space with standard addition and scalar multiplication: if $p(x) = \sum_i a_i x^i$ and $q(x) = \sum_i b_i x^i$, then

$$p(x) + q(x) = \sum_i (a_i + b_i) x^i, \qquad \alpha\, p(x) = \sum_i (\alpha a_i) x^i.$$

This is an infinite-dimensional vector space—no finite set of polynomials can span all of $F[x]$.
The set $P_n(F)$ of polynomials of degree at most $n$ is a finite-dimensional vector space of dimension $n + 1$.

Standard basis: $\{1, x, x^2, \ldots, x^n\}$.
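Since addition and scaling act coefficient-wise, polynomials can be modeled as coefficient lists. A small illustrative sketch (the helper names are ours):

```python
from itertools import zip_longest

# Polynomials as coefficient lists [a_0, a_1, ..., a_n].
def poly_add(p, q):
    """Coefficient-wise addition; the shorter list is padded with zeros."""
    return [a + b for a, b in zip_longest(p, q, fillvalue=0)]

def poly_scale(alpha, p):
    """Multiply every coefficient by the scalar alpha."""
    return [alpha * a for a in p]

p = [1, 2]        # 1 + 2x
q = [3, -1, 1]    # 3 - x + x^2
print(poly_add(p, q))      # [4, 1, 1]  i.e.  4 + x + x^2
print(poly_scale(3, p))    # [3, 6]     i.e.  3 + 6x
```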
The set $M_{m \times n}(F)$ of all $m \times n$ matrices with entries in $F$ is a vector space with entry-wise operations:

$$(A + B)_{ij} = A_{ij} + B_{ij}, \qquad (\alpha A)_{ij} = \alpha A_{ij}.$$

Dimension: $mn$ (the standard basis consists of the matrices $E_{ij}$ with exactly one entry equal to 1 and all others 0).
Let $F^X$ be the set of all functions from a set $X$ to a field $F$. With pointwise operations:

$$(f + g)(x) = f(x) + g(x), \qquad (\alpha f)(x) = \alpha f(x),$$

this is a vector space. If $X$ is infinite, the space is infinite-dimensional.
The set $C[a, b]$ of all continuous functions $f : [a, b] \to \mathbb{R}$ is a vector space with pointwise operations. This is a subspace of the function space above.

Zero vector: The constant function $f(x) = 0$.
Key property: The sum of continuous functions is continuous, and a scalar multiple of a continuous function is continuous.
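Pointwise operations can be mirrored in code, with functions themselves playing the role of vectors. A sketch (the helper names `f_add` and `f_scale` are ours):

```python
import math

# Pointwise operations on functions: here the "vectors" are Python callables.
def f_add(f, g):
    """(f + g)(x) = f(x) + g(x)."""
    return lambda x: f(x) + g(x)

def f_scale(alpha, f):
    """(alpha * f)(x) = alpha * f(x)."""
    return lambda x: alpha * f(x)

h = f_add(lambda x: x**2, math.sin)    # h(x) = x^2 + sin(x), still continuous
print(h(0.5))                          # 0.25 + sin(0.5) ≈ 0.729
print(f_scale(2.0, math.sin)(0.5))     # 2*sin(0.5) ≈ 0.959
```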
The set of all real sequences $(a_1, a_2, a_3, \ldots)$ forms a vector space with component-wise operations:

$$(a_n) + (b_n) = (a_n + b_n), \qquad \alpha(a_n) = (\alpha a_n).$$

Special subspaces include the bounded sequences, the convergent sequences, and the sequences with only finitely many nonzero terms.
$\mathbb{C}$ is a 2-dimensional real vector space with basis $\{1, i\}$: every $z \in \mathbb{C}$ can be written uniquely as

$$z = a \cdot 1 + b \cdot i, \qquad a, b \in \mathbb{R}.$$

Note: $\mathbb{C}$ over $\mathbb{C}$ is 1-dimensional, but over $\mathbb{R}$ it's 2-dimensional. The field matters!
Let $V = \mathbb{R}_{>0}$ (positive reals) with operations:

$$u \oplus v = uv \quad (\text{"addition" is multiplication}), \qquad \alpha \odot v = v^\alpha \quad (\text{"scalar multiplication" is exponentiation}).$$

This is a vector space over $\mathbb{R}$! The zero vector is $1$, and the additive inverse of $v$ is $1/v$.
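A quick numerical sanity check of the less obvious axioms under these definitions of $\oplus$ and $\odot$ (a sketch; the function names are ours):

```python
import math

# The exotic space (R_{>0}, ⊕, ⊙): "addition" is multiplication and
# "scalar multiplication" is exponentiation.
def oplus(u, v):
    return u * v        # u ⊕ v = uv

def odot(alpha, v):
    return v ** alpha   # α ⊙ v = v^α

u, v, alpha, beta = 2.0, 5.0, 3.0, -1.5

# V7: α ⊙ (u ⊕ v) = (α ⊙ u) ⊕ (α ⊙ v)
assert math.isclose(odot(alpha, oplus(u, v)), oplus(odot(alpha, u), odot(alpha, v)))
# V8: (α + β) ⊙ v = (α ⊙ v) ⊕ (β ⊙ v)
assert math.isclose(odot(alpha + beta, v), oplus(odot(alpha, v), odot(beta, v)))
# V3/V4: the zero vector is 1, and the additive inverse of v is 1/v
assert math.isclose(oplus(v, 1.0), v) and math.isclose(oplus(v, 1.0 / v), 1.0)
print("spot checks pass")
```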
Several important properties follow directly from the axioms. These are used constantly in proofs throughout linear algebra.
Theorem 2.1. In any vector space $V$ over a field $F$:

1. The zero vector $0$ is unique.
2. For each $v \in V$, the additive inverse $-v$ is unique (proved below).
3. $0 \cdot v = 0$ for every $v \in V$.
4. $\alpha \cdot 0 = 0$ for every $\alpha \in F$.
5. $(-1) \cdot v = -v$ for every $v \in V$.
6. If $u + w = v + w$, then $u = v$ (cancellation).

Property 3: Let $w = 0 \cdot v$. Then
$$w + w = 0v + 0v = (0 + 0)v = 0v = w.$$
Adding $-w$ to both sides gives $w = 0$, i.e., $0 \cdot v = 0$.

Property 4: Let $w = \alpha \cdot 0$. Then
$$w + w = \alpha 0 + \alpha 0 = \alpha(0 + 0) = \alpha 0 = w.$$
Adding $-w$ gives $w = 0$, i.e., $\alpha \cdot 0 = 0$.

Property 5:
$$v + (-1)v = 1v + (-1)v = (1 + (-1))v = 0v = 0.$$
So $(-1)v$ is the additive inverse of $v$, i.e., $(-1)v = -v$.

Property 6: Suppose $u, v, w \in V$ and $u + w = v + w$. Then $u = v$ (add $-w$ to both sides). This is the cancellation law for vector addition.
For each $v \in V$, the additive inverse $-v$ is unique.

Suppose $w_1$ and $w_2$ are both additive inverses of $v$. Then:
$$w_1 = w_1 + 0 = w_1 + (v + w_2) = (w_1 + v) + w_2 = 0 + w_2 = w_2.$$
For any $\alpha \in F$ and $v \in V$:

1. $\alpha v = 0$ if and only if $\alpha = 0$ or $v = 0$.
2. $(-\alpha)v = -(\alpha v) = \alpha(-v)$.
Part 1 (⇐): Already shown in Theorem 2.1.
Part 1 (⇒): If $\alpha v = 0$ and $\alpha \neq 0$, then:
$$v = 1 \cdot v = (\alpha^{-1}\alpha)v = \alpha^{-1}(\alpha v) = \alpha^{-1} \cdot 0 = 0.$$

Part 2: We show $(-\alpha)v$ is the additive inverse of $\alpha v$:
$$\alpha v + (-\alpha)v = (\alpha + (-\alpha))v = 0 \cdot v = 0.$$

By uniqueness, $(-\alpha)v = -(\alpha v)$. Similarly for $\alpha(-v)$.
We define subtraction as $u - v = u + (-v)$. This is well-defined by the uniqueness of additive inverses. All expected properties of subtraction follow from the axioms.
Let $u = (1, 2, 3)$ and $v = (4, -1, 0)$ in $\mathbb{R}^3$. Then:
$$u + v = (5, 1, 3), \qquad 2u = (2, 4, 6), \qquad u - v = (-3, 3, 3).$$
In $P_2(\mathbb{R})$, let $p(x) = 1 + 2x$ and $q(x) = 3 - x + x^2$. Then:
$$p(x) + q(x) = 4 + x + x^2, \qquad 3\,p(x) = 3 + 6x.$$
In $M_{2 \times 2}(\mathbb{R})$:
$$\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} + \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 3 \\ 4 & 4 \end{pmatrix}.$$

Zero vector: $\begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$.
In the space of continuous functions $C[a, b]$:

Let $f(x) = x^2$ and $g(x) = \sin x$. Then:
$$(f + g)(x) = x^2 + \sin x, \qquad (2f)(x) = 2x^2.$$
A linear combination of vectors $v_1, v_2, \ldots, v_k$ is any expression:
$$\alpha_1 v_1 + \alpha_2 v_2 + \cdots + \alpha_k v_k,$$
where $\alpha_1, \ldots, \alpha_k \in F$. The set of all linear combinations forms the span of the vectors, which is always a subspace.
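Linear combinations reduce to component-wise arithmetic. A minimal sketch in ℝ³ (the helper name `linear_combination` is ours):

```python
# A linear combination a_1*v_1 + ... + a_k*v_k in R^n, vectors as tuples.
def linear_combination(coeffs, vectors):
    """Sum coeffs[i] * vectors[i] component by component."""
    n = len(vectors[0])
    return tuple(sum(a * v[j] for a, v in zip(coeffs, vectors)) for j in range(n))

v1, v2 = (1, 0, 2), (0, 1, -1)
print(linear_combination([3, -2], [v1, v2]))   # (3, -2, 8)
```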
Many important sets are vector spaces because they are subspaces of larger known spaces. The next chapter develops this idea: a non-empty subset of a vector space is a subspace iff it's closed under + and scalar multiplication.
Understanding what is not a vector space is as important as understanding what is.
To verify a set is a vector space, check:

1. Both operations are well-defined (the set is closed under addition and scalar multiplication).
2. All eight axioms V1–V8 hold.
Tip: If the set is a subset of a known vector space with inherited operations, you only need to check closure under addition, closure under scalar multiplication, and that 0 is in the set (subspace criteria).
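The subspace criteria lend themselves to a quick numerical spot check. The sketch below (our own helper, `spot_check_subspace`) samples random points in ℝ² to hunt for closure violations; random sampling can refute the criteria but never prove them:

```python
import random

# Numerical spot check of the subspace criteria for a subset of R^2 given by
# a membership predicate.
def spot_check_subspace(contains, trials=1000):
    if not contains((0.0, 0.0)):
        return "fails: 0 is not in the set"
    for _ in range(trials):
        u = (random.uniform(-5, 5), random.uniform(-5, 5))
        v = (random.uniform(-5, 5), random.uniform(-5, 5))
        a = random.uniform(-5, 5)
        if contains(u) and contains(v) and not contains((u[0] + v[0], u[1] + v[1])):
            return "fails: not closed under addition"
        if contains(u) and not contains((a * u[0], a * u[1])):
            return "fails: not closed under scalar multiplication"
    return "no violation found (not a proof!)"

print(spot_check_subspace(lambda p: abs(p[0] - p[1]) < 1e-9))   # line y = x
print(spot_check_subspace(lambda p: p[1] > 0))                  # upper half-plane: no 0
```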
Common reasons a set fails to be a vector space:

- the zero vector is missing;
- the set is not closed under addition;
- the set is not closed under scalar multiplication;
- additive inverses are missing.
Let $S = \{(x, y) \in \mathbb{R}^2 : x^2 + y^2 = 1\}$ (the unit circle). Is this a vector space with standard operations?

Analysis:

$(0, 0) \notin S$, so there is no zero vector. Already fails. But also: $(1, 0) \in S$ and $(0, 1) \in S$, yet $(1, 0) + (0, 1) = (1, 1) \notin S$ since $1^2 + 1^2 = 2 \neq 1$. Not closed under addition either.
Let $H = \{(x, y) \in \mathbb{R}^2 : y > 0\}$ (upper half-plane, excluding the x-axis).

Analysis:

$(0, 0) \notin H$, so there is no zero vector. Also: closure under scalar multiplication would require $(-1)(x, y) = (-x, -y) \in H$, but $-y < 0$. However, even for points in $H$: no additive inverses exist (if $(x, y) \in H$, then $-(x, y) = (-x, -y) \notin H$).
Problem: Verify that $\mathbb{R}^2$ with standard operations is a real vector space.
Solution: We check all 8 axioms.
V1 (Commutativity): $(x_1, y_1) + (x_2, y_2) = (x_1 + x_2, y_1 + y_2) = (x_2, y_2) + (x_1, y_1)$ ✓

V2 (Associativity): Direct computation shows $(u + v) + w = u + (v + w)$ ✓

V3 (Zero): $(0, 0)$ satisfies $(x, y) + (0, 0) = (x, y)$ ✓

V4 (Inverses): $(-x, -y)$ satisfies $(x, y) + (-x, -y) = (0, 0)$ ✓

V5 (Scalar identity): $1 \cdot (x, y) = (x, y)$ ✓

V6 (Scalar associativity): $\alpha(\beta(x, y)) = (\alpha\beta x, \alpha\beta y) = (\alpha\beta)(x, y)$ ✓

V7, V8 (Distributivity): Follow from distributivity in $\mathbb{R}$ ✓
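For intuition (not a proof), the same eight checks can be run mechanically on random samples. A sketch in Python, using integer coordinates so the equality tests are exact:

```python
import random

# Mechanical spot check of all eight axioms for R^2 on random integer samples.
def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def smul(a, v):
    return (a * v[0], a * v[1])

zero = (0, 0)
for _ in range(1000):
    u, v, w = [(random.randint(-9, 9), random.randint(-9, 9)) for _ in range(3)]
    a, b = random.randint(-9, 9), random.randint(-9, 9)
    assert add(u, v) == add(v, u)                               # V1
    assert add(add(u, v), w) == add(u, add(v, w))               # V2
    assert add(u, zero) == u                                    # V3
    assert add(u, smul(-1, u)) == zero                          # V4
    assert smul(1, u) == u                                      # V5
    assert smul(a, smul(b, u)) == smul(a * b, u)                # V6
    assert smul(a, add(u, v)) == add(smul(a, u), smul(a, v))    # V7
    assert smul(a + b, u) == add(smul(a, u), smul(b, u))        # V8
print("all eight axioms hold on every sample")
```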
Problem: Is $W = \{(x, y) \in \mathbb{R}^2 : xy \geq 0\}$ a vector space?

Solution: No! The set consists of points in Q1, Q3, and the axes. Check closure under addition.

Counter-example: take $u = (2, 1)$ and $v = (-1, -2)$. Both are in $W$: $2 \cdot 1 = 2 \geq 0$ ✓ and $(-1)(-2) = 2 \geq 0$ ✓.

Sum: $u + v = (1, -1)$. Is $(1, -1) \in W$? No: $1 \cdot (-1) = -1 < 0$.

Not closed under addition. Not a vector space.
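The counter-example can be checked directly, in the spirit of the spot-checker above (a sketch; `in_W` is our own name):

```python
# Direct check of the counter-example for W = {(x, y) : xy >= 0}.
in_W = lambda p: p[0] * p[1] >= 0

u, v = (2, 1), (-1, -2)
s = (u[0] + v[0], u[1] + v[1])      # (1, -1)
print(in_W(u), in_W(v), in_W(s))    # True True False: closure fails
```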
Problem: Show that $W = \{(x, y, z) \in \mathbb{R}^3 : x + y + z = 0\}$ is a vector space.

Solution: $W$ is a subset of $\mathbb{R}^3$. Check subspace criteria:

- $(0, 0, 0) \in W$ since $0 + 0 + 0 = 0$.
- If $u, v \in W$, the components of $u + v$ sum to $0 + 0 = 0$, so $u + v \in W$.
- If $u \in W$ and $\alpha \in \mathbb{R}$, the components of $\alpha u$ sum to $\alpha \cdot 0 = 0$, so $\alpha u \in W$.

So $W$ is a subspace of $\mathbb{R}^3$, hence a vector space.
Problem: Show that $P_n(F)$ (polynomials of degree ≤ n) is a vector space.

Solution: $P_n(F) \subseteq F[x]$. Check subspace criteria:

- The zero polynomial is in $P_n(F)$.
- The sum of two polynomials of degree ≤ n has degree ≤ n.
- A scalar multiple of a polynomial of degree ≤ n has degree ≤ n.

Dimension: $n + 1$ (basis: $\{1, x, x^2, \ldots, x^n\}$).
Problem: Show that the set of symmetric $n \times n$ matrices is a vector space.

Solution: Let $W = \{A \in M_{n \times n}(F) : A^T = A\}$.

- The zero matrix is symmetric, so $0 \in W$.
- If $A, B \in W$, then $(A + B)^T = A^T + B^T = A + B$, so $A + B \in W$.
- If $A \in W$, then $(\alpha A)^T = \alpha A^T = \alpha A$, so $\alpha A \in W$.

Dimension: $n(n+1)/2$ (diagonal + upper triangle entries).
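Closure and the dimension count are easy to illustrate numerically. A sketch for $n = 3$ (the helper names are ours):

```python
# Symmetric matrices: closure under addition, and the count n(n+1)/2.
def transpose(A):
    return [list(col) for col in zip(*A)]

def is_symmetric(A):
    return A == transpose(A)

A = [[1, 2, 0], [2, 5, -1], [0, -1, 3]]
B = [[0, 1, 1], [1, 0, 2], [1, 2, 4]]
S = [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]
print(is_symmetric(A), is_symmetric(B), is_symmetric(S))   # True True True

n = 3
print(n * (n + 1) // 2)   # 6 free entries: the diagonal plus the upper triangle
```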
Problem: Is $W = \{p \in P_3(\mathbb{R}) : p(0) = 0\}$ a vector space?

Solution: Yes! Check subspace criteria:

- The zero polynomial satisfies $p(0) = 0$.
- If $p(0) = q(0) = 0$, then $(p + q)(0) = 0$.
- If $p(0) = 0$, then $(\alpha p)(0) = \alpha \cdot 0 = 0$.

Dimension: $3$ (basis: $\{x, x^2, x^3\}$).
Problem: Is the set of differentiable functions $f : \mathbb{R} \to \mathbb{R}$ a vector space?

Solution: Yes! With pointwise operations:
$$(f + g)' = f' + g', \qquad (\alpha f)' = \alpha f'.$$
Sums and scalar multiples of differentiable functions are differentiable, and the zero function is differentiable. This is a subspace of all functions $\mathbb{R} \to \mathbb{R}$.
Problem: Show that solutions to the differential equation $y'' + y = 0$ form a vector space.

Solution: Let $S$ be the set of solutions.

- The zero function is a solution.
- If $y_1, y_2 \in S$, then $(y_1 + y_2)'' + (y_1 + y_2) = (y_1'' + y_1) + (y_2'' + y_2) = 0$, so $y_1 + y_2 \in S$.
- If $y \in S$ and $\alpha \in \mathbb{R}$, then $(\alpha y)'' + \alpha y = \alpha(y'' + y) = 0$, so $\alpha y \in S$.

Dimension: 2 (general solution: $y = c_1 \cos x + c_2 \sin x$).
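As a numerical sanity check, any linear combination of cos and sin should make $y'' + y$ vanish. A sketch using a central-difference approximation of the second derivative (the helper `residual` is ours):

```python
import math

# Check numerically that y = c1*cos(x) + c2*sin(x) satisfies y'' + y = 0.
def residual(y, x, h=1e-4):
    y2 = (y(x + h) - 2 * y(x) + y(x - h)) / h**2   # ≈ y''(x)
    return y2 + y(x)

c1, c2 = 3.0, -2.0
y = lambda x: c1 * math.cos(x) + c2 * math.sin(x)
worst = max(abs(residual(y, k / 10)) for k in range(-30, 31))
print(worst)   # on the order of 1e-6 or smaller: numerically zero
```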
A vector in an abstract space is not necessarily a list of numbers. In function spaces, vectors are functions. In matrix spaces, vectors are matrices. The coordinates depend on choosing a basis.
A common error when checking if a subset $W$ is a subspace. If $0 \notin W$, then $W$ is NOT a vector space, period. Always check this first!
The same set can be a vector space over one field but not another. $\mathbb{Q}$ is a vector space over $\mathbb{Q}$, but $\mathbb{Q}$ is NOT a vector space over $\mathbb{R}$ (it is not closed under multiplication by irrational scalars)!
Polynomials of degree exactly $n$ do NOT form a vector space! For instance, $(x^2 + x) + (-x^2) = x$ has degree less than 2. Use "degree at most n" instead.
Vector spaces need scalar multiplication, not just addition. $\mathbb{Z}$ is an abelian group but cannot be made into a vector space over $\mathbb{Q}$: there is no integer that could serve as $\tfrac{1}{2} \cdot 1$, since it would have to satisfy $x + x = 1$.
A subset of a vector space is not automatically a vector space. It must satisfy the subspace criteria (closed under + and scalar mult, contains 0).
While finite-dimensional spaces are isomorphic to Fⁿ, infinite-dimensional spaces (like F[x] or C[a,b]) require different techniques. Not everything is coordinate-based.
V1-V4: (V, +) is an abelian group. V5-V8: Scalar multiplication interacts properly with addition. All 8 are required.
Always specify the field! The same set can have different dimensions over different fields. ℂ over ℂ: dim = 1. ℂ over ℝ: dim = 2.
Fⁿ, polynomials F[x], matrices, functions, continuous functions C[a,b], sequence spaces. All use pointwise operations.
For subsets of known vector spaces: (1) 0 ∈ W, (2) closed under +, (3) closed under scalar mult. These imply all axioms.
0·v = 0, α·0 = 0, (-1)·v = -v, αv = 0 ⟹ α = 0 or v = 0. These follow from the axioms alone.
One theory covers all examples. Theorems about Fⁿ apply to polynomials, functions, matrices. This unification is the power of abstraction.
| Vector Space | Zero Vector | Dimension |
|---|---|---|
| Fⁿ | (0, 0, ..., 0) | n |
| Pₙ(F) (degree ≤ n) | 0 polynomial | n + 1 |
| F[x] (all polynomials) | 0 polynomial | ∞ |
| m×n matrices | Zero matrix | mn |
| Symmetric n×n | Zero matrix | n(n+1)/2 |
| ℂ over ℝ | 0 | 2 |
| C[a,b] | f(x) = 0 | ∞ |
Addition Axioms (V1-V4): commutativity, associativity, zero vector, additive inverses.

Scalar Multiplication Axioms (V5-V8): scalar identity, scalar associativity, and the two distributive laws.
Vector spaces appear throughout mathematics, science, and engineering. The abstract framework lets us apply the same techniques across diverse domains.
State spaces in quantum mechanics are complex vector spaces. Forces, velocities, and displacements naturally form real vector spaces.
3D transformations, color spaces (RGB), and image processing all use vector space structure. Every rotation, scaling, and translation is a linear operation.
Feature vectors, weight matrices, and embedding spaces are all vector spaces. Neural networks perform linear operations followed by nonlinearities.
Audio signals, images, and other data are elements of function spaces. Fourier analysis uses orthogonal projections in inner product spaces.
Solution spaces of linear ODEs are vector spaces. This explains why general solutions are linear combinations of particular solutions.
Vector spaces over finite fields underpin error-correcting codes and many cryptographic protocols.
By studying vector spaces abstractly, we develop tools that apply everywhere: linear independence, basis, dimension, linear maps, eigenvalues. Learn the theory once, apply it to any domain.
With the definition of vector space established, the next chapters develop:

- Subspaces and spanning sets
- Linear independence, basis, and dimension
- Linear transformations
- Eigenvalues and eigenvectors
Each concept builds on the foundation we've established here. The abstract definition of a vector space is the cornerstone of all linear algebra.
Each axiom captures an essential property. Remove any one, and the theory breaks: without additive inverses, we can't subtract; without the distributive law, scalar multiplication and addition are unrelated. The axioms are the minimal set ensuring a coherent algebraic structure.
Conceptually: points are locations, vectors are displacements. Mathematically: there's no difference in ℝⁿ! But in abstract vector spaces like function spaces, there are no 'points'—the vectors are functions. The axioms define what vectors ARE by how they behave.
Yes! The trivial vector space {0} contains only the zero vector. It's a valid vector space (all axioms are satisfied vacuously or trivially). Its dimension is 0.
The same set can be a vector space over different fields with different properties. ℂ over ℂ is 1-dimensional; ℂ over ℝ is 2-dimensional. The field determines what 'scalar' means and affects dimension and structure fundamentally.
For finite-dimensional spaces: essentially yes! Every n-dimensional vector space over F is isomorphic to Fⁿ. But infinite-dimensional spaces (function spaces, sequence spaces) are genuinely different and require more sophisticated tools.
If scalars come from a ring (like ℤ), we get a 'module' instead of a vector space. Modules lack division, so many vector space theorems fail: not every module has a basis, dimension may not be well-defined, etc.
No! Vector addition is only defined within a single vector space. You can't add a polynomial to a matrix—they live in different spaces. However, you can form product spaces or direct sums to combine spaces.
The field structure ensures scalars have inverses (except 0), associativity, and distributivity. These properties are essential for proofs: e.g., showing αv = 0 implies α = 0 or v = 0 requires scalar inverses.
Think of vectors as 'things that can be added and scaled.' The axioms capture exactly what addition and scaling should mean: you can combine vectors (add), stretch or shrink them (scale), and these operations interact nicely (distributivity).
Yes! Real vector spaces model physical quantities. Complex vector spaces are essential in quantum mechanics and signal processing. Finite fields appear in coding theory and cryptography. The field choice depends on the application.