Linear independence captures when vectors are "genuinely different"—none is a combination of the others. This concept is fundamental to defining basis and dimension, and connects abstract vector spaces to computational methods via matrices.
The concept of linear independence emerged in the 19th century as mathematicians sought to understand the structure of solutions to systems of linear equations. Ernst Steinitz (1871–1928) made fundamental contributions, including the exchange lemma that bears his name (published in 1913), alongside his foundational work on algebraic field theory.
The modern formulation of linear independence in abstract vector spaces was developed by Giuseppe Peano (1888) and later refined by Hermann Weyl and others. The connection between linear independence and matrix rank was a crucial bridge between abstract algebra and computational mathematics.
Linear independence is the key concept that distinguishes "genuinely different" vectors from redundant ones. A set is independent if no vector can be expressed as a combination of the others—equivalently, the only way to get zero from a linear combination is to use all zero coefficients.
Vectors $v_1, v_2, \ldots, v_n$ are linearly dependent if there exist scalars $c_1, c_2, \ldots, c_n$, not all zero, such that:
$$c_1 v_1 + c_2 v_2 + \cdots + c_n v_n = 0.$$
The vectors are linearly independent if the only such combination is the trivial one: $c_1 v_1 + \cdots + c_n v_n = 0$ forces $c_1 = c_2 = \cdots = c_n = 0$.
We often say a set is linearly independent (or dependent), meaning the vectors in that set are. This is a harmless abuse of language, since independence does not depend on the order in which the vectors are listed.
Problem: Are and independent in ?
Solution: Solve :
From the first: . Substituting: , so , hence .
Only the trivial solution ⟹ linearly independent.
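The same definition-based check can be done symbolically. A minimal sketch, with illustrative placeholder vectors rather than the ones from the problem above:

```python
from sympy import symbols, solve, Matrix

# Definition-based check (sketch): solve c1*v1 + c2*v2 = 0 for the scalars.
# The vectors below are illustrative placeholders, not the ones from the problem.
c1, c2 = symbols('c1 c2')
v1, v2 = Matrix([1, 2]), Matrix([3, 4])

combination = c1 * v1 + c2 * v2          # a 2x1 matrix; every entry must vanish
solution = solve(list(combination), [c1, c2], dict=True)
print(solution)  # [{c1: 0, c2: 0}] -- only the trivial solution, so independent
```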
Problem: Are , , independent?
Solution: Notice that .
Check: ✓
So .
Non-trivial combination equals zero ⟹ linearly dependent.
Vectors $v_1, \ldots, v_n$ (with $n \geq 2$) are linearly dependent if and only if one of them is a linear combination of the others.
(⇒) Suppose $c_1 v_1 + \cdots + c_n v_n = 0$ with some $c_k \neq 0$.
Then $v_k = -\frac{1}{c_k} \sum_{i \neq k} c_i v_i$, a linear combination of the others.
(⇐) If $v_k = \sum_{i \neq k} a_i v_i$, then:
$$a_1 v_1 + \cdots + a_{k-1} v_{k-1} + (-1)\, v_k + a_{k+1} v_{k+1} + \cdots + a_n v_n = 0.$$
This is a non-trivial combination (the coefficient of $v_k$ is $-1 \neq 0$).
Any set containing the zero vector is linearly dependent.
If $0 \in \{v_1, \ldots, v_n\}$, say $v_1 = 0$, then:
$$1 \cdot v_1 + 0 \cdot v_2 + \cdots + 0 \cdot v_n = 0.$$
The coefficient of $v_1$ is $1 \neq 0$, so this is a non-trivial combination.
A single non-zero vector is linearly independent.
If $c v = 0$ and $v \neq 0$, then $c = 0$. Only the trivial combination gives zero.
Any subset of a linearly independent set is linearly independent.
Let $S$ be independent and $T \subseteq S$.
Suppose a linear combination of the vectors in $T$ equals zero.
This is also a linear combination of vectors in $S$ (with zero coefficients for the vectors in $S \setminus T$).
By independence of $S$, all coefficients are zero. So $T$ is independent.
If a set contains a linearly dependent subset, the whole set is linearly dependent.
In $\mathbb{R}^2$ or $\mathbb{R}^3$, two vectors are linearly dependent if and only if they are parallel (one is a scalar multiple of the other).
In $\mathbb{R}^3$, three vectors are linearly dependent if and only if they are coplanar (lie in a common plane through the origin).
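These geometric tests translate directly into small computations: the 2×2 determinant detects parallel vectors, and the scalar triple product detects coplanar ones. A hedged numpy sketch with illustrative example vectors:

```python
import numpy as np

def parallel_2d(u, v):
    # Two vectors in R^2 are dependent iff the 2x2 determinant det([u v]) is zero.
    return np.isclose(np.linalg.det(np.column_stack([u, v])), 0.0)

def coplanar_3d(u, v, w):
    # Three vectors in R^3 are dependent iff the scalar triple product u . (v x w) is zero.
    return np.isclose(np.dot(u, np.cross(v, w)), 0.0)

print(parallel_2d(np.array([1.0, 2.0]), np.array([2.0, 4.0])))        # True: (2,4) = 2*(1,2)
print(coplanar_3d(np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]),
                  np.array([1.0, 1.0, 0.0])))                          # True: all lie in the xy-plane
```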
Spanning asks: can we reach every vector? Independence asks: is there redundancy?
Consider the differential equation .
Solutions: and .
Are they independent?
Suppose for all .
Solving: . Independent!
General solution: .
Consider in .
Over ℂ: is independent (single non-zero vector).
Question: Are and independent over ?
Solve: , i.e., .
Over : let . Then .
Dependent over ℂ! (Note: they would be independent over ℝ.)
Vectors $v_1, \ldots, v_n$ are linearly independent if and only if every vector in $\operatorname{span}(v_1, \ldots, v_n)$ has a unique representation as a linear combination.
(⇒) Suppose the vectors are independent. If $a_1 v_1 + \cdots + a_n v_n = b_1 v_1 + \cdots + b_n v_n$, then:
$$(a_1 - b_1) v_1 + \cdots + (a_n - b_n) v_n = 0.$$
By independence, $a_i - b_i = 0$, so $a_i = b_i$ for all $i$. Unique!
(⇐) If representations are unique, consider $c_1 v_1 + \cdots + c_n v_n = 0$.
The zero vector also has the representation $0 \cdot v_1 + \cdots + 0 \cdot v_n$, so by uniqueness $c_i = 0$ for all $i$.
The Steinitz Exchange Lemma is one of the most important technical tools in linear algebra. It allows us to "swap out" vectors while preserving linear independence and span—the key to proving that dimension is well-defined.
Let $v_1, \ldots, v_n$ be linearly independent vectors that span a subspace $W$. If $w \in W$ and $w \neq 0$, then there exists an index $j$ such that replacing $v_j$ with $w$ gives a set that is still linearly independent and still spans $W$.
Since $w \in W = \operatorname{span}(v_1, \ldots, v_n)$, we can write:
$$w = c_1 v_1 + c_2 v_2 + \cdots + c_n v_n.$$
Since $w \neq 0$, at least one $c_j \neq 0$. Say $c_1 \neq 0$.
Then we can solve for $v_1$:
$$v_1 = \frac{1}{c_1}\left(w - c_2 v_2 - \cdots - c_n v_n\right).$$
Span is preserved: Any combination involving $v_1$ can be rewritten using $w$ and the other $v_i$'s.
Independence is preserved: If a combination of $w, v_2, \ldots, v_n$ equals zero, substituting the expression for $w$ gives a combination of the original independent vectors, so all coefficients must be zero.
The exchange lemma allows us to systematically convert any spanning set into a basis by repeatedly exchanging redundant vectors with independent ones, without changing the span.
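One exchange step can also be simulated numerically. A minimal sketch, assuming the columns of `V` are independent and span the subspace and that `w` is a non-zero vector in that span (all names and the example are illustrative):

```python
import numpy as np

def exchange(V, w, tol=1e-12):
    """One Steinitz exchange step (sketch).

    V: matrix whose columns are independent vectors spanning a subspace W.
    w: a non-zero vector in W.
    Returns a copy of V with one column replaced by w; the new columns
    still span W and are still independent.
    """
    # Write w as a combination of the columns: V @ c = w.
    c, *_ = np.linalg.lstsq(V, w, rcond=None)
    # Since w != 0, some coefficient is non-zero; exchange that column.
    j = int(np.argmax(np.abs(c)))
    assert abs(c[j]) > tol, "w must be a non-zero vector in the span of V's columns"
    V_new = V.copy()
    V_new[:, j] = w
    return V_new

# Illustrative use: the standard basis of R^3 and w = (1, 2, 0).
V = np.eye(3)
w = np.array([1.0, 2.0, 0.0])
V_new = exchange(V, w)
print(np.linalg.matrix_rank(V_new))  # 3 -- span and independence are preserved
```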
If $V$ is spanned by $n$ vectors, then every linearly independent set in $V$ has at most $n$ vectors.
Let $w_1, \ldots, w_n$ span $V$, and let $u_1, \ldots, u_m$ be independent.
We show $m \leq n$ by repeated application of exchange:
Step 1: $u_1 \in V = \operatorname{span}(w_1, \ldots, w_n)$. By the exchange lemma, replace some $w_j$ with $u_1$.
Step 2: $u_2 \in \operatorname{span}(u_1, \text{remaining } w\text{'s})$. Exchange some $w_j$ with $u_2$.
(Note: we must exchange a $w$, not $u_1$, because $\{u_1, u_2\}$ is independent.)
Continue until all $u_i$ are included. If $m > n$, we would run out of $w$'s to exchange, a contradiction.
In $\mathbb{R}^n$ (or any $n$-dimensional space), any set of more than $n$ vectors is linearly dependent.
$\mathbb{R}^3$ is spanned by the standard basis vectors $e_1, e_2, e_3$ (3 vectors).
Therefore, any 4 vectors in $\mathbb{R}^3$ must be linearly dependent.
Example: is dependent because .
If $\{v_1, \ldots, v_n\}$ is independent and $w \notin \operatorname{span}(v_1, \ldots, v_n)$, then $\{v_1, \ldots, v_n, w\}$ is independent.
Suppose $c_1 v_1 + \cdots + c_n v_n + c\, w = 0$.
If $c \neq 0$, then $w = -\frac{1}{c}(c_1 v_1 + \cdots + c_n v_n) \in \operatorname{span}(v_1, \ldots, v_n)$. Contradiction!
So $c = 0$, and then $c_1 v_1 + \cdots + c_n v_n = 0$.
By independence, $c_i = 0$ for all $i$. So the only solution is trivial.
Equivalently: if $\{v_1, \ldots, v_n\}$ is independent, then $\{v_1, \ldots, v_n, w\}$ is dependent if and only if $w \in \operatorname{span}(v_1, \ldots, v_n)$.
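Numerically, this criterion amounts to a rank comparison: appending $w$ keeps the set independent exactly when the rank goes up by one. A minimal sketch with illustrative vectors:

```python
import numpy as np

def stays_independent(vectors, w):
    # Appending w to an independent list keeps it independent exactly when
    # w is NOT in the span, i.e. when the column rank increases by 1.
    A = np.column_stack(vectors)
    B = np.column_stack(vectors + [w])
    return np.linalg.matrix_rank(B) == np.linalg.matrix_rank(A) + 1

v1, v2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
print(stays_independent([v1, v2], np.array([0.0, 0.0, 1.0])))   # True: not in span{v1, v2}
print(stays_independent([v1, v2], np.array([2.0, 3.0, 0.0])))   # False: lies in span{v1, v2}
```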
Problem: Is independent?
Solution using extension:
Any linearly independent set in a finite-dimensional vector space can be extended to a basis.
Let $S = \{v_1, \ldots, v_k\}$ be independent. If $S$ spans $V$, it's already a basis.
Otherwise, there exists $w \in V$ with $w \notin \operatorname{span}(S)$. Add $w$ to $S$.
By the extension theorem, the enlarged set is still independent.
Repeat. By the fundamental inequality, we add at most finitely many vectors.
Process terminates with a basis.
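The proof is constructive, and in $\mathbb{R}^n$ it can be imitated directly: keep appending standard basis vectors that are not yet in the span. A hedged sketch (the function name and example are illustrative):

```python
import numpy as np

def extend_to_basis(vectors, n):
    """Extend an independent list of vectors in R^n to a basis of R^n (sketch).

    Tries the standard basis vectors e_1, ..., e_n in turn and keeps those
    not already in the span, mirroring the extension argument above.
    """
    basis = [np.asarray(v, dtype=float) for v in vectors]
    for i in range(n):
        e = np.zeros(n)
        e[i] = 1.0
        candidate = basis + [e]
        # Keep e_i only if it increases the rank, i.e. e_i lies outside the current span.
        if np.linalg.matrix_rank(np.column_stack(candidate)) > len(basis):
            basis.append(e)
        if len(basis) == n:
            break
    return basis

# Illustrative use: extend {(1, 1, 0)} to a basis of R^3.
for b in extend_to_basis([[1, 1, 0]], 3):
    print(b)
```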
Problem: Extend to a basis of .
Solution: Need 2 more independent vectors.
Try : Is it in span? Solve .
No solution for components 1 and 2. So add .
Try : Not in span (fourth component). Add .
Basis:
In practice, we test independence by solving a homogeneous system. Form a matrix $A$ with the vectors as columns—the vectors are independent if and only if the only solution to $A\mathbf{c} = \mathbf{0}$ is $\mathbf{c} = \mathbf{0}$.
Vectors $v_1, \ldots, v_m \in \mathbb{R}^n$ are linearly independent if and only if the matrix $A = [\,v_1 \ v_2 \ \cdots \ v_m\,]$ has full column rank (rank $= m$).
A linear combination equals the matrix-vector product:
$$c_1 v_1 + c_2 v_2 + \cdots + c_m v_m = A\mathbf{c}, \qquad \mathbf{c} = (c_1, \ldots, c_m).$$
This equals zero iff $\mathbf{c}$ is in $\ker(A)$.
Independent ⟺ only $\mathbf{c} = \mathbf{0}$ works ⟺ $\ker(A) = \{\mathbf{0}\}$ ⟺ full column rank.
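In code, the whole test reduces to comparing the rank with the number of columns. A minimal numpy sketch (the example vectors are illustrative, not those from the problems below):

```python
import numpy as np

def are_independent(vectors):
    # Put the vectors as columns; independent iff rank equals the number of columns.
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == A.shape[1]

print(are_independent([np.array([1.0, 0.0, 2.0]),
                       np.array([0.0, 1.0, 1.0]),
                       np.array([1.0, 1.0, 3.0])]))  # False: the third vector is the sum of the first two
```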
Problem: Are , , independent?
Solution: Form matrix and row reduce:
Only 2 pivots for 3 columns ⟹ not full column rank ⟹ dependent.
From RREF: .
Problem: Are , , independent?
Solution:
Already in echelon form with 3 pivots ⟹ full column rank ⟹ independent.
For $n$ vectors in $\mathbb{R}^n$, the matrix $A$ is square. Then: independent ⟺ $\det(A) \neq 0$ ⟺ $A$ is invertible.
Problem: Are and independent in ?
Solution:
Non-zero determinant ⟹ independent.
Problem: Are independent in ?
Solution: Suppose (the zero polynomial).
Evaluating at different points (or comparing coefficients):
Only trivial solution ⟹ independent.
Problem: Are independent in ?
Solution: Suppose for all .
At : .
At : .
From first: . Substituting: .
Since and , we get , hence .
Independent.
For $n$ vectors in $\mathbb{R}^n$ (same number of vectors as dimension), form the square matrix $A$ with the vectors as columns. Then: the vectors are independent ⟺ $\det(A) \neq 0$, and dependent ⟺ $\det(A) = 0$.
This is computationally efficient for small $n$.
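A minimal determinant-based check in numpy (the example vectors are illustrative):

```python
import numpy as np

def independent_by_det(vectors):
    # For n vectors in R^n: non-zero determinant of the square matrix <=> independent.
    A = np.column_stack(vectors)
    return not np.isclose(np.linalg.det(A), 0.0)

print(independent_by_det([np.array([1.0, 0.0, 0.0]),
                          np.array([1.0, 1.0, 0.0]),
                          np.array([1.0, 1.0, 1.0])]))  # True: det = 1
```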
Problem: Are independent?
Solution:
Independent (determinant is non-zero).
Problem: Are independent?
Solution:
Dependent (determinant is zero). Indeed, column 2 = 2 × column 1.
In an $n$-dimensional vector space: every linearly independent set has at most $n$ vectors, and any set with more than $n$ vectors is dependent.
Testing independence of $m$ vectors in $\mathbb{R}^n$: form the $n \times m$ matrix with the vectors as columns and check whether its rank equals $m$.
Problem: In the subspace of , find a maximal independent set.
Solution:
The subspace is a plane through the origin. Dimension = 2.
Parametrize: .
The vectors are independent and span .
Maximal independent set has 2 vectors.
Problem: Extend to a maximal independent set in .
Solution:
Need 4 independent vectors (dimension of is 4).
Add : Independent? Check .
Gives , so . Yes, independent.
Add : Clearly independent (third component).
Add : Clearly independent (fourth component).
Maximal: .
Problem: Are the rows of independent?
Solution: Row reduce:
Only 2 non-zero rows ⟹ rank = 2 ⟹ dependent (3 rows but rank 2).
Note: Row 2 = 2 × Row 1.
Problem: Are and independent over ?
Solution: Solve :
and .
From first: . Substituting: .
So , hence . Independent.
Problem: If $v_1, v_2, v_3$ satisfy $v_1 - 2v_2 + v_3 = 0$, are they independent?
Solution: No! The equation is a non-trivial linear combination equaling zero (coefficients 1, -2, 1 are not all zero).
Linearly dependent.
Problem: Are independent in ?
Solution: Use the Wronskian:
Non-zero Wronskian ⟹ independent.
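The Wronskian can be computed symbolically. A sketch using sympy with two illustrative functions (not necessarily the ones from the problem above):

```python
import sympy as sp

# Wronskian test (sketch): build the matrix of the functions and their derivatives.
x = sp.symbols('x')
f, g = sp.exp(x), sp.exp(2 * x)

W = sp.Matrix([[f, g],
               [sp.diff(f, x), sp.diff(g, x)]]).det()
print(sp.simplify(W))  # exp(3*x): never zero, so f and g are linearly independent
```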
Problem: Are , , , independent?
Solution: Suppose .
Then .
Comparing entries: . Independent.
These form the standard basis for .
Problem: Find the dependence relation for .
Solution: Set up :
Row reduce the augmented matrix:
Free variable . Set : then , .
Dependence relation: .
Equivalently: .
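In general, a dependence relation can be read off from a non-zero null-space vector of the matrix whose columns are the given vectors. A sketch with illustrative vectors (not the ones from this problem):

```python
from sympy import Matrix

# A non-zero null-space vector of [v1 v2 v3] gives coefficients with
# c1*v1 + c2*v2 + c3*v3 = 0. Illustrative vectors below.
v1, v2, v3 = Matrix([1, 0, 1]), Matrix([0, 1, 1]), Matrix([1, 1, 2])

A = Matrix.hstack(v1, v2, v3)
for c in A.nullspace():
    print(c.T)  # [-1, -1, 1]: so -v1 - v2 + v3 = 0, i.e. v3 = v1 + v2
```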
Problem: If are independent and , can we find a pair among that is independent?
Solution: Yes! Any pair works:
Problem: Find a maximal independent subset of .
Solution: Form matrix and row reduce:
Pivots in columns 1 and 2. Rank = 2.
Maximal independent subset: .
Note: and .
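The pivot-column recipe is easy to automate: row reduce and keep the original vectors that sit in pivot columns. A sketch with illustrative vectors (not the ones from this problem):

```python
from sympy import Matrix

# The pivot columns of the row-reduced matrix tell you which ORIGINAL vectors to keep.
vectors = [Matrix([1, 2, 3]), Matrix([2, 4, 6]), Matrix([0, 1, 1])]

A = Matrix.hstack(*vectors)
_, pivot_columns = A.rref()
maximal_subset = [vectors[j] for j in pivot_columns]
print(pivot_columns)  # (0, 2): the first and third vectors form a maximal independent subset
```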
Dependent doesn't mean the vectors are equal! It means one can be written as a combination of the others. For instance, a non-zero vector and twice that vector are dependent but certainly not equal.
Any set containing is automatically dependent. Don't waste time checking other conditions if you see a zero vector!
In $\mathbb{R}^n$, any set with more than $n$ vectors is dependent. Don't try to prove independence of 4 vectors in $\mathbb{R}^3$!
For the matrix test, put vectors as columns, not rows. Then check if column rank = number of vectors.
Independence only asks about relations among the given vectors. A set can be independent without spanning the whole space (and vice versa—spanning sets can be dependent).
Independence depends on the scalar field: the same vectors can be independent over $\mathbb{R}$ yet dependent over $\mathbb{C}$ (as in the example above). Always be clear about which field the scalars come from.
When solving $c_1 v_1 + \cdots + c_n v_n = 0$, you must verify the solution works for ALL components/equations, not just some. A partial solution doesn't count!
The coefficients $c_i$ must be scalars (from the field). Don't confuse these with the vectors themselves. Also, "not all zero" means at least one $c_i \neq 0$.
Independent = only trivial combination gives zero. Dependent = there exists a non-trivial combination giving zero.
Subsets of independent sets are independent. Sets with 0 are dependent. Independent sets have size ≤ dim(V).
Can swap vectors while preserving span and independence. Implies: |independent| ≤ |spanning|.
Vectors as columns. Independent ⟺ full column rank ⟺ ker = {0} ⟺ (if square) det ≠ 0.
Adding w to independent set: stays independent iff w ∉ span. Otherwise becomes dependent.
A basis = independent + spanning = maximal independent = minimal spanning. Next chapter!
| Question | Answer |
|---|---|
| Contains 0? | → Dependent |
| More than dim(V) vectors? | → Dependent |
| One vector is multiple of another? | → Dependent |
| Matrix with full column rank? | → Independent |
| Square matrix with det ≠ 0? | → Independent |
The columns of $A$ being independent means $A\mathbf{x} = \mathbf{b}$ has at most one solution. If $A$ is also square, exactly one solution exists for every $\mathbf{b}$.
Independent features in machine learning mean no redundant information. PCA finds orthogonal (hence independent) directions of maximum variance.
Independent solutions to a linear ODE form a basis for the solution space. The Wronskian tests independence of functions.
Controllability requires certain vectors to be independent. A system is controllable iff the controllability matrix has full rank.
Linear independence ensures that coordinates are well-defined:
If $\{v_1, \ldots, v_n\}$ is a basis for $V$ (independent and spanning), then every $v \in V$ has unique coordinates.
Why unique? If $v = a_1 v_1 + \cdots + a_n v_n = b_1 v_1 + \cdots + b_n v_n$, then:
$$(a_1 - b_1) v_1 + \cdots + (a_n - b_n) v_n = 0.$$
By independence, $a_i = b_i$ for all $i$. So the coordinate vector is unique.
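Concretely, finding the coordinates means solving one linear system, and independence of the basis guarantees the solution is unique. A minimal sketch with an illustrative basis of $\mathbb{R}^3$:

```python
import numpy as np

# Unique coordinates with respect to a basis (sketch): solving B @ c = v gives
# the one and only coefficient vector c. The basis below is illustrative.
B = np.column_stack([[1.0, 0.0, 0.0], [1.0, 1.0, 0.0], [1.0, 1.0, 1.0]])  # basis vectors as columns
v = np.array([3.0, 2.0, 1.0])

c = np.linalg.solve(B, v)   # unique because the basis vectors are independent
print(c)                    # coordinates of v in this basis
```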
In Fourier analysis, the trigonometric system $\{1, \cos x, \sin x, \cos 2x, \sin 2x, \ldots\}$ forms an orthogonal (hence independent) set.
Any periodic signal can be uniquely decomposed into these frequency components.
Independence ensures: each frequency contributes independently, no mixing.
In cryptographic systems, linear independence of key components ensures that no component can be recovered as a combination of the others.
Linear codes require the rows of the generator matrix (the generating codewords) to be independent for encoding and error correction.
In 3D graphics, three independent vectors form a coordinate system (a frame) that can describe the position and orientation of any object in space.
For an $m \times n$ matrix $A$:
| Property | Equivalent Statement |
|---|---|
| Columns independent | ker(A) = {0}, rank = n |
| Rows independent | ker(Aᵀ) = {0}, rank = m |
| Square, columns indep. | A invertible, det(A) ≠ 0 |
| Ax = b has at most one solution | Columns independent (rank = n) |
| Space | Max Independent | Standard Basis |
|---|---|---|
| ℝⁿ | n | e₁, e₂, ..., eₙ |
| Pₙ(ℝ) | n+1 | 1, x, x², ..., xⁿ |
| Mₘₓₙ(ℝ) | mn | Eᵢⱼ (standard matrix units) |
| ℝ[x] | ∞ | 1, x, x², x³, ... |
Now that we understand both spanning and independence, we can define a basis: a set that is both spanning and linearly independent. The remarkable fact is that all bases of a vector space have the same size—this common size is the dimension.
To Test Independence: form the matrix with the vectors as columns, row reduce, and check whether every column has a pivot (full column rank).
To Find Dependence Relation: solve $A\mathbf{c} = \mathbf{0}$, set a free variable to a convenient non-zero value, and read off the coefficients.
To Extract Maximal Independent Subset: row reduce and keep the original vectors corresponding to the pivot columns.
Always Independent: a single non-zero vector; any subset of an independent set; the standard basis vectors.
Always Dependent: any set containing $\mathbf{0}$; any set with more than $\dim(V)$ vectors; any set in which one vector is a scalar multiple of another.
The interplay between spanning (can we reach everything?) and independence (is there redundancy?) is the heart of dimension theory:
A basis achieves both bounds simultaneously—it's the sweet spot.
Vectors are independent if none of them is 'redundant'—none can be expressed as a combination of the others. Each adds a genuinely new direction. Dependent vectors have overlap in the directions they describe.
Form a matrix with the vectors as columns and row reduce. The vectors are independent iff every column has a pivot (no free variables), equivalently, iff the matrix has full column rank.
It's the key to proving that all bases have the same size (dimension is well-defined). It shows you can systematically replace spanning vectors with independent ones without losing span.
No. If 0 is in the set, then 1·0 = 0 is a non-trivial combination equaling zero, making the set dependent.
The columns of a square matrix are independent iff the matrix is invertible. This connects the abstract concept to concrete computability.
If the new vector is NOT in the span of the existing vectors, the enlarged set is still independent. If it IS in the span, the enlarged set becomes dependent.
Not in general. A linear map can map independent vectors to dependent ones (the kernel could be non-trivial). However, injective linear maps do preserve independence.
At most n. This is the fundamental bound: in an n-dimensional space, any independent set has at most n vectors, and any set with more than n vectors must be dependent.
Spanning means every vector can be written as a combination; independent means no vector is redundant. A basis achieves both: minimal spanning = maximal independent.
Yes! Dependent sets can still span useful subspaces. For example, {(1,0), (2,0), (0,1)} is dependent but spans ℝ². Dependence just means there's redundancy.