LA-2.3

Linear Independence

Linear independence captures when vectors are "genuinely different"—none is a combination of the others. This concept is fundamental to defining basis and dimension, and connects abstract vector spaces to computational methods via matrices.

45 minutes
Undergraduate
Learning Objectives
  • Define linear dependence and independence precisely using the trivial combination criterion
  • Test sets of vectors for linear independence using row reduction
  • Prove that a set containing zero is always linearly dependent
  • Understand that subsets of independent sets are independent
  • Apply the Steinitz exchange lemma to replace vectors while preserving span
  • Prove the fundamental bound: independent sets have size ≤ spanning sets
  • Extend linearly independent sets to larger independent sets
  • Recognize when adding a vector preserves independence
  • Connect linear independence to matrix rank and invertibility
  • Understand the role of independence in defining dimension
Historical Context

The concept of linear independence emerged in the 19th century as mathematicians sought to understand the structure of solutions to systems of linear equations. Ernst Steinitz (1871–1928) made fundamental contributions, including the exchange lemma that bears his name, published in 1913.

The modern formulation of linear independence in abstract vector spaces was developed by Giuseppe Peano (1888) and later refined by Hermann Weyl and others. The connection between linear independence and matrix rank was a crucial bridge between abstract algebra and computational mathematics.

1. Definition and Basic Properties

Linear independence is the key concept that distinguishes "genuinely different" vectors from redundant ones. A set is independent if no vector can be expressed as a combination of the others—equivalently, the only way to get zero from a linear combination is to use all zero coefficients.

Definition 2.4: Linear Dependence and Independence

Vectors $v_1, \ldots, v_n \in V$ are linearly dependent if there exist scalars $\alpha_1, \ldots, \alpha_n \in F$, not all zero, such that:

$$\alpha_1 v_1 + \alpha_2 v_2 + \cdots + \alpha_n v_n = 0$$

The vectors are linearly independent if the only such combination is the trivial one:

$$\alpha_1 v_1 + \cdots + \alpha_n v_n = 0 \implies \alpha_1 = \alpha_2 = \cdots = \alpha_n = 0$$
Remark 2.5: Terminology

We often say a set is linearly independent (or dependent), meaning the vectors in that set are. This is a slight abuse of language, and a harmless one, since order doesn't matter for independence.

Example 2.4: Testing Independence in ℝ²

Problem: Are $(1, 2)$ and $(3, 4)$ independent in $\mathbb{R}^2$?

Solution: Solve $a(1,2) + b(3,4) = (0,0)$:

$$a + 3b = 0, \quad 2a + 4b = 0$$

From the first: $a = -3b$. Substituting into the second: $-6b + 4b = -2b = 0$, so $b = 0$, hence $a = 0$.

Only the trivial solution ⟹ linearly independent.
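
To double-check such a computation by machine, here is a minimal sketch using sympy (assuming it is installed; the variable names are ours):

```python
from sympy import symbols, solve

a, b = symbols('a b')
# The homogeneous system from a*(1,2) + b*(3,4) = (0,0):
#   a + 3b = 0   (first components)
#   2a + 4b = 0  (second components)
solution = solve([a + 3*b, 2*a + 4*b], [a, b], dict=True)
print(solution)  # [{a: 0, b: 0}] -- only the trivial solution, so independent
```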

Example 2.5: A Dependent Set in ℝ³

Problem: Are $(1, 2, 3)$, $(4, 5, 6)$, $(7, 8, 9)$ independent?

Solution: Notice that $(7, 8, 9) = 2(4, 5, 6) - (1, 2, 3)$.

Check: $2(4, 5, 6) - (1, 2, 3) = (8-1, 10-2, 12-3) = (7, 8, 9)$

So $(-1)(1,2,3) + 2(4,5,6) + (-1)(7,8,9) = (0,0,0)$.

A non-trivial combination equals zero ⟹ linearly dependent.

Theorem 2.6: Characterization of Dependence

Vectors $v_1, \ldots, v_n$ (with $n \geq 2$) are linearly dependent if and only if one of them is a linear combination of the others.

Proof of Theorem 2.6:

(⇒) Suppose $\sum_{i=1}^n \alpha_i v_i = 0$ with some $\alpha_j \neq 0$.

Then $v_j = -\frac{1}{\alpha_j}\sum_{i \neq j} \alpha_i v_i$, a linear combination of the others.

(⇐) If $v_j = \sum_{i \neq j} \beta_i v_i$, then:

$$\sum_{i \neq j} \beta_i v_i + (-1) v_j = 0$$

This is a non-trivial combination (the coefficient of $v_j$ is $-1 \neq 0$).

Theorem 2.7: Zero Vector and Dependence

Any set containing the zero vector is linearly dependent.

Proof of Theorem 2.7:

If $0 \in \{v_1, \ldots, v_n\}$, say $v_1 = 0$, then:

$$1 \cdot 0 + 0 \cdot v_2 + \cdots + 0 \cdot v_n = 0$$

The coefficient of $v_1 = 0$ is $1 \neq 0$, so this is a non-trivial combination.

Corollary 2.4: Single Non-Zero Vector

A set $\{v\}$ consisting of a single non-zero vector is linearly independent.

Proof of Corollary 2.4:

If $\alpha v = 0$ with $\alpha \neq 0$, then $v = \alpha^{-1}(\alpha v) = \alpha^{-1} \cdot 0 = 0$, contradicting $v \neq 0$. So $\alpha = 0$: only the trivial combination gives zero.

Theorem 2.8: Subsets of Independent Sets

Any subset of a linearly independent set is linearly independent.

Proof of Theorem 2.8:

Let $S = \{v_1, \ldots, v_n\}$ be independent and $T \subseteq S$.

Suppose $\sum_{v_i \in T} \alpha_i v_i = 0$.

This is also a linear combination of the vectors in $S$ (with zero coefficients for $v_i \notin T$).

By independence of $S$, all coefficients are zero. So $T$ is independent.

Corollary 2.5: Contrapositive

If a set contains a linearly dependent subset, the whole set is linearly dependent.

Example 2.6: Two Vectors: Geometric Interpretation

In $\mathbb{R}^2$ or $\mathbb{R}^3$, two vectors are linearly dependent if and only if they are parallel (one is a scalar multiple of the other).

  • $\{(1, 2), (2, 4)\}$ is dependent: $(2, 4) = 2(1, 2)$
  • $\{(1, 0), (0, 1)\}$ is independent: neither is a multiple of the other
Example 2.7: Three Vectors in ℝ³

In $\mathbb{R}^3$, three vectors are linearly dependent if and only if they are coplanar (lie in a common plane through the origin).

  • $\{(1,0,0), (0,1,0), (1,1,0)\}$ is dependent: all lie in the $xy$-plane
  • $\{(1,0,0), (0,1,0), (0,0,1)\}$ is independent: not coplanar
Remark 2.6: Independence vs. Spanning

Spanning asks: can we reach every vector? Independence asks: is there redundancy?

  • A spanning set might have redundant vectors (dependent)
  • An independent set might not span everything
  • A basis achieves both: spans and is independent
Example 2.8: Independence in Solution Spaces

Consider the differential equation $y'' - y = 0$.

Two solutions: $e^x$ and $e^{-x}$.

Are they independent?

Suppose $ae^x + be^{-x} = 0$ for all $x$.

  • At $x = 0$: $a + b = 0$
  • At $x = 1$: $ae + be^{-1} = 0$

From the first equation $b = -a$, so the second gives $a(e - e^{-1}) = 0$, forcing $a = b = 0$. Independent!

General solution: $y = c_1 e^x + c_2 e^{-x}$.

Example 2.9: Vectors over Different Fields

Consider $(1, i)$ in $\mathbb{C}^2$.

Over $\mathbb{C}$: $\{(1, i)\}$ is independent (a single non-zero vector).

Question: Are $(1, 0)$ and $(i, 0)$ independent over $\mathbb{C}$?

Solve: $a(1, 0) + b(i, 0) = (0, 0)$, i.e., $a + bi = 0$.

Over $\mathbb{C}$: let $a = i, b = -1$. Then $i + (-1)i = 0$.

Dependent over $\mathbb{C}$! (Note: they would be independent over $\mathbb{R}$.)

Theorem 2.9: Characterization via Unique Representation

Vectors $v_1, \ldots, v_n$ are linearly independent if and only if every vector in $\text{span}\{v_1, \ldots, v_n\}$ has a unique representation as a linear combination.

Proof of Theorem 2.9:

(⇒) Suppose the vectors are independent. If $v = \sum \alpha_i v_i = \sum \beta_i v_i$, then:

$$\sum (\alpha_i - \beta_i) v_i = 0$$

By independence, $\alpha_i - \beta_i = 0$, so $\alpha_i = \beta_i$. Unique!

(⇐) If representations are unique, consider $0 = \sum 0 \cdot v_i$.

Any other representation $0 = \sum \alpha_i v_i$ must have $\alpha_i = 0$ by uniqueness.

2. The Steinitz Exchange Lemma

The Steinitz Exchange Lemma is one of the most important technical tools in linear algebra. It allows us to "swap out" vectors while preserving linear independence and span—the key to proving that dimension is well-defined.

Lemma 2.1: Steinitz Exchange Lemma

Let $\{v_1, \ldots, v_m\}$ be linearly independent vectors that span a subspace $U$. If $w \in U$ and $w \neq 0$, then there exists an index $i$ such that replacing $v_i$ with $w$ gives a set $\{v_1, \ldots, v_{i-1}, w, v_{i+1}, \ldots, v_m\}$ that is still linearly independent and still spans $U$.

Proof of Lemma 2.1:

Since $w \in U = \text{span}\{v_1, \ldots, v_m\}$, we can write:

$$w = \alpha_1 v_1 + \cdots + \alpha_m v_m$$

Since $w \neq 0$, at least one coefficient is non-zero. Say $\alpha_k \neq 0$.

Then we can solve for $v_k$:

$$v_k = \frac{1}{\alpha_k}\left(w - \sum_{i \neq k} \alpha_i v_i\right)$$

Span is preserved: any combination involving $v_k$ can be rewritten using $w$ and the other $v_i$'s.

Independence is preserved: if $\beta_k w + \sum_{i \neq k} \beta_i v_i = 0$, substituting $w = \sum \alpha_i v_i$ gives $\beta_k \alpha_k v_k + \sum_{i \neq k} (\beta_k \alpha_i + \beta_i) v_i = 0$. By independence of the original vectors, $\beta_k \alpha_k = 0$; since $\alpha_k \neq 0$, $\beta_k = 0$, and then each $\beta_i = 0$.
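
The proof is constructive, and a small numerical sketch makes that concrete. This is our own illustration (assuming numpy); since the columns of `V` are independent, the least-squares solve recovers the exact coefficients $\alpha_i$:

```python
import numpy as np

def exchange(V, w):
    """Steinitz exchange, as in the proof: the columns of V are independent
    and span U, and w is a non-zero vector of U. Replace a column whose
    coefficient in w = V @ alpha is non-zero."""
    alpha, *_ = np.linalg.lstsq(V, w, rcond=None)  # coefficients of w in the v_i
    k = int(np.argmax(np.abs(alpha)))              # an index with alpha_k != 0
    W = V.copy()
    W[:, k] = w
    return W, k

V = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])         # e1, e2 span the xy-plane inside R^3
w = np.array([1.0, 2.0, 0.0])      # non-zero vector in that plane
W, k = exchange(V, w)
print(k, np.linalg.matrix_rank(W)) # column index 1 was replaced; rank is still 2
```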

Remark 2.7: Why This Matters

The exchange lemma allows us to systematically convert any spanning set into a basis by repeatedly exchanging redundant vectors with independent ones, without changing the span.

Theorem 2.10: Fundamental Inequality

If $V$ is spanned by $n$ vectors, then every linearly independent set in $V$ has at most $n$ vectors.

Proof of Theorem 2.10:

Let $\{g_1, \ldots, g_n\}$ span $V$, and let $\{w_1, \ldots, w_m\}$ be independent.

We show $m \leq n$ by repeated application of the exchange lemma:

Step 1: $w_1 \in V = \text{span}\{g_1, \ldots, g_n\}$. By the exchange lemma, replace some $g_i$ with $w_1$.

Step 2: $w_2 \in \text{span}\{w_1, g_2, \ldots, g_n\}$. Exchange some $g_j$ with $w_2$.

(Note: we must exchange a $g_j$, not $w_1$, because $\{w_1, w_2\}$ is independent.)

Continue until all $w_i$ are included. If $m > n$, we would run out of $g_i$'s to exchange, a contradiction.

Corollary 2.6: Size Bound

In $\mathbb{R}^n$ (or any $n$-dimensional space), any set of more than $n$ vectors is linearly dependent.

Example 2.10: Four Vectors in ℝ³

$\mathbb{R}^3$ is spanned by $\{e_1, e_2, e_3\}$ (3 vectors).

Therefore, any 4 vectors in $\mathbb{R}^3$ must be linearly dependent.

Example: $\{(1,0,0), (0,1,0), (0,0,1), (1,1,1)\}$ is dependent because $(1,1,1) = (1,0,0) + (0,1,0) + (0,0,1)$.

Theorem 2.11: Extension of Independent Sets

If $\{v_1, \ldots, v_m\}$ is independent and $w \notin \text{span}\{v_1, \ldots, v_m\}$, then $\{v_1, \ldots, v_m, w\}$ is independent.

Proof of Theorem 2.11:

Suppose $\alpha_1 v_1 + \cdots + \alpha_m v_m + \beta w = 0$.

If $\beta \neq 0$, then $w = -\frac{1}{\beta}(\alpha_1 v_1 + \cdots + \alpha_m v_m) \in \text{span}\{v_1, \ldots, v_m\}$. Contradiction!

So $\beta = 0$, and then $\alpha_1 v_1 + \cdots + \alpha_m v_m = 0$.

By independence, $\alpha_i = 0$ for all $i$. So the only solution is the trivial one.

Corollary 2.7: Adding Vectors

If $\{v_1, \ldots, v_m\}$ is independent, then $\{v_1, \ldots, v_m, w\}$ is dependent if and only if $w \in \text{span}\{v_1, \ldots, v_m\}$.

Example 2.11: Applying the Extension Theorem

Problem: Is $\{(1, 0, 0), (1, 1, 0), (0, 0, 1)\}$ independent?

Solution using extension:

  • $\{(1, 0, 0)\}$ is independent (a single non-zero vector)
  • Is $(1, 1, 0) \in \text{span}\{(1, 0, 0)\}$? No, because the span is $\{(t, 0, 0)\}$
  • So $\{(1, 0, 0), (1, 1, 0)\}$ is independent
  • Is $(0, 0, 1) \in \text{span}\{(1, 0, 0), (1, 1, 0)\}$? No (every vector in the span has third component zero)
  • So $\{(1, 0, 0), (1, 1, 0), (0, 0, 1)\}$ is independent
Theorem 2.12: Extending to a Basis

Any linearly independent set in a finite-dimensional vector space can be extended to a basis.

Proof of Theorem 2.12:

Let $S$ be independent. If $S$ spans $V$, it's already a basis.

Otherwise, there exists $v \notin \text{span}(S)$. Add $v$ to $S$.

By the extension theorem, the enlarged set is still independent.

Repeat. By the fundamental inequality, we can add at most finitely many vectors.

The process terminates with a basis.

Example 2.12: Extension Example

Problem: Extend $\{(1, 1, 0, 0), (0, 1, 1, 0)\}$ to a basis of $\mathbb{R}^4$.

Solution: We need 2 more independent vectors.

Try $e_3 = (0, 0, 1, 0)$: Is it in the span? Solve $a(1,1,0,0) + b(0,1,1,0) = (0,0,1,0)$.

Components 1 and 2 force $a = b = 0$, but then component 3 gives $b = 1$, a contradiction. So $e_3$ is not in the span; add it.

Try $e_4 = (0, 0, 0, 1)$: Not in the span (every vector there has fourth component $0$). Add $e_4$.

Basis: $\{(1,1,0,0), (0,1,1,0), (0,0,1,0), (0,0,0,1)\}$
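
This procedure is easy to mechanize. Below is a minimal sketch (our own helper, assuming numpy) that greedily tries the standard basis vectors; it may pick different vectors than the hand computation above, but the result is always a basis:

```python
import numpy as np

def extend_to_basis(vectors, dim):
    """Greedily extend an independent list to a basis of R^dim by
    testing each standard basis vector against the current span."""
    basis = [np.asarray(v, dtype=float) for v in vectors]
    for i in range(dim):
        e = np.zeros(dim)
        e[i] = 1.0
        # The rank grows exactly when e_i lies outside the current span.
        if np.linalg.matrix_rank(np.column_stack(basis + [e])) > len(basis):
            basis.append(e)
    return basis

for v in extend_to_basis([(1, 1, 0, 0), (0, 1, 1, 0)], 4):
    print(v)
```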

3. Testing Linear Independence

In practice, we test independence by solving a homogeneous system. Form a matrix with the vectors as columns—the vectors are independent if and only if the only solution to $Ax = 0$ is $x = 0$.

Theorem 2.13: Matrix Test for Independence

Vectors $v_1, \ldots, v_m \in F^n$ are linearly independent if and only if the matrix $A = [v_1 \mid v_2 \mid \cdots \mid v_m]$ has full column rank (rank $= m$).

Proof of Theorem 2.13:

A linear combination $\alpha_1 v_1 + \cdots + \alpha_m v_m$ equals the matrix-vector product:

$$A \begin{pmatrix} \alpha_1 \\ \vdots \\ \alpha_m \end{pmatrix} = \alpha_1 v_1 + \cdots + \alpha_m v_m$$

This equals zero iff $\alpha = (\alpha_1, \ldots, \alpha_m)^T$ is in $\ker(A)$.

Independent ⟺ only $\alpha = 0$ works ⟺ $\ker(A) = \{0\}$ ⟺ full column rank.
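
In code, the theorem translates directly into a rank computation. A minimal sketch (assuming numpy; the function name `are_independent` is our own); the two sets tested here are the ones worked out in the next two examples:

```python
import numpy as np

def are_independent(vectors):
    """Stack the vectors as columns; independent iff full column rank."""
    A = np.column_stack([np.asarray(v, dtype=float) for v in vectors])
    return np.linalg.matrix_rank(A) == A.shape[1]

print(are_independent([(1, 2, 1), (2, 3, 1), (3, 5, 2)]))  # False (dependent)
print(are_independent([(1, 0, 0), (1, 1, 0), (1, 1, 1)]))  # True  (independent)
```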

Example 2.13: Using Row Reduction

Problem: Are $(1, 2, 1)$, $(2, 3, 1)$, $(3, 5, 2)$ independent?

Solution: Form the matrix (vectors as columns) and row reduce:

$$\begin{pmatrix} 1 & 2 & 3 \\ 2 & 3 & 5 \\ 1 & 1 & 2 \end{pmatrix} \to \begin{pmatrix} 1 & 2 & 3 \\ 0 & -1 & -1 \\ 0 & -1 & -1 \end{pmatrix} \to \begin{pmatrix} 1 & 2 & 3 \\ 0 & -1 & -1 \\ 0 & 0 & 0 \end{pmatrix}$$

Only 2 pivots for 3 columns ⟹ not full column rank ⟹ dependent.

From the RREF: $v_3 = v_1 + v_2$.

Example 2.14: Independent Example

Problem: Are $(1, 0, 0)$, $(1, 1, 0)$, $(1, 1, 1)$ independent?

Solution:

$$\begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}$$

Already in echelon form with 3 pivots ⟹ full column rank ⟹ independent.

Remark 2.8: Square Matrices

For $n$ vectors in $F^n$, the matrix is square. Then:

  • Independent ⟺ full rank ⟺ invertible ⟺ det ≠ 0
  • Dependent ⟺ rank < n ⟺ singular ⟺ det = 0
Example 2.15: Determinant Test

Problem: Are $(1, 2)$ and $(3, 5)$ independent in $\mathbb{R}^2$?

Solution:

$$\det \begin{pmatrix} 1 & 3 \\ 2 & 5 \end{pmatrix} = 1(5) - 3(2) = 5 - 6 = -1 \neq 0$$

Non-zero determinant ⟹ independent.
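
The same check in floating point, where the determinant should be compared against a small tolerance rather than exact zero (a sketch, assuming numpy):

```python
import numpy as np

A = np.array([[1.0, 3.0],
              [2.0, 5.0]])  # columns (1,2) and (3,5)
d = np.linalg.det(A)
print(d)                    # approximately -1.0
print(abs(d) > 1e-12)       # True -> independent
```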

Example 2.16: Polynomials

Problem: Are $1, x, x^2$ independent in $P_2(\mathbb{R})$?

Solution: Suppose $a + bx + cx^2 = 0$ (the zero polynomial).

Comparing coefficients (or evaluating at three distinct points):

  • Constant term: $a = 0$
  • Coefficient of $x$: $b = 0$
  • Coefficient of $x^2$: $c = 0$

Only the trivial solution ⟹ independent.
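
The matrix test applies to polynomials once they are encoded as coefficient vectors. A sketch with the slightly less obvious set $\{1,\ 1+x,\ 1+x+x^2\}$ (our own choice, assuming numpy):

```python
import numpy as np

# Coefficient vectors (constant, x, x^2) as columns:
# 1 -> (1,0,0), 1+x -> (1,1,0), 1+x+x^2 -> (1,1,1)
P = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
print(np.linalg.matrix_rank(P) == 3)  # True -> independent in P_2(R)
```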

Example 2.17: Functions

Problem: Are $e^x, e^{2x}$ independent in $C^\infty(\mathbb{R})$?

Solution: Suppose $ae^x + be^{2x} = 0$ for all $x$.

At $x = 0$: $a + b = 0$.

At $x = 1$: $ae + be^2 = 0$.

From the first: $b = -a$. Substituting: $ae - ae^2 = ae(1 - e) = 0$.

Since $e \neq 0$ and $1 - e \neq 0$, we get $a = 0$, hence $b = 0$.

Independent.

Remark 2.9: The Determinant Test

For $n$ vectors in $F^n$ (the same number of vectors as the dimension), form the square matrix with the vectors as columns. Then:

$$\text{Independent} \iff \det(A) \neq 0$$

This is computationally efficient for small $n$.

Example 2.18: 3×3 Determinant Test

Problem: Are $(1,0,1), (0,1,1), (1,1,0)$ independent?

Solution:

$$\det \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \\ 1 & 1 & 0 \end{pmatrix} = 1(0-1) - 0 + 1(0-1) = -1 - 1 = -2 \neq 0$$

Independent (the determinant is non-zero).

Example 2.19: Vanishing Determinant

Problem: Are $(1,2,3), (2,4,6), (0,1,1)$ independent?

Solution:

$$\det \begin{pmatrix} 1 & 2 & 0 \\ 2 & 4 & 1 \\ 3 & 6 & 1 \end{pmatrix} = 1(4-6) - 2(2-3) + 0 = -2 + 2 = 0$$

Dependent (the determinant is zero). Indeed, column 2 is $2 \times$ column 1.

Theorem 2.14: Counting Criterion

In an $n$-dimensional vector space:

  • Fewer than $n$ vectors cannot span the space
  • More than $n$ vectors cannot be independent
  • Exactly $n$ independent vectors form a basis
Remark 2.10: Computational Complexity

Testing independence of $m$ vectors in $F^n$:

  • Row reduction: O(nm²) operations
  • Determinant (if m = n): O(n³) operations
  • Gram-Schmidt (in inner product spaces): O(nm²)

4. Worked Examples

Example 4.1: Maximum Independent Set

Problem: In the subspace $W = \{(x, y, z) : x + y + z = 0\}$ of $\mathbb{R}^3$, find a maximal independent set.

Solution:

$W$ is a plane through the origin, so its dimension is 2.

Parametrize: $(x, y, z) = (-y-z, y, z) = y(-1, 1, 0) + z(-1, 0, 1)$.

The vectors $\{(-1, 1, 0), (-1, 0, 1)\}$ are independent and span $W$.

A maximal independent set has 2 vectors.
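
Equivalently, $W$ is the kernel of the $1 \times 3$ matrix $(1\ 1\ 1)$, and a computer algebra system recovers the same two vectors. A sketch assuming sympy:

```python
from sympy import Matrix

A = Matrix([[1, 1, 1]])   # W = {(x,y,z) : x + y + z = 0} = ker(A)
basis = A.nullspace()     # independent vectors spanning the kernel
for v in basis:
    print(v.T)            # Matrix([[-1, 1, 0]]) and Matrix([[-1, 0, 1]])
```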

Example 4.2: Extending an Independent Set

Problem: Extend $\{(1, 1, 0, 0)\}$ to a maximal independent set in $\mathbb{R}^4$.

Solution:

We need 4 independent vectors (the dimension of $\mathbb{R}^4$ is 4).

Add $(1, 0, 0, 0)$: Independent? Check $a(1,1,0,0) + b(1,0,0,0) = (0,0,0,0)$.

This gives $a + b = 0$ and $a = 0$, so $a = b = 0$. Yes, independent.

Add $(0, 0, 1, 0)$: Still independent (it is the only vector with non-zero third component).

Add $(0, 0, 0, 1)$: Still independent (fourth component).

Maximal set: $\{(1,1,0,0), (1,0,0,0), (0,0,1,0), (0,0,0,1)\}$.

Example 4.3: Row Space and Independence

Problem: Are the rows of $A = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \\ 1 & 3 & 5 \end{pmatrix}$ independent?

Solution: Row reduce:

$$\begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \\ 1 & 3 & 5 \end{pmatrix} \to \begin{pmatrix} 1 & 2 & 3 \\ 0 & 0 & 0 \\ 0 & 1 & 2 \end{pmatrix} \to \begin{pmatrix} 1 & 2 & 3 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \end{pmatrix}$$

Only 2 non-zero rows ⟹ rank $= 2$ ⟹ dependent (3 rows but rank 2).

Note: Row 2 $= 2 \times$ Row 1.

Example 4.4: Complex Vectors

Problem: Are $(1, i)$ and $(i, 1)$ independent over $\mathbb{C}$?

Solution: Solve $a(1, i) + b(i, 1) = (0, 0)$:

$a + bi = 0$ and $ai + b = 0$.

From the first: $a = -bi$. Substituting: $(-bi)i + b = -bi^2 + b = b + b = 2b = 0$.

So $b = 0$, hence $a = 0$. Independent.

Example 4.5: Checking Dependence Relation

Problem: If $\{v_1, v_2, v_3\}$ satisfies $v_1 - 2v_2 + v_3 = 0$, are they independent?

Solution: No! The equation $1 \cdot v_1 + (-2) \cdot v_2 + 1 \cdot v_3 = 0$ is a non-trivial linear combination equaling zero (the coefficients $1, -2, 1$ are not all zero).

Linearly dependent.

Example 4.6: Wronskian Test for Functions

Problem: Are $\sin x, \cos x$ independent in $C^\infty(\mathbb{R})$?

Solution: Use the Wronskian:

$$W(\sin x, \cos x) = \begin{vmatrix} \sin x & \cos x \\ \cos x & -\sin x \end{vmatrix} = -\sin^2 x - \cos^2 x = -1 \neq 0$$

A non-zero Wronskian ⟹ independent.
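
sympy can compute Wronskians symbolically; a quick sketch (assuming sympy is available):

```python
from sympy import symbols, sin, cos, simplify
from sympy.matrices import wronskian

x = symbols('x')
W = wronskian([sin(x), cos(x)], x)  # determinant of the functions and derivatives
print(simplify(W))                  # -1, never zero -> independent
```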

Example 4.7: Matrices as Vectors

Problem: Are $E_{11} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$, $E_{12} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$, $E_{21} = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}$, $E_{22} = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}$ independent?

Solution: Suppose $aE_{11} + bE_{12} + cE_{21} + dE_{22} = O$.

Then $\begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$.

Comparing entries: $a = b = c = d = 0$. Independent.

These form the standard basis for $M_2(\mathbb{R})$.

Example 4.8: Finding Dependence Relations

Problem: Find the dependence relation for $\{(1, 2, 3), (4, 5, 6), (5, 7, 9)\}$.

Solution: Set up $a(1,2,3) + b(4,5,6) + c(5,7,9) = 0$:

$$\begin{cases} a + 4b + 5c = 0 \\ 2a + 5b + 7c = 0 \\ 3a + 6b + 9c = 0 \end{cases}$$

Row reduce the coefficient matrix (the augmented column is zero and stays zero):

$$\begin{pmatrix} 1 & 4 & 5 \\ 2 & 5 & 7 \\ 3 & 6 & 9 \end{pmatrix} \to \begin{pmatrix} 1 & 4 & 5 \\ 0 & -3 & -3 \\ 0 & 0 & 0 \end{pmatrix}$$

Free variable $c$. Set $c = 1$: then $b = -1$, $a = -1$.

Dependence relation: $-(1,2,3) - (4,5,6) + (5,7,9) = 0$.

Equivalently: $v_3 = v_1 + v_2$.
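
The whole computation is a single null-space call in sympy (a sketch; the column order matches $v_1, v_2, v_3$):

```python
from sympy import Matrix

A = Matrix([[1, 4, 5],
            [2, 5, 7],
            [3, 6, 9]])   # columns are v1, v2, v3
null = A.nullspace()
print(null[0].T)          # Matrix([[-1, -1, 1]]) -> -v1 - v2 + v3 = 0
```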

Example 4.9: Independence in Quotient Context

Problem: If $v_1, v_2$ are independent and $v_3 = v_1 + v_2$, can we find a pair among $\{v_1, v_2, v_3\}$ that is independent?

Solution: Yes! Any pair works:

  • $\{v_1, v_2\}$ is independent (given)
  • $\{v_1, v_3\}$: If $av_1 + bv_3 = 0$, then $av_1 + b(v_1 + v_2) = (a+b)v_1 + bv_2 = 0$. Since $v_1, v_2$ are independent, $a + b = 0$ and $b = 0$, so $a = b = 0$. Independent!
  • $\{v_2, v_3\}$: A similar argument shows independence.
Example 4.10: Maximal Independent Subset

Problem: Find a maximal independent subset of $\{(1,0,1), (0,1,1), (1,1,2), (2,1,3)\}$.

Solution: Form the matrix (vectors as columns) and row reduce:

$$\begin{pmatrix} 1 & 0 & 1 & 2 \\ 0 & 1 & 1 & 1 \\ 1 & 1 & 2 & 3 \end{pmatrix} \to \begin{pmatrix} 1 & 0 & 1 & 2 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix}$$

Pivots in columns 1 and 2. Rank $= 2$.

Maximal independent subset: $\{(1,0,1), (0,1,1)\}$.

Note: $v_3 = v_1 + v_2$ and $v_4 = 2v_1 + v_2$.
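
The pivot columns can be read off with sympy's `rref`, which returns the reduced matrix together with the pivot column indices (a sketch):

```python
from sympy import Matrix

A = Matrix([[1, 0, 1, 2],
            [0, 1, 1, 1],
            [1, 1, 2, 3]])  # columns are the four vectors
_, pivots = A.rref()
print(pivots)               # (0, 1): the first two columns are the pivot columns
```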

5. Common Mistakes

Mistake 1: Confusing dependent with "equal"

Dependent doesn't mean the vectors are equal! It means one can be written as a combination of the others. $(1, 0)$ and $(2, 0)$ are dependent but not equal.

Mistake 2: Forgetting the zero vector rule

Any set containing $0$ is automatically dependent. Don't waste time checking other conditions if you see a zero vector!

Mistake 3: More vectors than dimension

In $\mathbb{R}^n$, any set with more than $n$ vectors is dependent. Don't try to prove independence of 4 vectors in $\mathbb{R}^3$!

Mistake 4: Wrong matrix orientation

For the matrix test, put vectors as columns, not rows. Then check if column rank = number of vectors.

Mistake 5: Thinking independence depends on span

Independence only asks about relations among the given vectors. A set can be independent without spanning the whole space (and vice versa—spanning sets can be dependent).

Mistake 6: Confusing scalar fields

Independence depends on the scalar field. For example, $\{(1, i), (i, -1)\}$ in $\mathbb{C}^2$ is dependent over $\mathbb{C}$ (the second vector is $i$ times the first) but independent over $\mathbb{R}$, where $i$ is not an allowed scalar.

Mistake 7: Not checking all components

When solving $\sum \alpha_i v_i = 0$, you must verify that the solution works for ALL components/equations, not just some. A partial solution doesn't count!

Mistake 8: Confusing coefficient with vector

The coefficients $\alpha_i$ must be scalars (from the field). Don't confuse these with the vectors themselves. Also, "not all zero" means at least one $\alpha_i \neq 0$.

✓ Checklist: Testing Independence

  1. Quick check: Does set contain 0? → Dependent
  2. Size check: More than dim(V) vectors? → Dependent
  3. Parallel check: Two vectors, one scalar multiple of other? → Dependent
  4. Matrix test: Form matrix, check if rank = #vectors
  5. Determinant: If square, det ≠ 0 → Independent
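
The checklist above translates directly into a short routine, with the cheap rejections first and the rank test last. A sketch assuming numpy (the function name is ours):

```python
import numpy as np

def independence_check(vectors):
    """Apply the checklist: zero vector, size vs. dimension, then rank."""
    vs = [np.asarray(v, dtype=float) for v in vectors]
    n = vs[0].size
    if any(not v.any() for v in vs):    # step 1: contains the zero vector?
        return "dependent (contains the zero vector)"
    if len(vs) > n:                     # step 2: more vectors than dimension?
        return "dependent (more vectors than the dimension)"
    A = np.column_stack(vs)             # step 4: matrix rank test
    if np.linalg.matrix_rank(A) == len(vs):
        return "independent (full column rank)"
    return "dependent (rank deficient)"

print(independence_check([(1, 0, 0), (1, 1, 0), (0, 0, 1)]))  # independent
```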

6. Key Takeaways

Definition

Independent = only trivial combination gives zero. Dependent = there exists a non-trivial combination giving zero.

Key Properties

Subsets of independent sets are independent. Sets with 0 are dependent. Independent sets have size ≤ dim(V).

Steinitz Exchange

Can swap vectors while preserving span and independence. Implies: |independent| ≤ |spanning|.

Matrix Test

Vectors as columns. Independent ⟺ full column rank ⟺ ker = {0} ⟺ (if square) det ≠ 0.

Adding Vectors

Adding w to independent set: stays independent iff w ∉ span. Otherwise becomes dependent.

Basis Preview

A basis = independent + spanning = maximal independent = minimal spanning. Next chapter!

Quick Reference Table

  • Contains 0? → Dependent
  • More than dim(V) vectors? → Dependent
  • One vector is a multiple of another? → Dependent
  • Matrix has full column rank? → Independent
  • Square matrix with det ≠ 0? → Independent

7. Applications of Linear Independence

Solving Linear Systems

The columns of $A$ being independent means $Ax = b$ has at most one solution. If $A$ is also square, exactly one solution exists for every $b$.

Data Representation

Independent features in machine learning mean no redundant information. PCA finds independent directions of maximum variance.

Differential Equations

Independent solutions to a linear ODE form a basis for the solution space. The Wronskian tests independence of functions.

Control Theory

Controllability requires certain vectors to be independent. A system is controllable iff the controllability matrix has full rank.

Remark 7.1: Why Independence Matters

Linear independence ensures that:

  • Representations are unique (no redundancy)
  • Systems have unique solutions when they exist
  • Data is non-redundant (efficient storage)
  • Coordinates are well-defined (via bases)
Example 7.1: Unique Coordinates

If $\mathcal{B} = \{v_1, v_2, v_3\}$ is a basis for $V$ (independent and spanning), then every $v \in V$ has unique coordinates.

Why unique? If $v = \sum \alpha_i v_i = \sum \beta_i v_i$, then:

$$\sum (\alpha_i - \beta_i) v_i = 0$$

By independence, $\alpha_i - \beta_i = 0$ for all $i$. So $\alpha_i = \beta_i$.
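
Computing those coordinates is a linear solve; independence of the basis vectors (as columns) is exactly what guarantees a unique answer. A sketch assuming numpy, with a basis of our own choosing:

```python
import numpy as np

B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])   # basis vectors as columns (invertible)
v = np.array([2.0, 3.0, 1.0])
c = np.linalg.solve(B, v)         # unique coordinates of v in this basis
print(c)                          # [0. 2. 1.]
np.testing.assert_allclose(B @ c, v)  # reconstruction check
```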

Example 7.2: Signal Processing

In Fourier analysis, $\{1, \cos x, \sin x, \cos 2x, \sin 2x, \ldots\}$ forms an orthogonal (hence independent) set.

Any periodic signal can be uniquely decomposed into these frequency components.

Independence ensures: each frequency contributes independently, no mixing.

Example 7.3: Cryptography

In cryptographic systems, linear independence of key components ensures:

  • No redundant information in keys
  • Maximum entropy / security
  • Inability to derive one key from others

Linear codes use a generator matrix whose rows are independent codewords, so that distinct messages encode to distinct codewords.

Example 7.4: Computer Graphics

In 3D graphics, three independent vectors form a coordinate system:

  • Camera orientation: up, forward, right vectors (must be independent)
  • If dependent, camera view is degenerate (2D or 1D)
  • Orthonormal bases preferred for efficiency

Connection to Matrix Properties

For an $m \times n$ matrix $A$:

  • Columns independent ⟺ $\ker(A) = \{0\}$ ⟺ rank $= n$
  • Rows independent ⟺ $\ker(A^T) = \{0\}$ ⟺ rank $= m$
  • Square with independent columns ⟺ $A$ invertible ⟺ $\det(A) \neq 0$
  • Columns independent ⟺ $Ax = b$ has at most one solution for each $b$

8. Quick Reference

Tests for Independence

  1. Set up $\sum \alpha_i v_i = 0$
  2. Solve for coefficients
  3. Only trivial solution? → Independent
  4. Non-trivial solution? → Dependent

Matrix Method

  1. Vectors as columns → matrix A
  2. Row reduce to echelon form
  3. Count pivots = rank
  4. rank = #columns → Independent

Quick Checks

  • Contains 0? → Dependent
  • More than dim(V)? → Dependent
  • Two parallel vectors? → Dependent
  • Square matrix, det ≠ 0? → Independent

Key Theorems

  • Subsets of independent sets → independent
  • Supersets of dependent sets → dependent
  • |independent| ≤ |spanning|
  • Add w: stays independent iff w ∉ span

Independence in Different Spaces

  • ℝⁿ: max independent n; standard basis e₁, e₂, ..., eₙ
  • Pₙ(ℝ): max independent n+1; standard basis 1, x, x², ..., xⁿ
  • Mₘₓₙ(ℝ): max independent mn; standard basis Eᵢⱼ (standard matrix units)
  • ℝ[x]: no finite maximum; 1, x, x², x³, ... is independent
Remark 8.1: Looking Ahead: Basis and Dimension

Now that we understand both spanning and independence, we can define a basis: a set that is both spanning and linearly independent. The remarkable fact is that all bases of a vector space have the same size—this common size is the dimension.

Algorithm Summary

To Test Independence:

  1. Form matrix A with vectors as columns
  2. Row reduce to echelon form
  3. Independent iff #pivots = #columns (full column rank)

To Find Dependence Relation:

  1. Solve Ax = 0 using RREF
  2. Express the basic (pivot) variables in terms of the free variables
  3. Each non-zero solution $x$ gives a dependence relation $x_1 v_1 + \cdots + x_m v_m = 0$

To Extract Maximal Independent Subset:

  1. Row reduce A = [v₁|v₂|...|vₘ]
  2. Identify pivot columns
  3. Original vectors in pivot positions form maximal independent subset

Common Patterns

Always Independent:

  • Standard basis vectors e₁, ..., eₙ
  • 1, x, x², ... in polynomial space
  • Columns of an invertible matrix
  • A single non-zero vector
  • Orthogonal non-zero vectors

Always Dependent:

  • A set containing the zero vector
  • More than n vectors in ℝⁿ
  • Parallel vectors in any space
  • Columns of a singular matrix
  • v, 2v (any scalar multiple)
Remark 8.2: Key Insight

The interplay between spanning (can we reach everything?) and independence (is there redundancy?) is the heart of dimension theory:

$$\text{dim}(V) = \min\{|S| : S \text{ spans } V\} = \max\{|S| : S \text{ is independent}\}$$

A basis achieves both bounds simultaneously—it's the sweet spot.

Linear Independence Practice

1. (Easy) Are the vectors $(1, 0)$ and $(0, 1)$ linearly independent in $\mathbb{R}^2$?
2. (Medium) Are $(1, 2, 3)$, $(4, 5, 6)$, $(7, 8, 9)$ linearly independent in $\mathbb{R}^3$?
3. (Easy) What is the maximum number of linearly independent vectors in $\mathbb{R}^4$?
4. (Medium) If $\{v_1, v_2, v_3\}$ is linearly independent, is $\{v_1, v_2\}$ linearly independent?
5. (Medium) If $\{v_1, v_2\}$ is linearly dependent, which must be true?
6. (Easy) In $\mathbb{R}^3$, can 4 vectors be linearly independent?
7. (Medium) The polynomials $1, x, x^2$ in $\mathbb{R}[x]$ are:
8. (Hard) If $v \in \text{span}\{v_1, \ldots, v_n\}$ and the $v_i$ are independent, then $\{v_1, \ldots, v_n, v\}$ is:
9. (Easy) If a set contains the zero vector, is it linearly independent?
10. (Medium) For vectors in $\mathbb{R}^n$, independence can be tested by:
11. (Medium) If $\{v_1, v_2, v_3\}$ spans $\mathbb{R}^3$ and is independent, then:
12. (Medium) The columns of an invertible $n \times n$ matrix are:

Frequently Asked Questions

What's the intuition for linear independence?

Vectors are independent if none of them is 'redundant'—none can be expressed as a combination of the others. Each adds a genuinely new direction. Dependent vectors have overlap in the directions they describe.

How do I test for linear independence computationally?

Form a matrix with the vectors as columns and row reduce. The vectors are independent iff every column has a pivot (no free variables), equivalently, iff the matrix has full column rank.

Why is the Steinitz exchange lemma important?

It's the key to proving that all bases have the same size (dimension is well-defined). It shows you can systematically replace spanning vectors with independent ones without losing span.

Can the zero vector be part of an independent set?

No. If 0 is in the set, then 1·0 = 0 is a non-trivial combination equaling zero, making the set dependent.

What's the relationship between independence and invertibility?

The columns of a square matrix are independent iff the matrix is invertible. This connects the abstract concept to concrete computability.

What happens if I add a vector to an independent set?

If the new vector is NOT in the span of the existing vectors, the enlarged set is still independent. If it IS in the span, the enlarged set becomes dependent.

Is linear independence preserved under linear maps?

Not in general. A linear map can map independent vectors to dependent ones (the kernel could be non-trivial). However, injective linear maps do preserve independence.

How many independent vectors can I have in ℝⁿ?

At most n. This is the fundamental bound: in an n-dimensional space, any independent set has at most n vectors, and any set with more than n vectors must be dependent.

What's the difference between spanning and independent?

Spanning means every vector can be written as a combination; independent means no vector is redundant. A basis achieves both: minimal spanning = maximal independent.

Can dependent functions still form a useful set?

Yes! Dependent sets can still span useful subspaces. For example, {(1,0), (2,0), (0,1)} is dependent but spans ℝ². Dependence just means there's redundancy.