LA-3.1

Linear Map Definition

A linear map is a function between vector spaces that preserves the vector space structure—it respects both addition and scalar multiplication. This simple property underlies virtually all of linear algebra.

Learning Objectives
  • State the definition of a linear map between vector spaces
  • Verify whether a given function is linear using the definition
  • Recognize standard examples: differentiation, integration, projection, rotation
  • Understand the decomposition into additivity and homogeneity
  • Prove basic properties: T(0) = 0, T(-v) = -T(v)
  • Define and work with the space of linear maps ℒ(V, W)
  • Compute dimensions of spaces of linear maps
  • Understand and verify composition of linear maps
  • Define and recognize linear functionals
  • Distinguish linear from non-linear maps with counterexamples
Prerequisites
  • Vector space definition (LA-2.1)
  • Basis and dimension (LA-2.4)
  • Basic properties of vector spaces
  • Function composition
  • Field axioms
Historical Context

The concept of a linear map evolved alongside linear algebra itself. Arthur Cayley (1821–1895) introduced matrix algebra in 1858, implicitly using linear transformations. Giuseppe Peano (1858–1932) gave the first abstract definition of vector spaces and linear operations in 1888.

The modern "linear maps before matrices" approach emphasizes that linear maps are the fundamental objects and matrices are merely their representations with respect to chosen bases.

Linear maps appear throughout mathematics: differentiation and integration in calculus, expected value in probability, Fourier transforms in analysis. The linearity property underlies much of modern physics, from quantum mechanics to signal processing.

1. Definition of Linear Maps

The concept of a linear map is central to all of linear algebra. A linear map is a function between vector spaces that respects the vector space structure—it preserves addition and scalar multiplication. This property makes linear maps remarkably well-behaved and leads to a rich theory.

Definition 3.1: Linear Map

Let $V$ and $W$ be vector spaces over the same field $F$. A function $T: V \to W$ is called a linear map (or linear transformation) if for all vectors $u, v \in V$ and all scalars $\alpha, \beta \in F$:

$$T(\alpha u + \beta v) = \alpha T(u) + \beta T(v)$$
Remark 3.1: Equivalent Formulation

The linearity condition can be split into two separate properties:

  • Additivity: $T(u + v) = T(u) + T(v)$ for all $u, v \in V$
  • Homogeneity: $T(\alpha v) = \alpha T(v)$ for all $\alpha \in F$ and $v \in V$

These two conditions together are equivalent to the single condition in Definition 3.1; verifying them separately is often more convenient.
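As an informal aid (not part of the theory), the two conditions can be spot-checked numerically on random inputs. The sketch below assumes NumPy; the helper name `looks_linear` is hypothetical. Passing the check on samples does not prove linearity, but a single failure disproves it.

```python
import numpy as np

def looks_linear(T, dim_in, trials=100, tol=1e-9, seed=None):
    """Spot-check the combined linearity condition for a map T: R^dim_in -> R^k.

    Passing this check does not prove linearity; failing it disproves it.
    """
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        u, v = rng.normal(size=dim_in), rng.normal(size=dim_in)
        a, b = rng.normal(), rng.normal()
        # combined condition: T(au + bv) == a T(u) + b T(v)
        if not np.allclose(T(a * u + b * v), a * T(u) + b * T(v), atol=tol):
            return False
    return True

# The map from Example 3.3 below passes; the squaring map fails.
T_lin = lambda x: np.array([x[0] - x[1], x[0], x[0] + x[1]])
T_sq = lambda x: np.array([x[0] ** 2, x[1]])
print(looks_linear(T_lin, 2))  # True
print(looks_linear(T_sq, 2))   # False
```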

Theorem 3.1: Basic Properties of Linear Maps

Let $T: V \to W$ be a linear map. Then:

  1. $T(0_V) = 0_W$ ($T$ sends the zero vector to the zero vector)
  2. $T(-v) = -T(v)$ for all $v \in V$
  3. $T(v - w) = T(v) - T(w)$ for all $v, w \in V$
  4. $T\left(\sum_{i=1}^{n} \alpha_i v_i\right) = \sum_{i=1}^{n} \alpha_i T(v_i)$ for any finite linear combination
Proof:

(1) Using homogeneity with $\alpha = 0$:

$$T(0_V) = T(0 \cdot v) = 0 \cdot T(v) = 0_W$$

(2) Using homogeneity with $\alpha = -1$:

$$T(-v) = T((-1) \cdot v) = (-1) \cdot T(v) = -T(v)$$

(3) Combining additivity and (2):

$$T(v - w) = T(v + (-w)) = T(v) + T(-w) = T(v) - T(w)$$

(4) Follows by induction on $n$, using additivity and homogeneity repeatedly.

Example 3.1: Standard Examples of Linear Maps

Here are fundamental examples that appear throughout mathematics:

  • Zero Map: $0: V \to W$ defined by $0(v) = 0_W$ for all $v \in V$
  • Identity Map: $I_V: V \to V$ defined by $I_V(v) = v$ for all $v \in V$
  • Scalar Multiplication: $\lambda I: V \to V$ defined by $(\lambda I)(v) = \lambda v$
  • Differentiation: $D: P_n(\mathbb{R}) \to P_{n-1}(\mathbb{R})$ defined by $D(p) = p'$
  • Integration: $\int_0^x: P_n(\mathbb{R}) \to P_{n+1}(\mathbb{R})$ defined by $p \mapsto \int_0^x p(t)\,dt$
Example 3.2: Geometric Linear Maps on ℝ²

Geometric transformations on $\mathbb{R}^2$ provide intuitive examples; each is given by a $2 \times 2$ matrix, as sketched after this list:

  • Rotation by angle $\theta$: $R_\theta(x, y) = (x\cos\theta - y\sin\theta, x\sin\theta + y\cos\theta)$
  • Projection onto the x-axis: $\pi(x, y) = (x, 0)$
  • Reflection about the x-axis: $\rho(x, y) = (x, -y)$
  • Scaling: $S_c(x, y) = (cx, cy)$ for a scalar $c$
  • Shear: $H_k(x, y) = (x + ky, y)$
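A minimal sketch (assuming NumPy) of the maps above as $2 \times 2$ matrices acting on column vectors; the matrix entries are read off directly from the formulas in the list, and the sample parameter values are arbitrary.

```python
import numpy as np

theta, c, k = np.pi / 4, 2.0, 0.5  # sample parameter values

rotation   = np.array([[np.cos(theta), -np.sin(theta)],
                       [np.sin(theta),  np.cos(theta)]])
projection = np.array([[1.0, 0.0],
                       [0.0, 0.0]])   # onto the x-axis
reflection = np.array([[1.0,  0.0],
                       [0.0, -1.0]])  # about the x-axis
scaling    = np.array([[c, 0.0],
                       [0.0, c]])
shear      = np.array([[1.0, k],
                       [0.0, 1.0]])

v = np.array([1.0, 2.0])
for name, A in [("rotation", rotation), ("projection", projection),
                ("reflection", reflection), ("scaling", scaling), ("shear", shear)]:
    print(name, A @ v)
```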
Example 3.3: Verifying Linearity

Claim: The map $T: \mathbb{R}^2 \to \mathbb{R}^3$ defined by $T(x_1, x_2) = (x_1 - x_2, x_1, x_1 + x_2)$ is linear.

Proof: Let $(x_1, x_2), (y_1, y_2) \in \mathbb{R}^2$ and $k_1, k_2 \in \mathbb{R}$.

$$T(k_1(x_1, x_2) + k_2(y_1, y_2)) = T(k_1 x_1 + k_2 y_1, k_1 x_2 + k_2 y_2)$$
$$= ((k_1 x_1 + k_2 y_1) - (k_1 x_2 + k_2 y_2),\; k_1 x_1 + k_2 y_1,\; (k_1 x_1 + k_2 y_1) + (k_1 x_2 + k_2 y_2))$$
$$= k_1(x_1 - x_2, x_1, x_1 + x_2) + k_2(y_1 - y_2, y_1, y_1 + y_2)$$
$$= k_1 T(x_1, x_2) + k_2 T(y_1, y_2)$$
Example 3.4: Non-Examples

The following are NOT linear maps; numeric spot-checks follow the list:

  • Translation: $T(x) = x + c$ for $c \neq 0$, since $T(0) = c \neq 0$.
  • Product map: $T(x_1, x_2) = (x_1 x_2, x_1 + x_2)$. Check: $T((1,0) + (0,1)) = T(1,1) = (1, 2)$ but $T(1,0) + T(0,1) = (0,1) + (0,1) = (0,2)$.
  • Determinant: $\det: M_n(\mathbb{R}) \to \mathbb{R}$, since $\det(2I) = 2^n \neq 2\det(I)$ for $n > 1$.
  • Norm: $\|\cdot\|: V \to \mathbb{R}$ on a normed space, since $\|-v\| = \|v\| \neq -\|v\|$ for $v \neq 0$.
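Two of these failures can be spot-checked numerically. The sketch below (assuming NumPy) shows the product map violating additivity and the determinant violating homogeneity for $n = 2$.

```python
import numpy as np

# Product map T(x1, x2) = (x1*x2, x1 + x2): additivity fails.
T = lambda x: np.array([x[0] * x[1], x[0] + x[1]])
u, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(T(u + v), T(u) + T(v))        # [1. 2.] vs [0. 2.]

# Determinant on 2x2 matrices: det(2I) = 4, but 2*det(I) = 2.
I = np.eye(2)
print(np.linalg.det(2 * I), 2 * np.linalg.det(I))  # 4.0 vs 2.0
```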

2. The Space of Linear Maps ℒ(V, W)

One of the beautiful features of linear algebra is that linear maps themselves form a vector space. This allows us to add linear maps and multiply them by scalars, creating new linear maps.

Definition 3.2: Space of Linear Maps

Let $V$ and $W$ be vector spaces over the same field $F$. We denote by $\mathcal{L}(V, W)$ the set of all linear maps from $V$ to $W$.

When $V = W$, we write $\mathcal{L}(V)$ for $\mathcal{L}(V, V)$, and call its elements linear operators on $V$.

Definition 3.3: Operations on Linear Maps

For $S, T \in \mathcal{L}(V, W)$ and $\lambda \in F$, we define:

  • Sum: $(S + T)(v) = S(v) + T(v)$ for all $v \in V$
  • Scalar multiple: $(\lambda T)(v) = \lambda \cdot T(v)$ for all $v \in V$
Theorem 3.2: ℒ(V, W) is a Vector Space

With addition and scalar multiplication defined as above, $\mathcal{L}(V, W)$ is a vector space over $F$. The zero vector is the zero map $0$, and the additive inverse of $T$ is $-T$.

Proof:

We verify that $S + T$ and $\lambda T$ are linear maps:

Sum is linear:

$$(S + T)(\alpha u + \beta v) = S(\alpha u + \beta v) + T(\alpha u + \beta v)$$
$$= \alpha S(u) + \beta S(v) + \alpha T(u) + \beta T(v) = \alpha(S + T)(u) + \beta(S + T)(v)$$

Scalar multiple is linear:

$$(\lambda T)(\alpha u + \beta v) = \lambda T(\alpha u + \beta v) = \lambda(\alpha T(u) + \beta T(v))$$
$$= \alpha(\lambda T(u)) + \beta(\lambda T(v)) = \alpha(\lambda T)(u) + \beta(\lambda T)(v)$$

The vector space axioms follow from the corresponding properties in $W$.

Theorem 3.3: Dimension of ℒ(V, W)

If $\dim V = m$ and $\dim W = n$ are finite, then:

$$\dim \mathcal{L}(V, W) = m \cdot n$$
Proof:

A linear map $T: V \to W$ is completely determined by its values on a basis of $V$. If $\{v_1, \ldots, v_m\}$ is a basis of $V$, then each $T(v_j)$ can be any vector in $W$. With respect to a basis of $W$, each $T(v_j)$ requires $n$ coordinates to specify, and there are $m$ basis vectors, giving $mn$ degrees of freedom.
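A minimal sketch (assuming NumPy) of this counting argument for $\mathcal{L}(\mathbb{R}^3, \mathbb{R}^2)$: once the images of the standard basis vectors are chosen, the map is pinned down by the $mn = 6$ entries of the matrix whose columns are those images. The sample image vectors are arbitrary.

```python
import numpy as np

m, n = 3, 2  # dim V = 3, dim W = 2

# A linear map R^3 -> R^2 is determined by the images of e1, e2, e3 in R^2.
images = [np.array([1.0, 0.0]),   # T(e1)
          np.array([2.0, -1.0]),  # T(e2)
          np.array([0.0, 5.0])]   # T(e3)

A = np.column_stack(images)       # the n x m matrix of T
print(A.shape, A.size)            # (2, 3) and 6 = m*n free entries

# Applying T to v = (1, 2, 3) means forming 1*T(e1) + 2*T(e2) + 3*T(e3):
v = np.array([1.0, 2.0, 3.0])
print(A @ v, sum(c * img for c, img in zip(v, images)))
```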

Example 3.5: Dimension Calculation
  • $\dim \mathcal{L}(\mathbb{R}^3, \mathbb{R}^2) = 3 \times 2 = 6$
  • $\dim \mathcal{L}(\mathbb{R}^n, \mathbb{R}) = n \times 1 = n$
  • $\dim \mathcal{L}(\mathbb{R}^n) = n^2$

3. Composition of Linear Maps

When we compose two linear maps, the result is again linear. This composition operation corresponds to matrix multiplication when we represent maps as matrices.

Definition 3.4: Composition

Let $T \in \mathcal{L}(V, W)$ and $S \in \mathcal{L}(W, U)$. The composition $S \circ T \in \mathcal{L}(V, U)$ is defined by:

$$(S \circ T)(v) = S(T(v)) \quad \text{for all } v \in V$$
Theorem 3.4: Composition is Linear

If $T \in \mathcal{L}(V, W)$ and $S \in \mathcal{L}(W, U)$, then $S \circ T \in \mathcal{L}(V, U)$.

Proof:

For all $u, v \in V$ and $\alpha, \beta \in F$:

$$(S \circ T)(\alpha u + \beta v) = S(T(\alpha u + \beta v))$$
$$= S(\alpha T(u) + \beta T(v)) = \alpha S(T(u)) + \beta S(T(v))$$
$$= \alpha (S \circ T)(u) + \beta (S \circ T)(v)$$
Theorem 3.5: Properties of Composition

Composition of linear maps satisfies:

  1. Associativity: $(R \circ S) \circ T = R \circ (S \circ T)$
  2. Identity: $I_W \circ T = T = T \circ I_V$
  3. Distributivity: $S \circ (T_1 + T_2) = S \circ T_1 + S \circ T_2$
  4. Distributivity: $(S_1 + S_2) \circ T = S_1 \circ T + S_2 \circ T$
  5. Scalar compatibility: $\lambda(S \circ T) = (\lambda S) \circ T = S \circ (\lambda T)$
Remark 3.2: Non-commutativity

In general, $S \circ T \neq T \circ S$! This is one of the key differences from ordinary multiplication of numbers. Even when both compositions are defined (e.g., for operators on $V$), they may produce different results.

Example 3.6: Non-commuting Compositions

Let $D: P_3(\mathbb{R}) \to P_2(\mathbb{R})$ be differentiation and $M: P_2(\mathbb{R}) \to P_3(\mathbb{R})$ be multiplication by $x$.

For $p(x) = x^2$:

  • $(D \circ M)(x^2) = D(x^3) = 3x^2$
  • $(M \circ D)(x^2) = M(2x) = 2x^2$

So $D \circ M \neq M \circ D$.
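The same computation can be carried out on coefficient vectors. The sketch below assumes NumPy's polynomial helpers (`polyder` and `polymulx`, with coefficients listed from the constant term up).

```python
import numpy as np
from numpy.polynomial import polynomial as P

p = np.array([0.0, 0.0, 1.0])   # coefficients of x^2 (constant term first)

D = P.polyder                    # differentiation
M = P.polymulx                   # multiplication by x

print(D(M(p)))   # [0. 0. 3.]  -> 3x^2
print(M(D(p)))   # [0. 0. 2.]  -> 2x^2
```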

4. Linear Maps are Determined by Basis Values

One of the most powerful results about linear maps is that they are completely determined by their action on a basis. This is the foundation for representing linear maps as matrices.

Theorem 3.6: Determination by Basis

Let $\{v_1, \ldots, v_n\}$ be a basis of $V$. If $T, S \in \mathcal{L}(V, W)$ satisfy $T(v_i) = S(v_i)$ for all $i = 1, \ldots, n$, then $T = S$.

Proof:

For any $v \in V$, we can write $v = \sum_{i=1}^{n} \alpha_i v_i$. Then:

$$T(v) = T\left(\sum_{i=1}^{n} \alpha_i v_i\right) = \sum_{i=1}^{n} \alpha_i T(v_i) = \sum_{i=1}^{n} \alpha_i S(v_i) = S(v)$$

Since this holds for all $v \in V$, we have $T = S$.

Theorem 3.7: Existence and Uniqueness

Let $\{v_1, \ldots, v_n\}$ be a basis of $V$, and let $w_1, \ldots, w_n$ be any vectors in $W$. Then there exists a unique linear map $T \in \mathcal{L}(V, W)$ such that $T(v_i) = w_i$ for all $i = 1, \ldots, n$.

Proof:

Existence: Define $T$ by

$$T\left(\sum_{i=1}^{n} \alpha_i v_i\right) = \sum_{i=1}^{n} \alpha_i w_i$$

This is well-defined since every $v \in V$ has a unique representation as a linear combination of the basis vectors. One can verify directly that $T$ is linear.

Uniqueness: Follows from Theorem 3.6.

Example 3.7: Constructing a Linear Map

Problem: Does there exist a linear map $T: \mathbb{R}^3 \to \mathbb{R}^2$ with $T(1, -1, 1) = (1, 0)$ and $T(1, 1, 1) = (0, 1)$?

Solution: Yes! Extend $\{(1,-1,1), (1,1,1)\}$ to a basis of $\mathbb{R}^3$ by adding $(1, 0, 0)$. Set $T(1, 0, 0) = (0, 0)$ (an arbitrary choice). By Theorem 3.7, there is a unique linear map with these three prescribed values; in particular, a linear map satisfying the two given conditions exists.
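A minimal sketch (assuming NumPy) of this construction: if $B$ has the chosen basis vectors as columns and $W$ has the prescribed images as columns, the standard matrix of $T$ is $A = W B^{-1}$, so that $A$ sends each basis vector to its prescribed image. The variable names are illustrative.

```python
import numpy as np

# Basis of R^3 (as columns) and the prescribed images in R^2 (as columns).
B = np.column_stack([[1, -1, 1], [1, 1, 1], [1, 0, 0]]).astype(float)
W = np.column_stack([[1, 0], [0, 1], [0, 0]]).astype(float)

# T is the unique linear map with A @ B[:, j] == W[:, j], i.e. A = W @ B^{-1}.
A = W @ np.linalg.inv(B)

print(A @ np.array([1, -1, 1]))  # [1. 0.]
print(A @ np.array([1, 1, 1]))   # [0. 1.]
print(A @ np.array([1, 0, 0]))   # [0. 0.]
```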

Remark 3.3: When Does a Linear Map NOT Exist?

A proposed linear map fails to exist when:

  • The given conditions would force $T(0) \neq 0$
  • The given conditions would map dependent vectors to independent vectors
  • The given conditions are inconsistent (e.g., same input, different outputs)

5. Linear Functionals

Linear functionals are linear maps to the scalar field. They form the dual space and are fundamental in many areas of mathematics.

Definition 3.5: Linear Functional

A linear functional (or linear form) on $V$ is a linear map $\varphi: V \to F$, where $F$ is the scalar field of $V$.

Example 3.8: Examples of Linear Functionals
  • Evaluation: for polynomials, $\varphi_a(p) = p(a)$ evaluates $p$ at $a$
  • Integration: $\varphi(f) = \int_a^b f(x)\,dx$ on $C[a,b]$
  • Trace: $\text{tr}(A) = \sum_{i=1}^{n} a_{ii}$ on $M_n(F)$
  • Coordinate functional: $\pi_i(x_1, \ldots, x_n) = x_i$ on $F^n$
Theorem 3.8: Linear Functionals on Finite-Dimensional Spaces

If $\dim V = n$, then $\dim V^* = \dim \mathcal{L}(V, F) = n$.

6. Worked Examples

Example 1: Verify Linearity

Problem: Is $T: P_2(\mathbb{R}) \to P_1(\mathbb{R})$ defined by $T(p) = p(x+1) - p(x)$ linear?

Solution: For $p_1, p_2 \in P_2$ and $k_1, k_2 \in \mathbb{R}$:

$$T(k_1 p_1 + k_2 p_2) = (k_1 p_1 + k_2 p_2)(x+1) - (k_1 p_1 + k_2 p_2)(x)$$
$$= k_1(p_1(x+1) - p_1(x)) + k_2(p_2(x+1) - p_2(x)) = k_1 T(p_1) + k_2 T(p_2)$$

So $T$ is linear.

Example 2: Find dim ℒ(V, W)

Problem: Find $\dim \mathcal{L}(P_3(\mathbb{R}), M_{2 \times 2}(\mathbb{R}))$.

Solution:

  • $\dim P_3(\mathbb{R}) = 4$ (basis: $\{1, x, x^2, x^3\}$)
  • $\dim M_{2 \times 2}(\mathbb{R}) = 4$

Therefore, $\dim \mathcal{L}(P_3, M_{2 \times 2}) = 4 \times 4 = 16$.

Example 3: Disprove Linearity

Problem: Show that $T(x_1, x_2) = (x_1^2, x_2)$ is not linear.

Solution: Check homogeneity:

$$T(2 \cdot (1, 0)) = T(2, 0) = (4, 0)$$
$$2 \cdot T(1, 0) = 2 \cdot (1, 0) = (2, 0)$$

Since $(4, 0) \neq (2, 0)$, $T$ is NOT linear.

Example 4: Composition

Problem: Let $S(x, y) = (x + y, x)$ and $T(x, y) = (2x, y - x)$. Find $S \circ T$ and $T \circ S$.

Solution:

$$(S \circ T)(x, y) = S(T(x, y)) = S(2x, y - x) = (2x + y - x, 2x) = (x + y, 2x)$$
$$(T \circ S)(x, y) = T(S(x, y)) = T(x + y, x) = (2(x + y), x - (x + y)) = (2x + 2y, -y)$$

Note: $S \circ T \neq T \circ S$.
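The same computation in matrix form (a sketch assuming NumPy): composition of linear maps corresponds to matrix multiplication, and the two products visibly differ.

```python
import numpy as np

S = np.array([[1, 1],
              [1, 0]])   # S(x, y) = (x + y, x)
T = np.array([[2, 0],
              [-1, 1]])  # T(x, y) = (2x, y - x)

print(S @ T)   # matrix of S∘T: [[1 1], [2 0]]  -> (x + y, 2x)
print(T @ S)   # matrix of T∘S: [[2 2], [0 -1]] -> (2x + 2y, -y)
```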

Example 5: Existence Question

Problem: Does there exist a linear map $T: \mathbb{R}^2 \to \mathbb{R}^3$ with

$$T(1, 0) = (1, 0, 0), \quad T(0, 1) = (0, 1, 0), \quad T(1, 1) = (0, 0, 1)?$$

Solution: No! If $T$ were linear:

$$T(1, 1) = T((1,0) + (0,1)) = T(1,0) + T(0,1) = (1,0,0) + (0,1,0) = (1, 1, 0)$$

But the condition requires $T(1, 1) = (0, 0, 1) \neq (1, 1, 0)$. Contradiction!

Example 6: Linear Functional

Problem: Show that $\varphi: M_2(\mathbb{R}) \to \mathbb{R}$ given by $\varphi(A) = \text{tr}(A)$ is a linear functional.

Solution: For matrices $A, B$ and a scalar $c$:

$$\text{tr}(A + B) = \sum_i (a_{ii} + b_{ii}) = \sum_i a_{ii} + \sum_i b_{ii} = \text{tr}(A) + \text{tr}(B)$$
$$\text{tr}(cA) = \sum_i c \cdot a_{ii} = c \sum_i a_{ii} = c \cdot \text{tr}(A)$$

So the trace is a linear functional.
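A quick numeric illustration (assuming NumPy) of both identities on random $2 \times 2$ matrices; this is only a spot-check of the algebraic argument above, not a proof.

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))
c = 3.5

print(np.isclose(np.trace(A + B), np.trace(A) + np.trace(B)))  # True (additivity)
print(np.isclose(np.trace(c * A), c * np.trace(A)))            # True (homogeneity)
```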

7. Common Mistakes

Mistake 1: Forgetting T(0) = 0

If $T(0) \neq 0$, then $T$ is definitely NOT linear. This is a quick first check when testing linearity.

Mistake 2: Assuming Composition Commutes

$S \circ T \neq T \circ S$ in general! Even for operators on the same space, the order of composition matters.

Mistake 3: Confusing Linear with Affine

$T(x) = ax + b$ is NOT linear if $b \neq 0$; it is an affine map. A linear map $\mathbb{R} \to \mathbb{R}$ has the form $T(x) = ax$, whose graph passes through the origin.

Mistake 4: Dependent to Independent

Linear maps preserve dependence but not independence. If vectors are dependent, their images must be dependent. But independent vectors can map to dependent ones.

Mistake 5: Checking Only One Property

Both additivity AND homogeneity must hold. Some maps satisfy one but not the other; for example, complex conjugation $z \mapsto \bar{z}$ on $\mathbb{C}$ (as a complex vector space) is additive but not homogeneous, since $\overline{iz} = -i\,\bar{z} \neq i\,\bar{z}$ in general.

8. Key Takeaways

Definition

$T(\alpha u + \beta v) = \alpha T(u) + \beta T(v)$ for all vectors and scalars. Equivalently: additivity + homogeneity.

Basic Properties

$T(0) = 0$, $T(-v) = -T(v)$, and linear maps preserve linear combinations and linear dependence.

Space of Maps

$\mathcal{L}(V, W)$ is a vector space with $\dim \mathcal{L}(V, W) = (\dim V)(\dim W)$. Composition is associative but not commutative.

Basis Determination

A linear map is uniquely determined by its values on a basis. Given basis values, there exists a unique linear extension.

9. Connection to the Rest of Linear Algebra

Linear maps are the central objects of study in linear algebra. They connect to every major topic in the subject and provide the framework for understanding matrices, determinants, eigenvalues, and more.

Matrices as Representations

Every linear map $T: V \to W$ between finite-dimensional spaces can be represented by a matrix once we choose bases for $V$ and $W$. The matrix encodes the images of the basis vectors as its columns.

$$[T]_{\mathcal{B}, \mathcal{C}} = \text{matrix of } T \text{ with respect to bases } \mathcal{B} \text{ and } \mathcal{C}$$
Kernel and Image

Two fundamental subspaces are associated with a linear map $T: V \to W$:

  • Kernel (null space): $\ker(T) = \{v \in V : T(v) = 0\}$
  • Image (range): $\text{im}(T) = \{T(v) : v \in V\}$

The Rank-Nullity Theorem relates these: $\dim(\ker T) + \dim(\text{im}\, T) = \dim V$.
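A brief sketch (assuming NumPy) of these two subspaces for a concrete matrix: the rank is computed directly, an orthonormal basis of the kernel is read off from the SVD, and the two dimensions add up to $\dim V$ as the Rank-Nullity Theorem predicts. The sample matrix is arbitrary.

```python
import numpy as np

# Matrix of a linear map T: R^4 -> R^3 (columns are the images of the standard basis).
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 1.0],
              [1.0, 3.0, 1.0, 2.0]])   # row 3 = row 1 + row 2, so the rank drops to 2

rank = np.linalg.matrix_rank(A)                  # dim(im T)

# Kernel basis: right-singular vectors whose singular values are (numerically) zero.
_, s, Vt = np.linalg.svd(A)
kernel_basis = Vt[rank:]                         # shape (nullity, 4)

print(np.allclose(A @ kernel_basis.T, 0))        # True: these vectors are sent to 0
print(rank, kernel_basis.shape[0], A.shape[1])   # 2 + 2 == dim V = 4
```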

Invertibility and Isomorphism

A linear map is an isomorphism if it is bijective. For linear maps between finite-dimensional spaces of the same dimension, the following are equivalent:

  • $T$ is injective ($\ker T = \{0\}$)
  • $T$ is surjective ($\text{im}\, T = W$)
  • $T$ is bijective (invertible)
Eigenvalues and Eigenvectors

For a linear operator $T: V \to V$, an eigenvector is a nonzero vector $v$ such that $T(v) = \lambda v$ for some scalar $\lambda$ (the eigenvalue).

Eigenvalues reveal the intrinsic behavior of linear operators and are essential for diagonalization, solving differential equations, and many applications.

Dual Spaces

The dual space $V^* = \mathcal{L}(V, F)$ consists of all linear functionals on $V$. Every linear map $T: V \to W$ induces a dual map $T^*: W^* \to V^*$.

The dual perspective provides powerful tools for understanding linear algebra through row operations, transpose matrices, and annihilators.

10. Advanced Examples

Example: Rotation is Linear

Problem: Prove that the rotation $R_\theta: \mathbb{R}^2 \to \mathbb{R}^2$ by angle $\theta$ is linear.

Solution: The rotation is given by:

$$R_\theta(x, y) = (x\cos\theta - y\sin\theta,\; x\sin\theta + y\cos\theta)$$

For any vectors $(x_1, y_1), (x_2, y_2)$ and scalars $\alpha, \beta$:

$$R_\theta(\alpha(x_1, y_1) + \beta(x_2, y_2)) = R_\theta(\alpha x_1 + \beta x_2, \alpha y_1 + \beta y_2)$$
$$= ((\alpha x_1 + \beta x_2)\cos\theta - (\alpha y_1 + \beta y_2)\sin\theta,\; (\alpha x_1 + \beta x_2)\sin\theta + (\alpha y_1 + \beta y_2)\cos\theta)$$
$$= \alpha(x_1\cos\theta - y_1\sin\theta,\; x_1\sin\theta + y_1\cos\theta) + \beta(x_2\cos\theta - y_2\sin\theta,\; x_2\sin\theta + y_2\cos\theta)$$
$$= \alpha R_\theta(x_1, y_1) + \beta R_\theta(x_2, y_2)$$
Example: Differentiation Operator

Problem: Show that $D: C^\infty(\mathbb{R}) \to C^\infty(\mathbb{R})$ defined by $D(f) = f'$ is linear.

Solution: For differentiable functions $f, g$ and scalars $\alpha, \beta$:

$$D(\alpha f + \beta g) = (\alpha f + \beta g)' = \alpha f' + \beta g' = \alpha D(f) + \beta D(g)$$

This is precisely the sum rule and constant-multiple rule for derivatives from calculus.

Example: Expected Value is Linear

Problem: Show that expected value $\mathbb{E}: V \to \mathbb{R}$ on a space of random variables is linear.

Solution: The fundamental property of expected value is:

$$\mathbb{E}[\alpha X + \beta Y] = \alpha \mathbb{E}[X] + \beta \mathbb{E}[Y]$$

This is exactly the linearity condition! Expected value is a linear functional.

Example: Transpose as a Linear Map

Problem: Show that $T: M_{m \times n}(\mathbb{R}) \to M_{n \times m}(\mathbb{R})$ defined by $T(A) = A^T$ is linear.

Solution: For matrices $A, B$ and scalars $\alpha, \beta$:

$$T(\alpha A + \beta B) = (\alpha A + \beta B)^T = \alpha A^T + \beta B^T = \alpha T(A) + \beta T(B)$$

This uses the properties $(A + B)^T = A^T + B^T$ and $(\alpha A)^T = \alpha A^T$.

Example: Shift Operator

Problem: On the space of sequences $\mathbb{R}^\infty$, define the right shift $R(x_1, x_2, x_3, \ldots) = (0, x_1, x_2, \ldots)$. Is this map linear?

Solution: Yes! For sequences $(x_n), (y_n)$ and scalars $\alpha, \beta$:

$$R(\alpha(x_n) + \beta(y_n)) = R(\alpha x_1 + \beta y_1, \alpha x_2 + \beta y_2, \ldots)$$
$$= (0, \alpha x_1 + \beta y_1, \alpha x_2 + \beta y_2, \ldots)$$
$$= \alpha(0, x_1, x_2, \ldots) + \beta(0, y_1, y_2, \ldots) = \alpha R(x_n) + \beta R(y_n)$$

11. Theoretical Insights

Theorem 3.9: Preservation of Linear Dependence

If $T: V \to W$ is linear and $\{v_1, \ldots, v_k\}$ is linearly dependent in $V$, then $\{T(v_1), \ldots, T(v_k)\}$ is linearly dependent in $W$.

Proof:

Since $\{v_1, \ldots, v_k\}$ is dependent, there exist scalars $\alpha_1, \ldots, \alpha_k$, not all zero, such that $\sum \alpha_i v_i = 0$. Applying $T$:

$$\sum_{i=1}^k \alpha_i T(v_i) = T\left(\sum_{i=1}^k \alpha_i v_i\right) = T(0) = 0$$

The same (not-all-zero) scalars show that $\{T(v_1), \ldots, T(v_k)\}$ is linearly dependent.

Remark 3.4: Independence May Not Be Preserved

The converse is false! A linear map can take linearly independent vectors to linearly dependent vectors. For example, the projection $\pi(x, y) = (x, 0)$ maps the independent set $\{(1, 0), (0, 1)\}$ to $\{(1, 0), (0, 0)\}$, which is dependent.
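A tiny sketch (assuming NumPy) of this remark: stacking the images of the standard basis vectors into a matrix and checking its rank shows the drop from 2 to 1 under the projection.

```python
import numpy as np

proj = lambda v: np.array([v[0], 0.0])      # projection onto the x-axis

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
before = np.column_stack([e1, e2])
after = np.column_stack([proj(e1), proj(e2)])

print(np.linalg.matrix_rank(before))  # 2: {e1, e2} is independent
print(np.linalg.matrix_rank(after))   # 1: {(1,0), (0,0)} is dependent
```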

Theorem 3.10: Image of a Basis Spans the Image

If $\{v_1, \ldots, v_n\}$ is a basis of $V$ and $T: V \to W$ is linear, then:

$$\text{im}(T) = \text{span}\{T(v_1), \ldots, T(v_n)\}$$
Proof:

Any $w \in \text{im}(T)$ equals $T(v)$ for some $v \in V$. Writing $v = \sum \alpha_i v_i$:

$$w = T(v) = T\left(\sum \alpha_i v_i\right) = \sum \alpha_i T(v_i) \in \text{span}\{T(v_1), \ldots, T(v_n)\}$$

Conversely, every element of the span lies in the image: each $T(v_i) \in \text{im}(T)$, and $\text{im}(T)$ is closed under linear combinations.

Corollary 3.1: Rank Bound

For any linear map $T: V \to W$ with $\dim V = n$:

$$\text{rank}(T) = \dim(\text{im}\, T) \leq \min\{\dim V, \dim W\}$$
Theorem 3.11: Linear Maps and Subspaces

Let $T: V \to W$ be linear and let $U$ be a subspace of $V$. Then:

  1. $T(U)$ is a subspace of $W$
  2. If $U' \leq W$ is a subspace, then the preimage $T^{-1}(U') = \{v \in V : T(v) \in U'\}$ is a subspace of $V$

12. Additional Practice Problems

Problem 1

Let $T: \mathbb{R}^3 \to \mathbb{R}^2$ be defined by $T(x, y, z) = (x + y, 2z - x)$. Verify that $T$ is linear and find $T(1, 2, 3)$.

Problem 2

Prove or disprove: the map $T: M_{2 \times 2}(\mathbb{R}) \to M_{2 \times 2}(\mathbb{R})$ defined by $T(A) = A^2$ is linear.

Problem 3

Find $\dim \mathcal{L}(P_4(\mathbb{R}), \mathbb{R}^3)$.

Problem 4

Let $T, S: V \to V$ be linear operators. Prove that $T + S$ and $T \circ S$ are also linear operators on $V$.

Problem 5

Does there exist a linear map $T: \mathbb{R}^2 \to \mathbb{R}^2$ such that $T(1, 1) = (2, 3)$ and $T(2, 2) = (1, 1)$? Justify your answer.

Problem 6

Let $V = \{p \in P_3(\mathbb{R}) : p(0) = 0\}$. Show that $V$ is a vector space and that the evaluation map $\varphi_1: V \to \mathbb{R}$ defined by $\varphi_1(p) = p(1)$ is a linear functional.

13. Quick Reference Summary

  • Linear map: $T(\alpha u + \beta v) = \alpha T(u) + \beta T(v)$
  • Zero property: $T(0) = 0$ always
  • Space of maps: $\mathcal{L}(V, W)$ with $\dim \mathcal{L}(V, W) = (\dim V)(\dim W)$
  • Sum of maps: $(S + T)(v) = S(v) + T(v)$
  • Scalar multiple: $(\lambda T)(v) = \lambda T(v)$
  • Composition: $(S \circ T)(v) = S(T(v))$; in general $S \circ T \neq T \circ S$
  • Linear functional: $\varphi: V \to F$, where $F$ is the scalar field
  • Basis determination: $T$ is uniquely determined by its values on any basis

What's Next?

Now that you understand what linear maps are, the next steps in our journey explore:

  • Kernel and Image: The fundamental subspaces associated with every linear map
  • Rank-Nullity Theorem: The dimension formula connecting kernel and image
  • Isomorphisms: When two vector spaces are "structurally identical"
  • Matrix Representation: How to represent linear maps as matrices
  • Dual Spaces: The vector space of linear functionals

These concepts build directly on the definition of linear maps and reveal the deep structure underlying linear algebra.

Linear Map Definition Practice (12 questions)
  1. Is $T(x, y) = (x + y, x - y)$ linear from $\mathbb{R}^2$ to $\mathbb{R}^2$? (Easy)
  2. Is $T(x) = x + 1$ linear from $\mathbb{R}$ to $\mathbb{R}$? (Easy)
  3. What is $\dim(\mathcal{L}(\mathbb{R}^2, \mathbb{R}^3))$? (Medium)
  4. If $T: V \to W$ and $S: W \to U$ are linear, is $S \circ T$ linear? (Medium)
  5. Is differentiation $D: P_n \to P_{n-1}$ defined by $D(p) = p'$ linear? (Easy)
  6. The zero map $0: V \to W$ defined by $0(v) = 0_W$ for all $v$ is: (Easy)
  7. Is $T(x_1, x_2) = (x_1 x_2, x_1 + x_2)$ linear from $\mathbb{R}^2$ to $\mathbb{R}^2$? (Medium)
  8. Is $T(f) = f(0)$ linear from $C[0,1]$ to $\mathbb{R}$? (Medium)
  9. The rotation $R_\theta: \mathbb{R}^2 \to \mathbb{R}^2$ by angle $\theta$ is: (Medium)
  10. If $T: V \to W$ is linear and $\{v_1, \ldots, v_n\}$ is linearly dependent, then $\{T(v_1), \ldots, T(v_n)\}$ is: (Hard)
  11. Is the map $T: M_{2 \times 2}(\mathbb{R}) \to \mathbb{R}$ given by $T(A) = \det(A)$ linear? (Hard)
  12. If $T: \mathbb{R}^3 \to \mathbb{R}^3$ is linear and $T(e_1) = T(e_2) = T(e_3) = v$ for some $v$, what is $T(1, 2, 3)$? (Hard)

Frequently Asked Questions

What's the difference between 'linear map', 'linear transformation', and 'linear operator'?

These terms are often used interchangeably, but with slight distinctions: 'Linear map' is the most general term for any linear function T: V → W. 'Linear transformation' is synonymous, though some authors prefer it when V = W. 'Linear operator' typically refers to linear maps from a space to itself (T: V → V), also called endomorphisms. 'Linear functional' specifically means a linear map to the scalar field (T: V → F).

Why is linearity such an important property?

Linearity means the map respects the vector space structure—it preserves addition and scalar multiplication. This has profound consequences: (1) A linear map is completely determined by its action on a basis, (2) Linear maps can be represented by matrices, (3) Composition of linear maps is linear, (4) The set of linear maps forms a vector space itself, (5) Many important operations (differentiation, integration, expected value) are linear.

How do I check if a function is linear?

Method 1: Verify both properties separately: (a) T(u + v) = T(u) + T(v) for all u, v (additivity), and (b) T(αv) = αT(v) for all scalars α and vectors v (homogeneity). Method 2: Verify the combined condition: T(αu + βv) = αT(u) + βT(v) for all scalars α, β and vectors u, v. Quick check: If T(0) ≠ 0, then T is NOT linear.

Can linear maps between infinite-dimensional spaces be represented by matrices?

Not in the usual finite sense. For infinite-dimensional spaces, you would need 'infinite matrices,' but these require careful handling of convergence. In functional analysis, continuous linear maps between Banach or Hilbert spaces are studied using operator theory, which generalizes the finite-dimensional theory.

What are some common non-examples of linear maps?

Common non-linear maps include: (1) Translation: f(x) = x + c for c ≠ 0 (fails T(0) = 0), (2) Squaring: f(x) = x² (not homogeneous), (3) Absolute value: f(x) = |x| (not homogeneous), (4) Determinant: det(cA) = c^n det(A) ≠ c·det(A) for n > 1, (5) Any function with a constant term.

Why must linear maps preserve the zero vector?

This follows from homogeneity: T(0) = T(0·v) = 0·T(v) = 0. This is a useful quick test: if T(0) ≠ 0, the map is definitely NOT linear.

What is a linear functional?

A linear functional (or linear form) is a linear map from a vector space V to its scalar field F: f: V → F. Examples include evaluation at a point f(p) = p(a), integration f(g) = ∫g(x)dx, and trace tr(A). Linear functionals form the dual space V*.

How does the dimension formula dim(ℒ(V,W)) = mn arise?

A linear map T: V → W is completely determined by its values on a basis of V. If dim(V) = m and dim(W) = n, then T(vⱼ) can be any vector in W for each basis vector vⱼ. We need n scalars for each of m basis vectors, giving mn degrees of freedom total.

Is the composition of linear maps always defined?

No! For S ∘ T to be defined, the codomain of T must equal the domain of S. If T: V → W and S: W → U, then S ∘ T: V → U is defined and linear. But if the spaces don't match, composition is undefined.

Can a linear map take linearly independent vectors to linearly dependent vectors?

Yes! For example, T(x, y) = (x + y, x + y) takes the independent set {(1, 0), (0, 1)} to the dependent set {(1, 1), (1, 1)}. Linear maps preserve dependence but not necessarily independence.