MathIsimple
LA-4.4

Elementary Matrices

Elementary matrices are the building blocks that make Gaussian elimination work. Each row operation corresponds to multiplication by an elementary matrix.

Learning Objectives
  • Define the three types of elementary matrices
  • Understand left multiplication as row operations
  • Understand right multiplication as column operations
  • Prove that elementary matrices are invertible
  • Express invertible matrices as products of elementary matrices
  • Use elementary matrices to compute inverses
  • Connect elementary matrices to Gaussian elimination
  • Apply elementary matrices to matrix factorizations
Prerequisites
  • Matrix multiplication (LA-4.2)
  • Matrix inverse (LA-4.3)
  • Gaussian elimination basics
  • Row echelon form

1. Definition of Elementary Matrices

Elementary matrices provide the algebraic foundation for row operations. They transform Gaussian elimination from an algorithm into pure matrix multiplication.

Definition 4.15: Elementary Matrix

An elementary matrix is a matrix obtained from the identity matrix I by performing exactly one elementary row operation.

Definition 4.16: Three Types of Elementary Matrices

The three types correspond to the three types of row operations:

  • Type I (Swap): E_{ij} — Swap rows i and j
  • Type II (Scale): E_i(c) — Multiply row i by a scalar c \neq 0
  • Type III (Add): E_{ij}(c) — Add c times row j to row i (entry (i, j) of E_{ij}(c) is c)
Example 4.12: Elementary Matrices in 3×3

Here are examples of each type for 3 \times 3 matrices:

E_{12} = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad E_2(5) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 1 \end{pmatrix}
E_{21}(3) = \begin{pmatrix} 1 & 0 & 0 \\ 3 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}

E_{12} swaps rows 1 and 2; E_2(5) multiplies row 2 by 5; E_{21}(3) adds 3 times row 1 to row 2.
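A quick computational check of the example, as a sketch in plain Python (no libraries; variable names are illustrative): build the three matrices above and left-multiply a sample A to confirm each row operation.

```python
# Build the three 3x3 elementary matrices from Example 4.12 and confirm
# that left-multiplying a sample A performs the corresponding row operation.
def identity(n):
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

E12 = identity(3)
E12[0], E12[1] = E12[1], E12[0]   # Type I: swap rows 1 and 2
E2_5 = identity(3)
E2_5[1][1] = 5                    # Type II: scale row 2 by 5
E21_3 = identity(3)
E21_3[1][0] = 3                   # Type III: add 3*(row 1) to row 2

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]

print(matmul(E12, A))    # rows 1 and 2 of A swapped
print(matmul(E2_5, A))   # row 2 of A multiplied by 5
print(matmul(E21_3, A))  # row 2 of A replaced by row 2 + 3*row 1
```

Each product matches the result of applying the row operation directly to A.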

2. The Fundamental Property

Theorem 4.28: Elementary Matrices and Row Operations

Left-multiplying a matrix A by an elementary matrix E performs the corresponding row operation on A:

EA = \text{result of applying the row operation to } A
Proof:

Consider E_{ij}(c), which adds c times row j to row i. In the product E_{ij}(c) A:

  • Row k of E_{ij}(c) is e_k if k \neq i
  • Row i of E_{ij}(c) is e_i + c e_j

So row k of E_{ij}(c) A is row k of A when k \neq i, and row i of E_{ij}(c) A is (\text{row } i \text{ of } A) + c \cdot (\text{row } j \text{ of } A).

Corollary 4.9: Right Multiplication for Column Operations

Right-multiplying by an elementary matrix performs the corresponding column operation:

AE = \text{result of applying the column operation to } A
Remark 4.21: Why Left = Row, Right = Column

In EA, each row of the result is a linear combination of the rows of A (determined by E). In AE, each column of the result is a linear combination of the columns of A.
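The contrast can be seen in a small sketch (plain Python; the matrices are illustrative): the same Type III matrix acts on rows when multiplied on the left and on columns when multiplied on the right.

```python
# Left multiplication by E acts on rows; right multiplication acts on columns.
# E is a Type III matrix with entry (2,1) = 10.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

E = [[1, 0],
     [10, 1]]
A = [[1, 2],
     [3, 4]]

print(matmul(E, A))  # EA: row 2 -> row 2 + 10*row 1
print(matmul(A, E))  # AE: column 1 -> column 1 + 10*column 2
```

Note the index flip on the right side: the matrix that adds 10 times row 1 to row 2 (on the left) adds 10 times column 2 to column 1 (on the right).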

3. Inverses of Elementary Matrices

Theorem 4.29: Elementary Matrices are Invertible

Every elementary matrix is invertible, and its inverse is also an elementary matrix:

  • E_{ij}^{-1} = E_{ij} (swapping twice returns to the original)
  • E_i(c)^{-1} = E_i(1/c) (undo scaling by c by scaling by 1/c)
  • E_{ij}(c)^{-1} = E_{ij}(-c) (undo adding c times by subtracting c times)
Proof:

Each row operation is reversible:

  • Swap: E_{ij} E_{ij} = I since swapping twice returns to the original
  • Scale: E_i(c) E_i(1/c) = I since multiplying by c then 1/c gives 1
  • Add: E_{ij}(c) E_{ij}(-c) = I since adding then subtracting cancels
Remark 4.22: The Key Insight

Since elementary matrices are invertible, products of elementary matrices are invertible. This is crucial for the next theorem.
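The three inverse formulas can be checked directly. A sketch using exact Fraction arithmetic (so that 1/c is exact), with one 2x2 instance of each type:

```python
from fractions import Fraction

# Verify E * E^{-1} = I for one instance of each elementary type.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

I2 = [[1, 0], [0, 1]]
c = Fraction(5)

swap = [[0, 1], [1, 0]]            # Type I: E_12
scale = [[1, 0], [0, c]]           # Type II: E_2(5)
scale_inv = [[1, 0], [0, 1 / c]]   # E_2(1/5)
add = [[1, 0], [c, 1]]             # Type III: E_21(5)
add_inv = [[1, 0], [-c, 1]]        # E_21(-5)

print(matmul(swap, swap) == I2)        # True: swap is its own inverse
print(matmul(scale, scale_inv) == I2)  # True
print(matmul(add, add_inv) == I2)      # True
```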

4. Invertible Matrices as Products of Elementary Matrices

Theorem 4.30: Fundamental Factorization Theorem

A square matrix A is invertible if and only if A is a product of elementary matrices:

A \text{ invertible} \iff A = E_1 E_2 \cdots E_k
Proof:

(⇐): Elementary matrices are invertible, and products of invertible matrices are invertible.

(⇒): If A is invertible, Gaussian elimination reduces A to I:

E_k \cdots E_2 E_1 A = I

Therefore:

A = E_1^{-1} E_2^{-1} \cdots E_k^{-1}

Since inverses of elementary matrices are elementary, A is a product of elementary matrices.

Corollary 4.10: Computing the Inverse

If E_k \cdots E_1 A = I, then:

A^{-1} = E_k \cdots E_1

This is why row-reducing [A \mid I] to [I \mid A^{-1}] works!
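This procedure can be sketched in a few lines of plain Python with exact Fraction arithmetic (the helper name `inverse` is illustrative): each loop step applies one of the three elementary operations to the augmented matrix.

```python
from fractions import Fraction

def inverse(A):
    """Invert A by row-reducing the augmented matrix [A | I] with the
    three elementary operations (swap, scale, add)."""
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(1 if i == j else 0) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)  # nonzero pivot
        M[col], M[piv] = M[piv], M[col]              # Type I: swap pivot into place
        M[col] = [x / M[col][col] for x in M[col]]   # Type II: make pivot 1
        for r in range(n):
            if r != col and M[r][col] != 0:          # Type III: clear the column
                M[r] = [a - M[r][col] * b for a, b in zip(M[r], M[col])]
    return [row[n:] for row in M]

inv = inverse([[1, 2], [3, 8]])
print(inv)  # entries 4, -1, -3/2, 1/2
```

For [[1, 2], [3, 8]] the determinant is 2, so the exact inverse has the fractional entries shown.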

5. Gaussian Elimination as Matrix Multiplication

Gaussian elimination is not just an algorithm—it's matrix multiplication by a product of elementary matrices.

Theorem 4.31: Gaussian Elimination as Matrix Product

If Gaussian elimination transforms A to row echelon form U via row operations E_1, E_2, \ldots, E_k, then:

E_k \cdots E_2 E_1 A = U

Letting P = E_k \cdots E_1, we have PA = U.

Example 4.13: Elimination as Matrix Product

Consider reducing A = \begin{pmatrix} 2 & 4 \\ 1 & 3 \end{pmatrix}:

Step 1: Swap rows: E_{12} A = \begin{pmatrix} 1 & 3 \\ 2 & 4 \end{pmatrix}

Step 2: Subtract 2 times row 1 from row 2: E_{21}(-2) E_{12} A = \begin{pmatrix} 1 & 3 \\ 0 & -2 \end{pmatrix}

So P = E_{21}(-2) E_{12} and PA = U.
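The example above can be verified numerically (plain Python sketch; the matrices are written out explicitly, with the last operation leftmost in the product):

```python
# Verify Example 4.13: P A = U where P is the product of the two
# elimination matrices, last operation on the left.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[2, 4],
     [1, 3]]
swap = [[0, 1], [1, 0]]    # step 1: swap rows 1 and 2
elim = [[1, 0], [-2, 1]]   # step 2: row 2 -> row 2 - 2*row 1

P = matmul(elim, swap)     # last operation leftmost
print(matmul(P, A))        # [[1, 3], [0, -2]] = U
```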

6. Introduction to LU Decomposition

When Gaussian elimination requires no row swaps, we get a beautiful factorization.

Theorem 4.32: LU Decomposition

If A can be reduced to row echelon form U using only Type III operations (no row swaps), then:

A = LU

where L is lower triangular and U is upper triangular.

Proof:

If E_k \cdots E_1 A = U with each E_j of Type III, then:

A = E_1^{-1} \cdots E_k^{-1} U = LU

where L = E_1^{-1} \cdots E_k^{-1}. The elimination matrices add multiples of earlier rows to later rows, so each E_j is lower triangular; inverses and products of lower triangular matrices are lower triangular, hence L is lower triangular.

Remark 4.23: Why LU is Useful

To solve Ax = b with A = LU:

  1. Solve Ly = b (forward substitution)
  2. Solve Ux = y (back substitution)

Each step is O(n^2). For multiple right-hand sides, compute the factorization once and solve quickly each time.
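The two triangular solves above can be sketched in a few lines (exact Fraction arithmetic; the particular L, U, and right-hand side b are illustrative, with A = LU = [[2, 1], [6, 4]]):

```python
from fractions import Fraction

# Solve Ax = b via A = LU: first Ly = b (forward), then Ux = y (back).
def forward_sub(L, b):
    y = []
    for i in range(len(L)):
        y.append((b[i] - sum(L[i][j] * y[j] for j in range(i))) / L[i][i])
    return y

def back_sub(U, y):
    n = len(U)
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

L = [[Fraction(1), Fraction(0)], [Fraction(3), Fraction(1)]]
U = [[Fraction(2), Fraction(1)], [Fraction(0), Fraction(1)]]
b = [Fraction(5), Fraction(16)]

y = forward_sub(L, b)  # y = [5, 1]
x = back_sub(U, y)     # x = [2, 1]
print(x)
```

Each solve touches each matrix entry once, which is where the O(n^2) cost per right-hand side comes from.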

7. Worked Examples

Example 1: Express as Product of Elementary Matrices

Problem: Express A = \begin{pmatrix} 1 & 2 \\ 3 & 8 \end{pmatrix} as a product of elementary matrices.

Solution: Row reduce to I:

\begin{pmatrix} 1 & 2 \\ 3 & 8 \end{pmatrix} \xrightarrow{R_2 - 3R_1} \begin{pmatrix} 1 & 2 \\ 0 & 2 \end{pmatrix} \xrightarrow{R_2/2} \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix} \xrightarrow{R_1 - 2R_2} \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}

So E_{12}(-2) E_2(1/2) E_{21}(-3) A = I.

Therefore: A = E_{21}(3) E_2(2) E_{12}(2)
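A direct check of this factorization (plain Python sketch; the three factors written out explicitly): multiplying them in the stated order recovers A.

```python
# Verify A = E_21(3) * E_2(2) * E_12(2) from Example 1.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

E21_3 = [[1, 0], [3, 1]]   # add 3*row 1 to row 2
E2_2 = [[1, 0], [0, 2]]    # scale row 2 by 2
E12_2 = [[1, 2], [0, 1]]   # add 2*row 2 to row 1

product = matmul(E21_3, matmul(E2_2, E12_2))
print(product)  # [[1, 2], [3, 8]] = A
```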

Example 2: Finding LU

Problem: Find the LU decomposition of A = \begin{pmatrix} 2 & 1 \\ 6 & 4 \end{pmatrix}.

Solution: Eliminate using R_2 - 3R_1:

U = \begin{pmatrix} 2 & 1 \\ 0 & 1 \end{pmatrix}

The multiplier was 3, so:

L = \begin{pmatrix} 1 & 0 \\ 3 & 1 \end{pmatrix}

Verify: LU = \begin{pmatrix} 1 & 0 \\ 3 & 1 \end{pmatrix}\begin{pmatrix} 2 & 1 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 2 & 1 \\ 6 & 4 \end{pmatrix} = A

8. Common Mistakes

Mistake 1: Order of Multiplication

If the operations are E_1, E_2, \ldots, E_k in order, the product is E_k \cdots E_2 E_1 (reversed!). The LAST operation is on the LEFT.

Mistake 2: Confusing Left and Right

Left multiplication = row operations; right multiplication = column operations. EA changes rows; AE changes columns.

Mistake 3: Type III Inverse

E_{ij}(c)^{-1} = E_{ij}(-c), not E_{ji}(-c). The indices stay the same; only the scalar changes sign.

9. Key Takeaways

Three Types

Swap (E_{ij}), Scale (E_i(c)), Add (E_{ij}(c))

Always Invertible

Elementary matrices are invertible, and inverses are also elementary.

Fundamental Theorem

A invertible ⟺ A = product of elementary matrices.

Gaussian = Matrix Mult

E_k \cdots E_1 A = U encodes the entire elimination process.

10. Determinants of Elementary Matrices

Theorem 4.33: Determinants of Elementary Matrices

The determinants of elementary matrices are:

  • \det(E_{ij}) = -1 (swap changes sign)
  • \det(E_i(c)) = c (scaling multiplies the determinant by c)
  • \det(E_{ij}(c)) = 1 (adding doesn't change the determinant)
Proof:

Type I (Swap): E_{ij} is obtained from I by swapping two rows. Since swapping rows changes the sign of the determinant, \det(E_{ij}) = -\det(I) = -1.

Type II (Scale): E_i(c) multiplies one row by c. This multiplies the determinant by c, so \det(E_i(c)) = c \cdot \det(I) = c.

Type III (Add): E_{ij}(c) adds a multiple of one row to another. This doesn't change the determinant, so \det(E_{ij}(c)) = \det(I) = 1.

Corollary 4.11: Determinant of Product

Since \det(EA) = \det(E) \cdot \det(A), we can compute how row operations affect determinants:

  • Swapping rows multiplies \det(A) by -1
  • Scaling row i by c multiplies \det(A) by c
  • Adding a multiple of one row to another doesn't change \det(A)
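These rules give a determinant algorithm: row-reduce to upper triangular form while tracking the factors contributed by each operation. A sketch with exact fractions (the helper name `det` is illustrative):

```python
from fractions import Fraction

def det(A):
    """Determinant via row reduction, tracking how each elementary
    operation scales the determinant (swap: -1, Type III add: no change)."""
    n = len(A)
    M = [[Fraction(x) for x in row] for row in A]
    sign = 1
    for col in range(n):
        piv = next((r for r in range(col, n) if M[r][col] != 0), None)
        if piv is None:
            return Fraction(0)        # no pivot in this column: singular
        if piv != col:
            M[col], M[piv] = M[piv], M[col]
            sign = -sign              # each swap flips the sign
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            M[r] = [a - factor * b for a, b in zip(M[r], M[col])]  # Type III
    result = Fraction(sign)
    for i in range(n):
        result *= M[i][i]             # product of the diagonal of U
    return result

print(det([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))  # -3
```

Only swaps change the accumulated factor here, because elimination uses Type III operations (determinant unchanged) and never rescales a row.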

11. Additional Practice Problems

Problem 1

Write down the elementary matrix that multiplies row 3 by 7 for a 4 \times 4 matrix.

Problem 2

If E_1 E_2 A = I, express A^{-1} in terms of E_1 and E_2.

Problem 3

Prove that (E_{ij})^T = E_{ij} for swap matrices.

Problem 4

Express \begin{pmatrix} 2 & 3 \\ 4 & 7 \end{pmatrix} as a product of elementary matrices.

Problem 5

Show that (E_i(c))^T = E_i(c) for scaling matrices.

12. Transpose of Elementary Matrices

Theorem 4.34: Transpose Properties

The transpose of each elementary matrix type:

  • E_{ij}^T = E_{ij} (swap is symmetric)
  • E_i(c)^T = E_i(c) (scale is symmetric)
  • E_{ij}(c)^T = E_{ji}(c) (add transposes the indices)
Remark 4.24: Row vs Column Operations

This means that if E performs a row operation when left-multiplied, then E^T performs the "same" column operation when right-multiplied. Specifically:

  • EA: row operation on A
  • AE^T: corresponding column operation on A

13. Block Elementary Matrices

The concept of elementary matrices extends to block matrices, where we perform operations on block rows/columns.

Definition 4.17: Block Row Operations

For a block matrix, block elementary operations are:

  • Swap two block rows
  • Multiply a block row by an invertible matrix (on the left)
  • Add a matrix multiple of one block row to another
Example 4.14: Block Elimination

For \begin{pmatrix} A & B \\ C & D \end{pmatrix} with A invertible:

\begin{pmatrix} I & 0 \\ -CA^{-1} & I \end{pmatrix}\begin{pmatrix} A & B \\ C & D \end{pmatrix} = \begin{pmatrix} A & B \\ 0 & D - CA^{-1}B \end{pmatrix}

The matrix D - CA^{-1}B is called the Schur complement of A.
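The identity is easy to check in the simplest case of 1x1 blocks, where A, B, C, D are scalars (an illustrative sketch with exact fractions):

```python
from fractions import Fraction

# Check block elimination with 1x1 blocks: A = 2, B = 3, C = 4, D = 5.
# The Schur complement is D - C*A^{-1}*B = 5 - 4*(1/2)*3 = -1.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A, B, C, D = Fraction(2), Fraction(3), Fraction(4), Fraction(5)
M = [[A, B], [C, D]]
E = [[1, 0], [-C / A, 1]]   # the block elementary matrix [[I, 0], [-C A^{-1}, I]]

print(matmul(E, M))  # [[2, 3], [0, -1]]: the (2,2) entry is the Schur complement
```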

14. Connections to Other Topics

Matrix Rank

Elementary row and column operations don't change the rank, because elementary matrices are invertible:

\text{rank}(EA) = \text{rank}(A)
Equivalent Standard Form

Using both row and column operations, any matrix can be reduced to:

PAQ = \begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix}

where r is the rank and P, Q are products of elementary matrices.

Change of Basis

Elementary operations on the matrix of a linear map correspond to changing the basis. This is the foundation for canonical forms like Jordan form.

15. Theoretical Insights

Theorem 4.35: Generation of General Linear Group

The set of n \times n elementary matrices generates GL_n(F) (the group of invertible matrices):

GL_n(F) = \langle E_{ij},\ E_i(c),\ E_{ij}(c) \rangle
Remark 4.25: Finite vs Infinite Generation

While there are infinitely many elementary matrices, only finitely many are needed to generate any particular invertible matrix. This is the content of the factorization theorem.

Theorem 4.36: Permutation Matrices

Products of swap matrices E_{ij} are exactly the permutation matrices, which form a group isomorphic to the symmetric group S_n. Every permutation can be written as a product of transpositions.

16. Study Tips

Visualize the Operation

Before computing, think about what each elementary matrix does to the identity. That same operation happens to any matrix you multiply.

Remember the Order

Operations E_1, E_2, E_3 in that order become E_3 E_2 E_1 A. The LAST operation is LEFTMOST.

Verify with Small Cases

Check your understanding with 2 \times 2 matrices, where you can easily verify that EA performs the expected row operation.

Think Algorithmically

Gaussian elimination is just multiplication by elementary matrices. This connects theory to computation.

17. Historical Notes

Origins of Gaussian Elimination: While Carl Friedrich Gauss popularized the elimination method in the early 1800s for astronomical calculations, the algorithm was known in ancient China nearly 2000 years earlier.

Matrix Formulation: The idea of expressing row operations as matrix multiplication came with the development of matrix algebra by Cayley and Sylvester in the mid-1800s. This unified algorithm and algebra.

LU Decomposition: Alan Turing and others developed practical algorithms for LU decomposition in the 1940s for early computers. The method remains central to numerical linear algebra today.

Modern Impact: Elementary matrices connect abstract algebra (group generation) with practical computation (Gaussian elimination), showing the deep unity of mathematics.

18. More Worked Examples

Example: Finding the Inverse via Elementary Matrices

Problem: Find A^{-1} for A = \begin{pmatrix} 0 & 1 & 2 \\ 1 & 0 & 2 \\ 2 & -1 & -1 \end{pmatrix}.

Solution: Row reduce [A \mid I] to [I \mid A^{-1}]:

\begin{pmatrix} 0 & 1 & 2 & | & 1 & 0 & 0 \\ 1 & 0 & 2 & | & 0 & 1 & 0 \\ 2 & -1 & -1 & | & 0 & 0 & 1 \end{pmatrix}

Step 1: Swap R_1 \leftrightarrow R_2 (using E_{12})

Step 2: R_3 - 2R_1 (using E_{31}(-2))

Step 3: Continue to reach I on the left

The sequence of operations E_k \cdots E_1 applied to I gives A^{-1}.

Example: PLU Decomposition

Problem: When row swaps are needed, we get PA = LU.

Setup: If A = \begin{pmatrix} 0 & 1 \\ 2 & 3 \end{pmatrix}, we need to swap rows first.

P = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad PA = \begin{pmatrix} 2 & 3 \\ 0 & 1 \end{pmatrix}

Now PA = LU where L = I and U = PA (no further elimination is needed).

Example: Product Verification

Problem: Verify that E_{12}(-3) E_{12} = \begin{pmatrix} 1 & -3 \\ 0 & 1 \end{pmatrix}.

Solution:

E_{12}(-3) = \begin{pmatrix} 1 & -3 \\ 0 & 1 \end{pmatrix}, \quad E_{12} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}
E_{12}(-3) E_{12} = \begin{pmatrix} 1 & -3 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} -3 & 1 \\ 1 & 0 \end{pmatrix}

Note: This is NOT what was stated. Always verify your computations!

19. Applications of Elementary Matrices

Solving Linear Systems

Gaussian elimination on Ax = b is equivalent to computing E_k \cdots E_1 A x = E_k \cdots E_1 b. This transforms the system to Ux = c where U is upper triangular.

Computing Determinants

Row reduce to upper triangular form, tracking the effect on the determinant:

\det(A) = \frac{\det(U)}{\det(E_1) \cdots \det(E_k)}

For triangular U, the determinant is the product of the diagonal entries.

Matrix Factorizations

Elementary matrices are the building blocks for:

  • LU: A = LU from elimination without row swaps
  • PLU: PA = LU with a permutation matrix P
  • QR: built from a different kind of elementary transformation (Householder reflections)

20. Quick Reference Summary

Type    Notation    Inverse      Determinant   Transpose
Swap    E_{ij}      E_{ij}       -1            E_{ij}
Scale   E_i(c)      E_i(1/c)     c             E_i(c)
Add     E_{ij}(c)   E_{ij}(-c)   1             E_{ji}(c)

21. Challenge Problems

Challenge 1: Minimum Number of Operations

What is the minimum number of elementary row operations needed to reduce \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix} to row echelon form?

Challenge 2: Commuting Elementary Matrices

For which pairs of elementary matrices E and F is EF = FE? Characterize all commuting pairs.

Challenge 3: Order of Operations

Prove that the subgroup of GL_n(F) generated by the Type III elementary matrices is a normal subgroup (for n \geq 2 it is in fact SL_n(F)).

22. Geometric Interpretation

Type I: Reflections

Row swap matrices E_{ij} are reflections across certain hyperplanes. In 2D, E_{12} reflects across the line y = x.

Type II: Scaling

Scale matrices E_i(c) stretch or compress along one axis by a factor of c. They change volumes by a factor of |c|.

Type III: Shears

Add matrices E_{ij}(c) are shear transformations. They preserve volumes (determinant 1) but change angles.

Key Takeaways

  • Three types: swap (det=-1), scale (det=c), add (det=1)
  • All invertible matrices are products of elementary matrices
  • Row operations = left multiplication by elementary matrices
  • Foundation for LU decomposition and Gaussian elimination
  • Geometric view: reflections, scalings, and shears
  • The inverse of an elementary matrix is elementary, of the same type

Related Topics

Row Reduction
LU Decomposition
Matrix Inverse
Determinants
Gaussian Elimination
Linear Systems
PLU Factorization
Triangular Matrices
QR Decomposition
Householder Reflections

Applications of Elementary Matrices

LU Decomposition

Factor A = LU where L is a product of Type III inverses and U is upper triangular.

Gauss-Jordan Method

Compute A^{-1} by applying elementary operations to reduce [A \mid I] to [I \mid A^{-1}].

Determinant Computation

Track determinant changes: swap (×−1), scale (×c), add (×1). Get det from upper triangular form.

Numerical Stability

Pivoting strategies (row swaps) improve numerical stability in floating-point computation.

Historical Notes

Carl Friedrich Gauss (1777-1855): Developed the systematic elimination method, though he didn't express it in matrix form.

Camille Jordan (1838-1922): Formalized reduction to canonical form and elementary operations in his work on group theory.

Modern Computing: Elementary matrices form the theoretical basis for LAPACK, BLAS, and all numerical linear algebra libraries.

Pivoting Strategies: Partial and complete pivoting (row swaps) were developed in the 1960s to ensure numerical stability.

These algorithms are now implemented in every scientific computing platform.

💡 Remember

Elementary matrices are the "atoms" of matrix theory. Every matrix operation can be decomposed into a sequence of these simple building blocks.

Mastering elementary matrices gives you deep insight into all matrix algorithms!

Foundation for: LU, QR, SVD decompositions

What's Next?

Now that you understand elementary matrices, the next topics apply these concepts:

  • Matrix Rank: The fundamental dimension that determines solvability
  • Determinants: A scalar that detects invertibility
  • Eigenvalues: The next major topic in linear algebra

Elementary matrices provide the computational foundation for nearly all matrix algorithms.

Elementary Matrices Practice

  1. Elementary matrices are always: (Easy)
  2. Left-multiplying by an elementary matrix performs a: (Easy)
  3. The inverse of a row-swap elementary matrix is: (Medium)
  4. If E_i(c) multiplies row i by c \neq 0, then E_i(c)^{-1} = (Medium)
  5. If E_{ij}(c) adds c times row j to row i, then E_{ij}(c)^{-1} = (Medium)
  6. Every invertible matrix is a product of: (Medium)
  7. The determinant of a row-swap elementary matrix is: (Medium)
  8. Right-multiplying by an elementary matrix performs a: (Easy)
  9. Elementary matrices are obtained from I by: (Easy)
  10. If PA = U where U is in row echelon form, then P is: (Hard)