Elementary matrices are the building blocks that make Gaussian elimination work: each row operation corresponds to multiplication by an elementary matrix. They provide the algebraic foundation for row operations, turning Gaussian elimination from an algorithm into pure matrix multiplication.
An elementary matrix is a matrix obtained from the identity matrix by performing exactly one elementary row operation.
The three types correspond to the three types of row operations:

- Type I: $E_{ij}$ swaps rows $i$ and $j$.
- Type II: $E_i(c)$ multiplies row $i$ by $c \neq 0$.
- Type III: $E_{ij}(c)$ adds $c$ times row $j$ to row $i$.

Here are examples of each type for $2 \times 2$ matrices:

$$E_{12} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad E_2(5) = \begin{pmatrix} 1 & 0 \\ 0 & 5 \end{pmatrix}, \qquad E_{21}(3) = \begin{pmatrix} 1 & 0 \\ 3 & 1 \end{pmatrix}$$

$E_{12}$ swaps rows 1 and 2; $E_2(5)$ multiplies row 2 by 5; $E_{21}(3)$ adds 3 times row 1 to row 2.
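As a concrete illustration, here is a minimal NumPy sketch (the helper names `swap`, `scale`, and `add` are my own, not a standard API) that builds each type by applying the corresponding row operation to the identity:

```python
import numpy as np

def swap(n, i, j):
    """Type I: identity with rows i and j swapped (0-indexed)."""
    E = np.eye(n)
    E[[i, j]] = E[[j, i]]
    return E

def scale(n, i, c):
    """Type II: identity with row i multiplied by c != 0."""
    E = np.eye(n)
    E[i, i] = c
    return E

def add(n, i, j, c):
    """Type III: identity with c times row j added to row i."""
    E = np.eye(n)
    E[i, j] = c
    return E

print(swap(2, 0, 1))    # [[0. 1.] [1. 0.]]
print(scale(2, 1, 5))   # [[1. 0.] [0. 5.]]
print(add(2, 1, 0, 3))  # [[1. 0.] [3. 1.]]
```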
Left-multiplying a matrix $A$ by an elementary matrix $E$ performs the corresponding row operation on $A$.

Consider $E_{ij}(c)$, which adds $c$ times row $j$ to row $i$. The product $E_{ij}(c)A$:

$$\text{row}_k\bigl(E_{ij}(c)A\bigr) = \begin{cases} \text{row}_k(A) & k \neq i \\ \text{row}_i(A) + c\,\text{row}_j(A) & k = i \end{cases}$$

So row $k$ of $EA$ is row $k$ of $A$ when $k \neq i$, and row $i$ of $EA$ is $\text{row}_i(A) + c\,\text{row}_j(A)$.
Right-multiplying $A$ by an elementary matrix performs the corresponding column operation: the product $AE$ operates on the columns of $A$.

In $EA$, each row of the result is a linear combination of rows of $A$ (determined by $E$). In $AE$, each column of the result is a linear combination of columns of $A$.
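A quick NumPy check of the row/column contrast, using the Type III example from above (the test matrix $A$ is arbitrary):

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])   # arbitrary test matrix
E = np.array([[1., 0.], [3., 1.]])   # E_21(3): adds 3*row1 to row2

print(E @ A)   # row operation:    [[ 1.  2.] [ 6. 10.]]
print(A @ E)   # column operation: [[ 7.  2.] [15.  4.]]  (col1 += 3*col2)
```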
Every elementary matrix is invertible, and its inverse is also an elementary matrix. Each row operation is reversible:

- $E_{ij}^{-1} = E_{ij}$ (swapping twice restores the original),
- $E_i(c)^{-1} = E_i(1/c)$ (undo scaling by $c$ with scaling by $1/c$),
- $E_{ij}(c)^{-1} = E_{ij}(-c)$ (undo adding $c\,\text{row}_j$ by subtracting it).
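The three inverse rules can be checked numerically; a small sketch with the $2 \times 2$ examples from earlier:

```python
import numpy as np

E_swap  = np.array([[0., 1.], [1., 0.]])   # E_12
E_scale = np.array([[1., 0.], [0., 5.]])   # E_2(5)
E_add   = np.array([[1., 0.], [3., 1.]])   # E_21(3)

I = np.eye(2)
assert np.allclose(E_swap @ E_swap, I)                            # E_12^{-1} = E_12
assert np.allclose(E_scale @ np.array([[1., 0.], [0., 0.2]]), I)  # E_2(5)^{-1} = E_2(1/5)
assert np.allclose(E_add @ np.array([[1., 0.], [-3., 1.]]), I)    # E_21(3)^{-1} = E_21(-3)
```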
Since elementary matrices are invertible, products of elementary matrices are invertible. This is crucial for the next theorem.
A square matrix $A$ is invertible if and only if $A$ is a product of elementary matrices:
(⇐): Products of invertible matrices are invertible.
(⇒): If $A$ is invertible, Gaussian elimination reduces $A$ to $I$:

$$E_k \cdots E_2 E_1 A = I$$

Therefore:

$$A = E_1^{-1} E_2^{-1} \cdots E_k^{-1}$$
Since inverses of elementary matrices are elementary, $A$ is a product of elementary matrices.
If $E_k \cdots E_1 A = I$, then:

$$A^{-1} = E_k \cdots E_1 = E_k \cdots E_1 I$$

This is why row-reducing $[A \mid I]$ to $[I \mid A^{-1}]$ works: the operations that turn $A$ into $I$ simultaneously turn $I$ into $A^{-1}$!
Gaussian elimination is not just an algorithm—it's matrix multiplication by a product of elementary matrices.
If Gaussian elimination transforms $A$ to row echelon form $U$ via row operations $E_1, E_2, \ldots, E_k$, then:

$$E_k \cdots E_2 E_1 A = U$$

Letting $M = E_k \cdots E_1$, we have $MA = U$.
Consider reducing a $2 \times 2$ matrix $A$:

Step 1: Swap rows: $E_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$

Step 2: Subtract 2×row 1 from row 2: $E_2 = \begin{pmatrix} 1 & 0 \\ -2 & 1 \end{pmatrix}$

So $M = E_2 E_1 = \begin{pmatrix} 0 & 1 \\ 1 & -2 \end{pmatrix}$ and $MA = U$.
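To make the example concrete, here is a NumPy verification with a hypothetical $A$ chosen to match the two steps (any matrix whose row swap produces a leading 1 with a 2 below it would do):

```python
import numpy as np

A  = np.array([[2., 6.], [1., 4.]])   # hypothetical matrix (assumption)
E1 = np.array([[0., 1.], [1., 0.]])   # Step 1: swap rows
E2 = np.array([[1., 0.], [-2., 1.]])  # Step 2: row2 -= 2*row1

M = E2 @ E1
U = M @ A
print(U)                              # [[ 1.  4.] [ 0. -2.]] -- upper triangular
assert np.allclose(E2 @ (E1 @ A), U)  # applying the ops one at a time agrees
```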
When Gaussian elimination requires no row swaps, we get a beautiful factorization.
If $A$ can be reduced to row echelon form using only Type III operations (no row swaps), then:

$$A = LU$$

where $L$ is lower triangular and $U$ is upper triangular.
If $E_k \cdots E_1 A = U$ with each $E_i$ of Type III, then:

$$A = E_1^{-1} \cdots E_k^{-1} U = LU$$

where $L = E_1^{-1} \cdots E_k^{-1}$. Since inverses of lower triangular Type III matrices are lower triangular, and products of lower triangular matrices are again lower triangular, $L$ is lower triangular.
To solve $Ax = b$ with $A = LU$: first solve $Ly = b$ by forward substitution, then solve $Ux = y$ by back substitution.

Each step is $O(n^2)$. For multiple right-hand sides, compute $L$ and $U$ once and solve quickly each time.
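A sketch of the two-triangular-solve pattern using SciPy (the matrix and right-hand side are arbitrary; `scipy.linalg.lu` returns any row-swap information as a permutation matrix $P$ with $A = PLU$):

```python
import numpy as np
from scipy.linalg import lu, solve_triangular

A = np.array([[4., 3.], [6., 3.]])   # arbitrary example system
b = np.array([10., 12.])

P, L, U = lu(A)                      # A = P @ L @ U
y = solve_triangular(L, P.T @ b, lower=True)  # forward substitution: L y = P^T b
x = solve_triangular(U, y)                    # back substitution:    U x = y
assert np.allclose(A @ x, b)         # reuse L, U for each new right-hand side
```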
Problem: Express an invertible matrix $A$ as a product of elementary matrices.

Solution: Row reduce $A$ to $I$, recording each operation:

$$E_k \cdots E_2 E_1 A = I$$

So $A^{-1} = E_k \cdots E_2 E_1$.

Therefore:

$$A = E_1^{-1} E_2^{-1} \cdots E_k^{-1}$$
Problem: Find the LU decomposition of a $2 \times 2$ matrix $A$.

Solution: Eliminate the $(2,1)$ entry using $E_{21}(-3)$, so $E_{21}(-3)A = U$.

The multiplier was 3, so:

$$L = E_{21}(-3)^{-1} = E_{21}(3) = \begin{pmatrix} 1 & 0 \\ 3 & 1 \end{pmatrix}, \qquad A = LU$$

Verify: multiplying out $LU$ reproduces $A$. ✓
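The same computation in NumPy, with a hypothetical $A$ whose $(2,1)$ multiplier is 3 (the specific entries are an assumption for illustration):

```python
import numpy as np

A = np.array([[1., 2.], [3., 10.]])     # hypothetical: A[1,0]/A[0,0] == 3

m = A[1, 0] / A[0, 0]                   # multiplier 3
U = np.array([[1., 0.], [-m, 1.]]) @ A  # E_21(-3) @ A: eliminate the (2,1) entry
L = np.array([[1., 0.], [m, 1.]])       # L = E_21(-3)^{-1} = E_21(3)
assert np.allclose(L @ U, A)            # verify A = LU
```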
If operations are $E_1, E_2, \ldots, E_k$ in order, the product is $E_k \cdots E_2 E_1$ (reversed!). The LAST operation is on the LEFT.

Left multiplication = row operations. Right multiplication = column operations. $EA$ changes rows; $AE$ changes columns.

$E_{ij}(c)^{-1} = E_{ij}(-c)$, not $E_{ji}(-c)$. The indices stay the same; only the scalar changes sign.

Swap ($E_{ij}$), Scale ($E_i(c)$), Add ($E_{ij}(c)$)
Elementary matrices are invertible, and inverses are also elementary.
$A$ invertible ⟺ $A$ = product of elementary matrices.

$M = E_k \cdots E_1$ encodes the entire elimination process.
The determinants of elementary matrices are:
Type I (Swap): $E_{ij}$ is obtained from $I$ by swapping two rows. Since swapping rows changes the sign of the determinant, $\det E_{ij} = -1$.

Type II (Scale): $E_i(c)$ multiplies one row by $c$. This multiplies the determinant by $c$, so $\det E_i(c) = c$.

Type III (Add): $E_{ij}(c)$ adds a multiple of one row to another. This doesn't change the determinant, so $\det E_{ij}(c) = 1$.

Since $\det(EA) = \det(E)\det(A)$, we can compute how row operations affect determinants: a swap flips the sign, scaling a row by $c$ scales the determinant by $c$, and a Type III operation leaves it unchanged.
Problem 1
Write down the elementary matrix $E_3(7)$ that multiplies row 3 by 7 for a $3 \times 3$ matrix.
Problem 2
If $B = EA$ for an elementary matrix $E$, express $A$ in terms of $E^{-1}$ and $B$.
Problem 3
Prove that $E_{ij}^{-1} = E_{ij}$ for swap matrices.
Problem 4
Express an invertible $2 \times 2$ matrix of your choice as a product of elementary matrices.
Problem 5
Show that $E_i(c)^{-1} = E_i(1/c)$ for scaling matrices.
The transpose of each elementary matrix type:

- $E_{ij}^T = E_{ij}$ (swap matrices are symmetric),
- $E_i(c)^T = E_i(c)$ (scaling matrices are diagonal, hence symmetric),
- $E_{ij}(c)^T = E_{ji}(c)$ (the indices swap).

This means that if $E$ performs a row operation when left-multiplied, then $E^T$ performs the "same" column operation when right-multiplied. Specifically: $(EA)^T = A^T E^T$.
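A one-line NumPy check of the transpose identity (test matrices arbitrary):

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
E = np.array([[1., 0.], [3., 1.]])   # E_21(3): row op when on the left

assert np.allclose((E @ A).T, A.T @ E.T)
print(A @ E.T)   # E^T on the right: col2 += 3*col1 -> [[ 1.  5.] [ 3. 13.]]
```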
The concept of elementary matrices extends to block matrices, where we perform operations on block rows/columns.
For a block matrix, block elementary operations are: swapping two block rows, multiplying a block row on the left by an invertible matrix, and adding a matrix multiple of one block row to another.

For $M = \begin{pmatrix} A & B \\ C & D \end{pmatrix}$ with $A$ invertible:

$$\begin{pmatrix} I & 0 \\ -CA^{-1} & I \end{pmatrix} \begin{pmatrix} A & B \\ C & D \end{pmatrix} = \begin{pmatrix} A & B \\ 0 & D - CA^{-1}B \end{pmatrix}$$

The matrix $D - CA^{-1}B$ is called the Schur complement of $A$ in $M$.
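A NumPy sketch of block elimination producing the Schur complement (random blocks, with $A$ shifted to guarantee invertibility):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((2, 2)) + 2 * np.eye(2)  # diagonal shift keeps A invertible
B, C, D = rng.random((2, 2)), rng.random((2, 2)), rng.random((2, 2))

# Block Type III operation: subtract C A^{-1} times block row 1 from block row 2.
E = np.block([[np.eye(2), np.zeros((2, 2))],
              [-C @ np.linalg.inv(A), np.eye(2)]])
M = np.block([[A, B], [C, D]])
R = E @ M

assert np.allclose(R[2:, :2], 0)                             # lower-left eliminated
assert np.allclose(R[2:, 2:], D - C @ np.linalg.inv(A) @ B)  # Schur complement
```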
Elementary row/column operations don't change the rank. This is because elementary matrices are invertible:

$$\operatorname{rank}(EA) = \operatorname{rank}(A) = \operatorname{rank}(AE)$$

Using both row and column operations, any matrix can be reduced to:

$$PAQ = \begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix}$$

where $r$ is the rank and $P$, $Q$ are products of elementary matrices.
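Rank invariance is easy to test numerically (a random rectangular matrix with one row and one column operation):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((3, 4))

E = np.eye(3); E[2, 0] = 5.0          # Type III row operation (left multiply)
F = np.eye(4); F[[0, 1]] = F[[1, 0]]  # Type I column operation (right multiply)

r = np.linalg.matrix_rank(A)
assert np.linalg.matrix_rank(E @ A) == r   # row ops preserve rank
assert np.linalg.matrix_rank(A @ F) == r   # column ops preserve rank
```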
Elementary operations on the matrix of a linear map correspond to changing the basis. This is the foundation for canonical forms like Jordan form.
The set of elementary matrices generates $GL_n$ (the group of invertible $n \times n$ matrices): every invertible matrix is a finite product of elementary matrices.
While there are infinitely many elementary matrices, only finitely many are needed to generate any particular invertible matrix. This is the content of the factorization theorem.
Products of swap matrices form the symmetric group $S_n$ (permutation matrices). Every permutation can be written as a product of transpositions.
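For instance, the 3-cycle $(1\,2\,3)$ equals the product of transpositions $(1\,3)(1\,2)$; in matrix form (a small sketch, 0-indexed in the code):

```python
import numpy as np

def swap_matrix(n, i, j):
    """Type I elementary matrix: identity with rows i and j swapped."""
    E = np.eye(n)
    E[[i, j]] = E[[j, i]]
    return E

P = swap_matrix(3, 0, 2) @ swap_matrix(3, 0, 1)   # (0 2)(0 1) = cycle (0 1 2)
print(P @ np.array([10., 20., 30.]))              # [30. 10. 20.]
```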
Before computing, think about what each elementary matrix does to the identity. That same operation happens to any matrix you multiply.
Operations $E_1, E_2, \ldots, E_k$ in that order become $E_k \cdots E_2 E_1$. The LAST operation is LEFTMOST.
Check your understanding with $2 \times 2$ matrices where you can easily verify that $EA$ does the expected row operation.
Gaussian elimination is just multiplication by elementary matrices. This connects theory to computation.
Origins of Gaussian Elimination: While Carl Friedrich Gauss popularized the elimination method in the early 1800s for astronomical calculations, the algorithm was known in ancient China nearly 2000 years earlier.
Matrix Formulation: The idea of expressing row operations as matrix multiplication came with the development of matrix algebra by Cayley and Sylvester in the mid-1800s. This unified algorithm and algebra.
LU Decomposition: Alan Turing and others developed practical algorithms for LU decomposition in the 1940s for early computers. The method remains central to numerical linear algebra today.
Modern Impact: Elementary matrices connect abstract algebra (group generation) with practical computation (Gaussian elimination), showing the deep unity of mathematics.
Problem: Find $A^{-1}$ for an invertible matrix $A$.

Solution: Row reduce $[A \mid I]$ to $[I \mid A^{-1}]$:

Step 1: Swap rows to get a nonzero pivot (using a Type I matrix)

Step 2: Eliminate the other entries in the pivot column (using Type II and Type III matrices)

Step 3: Continue to reach $I$ on the left

The sequence of operations applied to $I$ gives $A^{-1}$.
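A minimal Gauss–Jordan sketch of the $[A \mid I]$ method (assumes $A$ is invertible; `inverse_via_gauss_jordan` is an illustrative helper, not a library routine):

```python
import numpy as np

def inverse_via_gauss_jordan(A):
    """Reduce [A | I] to [I | A^{-1}] using elementary row operations."""
    n = len(A)
    M = np.hstack([A.astype(float), np.eye(n)])
    for k in range(n):
        p = k + np.argmax(np.abs(M[k:, k]))
        M[[k, p]] = M[[p, k]]           # Type I:  swap up a nonzero pivot
        M[k] /= M[k, k]                 # Type II: scale the pivot row to 1
        for i in range(n):
            if i != k:
                M[i] -= M[i, k] * M[k]  # Type III: clear the rest of column k
    return M[:, n:]

A = np.array([[0., 1.], [2., 4.]])
assert np.allclose(inverse_via_gauss_jordan(A) @ A, np.eye(2))
```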
Problem: When row swaps are needed, we get $PA = LU$ instead of $A = LU$.

Setup: If $a_{11} = 0$ in a $2 \times 2$ matrix, we need to swap rows first: let $P = E_{12}$.

Now $PA = LU$ where $L = I$ and $U = PA$ (swapping the rows of a $2 \times 2$ matrix with $a_{11} = 0$ already gives an upper triangular matrix, so no further elimination is needed).
Problem: Verify a claimed factorization of $A$ into elementary matrices.

Solution: Multiply the claimed factors together and compare the product with $A$ entry by entry.

Note: If the product is NOT what was stated, the factorization is wrong—always verify your computations!
Gaussian elimination on $[A \mid b]$ is equivalent to computing $M[A \mid b] = [U \mid Mb]$. This transforms the system $Ax = b$ to $Ux = Mb$, where $U$ is upper triangular.
Row reduce $A$ to upper triangular form $U$, tracking the effect on the determinant:

$$\det A = \frac{\det U}{\det E_1 \det E_2 \cdots \det E_k}$$

For triangular $U$, the determinant is the product of diagonal entries.
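A sketch implementing this determinant algorithm (partial pivoting for stability; `det_by_elimination` is an illustrative name):

```python
import numpy as np

def det_by_elimination(A):
    """det(A) via row reduction: swaps flip the sign, Type III ops change nothing."""
    U = A.astype(float).copy()
    n = len(U)
    sign = 1.0
    for k in range(n):
        p = k + np.argmax(np.abs(U[k:, k]))     # partial pivoting
        if U[p, k] == 0.0:
            return 0.0                          # singular matrix
        if p != k:
            U[[k, p]] = U[[p, k]]               # Type I: det *= -1
            sign = -sign
        for i in range(k + 1, n):
            U[i] -= (U[i, k] / U[k, k]) * U[k]  # Type III: det unchanged
    return sign * float(np.prod(np.diag(U)))

A = np.random.default_rng(2).random((4, 4))
assert np.isclose(det_by_elimination(A), np.linalg.det(A))
```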
Elementary matrices are the building blocks for: matrix inversion via $[A \mid I]$, LU decomposition, determinant computation, and rank normal forms.
| Type | Notation | Inverse | Determinant | Transpose |
|---|---|---|---|---|
| Swap | $E_{ij}$ | $E_{ij}$ | $-1$ | $E_{ij}$ |
| Scale | $E_i(c)$ | $E_i(1/c)$ | $c$ | $E_i(c)$ |
| Add | $E_{ij}(c)$ | $E_{ij}(-c)$ | $1$ | $E_{ji}(c)$ |
What is the minimum number of elementary row operations needed to reduce an $n \times n$ matrix to row echelon form?
For which pairs of elementary matrices $E_1$ and $E_2$ is $E_1 E_2 = E_2 E_1$? Characterize all commuting pairs.
Prove that the Type III elementary matrices generate a subgroup of $GL_n$ (it's actually a normal subgroup!).
Row swap matrices are reflections across certain hyperplanes. In 2D, $E_{12}$ reflects across the line $y = x$.
Scale matrices stretch or compress along one axis by factor $c$. They change volumes by factor $|c|$.
Add matrices are shear transformations. They preserve volumes (determinant = 1) but change angles.
Factor $A = LU$ where $L$ is a product of Type III inverses and $U$ is upper triangular.

Compute $A^{-1}$ by applying elementary operations to reduce $[A \mid I]$ to $[I \mid A^{-1}]$.

Track determinant changes: swap (×−1), scale (×$c$), add (×1). Get $\det A$ from the upper triangular form.
Pivoting strategies (row swaps) improve numerical stability in floating-point computation.
Carl Friedrich Gauss (1777-1855): Developed the systematic elimination method, though he didn't express it in matrix form.
Camille Jordan (1838-1922): Formalized reduction to canonical form and elementary operations in his work on group theory.
Modern Computing: Elementary matrices form the theoretical basis for LAPACK, BLAS, and all numerical linear algebra libraries.
Pivoting Strategies: Partial and complete pivoting (row swaps) were developed in the 1960s to ensure numerical stability.
These algorithms are now implemented in every scientific computing platform.
Elementary matrices are the "atoms" of matrix theory. Every matrix operation can be decomposed into a sequence of these simple building blocks.
Mastering elementary matrices gives you deep insight into all matrix algorithms!
Foundation for: LU, QR, SVD decompositions
Now that you understand elementary matrices, the next topics apply these concepts:
Elementary matrices provide the computational foundation for nearly all matrix algorithms.