The rank of a matrix is the fundamental dimension that determines solvability of linear systems and invertibility of square matrices.
The rank of a matrix captures the "essential dimension" of the linear transformation it represents.
For an m × n matrix A: the row rank of A is the dimension of its row space (the span of its rows), and the column rank is the dimension of its column space (the span of its columns).
For any matrix A: row rank = column rank. This common value is called rank(A).
The row and column vectors of a matrix are completely different sets of vectors, yet they always have the same rank. This is not at all obvious from the definition!
Step 1: Show column rank ≤ row rank.
Let the row rank be r, and let v_1, ..., v_r be a maximal linearly independent set of rows. Every row a_i of A is a linear combination of these rows: a_i = c_{i1} v_1 + ... + c_{ir} v_r.
Looking at the j-th entry of each row, A_{ij} = c_{i1}(v_1)_j + ... + c_{ir}(v_r)_j, so every column of A is a linear combination of the r vectors (c_{1k}, ..., c_{mk})^T for k = 1, ..., r. Therefore column rank ≤ r = row rank.
Step 2: Apply the same argument to A^T.
Column rank of A^T ≤ row rank of A^T, which means row rank of A ≤ column rank of A.
Together: row rank = column rank.
Since the transpose swaps rows and columns, rank(A^T) = rank(A) follows immediately from row rank = column rank.
The rank of a matrix equals the number of pivot columns in its row echelon form.
Row operations preserve row rank (they don't change the row space). In row echelon form, the non-zero rows are linearly independent, so row rank = number of non-zero rows = number of pivots.
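To make the pivot-counting procedure concrete, here is a minimal Python/NumPy sketch, not part of the original development: the function name rank_by_elimination and the tolerance are illustrative choices. It row reduces with partial pivoting and returns the number of pivots.

```python
import numpy as np

def rank_by_elimination(A, tol=1e-10):
    """Count pivots in the row echelon form of A (Gaussian elimination)."""
    M = np.array(A, dtype=float)
    m, n = M.shape
    pivot_row = 0
    for col in range(n):
        if pivot_row >= m:
            break
        # Partial pivoting: pick the largest entry in this column.
        p = pivot_row + np.argmax(np.abs(M[pivot_row:, col]))
        if abs(M[p, col]) < tol:
            continue  # no pivot in this column
        M[[pivot_row, p]] = M[[p, pivot_row]]            # swap rows
        M[pivot_row] = M[pivot_row] / M[pivot_row, col]  # normalize pivot row
        # Eliminate the entries below the pivot.
        for r in range(pivot_row + 1, m):
            M[r] -= M[r, col] * M[pivot_row]
        pivot_row += 1
    return pivot_row  # number of pivots = rank

print(rank_by_elimination([[1, 2], [2, 4], [3, 6]]))  # 1: all rows are multiples
```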
Find the rank of A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]].
Solution: Row reduce: subtracting 4·(row 1) from row 2 and 7·(row 1) from row 3 gives (0, -3, -6) and (0, -6, -12); then row 3 - 2·(row 2) is zero.
There are 2 pivots, so rank(A) = 2.
Find the rank of A = [[1, 1, 2], [2, 2, 4], [1, 0, 1], [3, 1, 4]].
Solution: Row reduce: row 2 - 2·(row 1) is zero; row 3 - row 1 gives (0, -1, -1); row 4 - 3·(row 1) gives (0, -2, -2), which is cleared by subtracting twice the previous row.
There are 2 pivots, so rank(A) = 2. Note that despite having 4 rows and 3 columns, the rank is only 2.
Show that A = [[1, 0, 2], [0, 1, 1], [1, 1, 0]] has full rank.
Solution: Row reduce: row 3 - row 1 gives (0, 1, -2); then row 3 - row 2 gives (0, 0, -3).
3 pivots in a 3×3 matrix means rank(A) = 3 (full rank), so A is invertible.
Besides counting pivots, rank can be computed by finding the largest non-zero minor, counting the non-zero singular values in the SVD, or computing the dimension of the column space; see the sketch below.
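As a quick illustration of the SVD route, NumPy's np.linalg.matrix_rank computes rank from the singular values; the matrix here is the rank-2 matrix from the example above.

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])

# Method 1: library routine (SVD under the hood).
print(np.linalg.matrix_rank(A))        # 2

# Method 2: count singular values above a relative tolerance.
s = np.linalg.svd(A, compute_uv=False)
print(np.sum(s > 1e-10 * s[0]))        # 2
```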
For a square n × n matrix A, the following are equivalent: (1) A is invertible; (2) rank(A) = n; (3) the rows (equivalently, the columns) of A are linearly independent; (4) ker(A) = {0}.
(1) ⇒ (2): Invertible means bijective as a linear map, so dim(im) = n.
(2) ⇒ (3): Rank n means row/column rank is n, so rows/columns are independent.
(3) ⇒ (4): Independent columns means ker = {0}.
(4) ⇒ (1): Trivial kernel means injective, and for square matrices, injective implies surjective.
For the system Ax = b: the system is solvable iff b lies in the column space of A, i.e. iff rank(A) = rank([A | b]).
For an m × n matrix A: if the system is consistent, the solution set is an affine subspace of dimension n - rank(A); the solution is unique iff rank(A) = n.
Analyze the system Ax = b where the m × n matrix A has rank 2: if rank([A | b]) = 2, the system is consistent and its solutions form an affine subspace of dimension n - 2; if rank([A | b]) > 2, there is no solution.
For an m × n matrix A: a left inverse exists iff rank(A) = n (full column rank), and a right inverse exists iff rank(A) = m (full row rank).
For left inverse: If rank(A) = n, then A^T A is n × n with rank(A^T A) = n (invertible).
Define A^+ = (A^T A)^{-1} A^T. Then A^+ A = I_n.
For a full column rank matrix A, the left inverse is: A^{-1}_left = (A^T A)^{-1} A^T.
This gives the least squares solution to Ax = b: x̂ = (A^T A)^{-1} A^T b.
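A minimal NumPy sketch of the left inverse and the least squares formula; the matrix A, the vector b, and the name A_left are illustrative choices, not from the text.

```python
import numpy as np

# A tall matrix with full column rank (rank 2).
A = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])
b = np.array([1., 2., 4.])

# Left inverse (A^T A)^{-1} A^T, valid because rank(A) = 2.
A_left = np.linalg.inv(A.T @ A) @ A.T
print(A_left @ A)                            # 2x2 identity (up to rounding)

# Least squares solution x_hat = A^+ b.
x_hat = A_left @ b
print(x_hat)
print(np.linalg.lstsq(A, b, rcond=None)[0])  # same answer via the library
```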
For an m × n matrix A: rank(A) ≤ min(m, n).
For matrices A and B where AB is defined: rank(AB) ≤ min(rank(A), rank(B)).
For A (m × n) and B (n × p), Sylvester's inequality gives a lower bound: rank(A) + rank(B) - n ≤ rank(AB).
Consider the sequence of linear maps: ℝ^p → ℝ^n → ℝ^m, where B maps ℝ^p into ℝ^n and A maps ℝ^n into ℝ^m.
We have im(AB) = A(im B), so rank(AB) = dim(im B) - dim(ker A ∩ im B).
Also, dim(ker A ∩ im B) ≤ dim(ker A) = n - rank(A).
Since im B has dimension rank(B), we get rank(AB) ≥ rank(B) - (n - rank(A)) = rank(A) + rank(B) - n, which is the inequality.
If A is 3 × 4 with rank 3 and B is 4 × 5 with rank 4, find bounds on rank(AB).
Solution: Upper bound: rank(AB) ≤ min(3, 4) = 3. Lower bound (Sylvester, with inner dimension n = 4): rank(AB) ≥ 3 + 4 - 4 = 3.
Therefore rank(AB) = 3 exactly.
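The bounds are easy to check numerically; a small sketch (the random seed and dimensions are illustrative) that verifies both inequalities:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random A (3x4) and B (4x5): full rank with probability 1.
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 5))

rA = np.linalg.matrix_rank(A)    # 3
rB = np.linalg.matrix_rank(B)    # 4
rAB = np.linalg.matrix_rank(A @ B)
n = A.shape[1]                   # inner dimension, 4

# Check rank(A) + rank(B) - n <= rank(AB) <= min(rank(A), rank(B)).
assert rA + rB - n <= rAB <= min(rA, rB)
print(rA, rB, rAB)               # 3 4 3
```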
For matrices A and B of the same size, where A + B is defined: rank(A + B) ≤ rank(A) + rank(B).
If P is invertible, then rank(PA) = rank(A); likewise rank(AQ) = rank(A) for invertible Q.
By Sylvester: rank(PA) ≥ rank(P) + rank(A) - n.
Since P is invertible, rank(P) = n, so rank(PA) ≥ rank(A); combined with rank(PA) ≤ rank(A), this gives equality.
Every m × n matrix A of rank r can be transformed to the block form: PAQ = [[I_r, 0], [0, 0]],
where P and Q are products of elementary matrices.
Use row and column operations (elementary matrices) to create pivots and clear all other entries. The result has I_r in the upper-left and zeros elsewhere.
Matrices A and B are equivalent if B = PAQ for invertible P, Q. Two matrices of the same size are equivalent iff they have the same rank.
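The classical construction uses elementary matrices; as a numerical illustration, invertible P and Q with PAQ in the normal form can also be built from the SVD, a substitution made here for convenience. A sketch, with rank_normal_form an illustrative name:

```python
import numpy as np

def rank_normal_form(A, tol=1e-10):
    """Return invertible P, Q with P @ A @ Q = [[I_r, 0], [0, 0]]."""
    U, s, Vt = np.linalg.svd(A)
    r = int(np.sum(s > tol * s[0]))
    # Rescale the first r rows of U^T so the nonzero block becomes I_r.
    d = np.ones(A.shape[0])
    d[:r] = 1.0 / s[:r]
    P = np.diag(d) @ U.T   # invertible: diagonal times orthogonal
    Q = Vt.T               # orthogonal, hence invertible
    return P, Q, r

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])
P, Q, r = rank_normal_form(A)
print(r)                        # 2
print(np.round(P @ A @ Q, 10))  # I_2 block in the upper-left, zeros elsewhere
```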
Problem: For which values of k does the system x + 2y = 1, 3x + ky = 2 have a unique solution?
Solution: Unique solution requires rank = 2 (full rank). The coefficient matrix [[1, 2], [3, k]] has determinant k - 6, which is non-zero exactly when k ≠ 6.
So unique solution for all k ≠ 6.
Problem: If the 4 × 4 matrix A has rank 3 and the 4 × 4 matrix B has rank 2, what are bounds on rank(AB)?
Solution: Upper bound: rank(AB) ≤ min(3, 2) = 2. Lower bound (Sylvester, n = 4): rank(AB) ≥ 3 + 2 - 4 = 1.
So 1 ≤ rank(AB) ≤ 2.
Row operations preserve row space but can change column space. However, they preserve column RANK.
rank(AB) = min(rank(A), rank(B)) does not hold in general. The inequality is rank(AB) ≤ min(rank(A), rank(B)), and it can be strict: for A = [[1, 0], [0, 0]] and B = [[0, 0], [0, 1]], both factors have rank 1 but AB = 0.
Ax = b solvable requires rank(A) = rank([A | b]), not just that A has high rank.
Row rank = column rank for ANY matrix.
Rank = number of pivots in row echelon form.
Square matrix invertible ⟺ full rank.
Every rank-r matrix is equivalent to [[I_r, 0], [0, 0]].
Problem 1
Find the rank of [[1, 2, 1], [2, 4, 3], [3, 6, 4]].
Problem 2
If rank(A) = 3 and A is 4 × 5, what is dim(ker A)?
Problem 3
Prove that rank(A + B) ≤ rank(A) + rank(B).
Problem 4
For which values of t is rank([[1, 2], [2, t]]) = 1?
Problem 5
If A^2 = A, prove that rank(A) + rank(I - A) = n, where A is n × n.
The rank of a matrix equals the dimension of the image of the linear map T_A: ℝ^n → ℝ^m, T_A(x) = Ax: rank(A) = dim(im T_A).
For square matrices, rank is related to the determinant: rank(A) = n iff det(A) ≠ 0.
The rank relates to eigenvalues: an n × n matrix A has rank n iff 0 is NOT an eigenvalue. Equivalently: rank(A) < n iff 0 is an eigenvalue of A.
The fastest way to find rank: row reduce and count pivots (leading non-zero entries in REF).
Sometimes it's easier to count independent rows, sometimes columns. Use whichever is easier.
Remember rank ≤ min(m,n). If you need rank 5 but the matrix is 3×7, it's at most 3.
Rank = dim(image). This connects to rank-nullity: n = rank + nullity.
Ferdinand Georg Frobenius: Developed much of the theory of matrix rank in the late 19th century, including the rank-nullity theorem for matrices.
James Joseph Sylvester: Proved the inequality that bears his name: rank(A) + rank(B) - n ≤ rank(AB). This connects products to their factors.
Row Rank = Column Rank: This surprising theorem was established as matrices became understood as representations of linear maps, not just arrays of numbers.
Modern Applications: Matrix rank is fundamental in data science (PCA uses low-rank approximations), control theory, and signal processing.
| Property | Formula |
|---|---|
| Basic bound | rank(A) ≤ min(m, n) |
| Transpose | rank(A^T) = rank(A) |
| Product upper bound | rank(AB) ≤ min(rank(A), rank(B)) |
| Sylvester's inequality | rank(A) + rank(B) - n ≤ rank(AB) |
| Sum inequality | rank(A + B) ≤ rank(A) + rank(B) |
| Invertibility | A invertible ⟺ rank(A) = n |
SVD approximates a matrix with lower rank, compressing images and data while preserving essential information.
The rank of controllability and observability matrices determines whether a system can be controlled or observed.
Rank determines the existence and uniqueness of least squares solutions. Full column rank guarantees uniqueness.
Matrix factorization techniques (Netflix, Amazon) assume user-item matrices are approximately low-rank.
Rank constraints appear in epipolar geometry, structure from motion, and background subtraction.
The rank of incidence and adjacency matrices reveals connectivity properties of graphs.
For an m × n matrix A:
The rank of equals the size of the largest non-zero minor (subdeterminant).
rank(A) = r iff there exists a non-zero r × r minor, but all (r+1) × (r+1) minors are zero.
For A = [[1, 2], [2, 4]]: the only 2 × 2 minor is det(A) = 0, but the 1 × 1 minor |1| ≠ 0, so rank(A) = 1.
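A brute-force sketch of the minor criterion (exponentially slow, so only for tiny matrices; the function name rank_by_minors is an illustrative choice):

```python
import numpy as np
from itertools import combinations

def rank_by_minors(A, tol=1e-10):
    """Rank = size of the largest square submatrix with non-zero determinant."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    # Try minor sizes from largest to smallest.
    for r in range(min(m, n), 0, -1):
        for rows in combinations(range(m), r):
            for cols in combinations(range(n), r):
                if abs(np.linalg.det(A[np.ix_(rows, cols)])) > tol:
                    return r
    return 0

print(rank_by_minors([[1, 2], [2, 4]]))  # 1: det = 0, but a 1x1 minor is non-zero
```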
For any real matrix A: rank(A^T A) = rank(A) = rank(A A^T).
Show ker(A^T A) = ker(A): if A^T A x = 0, then x^T A^T A x = ||Ax||^2 = 0, so Ax = 0; conversely, Ax = 0 clearly implies A^T A x = 0.
So nullity(A^T A) = nullity(A), and by rank-nullity, rank(A^T A) = rank(A).
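This identity is easy to check numerically; a small sketch with an illustrative random matrix of rank 3:

```python
import numpy as np

rng = np.random.default_rng(1)
# Product of 5x3 and 3x7 factors: rank 3 with probability 1.
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 7))

print(np.linalg.matrix_rank(A))        # 3
print(np.linalg.matrix_rank(A.T @ A))  # 3
print(np.linalg.matrix_rank(A @ A.T))  # 3
```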
In numerical computations, the "numerical rank" may differ from theoretical rank due to floating-point errors. Small singular values are often treated as zero based on a tolerance threshold.
If A^2 = A (idempotent), then rank(A) = tr(A).
Eigenvalues of A are 0 or 1 only, and A is diagonalizable. Say A has k eigenvalues equal to 1.
Then rank(A) = k and tr(A) = k.
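A quick numerical check using an orthogonal projection matrix, which is idempotent by construction (the random seed and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((5, 2))  # full column rank with probability 1

# Orthogonal projection onto the column space of B: P^2 = P.
P = B @ np.linalg.inv(B.T @ B) @ B.T
assert np.allclose(P @ P, P)

print(np.linalg.matrix_rank(P))  # 2
print(round(np.trace(P), 10))    # 2.0, matching rank(P) = tr(P)
```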
| Method | Complexity | Notes |
|---|---|---|
| Gaussian elimination | O(mn·min(m, n)) | Standard method |
| SVD | O(mn²) for m ≥ n | More numerically stable |
| QR decomposition | O(mn²) | Good stability |
| Rank-revealing LU | O(mnr) | r = rank, efficient for low rank |
When computing rank numerically: compute the SVD and treat singular values below a tolerance (typically a small multiple of the largest singular value times machine epsilon) as zero.
Consider A = [[1, 2], [1, 2.00003]].
Theoretically rank = 2, but numerically the rows are nearly dependent.
SVD gives singular values ≈ 3.16 and ≈ 0.00001. With tolerance 10^{-4}, numerical rank = 1.
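The same computation in NumPy; the tol argument of np.linalg.matrix_rank makes the threshold explicit:

```python
import numpy as np

A = np.array([[1., 2.],
              [1., 2.00003]])  # theoretical rank 2, nearly rank 1

s = np.linalg.svd(A, compute_uv=False)
print(s)                                  # approx [3.16, 1e-5]

tol = 1e-4
print(np.sum(s > tol))                    # numerical rank 1
print(np.linalg.matrix_rank(A))           # 2: default tolerance is much smaller
print(np.linalg.matrix_rank(A, tol=tol))  # 1
```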
The Eckart-Young theorem states that the best rank-k approximation to A (in Frobenius or spectral norm) is obtained by keeping only the top k singular values in the SVD: A_k = σ_1 u_1 v_1^T + ... + σ_k u_k v_k^T.
This is fundamental in data compression, noise reduction, and dimensionality reduction.
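A minimal sketch of truncated-SVD approximation, assuming an illustrative random matrix; the Frobenius error matches the discarded singular values, as Eckart-Young predicts.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((8, 6))

# Best rank-k approximation: truncate the SVD.
k = 2
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

print(np.linalg.matrix_rank(A_k))   # 2
# Frobenius error equals sqrt(sum of squared discarded singular values).
print(np.linalg.norm(A - A_k, 'fro'))
print(np.sqrt(np.sum(s[k:] ** 2)))  # same value
```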
A set of k vectors in ℝ^n is linearly independent if and only if the n × k matrix with these vectors as columns has rank k.
The maximum number of linearly independent vectors that can be chosen from the rows (or columns) of a matrix A equals rank(A).
Matrix rank can be understood from several equivalent perspectives: the dimension of the row space, the dimension of the column space, the dimension of the image, the number of pivots in row echelon form, the size of the largest non-zero minor, and the number of non-zero singular values.
Matrix rank is the unifying concept connecting row space, column space, solvability, and invertibility.
Rank determines: solvability of systems, invertibility, and dimension of null space.
rank(A) + nullity(A) = number of columns
Now that you understand matrix rank, you're ready for the next major topics: