LA-4.5

Matrix Rank

The rank of a matrix is the fundamental invariant that determines the solvability of linear systems and the invertibility of square matrices.

Learning Objectives
  • Define the rank of a matrix
  • Prove that row rank equals column rank
  • Compute rank using row reduction
  • Understand rank and solvability of linear systems
  • Apply rank to determine invertibility
  • Master rank inequalities
  • Connect rank to dimension of image and kernel
  • Understand the equivalent standard form
Prerequisites
  • Linear independence and span
  • Dimension of vector spaces
  • Elementary matrices (LA-4.4)
  • Gaussian elimination

1. Definition of Matrix Rank

The rank of a matrix captures the "essential dimension" of the linear transformation it represents.

Definition 4.18: Three Equivalent Definitions of Rank

For an m×n matrix A:

  • Rank: r(A) = \dim(\text{im } A) (dimension of the image as a linear map)
  • Row rank: Dimension of the span of row vectors
  • Column rank: Dimension of the span of column vectors
Theorem 4.37: Fundamental Rank Theorem

For any matrix A:

\text{rank}(A) = \text{row rank}(A) = \text{column rank}(A)
Remark 4.26: Why This is Remarkable

The row vectors and column vectors of a matrix are completely different sets of vectors, living in different spaces (F^n versus F^m), yet their spans always have the same dimension. This is not at all obvious from the definition!

2. Proof of the Rank Theorem

Proof:

Step 1: Show column rank ≤ row rank.

Let the row rank be r, and let α_1, …, α_r be a maximal linearly independent set of rows. Every row is a linear combination of these r rows:

\text{row}_i = \sum_{k=1}^{r} c_{ik} \alpha_k

Reading off the j-th entry of each row shows that the j-th column of A equals \sum_{k=1}^{r} (\alpha_k)_j \, (c_{1k}, \ldots, c_{mk})^T, so every column lies in the span of the r vectors (c_{1k}, \ldots, c_{mk})^T. Therefore column rank ≤ r = row rank.

Step 2: Apply the same argument to A^T.

Column rank of A^T ≤ row rank of A^T, which means row rank of A ≤ column rank of A.

Together: row rank = column rank.

Corollary 4.12: Transpose Preserves Rank
\text{rank}(A^T) = \text{rank}(A)

This follows because transposing swaps rows and columns, and row rank equals column rank.

3. Computing Rank

Theorem 4.38: Rank = Number of Pivots

The rank of a matrix equals the number of pivot columns in its row echelon form.

\text{rank}(A) = \text{number of pivots in the REF of } A
Proof:

Row operations preserve row rank (they don't change the row space). In row echelon form, the non-zero rows are linearly independent, so row rank = number of non-zero rows = number of pivots.

Example 4.15: Finding Rank

Find the rank of A = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \\ 1 & 3 & 4 \end{pmatrix}.

Solution: Row reduce:

\begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \\ 1 & 3 & 4 \end{pmatrix} \xrightarrow{R_2 - 2R_1, R_3 - R_1} \begin{pmatrix} 1 & 2 & 3 \\ 0 & 0 & 0 \\ 0 & 1 & 1 \end{pmatrix} \xrightarrow{R_2 \leftrightarrow R_3} \begin{pmatrix} 1 & 2 & 3 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{pmatrix}

There are 2 pivots, so rank(A) = 2.

Example 4.15b: Rank of a 4×3 Matrix

Find the rank of B = \begin{pmatrix} 1 & 0 & 2 \\ 0 & 1 & 1 \\ 2 & 1 & 5 \\ 1 & 1 & 3 \end{pmatrix}.

Solution: Row reduce:

\begin{pmatrix} 1 & 0 & 2 \\ 0 & 1 & 1 \\ 2 & 1 & 5 \\ 1 & 1 & 3 \end{pmatrix} \xrightarrow{R_3 - 2R_1, R_4 - R_1} \begin{pmatrix} 1 & 0 & 2 \\ 0 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 1 & 1 \end{pmatrix} \xrightarrow{R_3 - R_2, R_4 - R_2} \begin{pmatrix} 1 & 0 & 2 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}

There are 2 pivots, so rank(B) = 2. Note that despite having 4 rows and 3 columns, the rank is only 2.

Example 4.15c: Full Rank Matrix

Show that C = \begin{pmatrix} 1 & 2 & 0 \\ 0 & 1 & 3 \\ 1 & 0 & 2 \end{pmatrix} has full rank.

Solution: Row reduce:

\begin{pmatrix} 1 & 2 & 0 \\ 0 & 1 & 3 \\ 1 & 0 & 2 \end{pmatrix} \xrightarrow{R_3 - R_1} \begin{pmatrix} 1 & 2 & 0 \\ 0 & 1 & 3 \\ 0 & -2 & 2 \end{pmatrix} \xrightarrow{R_3 + 2R_2} \begin{pmatrix} 1 & 2 & 0 \\ 0 & 1 & 3 \\ 0 & 0 & 8 \end{pmatrix}

3 pivots in a 3×3 matrix means rank(C) = 3 (full rank), so C is invertible.
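
The pivot counts in these examples can be cross-checked in code. A minimal sketch, assuming NumPy is available; np.linalg.matrix_rank computes rank from singular values rather than pivots, but it returns the same answer:

```python
import numpy as np

A = np.array([[1, 2, 3], [2, 4, 6], [1, 3, 4]])
B = np.array([[1, 0, 2], [0, 1, 1], [2, 1, 5], [1, 1, 3]])
C = np.array([[1, 2, 0], [0, 1, 3], [1, 0, 2]])

# matrix_rank counts singular values above a tolerance; it agrees with
# the pivot counts found by hand: rank(A) = 2, rank(B) = 2, rank(C) = 3.
for name, M in [("A", A), ("B", B), ("C", C)]:
    print(name, np.linalg.matrix_rank(M))
```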

Remark 4.27: Alternative Rank Computations

Besides counting pivots, rank can be computed by:

  • Finding the size of the largest non-zero minor (subdeterminant)
  • Finding a maximal linearly independent set of rows/columns
  • Using SVD: rank = number of non-zero singular values

4. Rank and Invertibility

Theorem 4.39: Invertibility Criterion

For a square matrix A ∈ M_n(F), the following are equivalent:

  1. A is invertible
  2. rank(A) = n
  3. The n rows (columns) are linearly independent
  4. Ax = 0 has only the trivial solution
Proof:

(1) ⇒ (2): Invertible means bijective as a linear map, so dim(im) = n.

(2) ⇒ (3): Rank n means row/column rank is n, so rows/columns are independent.

(3) ⇒ (4): Independent columns means ker = {0}.

(4) ⇒ (1): A trivial kernel means the map is injective, and for a square matrix, rank-nullity forces an injective map to be surjective as well, hence bijective and invertible.

5. Rank and Linear Systems

Theorem 4.40: Solvability via Rank

For the system Ax = b:

  • Has a solution iff rank(A) = rank([A|b])
  • Has a unique solution iff additionally rank(A) = n (the number of unknowns)
  • Has infinitely many solutions iff the system is consistent and rank(A) < n (see the check below)
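
Here is the check referenced above: a small sketch assuming NumPy, with a hypothetical helper classify_system (not a library function) that compares rank(A), rank([A|b]), and the number of unknowns:

```python
import numpy as np

def classify_system(A, b):
    """Classify Ax = b via rank(A), rank([A|b]), and the number of unknowns n."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    n = A.shape[1]
    rank_A = np.linalg.matrix_rank(A)
    rank_Ab = np.linalg.matrix_rank(np.hstack([A, b]))
    if rank_A < rank_Ab:
        return "inconsistent"                     # rank(A) != rank([A|b])
    return "unique solution" if rank_A == n else "infinitely many solutions"

A = [[1, 2], [2, 4]]                              # rank 1, two unknowns
print(classify_system(A, [1, 2]))                 # infinitely many solutions
print(classify_system(A, [1, 3]))                 # inconsistent
print(classify_system([[1, 2], [3, 4]], [1, 3]))  # unique solution
```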
Theorem 4.41: Full Row/Column Rank

For an m×n matrix A:

  • Full column rank (rank = n): A is injective, ker(A) = {0}, solutions are unique if they exist
  • Full row rank (rank = m): A is surjective, so Ax = b is always solvable
Example 4.17: Linear System Analysis

Analyze the system Ax = b where A ∈ M_{2×4} has rank 2:

  • Full row rank: Since rank = 2 = m, the system is always consistent (the map is surjective)
  • Nullity: nullity = 4 - 2 = 2 (two free parameters)
  • Solution space: If x_0 is a particular solution, the general solution is x_0 + ker(A)
Theorem 4.41b: Left/Right Inverse Existence

For an m×n matrix A:

  • A has a left inverse iff A has full column rank (m ≥ n, rank = n)
  • A has a right inverse iff A has full row rank (n ≥ m, rank = m)
  • A has both iff A is square and invertible
Proof:

For the left inverse (working over the reals): if rank(A) = n, then A^T A is an n×n matrix with rank(A^T A) = rank(A) = n (see Theorem 4.49 below), hence invertible.

Define L = (A^T A)^{-1} A^T. Then LA = (A^T A)^{-1} A^T A = I_n. The right-inverse case is symmetric: apply the same argument to A^T.

Example 4.17b: Moore-Penrose Pseudoinverse

For a full column rank matrix A, the left inverse above is exactly the Moore-Penrose pseudoinverse:

A^+ = (A^T A)^{-1} A^T

This gives the least squares solution to Ax = b: x = A^+ b.
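
A minimal sketch of this formula, assuming NumPy; the 3×2 matrix and right-hand side are made-up data for illustration, and the result is compared against NumPy's built-in pseudoinverse and least-squares routine:

```python
import numpy as np

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # full column rank (rank 2)
b = np.array([1.0, 2.0, 2.5])

# Left inverse / pseudoinverse for full column rank: (A^T A)^{-1} A^T.
A_plus = np.linalg.inv(A.T @ A) @ A.T
x_ls = A_plus @ b

print(x_ls)                                   # least squares solution
print(np.linalg.pinv(A) @ b)                  # same values via built-in pinv
print(np.linalg.lstsq(A, b, rcond=None)[0])   # same values via lstsq
print(np.allclose(A_plus @ A, np.eye(2)))     # True: A_plus is a left inverse
```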

6. Rank Inequalities

Theorem 4.42: Basic Rank Bounds

For an m×n matrix A:

0 \leq \text{rank}(A) \leq \min(m, n)
Theorem 4.43: Product Rank Inequality

For matrices A and B where the product AB is defined:

\text{rank}(AB) \leq \min(\text{rank}(A), \text{rank}(B))
Theorem 4.44: Sum Rank Inequality
\text{rank}(A + B) \leq \text{rank}(A) + \text{rank}(B)
Theorem 4.45: Sylvester's Inequality

For A ∈ M_{m×n} and B ∈ M_{n×p}:

\text{rank}(A) + \text{rank}(B) - n \leq \text{rank}(AB)
Proof:

Consider the sequence of linear maps:

Consider the sequence of linear maps:

F^p \xrightarrow{B} F^n \xrightarrow{A} F^m

Since im(AB) ⊆ im(A), we immediately recover rank(AB) ≤ rank(A).

For the lower bound, restrict A to the subspace im(B): its image is im(AB) and its kernel is ker(A) ∩ im(B), so rank-nullity gives dim(ker(A) ∩ im(B)) = rank(B) - rank(AB).

Since ker(A) ∩ im(B) ⊆ ker(A) and dim ker(A) = n - rank(A), we get rank(B) - rank(AB) ≤ n - rank(A), which rearranges to the stated inequality.

Example 4.16b: Applying Sylvester's Inequality

If A ∈ M_{3×4} has rank 3 and B ∈ M_{4×5} has rank 4, find bounds on rank(AB).

Solution:

  • Upper bound: rank(AB) ≤ min(3, 4) = 3
  • Sylvester: rank(AB) ≥ 3 + 4 - 4 = 3

Therefore rank(AB) = 3 exactly.
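
A quick numerical sanity check of this example, assuming NumPy. Random Gaussian matrices of these shapes have full rank with probability 1, so the two bounds pin rank(AB) to exactly 3:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))   # rank 3 almost surely (full row rank)
B = rng.standard_normal((4, 5))   # rank 4 almost surely (full row rank)

rA, rB = np.linalg.matrix_rank(A), np.linalg.matrix_rank(B)
rAB = np.linalg.matrix_rank(A @ B)

print(rA, rB, rAB)                          # 3 4 3
print(rA + rB - 4 <= rAB <= min(rA, rB))    # Sylvester and product bound: True
```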

Theorem 4.45b: Frobenius Inequality

For matrices A, B, C where the product ABC is defined:

\text{rank}(AB) + \text{rank}(BC) \leq \text{rank}(B) + \text{rank}(ABC)
Corollary 4.13: Rank Equality under Full Rank

If P is invertible, then rank(PA) = rank(A) = rank(AQ) for invertible Q.

Proof:

By Sylvester's inequality and the product bound: rank(P) + rank(A) - n ≤ rank(PA) ≤ rank(A).

Since P is invertible, rank(P) = n, so the left-hand side equals rank(A). Hence rank(A) ≤ rank(PA) ≤ rank(A), i.e. rank(PA) = rank(A). The argument for AQ is identical.
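
A small sketch of the corollary, assuming NumPy. The square factors P and Q are random, hence invertible with probability 1, and A is built with a dependent row so its rank is 3:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 5))
A[3] = A[0] + A[1]                      # force a dependent row: rank(A) = 3

P = rng.standard_normal((4, 4))         # invertible with probability 1
Q = rng.standard_normal((5, 5))         # invertible with probability 1

for M in (A, P @ A, A @ Q, P @ A @ Q):
    print(np.linalg.matrix_rank(M))     # 3 every time
```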

7. Equivalent Standard Form

Theorem 4.46: Equivalent Standard Form

Every m×n matrix A of rank r can be transformed to:

PAQ = \begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix}

where P and Q are products of elementary matrices.

Proof:

Use row and column operations (elementary matrices) to create pivots and clear all other entries. The result has I_r in the upper-left block and zeros elsewhere.

Definition 4.19: Matrix Equivalence

Matrices A and B are equivalent if B = PAQ for invertible P, Q. Two matrices of the same size are equivalent iff they have the same rank.

8. Worked Examples

Example: Rank and Solutions

Problem: For which values of k does \begin{pmatrix} 1 & 2 \\ 3 & k \end{pmatrix} x = \begin{pmatrix} 1 \\ 3 \end{pmatrix} have a unique solution?

Solution: Unique solution requires rank = 2 (full rank).

\text{rank} < 2 \iff \det = 0 \iff k - 6 = 0 \iff k = 6

So there is a unique solution for every k ≠ 6.
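
A symbolic check of this computation, assuming SymPy is available:

```python
import sympy as sp

k = sp.symbols('k')
A = sp.Matrix([[1, 2], [3, k]])

print(A.det())                 # k - 6
print(A.subs(k, 6).rank())     # 1: no unique solution when k = 6
print(A.subs(k, 7).rank())     # 2: unique solution for any other k, e.g. k = 7
```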

Example: Using Sylvester

Problem: If A ∈ M_{3×4} has rank 3 and B ∈ M_{4×2} has rank 2, what are bounds on rank(AB)?

Solution:

  • Upper bound: min(3, 2) = 2
  • Lower bound (Sylvester): 3 + 2 - 4 = 1

So 1 ≤ rank(AB) ≤ 2.

9. Common Mistakes

Mistake 1: Confusing Row and Column Space

Row operations preserve row space but can change column space. However, they preserve column RANK.

Mistake 2: Rank of Sum

rank(A + B) ≠ rank(A) + rank(B) in general. The correct statement is the inequality rank(A + B) ≤ rank(A) + rank(B), which can be strict.

Mistake 3: Solvability Criterion

Ax = b solvable requires rank(A) = rank([A|b]), not just that A has high rank.

10. Key Takeaways

Row = Column

Row rank = column rank for ANY matrix.

Count Pivots

Rank = number of pivots in row echelon form.

Invertibility

Square matrix invertible ⟺ full rank.

Standard Form

Every matrix is equivalent to [I_r, 0; 0, 0].

11. Additional Practice Problems

Problem 1

Find the rank of \begin{pmatrix} 1 & 2 & 3 & 4 \\ 2 & 4 & 6 & 8 \\ 1 & 1 & 1 & 1 \end{pmatrix}.

Problem 2

If rank(A) = 3 and A is 5×7, what is dim(ker A)?

Problem 3

Prove that rank(AA^T) = rank(A).

Problem 4

For which values of k is \text{rank}\begin{pmatrix} 1 & k & 1 \\ k & 1 & 1 \\ 1 & 1 & k \end{pmatrix} < 3?

Problem 5

If AB = 0, prove that rank(A) + rank(B) ≤ n, where A is m×n.

12. Connections to Other Topics

Linear Maps

The rank of a matrix A equals the dimension of the image of the linear map x ↦ Ax:

\text{rank}(A) = \dim(\text{im}(T_A))
Determinants

For square matrices, rank is related to the determinant:

\text{rank}(A) = n \iff \det(A) \neq 0
Eigenvalues

The rank relates to eigenvalues: a square n×n matrix A has rank n iff 0 is NOT an eigenvalue. Equivalently:

\text{rank}(A) < n \iff \lambda = 0 \text{ is an eigenvalue}

13. Study Tips

Count Pivots

The fastest way to find rank: row reduce and count the pivots (the leading non-zero entries in REF).

Check Both Ways

Sometimes it's easier to count independent rows, sometimes columns. Use whichever is easier.

Use Upper Bounds

Remember rank ≤ min(m,n). If you need rank 5 but the matrix is 3×7, it's at most 3.

Think Linear Maps

Rank = dim(image). This connects to rank-nullity: n = rank + nullity.

14. Historical Notes

Ferdinand Georg Frobenius: Developed much of the theory of matrix rank in the late 19th century, including the rank-nullity theorem for matrices.

James Joseph Sylvester: Proved the inequality that bears his name: rank(A) + rank(B) - n ≤ rank(AB). This connects products to their factors.

Row Rank = Column Rank: This surprising theorem was established as matrices became understood as representations of linear maps, not just arrays of numbers.

Modern Applications: Matrix rank is fundamental in data science (PCA uses low-rank approximations), control theory, and signal processing.

15. Quick Reference Summary

  • Basic bound: \text{rank}(A) \leq \min(m, n)
  • Transpose: \text{rank}(A^T) = \text{rank}(A)
  • Product upper bound: \text{rank}(AB) \leq \min(\text{rank}(A), \text{rank}(B))
  • Sylvester's inequality: \text{rank}(A) + \text{rank}(B) - n \leq \text{rank}(AB)
  • Sum inequality: \text{rank}(A+B) \leq \text{rank}(A) + \text{rank}(B)
  • Invertibility: A \text{ invertible} \iff \text{rank}(A) = n

16. Applications of Rank

Data Compression (Low-Rank Approximation)

SVD approximates a matrix with lower rank, compressing images and data while preserving essential information.

Control Theory

The rank of controllability and observability matrices determines whether a system can be controlled or observed.

Statistics (Least Squares)

Rank determines the existence and uniqueness of least squares solutions. Full column rank guarantees uniqueness.

Recommender Systems

Matrix factorization techniques (Netflix, Amazon) assume user-item matrices are approximately low-rank.

Computer Vision

Rank constraints appear in epipolar geometry, structure from motion, and background subtraction.

Network Analysis

The rank of incidence and adjacency matrices reveals connectivity properties of graphs.

17. Advanced Topics

Theorem 4.47: Rank and Determinant

For an n×n matrix A:

  • \text{rank}(A) = n \iff \det(A) \neq 0
  • \text{rank}(A) < n \iff \det(A) = 0
Theorem 4.48: Rank via Minors

The rank of A equals the size of the largest non-zero minor (subdeterminant).

\text{rank}(A) = r iff there exists a non-zero r×r minor, but all (r+1)×(r+1) minors are zero.

Example 4.18: Rank via Minors

For A = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \\ 0 & 1 & 1 \end{pmatrix}:

  • \det(A) = 0 (row 2 = 2 × row 1)
  • Check 2×2 minors: \det\begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix} = 1 \neq 0
  • Therefore rank(A) = 2
Theorem 4.49: Rank of A^T A

For any real matrix A:

\text{rank}(A^T A) = \text{rank}(A A^T) = \text{rank}(A)
Proof:

Show that \ker(A) = \ker(A^T A):

  • Ax = 0 \Rightarrow A^T A x = 0 (clear)
  • A^T A x = 0 \Rightarrow x^T A^T A x = 0 \Rightarrow \|Ax\|^2 = 0 \Rightarrow Ax = 0

So nullity(A) = nullity(A^T A), and since both matrices have n columns, rank-nullity gives rank(A) = rank(A^T A). Applying this to A^T yields rank(A A^T) = rank(A^T) = rank(A).
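
A numerical illustration of the theorem, assuming NumPy; one column of A is made dependent so that rank(A) = 3:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 4))
A[:, 3] = A[:, 0] - 2 * A[:, 1]          # dependent column: rank(A) = 3

print(np.linalg.matrix_rank(A))          # 3
print(np.linalg.matrix_rank(A.T @ A))    # 3
print(np.linalg.matrix_rank(A @ A.T))    # 3
```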

Remark 4.28: Numerical Rank

In numerical computations, the "numerical rank" may differ from theoretical rank due to floating-point errors. Small singular values are often treated as zero based on a tolerance threshold.

Theorem 4.50: Rank of Idempotent Matrix

If P^2 = P (P is idempotent), then rank(P) = tr(P).

Proof:

Since P^2 = P, we have F^n = \text{im}(P) \oplus \ker(P), with P acting as the identity on im(P) and as zero on ker(P). Hence P is diagonalizable, its only eigenvalues are 0 and 1, and the eigenvalue 1 occurs with multiplicity r = \dim \text{im}(P) = \text{rank}(P).

Then tr(P) is the sum of the eigenvalues: tr(P) = 1 + 1 + \cdots + 1 = r = rank(P).
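
A quick check, assuming NumPy. The orthogonal projection onto the column space of a full-column-rank matrix is a standard idempotent example, so its trace should equal its rank:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 2))          # full column rank almost surely
P = A @ np.linalg.inv(A.T @ A) @ A.T     # orthogonal projection onto col(A)

print(np.allclose(P @ P, P))             # True: P is idempotent
print(np.linalg.matrix_rank(P))          # 2
print(round(np.trace(P)))                # 2, matching rank(P) = tr(P)
```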

18. Computational Aspects

Complexity of Rank Computation

  • Gaussian elimination: O(mn \cdot \min(m,n)); the standard method
  • SVD: O(mn^2) for m \geq n; more numerically stable
  • QR decomposition: O(mn^2); good stability
  • Rank-revealing LU: O(mn \cdot r) with r = rank; efficient for low rank
Remark 4.29: Rank Determination in Practice

When computing rank numerically:

  • Use SVD and count singular values above a threshold
  • Threshold ≈ \epsilon \cdot \|A\| \cdot \max(m,n), where \epsilon is machine epsilon
  • Pivoting strategies in LU/QR help reveal numerical rank
Example 4.19: Numerical Rank

Consider A = \begin{pmatrix} 1 & 2 \\ 1.00001 & 2.00001 \end{pmatrix}

Theoretically rank = 2, but numerically the rows are nearly dependent.

The SVD gives singular values ≈ 3.16 and ≈ 3 \times 10^{-6}. With tolerance 10^{-4}, the numerical rank is 1.
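
A sketch of this computation, assuming NumPy:

```python
import numpy as np

A = np.array([[1.0, 2.0], [1.00001, 2.00001]])

s = np.linalg.svd(A, compute_uv=False)
print(s)                            # roughly [3.16e+00, 3.2e-06]

tol = 1e-4
print(int(np.sum(s > tol)))         # numerical rank 1 at this tolerance
print(np.linalg.matrix_rank(A))     # 2 with the (much smaller) default tolerance
```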

Remark 4.30: Low-Rank Approximation

The Eckart-Young theorem states that the best rank-k approximation to A (in the Frobenius or spectral norm) is obtained by keeping only the top k singular values in the SVD:

A_k = \sum_{i=1}^{k} \sigma_i u_i v_i^T

This is fundamental in data compression, noise reduction, and dimensionality reduction.
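
A minimal sketch of a truncated SVD, assuming NumPy. The Frobenius error of the rank-k truncation equals the root sum of squares of the discarded singular values, exactly as Eckart-Young predicts:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((8, 6))

U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]     # best rank-k approximation

print(np.linalg.matrix_rank(A_k))               # 2
print(np.linalg.norm(A - A_k))                  # Frobenius-norm error
print(np.sqrt(np.sum(s[k:] ** 2)))              # same value: discarded sigma_i's
```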

Theorem 4.51: Rank and Linear Independence

A set of k vectors \{v_1, \ldots, v_k\} in F^n is linearly independent if and only if the matrix with these vectors as columns has rank k.

Corollary 4.14: Maximum Independent Vectors

The maximum number of linearly independent vectors that can be chosen from the rows (or columns) of a matrix A equals rank(A).

Remark 4.31: Summary: Multiple Views of Rank

Matrix rank can be understood from several equivalent perspectives:

  • Row space: Dimension of span of rows
  • Column space: Dimension of span of columns
  • Pivot count: Number of pivots in REF
  • Linear map: Dimension of image
  • Minors: Size of largest non-zero minor
  • SVD: Number of non-zero singular values

Key Takeaways

  • Row rank = Column rank for any matrix
  • rank(A) = # pivots in row echelon form
  • rank + nullity = n (number of columns)
  • Ax = b solvable iff rank(A) = rank([A|b])
  • Unique solution iff rank(A) = n (full column rank)
  • Sylvester inequality: rank(A) + rank(B) - n ≤ rank(AB) ≤ min(rank A, rank B)
  • Full rank ⟺ invertible for square matrices

Related Topics

Row Echelon Form
Nullity
Rank-Nullity Theorem
Linear Independence
Column Space
Row Space
Invertibility
SVD
Kernel
Image

💡 Remember

Matrix rank is the unifying concept connecting row space, column space, solvability, and invertibility.

Rank determines: solvability of systems, invertibility, and dimension of null space.

rank(A) + nullity(A) = number of columns

What's Next?

Now that you understand matrix rank, you're ready for the next major topics:

  • Special Matrices: Diagonal, triangular, symmetric, orthogonal matrices with special properties
  • Determinants: A scalar invariant that detects invertibility
  • Eigenvalues: The values that reveal matrix structure
  • Diagonalization: Finding the simplest form of a matrix
Matrix Rank Practice
  1. (Easy) The rank of a matrix equals:
  2. (Medium) Row rank equals column rank:
  3. (Easy) For an m×n matrix, rank is at most:
  4. (Easy) A square matrix is invertible iff its rank equals:
  5. (Medium) If A is 3×5 with rank 2, then dim(ker A) =
  6. (Medium) Row operations preserve:
  7. (Easy) rank(A^T) =
  8. (Medium) If A has full column rank, then Ax = b has:
  9. (Medium) If A has full row rank, then Ax = b has:
  10. (Hard) rank(AB) ≤
  11. (Medium) rank(A^T A) equals:
  12. (Hard) If P is idempotent (P^2 = P), rank(P) equals:
  13. (Easy) The rank of the zero matrix is:
  14. (Medium) If A has full column rank, A^T A is:
  15. (Medium) Sylvester's inequality gives a _____ bound on rank(AB):