
Matrix Multiplication Calculator

Multiply two matrices with step-by-step solutions. Learn matrix multiplication rules.


Matrix Multiplication Rules

For matrices A (m×n) and B (n×p), the product AB is defined only when:

  • Number of columns in A = Number of rows in B
  • Result matrix will be m×p
(AB)_{ij} = \sum_{k=1}^{n} A_{ik} B_{kj}
Important Properties

Not commutative: AB ≠ BA (in general)

Associative: (AB)C = A(BC)

Distributive: A(B+C) = AB + AC

How the Dot Product Works
Row i of A is dotted with column j of B to produce element C[i][j].

    Matrix A          Matrix B          Result C

  [a11  a12]        [b11  b12]        [c11  c12]
  [a21  a22]   ×    [b21  b22]   =    [c21  c22]

Element c11 is Row 1 of A dotted with Column 1 of B:

a11×b11 + a12×b21 = c11

In general, each row of A dotted with each column of B produces exactly one cell of C.

Understanding Matrix Multiplication

Matrix multiplication is not simply element-wise multiplication — it is a structured operation that combines rows of the first matrix with columns of the second via the dot product. This seemingly complex process is the mathematical backbone of coordinate transformations, linear systems, and virtually all of modern data science. Understanding it deeply unlocks an enormous range of applied mathematics.

The fundamental rule of matrix multiplication is dimensional compatibility: to multiply matrix A (m × n) by matrix B (n × p), the number of columns in A must equal the number of rows in B. The result is a new matrix C of size m × p. Each element C[i][j] is computed by taking row i of A and column j of B, multiplying corresponding entries pairwise, and summing the products. This is precisely the dot product of the i-th row vector of A with the j-th column vector of B.

Let's walk through a full 2×2 example step by step. Given A = [[1, 2], [3, 4]] and B = [[5, 6], [7, 8]]:

  • C[1,1] = 1×5 + 2×7 = 5 + 14 = 19
  • C[1,2] = 1×6 + 2×8 = 6 + 16 = 22
  • C[2,1] = 3×5 + 4×7 = 15 + 28 = 43
  • C[2,2] = 3×6 + 4×8 = 18 + 32 = 50

So the result is [[19, 22], [43, 50]]. Notice that the inner dimension (columns of A = rows of B = 2) "disappears" in the result: a 2×2 matrix times a 2×2 matrix yields a 2×2 matrix. More generally, an m×n matrix times an n×p matrix yields an m×p matrix — the shared dimension n is consumed by the summation process.
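The steps above can be sketched as a minimal, library-free Python function (a naive triple loop, not an optimized implementation):

```python
# Naive matrix multiplication: C[i][j] = sum over k of A[i][k] * B[k][j]
def matmul(A, B):
    m, n, p = len(A), len(B), len(B[0])
    assert len(A[0]) == n, "columns of A must equal rows of B"
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))  # [[19, 22], [43, 50]]
```

The inner `sum(... for k in range(n))` is exactly the dot product of row i with column j, and the shared dimension n appears only inside the summation, which is why it vanishes from the result's shape.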

Matrix multiplication is associative — (AB)C = A(BC) — and distributive over addition — A(B+C) = AB+AC — but it is not commutative. In general, AB ≠ BA, and sometimes the product exists in one order but not the other (e.g., a 2×3 matrix can multiply a 3×4 matrix from the left, but not from the right). This non-commutativity reflects the fact that the order of transformations matters: rotating then scaling a coordinate system gives a different result than scaling then rotating.
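These three properties can be checked numerically; here is a small sketch using NumPy (assuming it is available, with hypothetical example matrices):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])  # permutation matrix: swaps rows or columns
C = np.array([[2, 0], [0, 3]])

print(np.array_equal(A @ B, B @ A))                # False: not commutative
print(np.array_equal((A @ B) @ C, A @ (B @ C)))    # True: associative
print(np.array_equal(A @ (B + C), A @ B + A @ C))  # True: distributive
```

Note that a single counterexample is enough to disprove commutativity, while associativity and distributivity hold for all compatible matrices, not just these.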

Real-World Applications

Matrix multiplication is the engine behind computer graphics. Every time a 3D model is rotated, scaled, translated, or projected onto a 2D screen, the vertices of that model are being multiplied by transformation matrices. A rotation matrix R encodes a rotation in 3D space; multiplying the position vector of each vertex by R rotates the entire model. Composing multiple transformations — say, rotate then translate then project — requires only multiplying the corresponding matrices together first, then applying the single resulting matrix to every vertex. This is why GPUs are specialized matrix multiplication hardware, performing billions of matrix-vector products per second to render scenes in real time.
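A 2D sketch of this composition idea in NumPy (a 90° rotation and a non-uniform scale, both illustrative values chosen here, not taken from any particular graphics pipeline):

```python
import numpy as np

theta = np.pi / 2  # 90-degree rotation about the origin
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
S = np.diag([2.0, 1.0])  # stretch x by 2, leave y unchanged

v = np.array([1.0, 0.0])  # one model vertex

# Compose once, then apply to every vertex: M = R @ S means "scale, then rotate"
M = R @ S
print(np.allclose(M @ v, R @ (S @ v)))    # True: one composed matrix suffices
print(np.allclose(R @ S @ v, S @ R @ v))  # False: order of transforms matters
```

Composing the matrices once and reusing the product for every vertex is precisely why the pre-multiplication trick pays off when a model has millions of vertices.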

In machine learning, the forward pass of a neural network is dominated by matrix multiplication. Each layer of a fully-connected network takes its input vector, multiplies it by the layer's weight matrix, and adds a bias vector to produce the output for the next layer. Training adjusts these weight matrices by computing gradients via backpropagation — also a series of matrix multiplications. Modern large language models like GPT contain billions of weight parameters organized into matrices, and their inference speed is almost entirely determined by how fast the underlying hardware can multiply matrices.
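A minimal sketch of such a forward pass, with hypothetical layer sizes and random weights (real networks learn these weights during training):

```python
import numpy as np

rng = np.random.default_rng(0)

# One fully-connected layer: output = activation(W @ x + b)
def layer(x, W, b):
    return np.maximum(0.0, W @ x + b)  # ReLU activation

x = rng.standard_normal(4)                         # input vector, 4 features
W1, b1 = rng.standard_normal((8, 4)), np.zeros(8)  # hidden layer: 4 -> 8
W2, b2 = rng.standard_normal((2, 8)), np.zeros(2)  # output layer: 8 -> 2

h = layer(x, W1, b1)
y = W2 @ h + b2   # final layer left linear here
print(y.shape)    # (2,)
```

The dimension rule is doing real work here: an 8×4 weight matrix times a 4-vector yields an 8-vector, so each layer's weight matrix shape is dictated by the sizes of the layers it connects.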

Physics and quantum mechanics use matrix multiplication to describe state evolution. In quantum mechanics, physical observables correspond to matrices (operators), and measuring the expected value of an observable A in state ψ requires computing ψ†Aψ — a sequence of matrix multiplications. Transition probabilities between quantum states are encoded in transfer matrices, and raising a transfer matrix to the power n gives the probability distribution after n time steps — a computation equivalent to repeated matrix multiplication.
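Both computations can be sketched in a few lines of NumPy (the observable, state, and transition matrix below are small hypothetical examples):

```python
import numpy as np

# Pauli-Z observable and the |+> state, a standard 2-level example
A = np.array([[1, 0], [0, -1]], dtype=complex)
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)

expectation = psi.conj().T @ A @ psi  # the product psi† A psi
print(expectation.real)               # 0.0 for |+> under Z

# Repeated application of a transfer matrix: n steps = T^n
T = np.array([[0.9, 0.2],
              [0.1, 0.8]])            # columns sum to 1 (stochastic)
p0 = np.array([1.0, 0.0])
p5 = np.linalg.matrix_power(T, 5) @ p0
print(p5.sum())                       # probabilities still sum to 1
```

`np.linalg.matrix_power` computes T⁵ by repeated multiplication, which is exactly the "n time steps" interpretation in the text.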

Economics uses Leontief input-output analysis to model how industries depend on each other. If each sector's output depends on inputs from every other sector, the entire economy can be represented by the matrix equation x = (I − A)⁻¹d, where A is the technology matrix (encoding inter-industry flows), I is the identity matrix, d is the final demand vector, and x is the total output each sector must produce. Solving this requires inverting a matrix, itself accomplished via Gaussian elimination, which is equivalent to a structured sequence of matrix multiplications. Input-output analysis is used by governments to forecast the ripple effects of policy changes, infrastructure investments, and supply chain disruptions.
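A sketch of the Leontief model with a hypothetical 3-sector economy (in practice one solves the linear system directly rather than forming the inverse):

```python
import numpy as np

# Hypothetical technology matrix: A[i][j] is the input from sector i
# needed to produce one unit of sector j's output.
A = np.array([[0.1, 0.2, 0.0],
              [0.3, 0.1, 0.2],
              [0.1, 0.0, 0.3]])
d = np.array([100.0, 50.0, 80.0])  # final demand per sector

I = np.eye(3)
x = np.linalg.solve(I - A, d)      # total output x satisfying x = A x + d
print(x)
```

The check x = Ax + d has a direct reading: each sector's total output covers what the other sectors consume as inputs (Ax) plus final demand (d).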

Frequently Asked Questions

What is matrix multiplication? Matrix multiplication combines two matrices to produce a third: element (i,j) of the result is row i of A dotted with column j of B (multiply corresponding entries and sum). It requires the number of columns of A to equal the number of rows of B.