Vanilla RNNs lose long-range dependencies. Backpropagation through time multiplies the same Jacobian dozens or hundreds of times, and the gradient either vanishes or explodes long before it reaches the early time steps. Gated cells — GRU and LSTM — fix this with a simple but powerful idea: let the network learn when to remember, when to forget, and when to update.
Both architectures replace the single tanh transform of a vanilla RNN with multiple gates that control the flow of information. This article walks through GRU first, then LSTM, then explains when to use each.
GRU: two gates that decide what to keep
The Gated Recurrent Unit introduces two gates that act element-wise on the hidden state. Each gate is a vector in $(0, 1)^n$ produced by a sigmoid; values close to zero mean "block" and values close to one mean "allow."

The reset gate decides how much of the past hidden state to ignore when computing the candidate update:

$$r_t = \sigma(W_r x_t + U_r h_{t-1} + b_r)$$

The update gate decides how much of the past to carry forward versus how much to overwrite with the new candidate state:

$$z_t = \sigma(W_z x_t + U_z h_{t-1} + b_z)$$

The candidate hidden state mixes the current input with a reset-gated version of the past:

$$\tilde{h}_t = \tanh(W_h x_t + U_h (r_t \odot h_{t-1}) + b_h)$$

And the final hidden state is a learned interpolation between the previous state and the candidate:

$$h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t$$

The symbol $\odot$ is element-wise multiplication. Each dimension of the hidden state has its own gate value, so the cell can remember some features while overwriting others.
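To make the notation concrete, here is a minimal NumPy sketch of one GRU step, written directly from the four equations above. The parameter names (`W_r`, `U_r`, and so on) mirror the equations but are otherwise my own; a production implementation would fuse the matrix multiplies the way PyTorch does.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h_prev, params):
    """One GRU step.

    x: input vector of shape (d,); h_prev: previous hidden state of shape (n,).
    params: dict with W_* of shape (n, d), U_* of shape (n, n), b_* of shape (n,).
    """
    r = sigmoid(params["W_r"] @ x + params["U_r"] @ h_prev + params["b_r"])  # reset gate
    z = sigmoid(params["W_z"] @ x + params["U_z"] @ h_prev + params["b_z"])  # update gate
    h_tilde = np.tanh(params["W_h"] @ x
                      + params["U_h"] @ (r * h_prev)        # reset-gated past
                      + params["b_h"])                      # candidate state
    return (1 - z) * h_prev + z * h_tilde                   # interpolation

# Tiny smoke test with random parameters.
rng = np.random.default_rng(0)
d, n = 4, 3
params = {f"{k}_{g}": rng.standard_normal((n, d) if k == "W" else (n, n)) * 0.1
          for k in ("W", "U") for g in ("r", "z", "h")}
params.update({f"b_{g}": np.zeros(n) for g in ("r", "z", "h")})

h = np.zeros(n)
for t in range(5):
    h = gru_cell(rng.standard_normal(d), h, params)
print(h.shape)  # (3,)
```

Because the update is a convex combination of the previous state and a tanh-bounded candidate, the hidden state stays in $(-1, 1)$ without any explicit clipping.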
Reading the GRU equations
The two extremes clarify what the gates do:
- When $z_t = 0$: $h_t = h_{t-1}$. The cell ignores the new input entirely and carries the past forward unchanged. Useful for "hold this information across many time steps."
- When $z_t = 1$: $h_t = \tilde{h}_t$. The cell completely overwrites memory with the new candidate. Useful for "forget the past, the situation just changed."
- When $r_t = 0$: the candidate ignores the previous hidden state. The cell processes the current input fresh.
- When $r_t = 1$: the candidate uses the full past hidden state, like a vanilla RNN.
The crucial property is the additive update $h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t$. Information from earlier time steps reaches later ones via a path that does not pass through a tanh nonlinearity at every step. That path lets gradients flow back through long sequences without the multiplicative attenuation that vanilla RNNs suffer.
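A back-of-the-envelope calculation illustrates the difference. Treating the gates as fixed for a moment (they also depend on $h$, so this is only an approximation), the additive path contributes a factor of $(1 - z_t)$ per step to the gradient, versus a tanh-derivative-times-weight factor for a vanilla RNN; the per-step values below are illustrative choices, not measurements:

```python
# Gradient magnitude along the memory path after T steps, one scalar dimension.
T = 200

# GRU: a mostly-closed update gate means each step multiplies the gradient
# by (1 - z), which stays close to 1.
z = 0.001                      # "remember" regime
gru_path = (1 - z) ** T

# Vanilla RNN: each step multiplies by tanh'(a) * w, typically well below 1.
per_step = 0.5                 # an illustrative contraction factor
vanilla_path = per_step ** T

print(round(gru_path, 3))      # ≈ 0.819: gradient survives
print(vanilla_path)            # ≈ 6e-61: gradient has vanished
```

Even with a modest per-step contraction of 0.5, the vanilla path is numerically zero after 200 steps, while the gated additive path keeps most of the gradient.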
LSTM: separate cell state and three gates
LSTM was invented before GRU and is more elaborate. It maintains two running states: a hidden state $h_t$ (used by other layers) and a separate cell state $c_t$ (the "long-term memory"). Three gates control the cell:

$$f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f) \qquad \text{(forget gate)}$$

$$i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i) \qquad \text{(input gate)}$$

$$o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o) \qquad \text{(output gate)}$$

A candidate cell state uses tanh (so its values are in $(-1, 1)$):

$$\tilde{c}_t = \tanh(W_c x_t + U_c h_{t-1} + b_c)$$

The cell state update combines the previous cell state filtered by the forget gate with the candidate filtered by the input gate:

$$c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t$$

Finally, the hidden state is a tanh-squashed cell state filtered by the output gate:

$$h_t = o_t \odot \tanh(c_t)$$
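These equations translate just as directly into a NumPy sketch of one LSTM step. As with the GRU sketch, the weight names are mine, and a real implementation would batch and fuse these operations:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell(x, h_prev, c_prev, p):
    """One LSTM step: three gates, candidate, cell update, output."""
    f = sigmoid(p["W_f"] @ x + p["U_f"] @ h_prev + p["b_f"])        # forget gate
    i = sigmoid(p["W_i"] @ x + p["U_i"] @ h_prev + p["b_i"])        # input gate
    o = sigmoid(p["W_o"] @ x + p["U_o"] @ h_prev + p["b_o"])        # output gate
    c_tilde = np.tanh(p["W_c"] @ x + p["U_c"] @ h_prev + p["b_c"])  # candidate
    c = f * c_prev + i * c_tilde    # cell-state highway: multiply, add, done
    h = o * np.tanh(c)              # squashed, filtered view for other layers
    return h, c

# Smoke test with random parameters.
rng = np.random.default_rng(1)
d, n = 4, 3
p = {}
for g in ("f", "i", "o", "c"):
    p[f"W_{g}"] = rng.standard_normal((n, d)) * 0.1
    p[f"U_{g}"] = rng.standard_normal((n, n)) * 0.1
    p[f"b_{g}"] = np.zeros(n)

h, c = np.zeros(n), np.zeros(n)
for t in range(5):
    h, c = lstm_cell(rng.standard_normal(d), h, c, p)
print(h.shape, c.shape)  # (3,) (3,)
```

Note that `c` is never squashed on the forward path; only the exposed hidden state `h` passes through tanh.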
Why the separate cell state matters
The defining feature of LSTM is the cell state highway: $c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t$. The cell state passes through one element-wise multiplication and one element-wise addition per step — no tanh, no squashing, no aggressive nonlinearity in the path that carries information forward.
When the forget gate stays near 1 and the input gate stays near 0, $c_t \approx c_{t-1}$ and information is preserved verbatim across arbitrarily many steps. Gradients flowing backward through this path are multiplied only by the forget-gate values, which the network learns to keep near 1 for important memory.
The LSTM cell state is engineered to be a low-resistance pathway for long-range information. The hidden state is then a filtered, tanh-squashed view of that long-term memory presented to subsequent layers.
A common diagnostic when LSTMs underperform is to check the forget-gate biases. Initializing $b_f$ to a positive value (commonly $1$) starts the network in a "remember by default" regime, which empirically improves training on tasks with long dependencies.
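In PyTorch, for example, this initialization can be applied to an `nn.LSTM` by writing into the forget-gate slice of its bias vectors. PyTorch packs the four gate biases in the order input, forget, cell, output, and keeps two bias vectors per layer (`bias_ih_l*` and `bias_hh_l*`), so filling each forget slice with 0.5 gives an effective forget bias of 1 (the layer sizes here are arbitrary examples):

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=16, hidden_size=32, num_layers=2)

# Each bias vector has shape (4 * hidden_size,) laid out as [i | f | g | o].
# Both bias_ih and bias_hh are added in the gate pre-activation, so 0.5 + 0.5
# yields an effective forget-gate bias of 1.
for name, param in lstm.named_parameters():
    if name.startswith("bias"):
        n = param.size(0) // 4
        with torch.no_grad():
            param[n:2 * n].fill_(0.5)   # forget-gate slice only
```

At initialization the forget gate then sits at $\sigma(1) \approx 0.73$ rather than $\sigma(0) = 0.5$, biasing the cell toward keeping its memory.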
GRU vs. LSTM: parameters, speed, performance
The two cells are close cousins. Comparing them directly:
| Property | GRU | LSTM |
|---|---|---|
| Gates | 2 (reset, update) | 3 (input, forget, output) |
| Internal states | Hidden only | Hidden + cell |
| Parameters per cell | ~3× vanilla RNN | ~4× vanilla RNN |
| Speed | Faster | Slower |
| Long-sequence performance | Strong | Slightly stronger on hardest tasks |
Empirically, GRU and LSTM perform very similarly on most language and time-series tasks. GRU is often preferred when training time matters or when the dataset is moderate; LSTM still wins on the longest sequences with the most demanding long-range dependencies. For practical purposes, "try GRU first and only switch to LSTM if you need to" is reasonable advice.
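The ~3× and ~4× rows in the table follow from counting: each gate (or candidate) block owns one input matrix of shape $n \times d$, one recurrent matrix of shape $n \times n$, and one bias of length $n$. A vanilla RNN has one such block, a GRU three, an LSTM four. A quick check:

```python
def cell_params(d, n, blocks):
    """Parameter count for a cell with `blocks` gate/candidate blocks,
    each holding W (n x d), U (n x n), and a bias (n)."""
    return blocks * (n * d + n * n + n)

d, n = 256, 512                    # example input and hidden sizes
vanilla = cell_params(d, n, 1)     # single tanh transform
gru     = cell_params(d, n, 3)     # reset, update, candidate
lstm    = cell_params(d, n, 4)     # forget, input, output, candidate

print(gru / vanilla, lstm / vanilla)  # 3.0 4.0
```

The ratios are exact under this counting; framework implementations may differ slightly (e.g. PyTorch keeps two bias vectors per block), which is why the table hedges with "~".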
Stacking gated cells: deep RNNs
A single layer of GRU or LSTM is rarely enough for complex tasks. Deep RNNs stack multiple recurrent layers, where the hidden state of layer $\ell$ at time $t$ becomes the input of layer $\ell + 1$ at time $t$:

$$h_t^{(\ell+1)} = \text{Cell}^{(\ell+1)}\!\left(h_t^{(\ell)},\; h_{t-1}^{(\ell+1)}\right), \qquad h_t^{(0)} = x_t$$
Two to four layers are typical. Deeper stacks are possible but require careful regularization (dropout between layers, layer normalization). The PyTorch one-liner `nn.LSTM(num_layers=2)` handles this; the equations above are what is happening inside.
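As a concrete check of what the stacked module returns, here is the two-layer version in PyTorch (the sequence, batch, and layer sizes are arbitrary examples):

```python
import torch
import torch.nn as nn

seq_len, batch, d, n = 10, 8, 16, 32
# num_layers=2 stacks two LSTM layers; dropout is applied between them.
lstm = nn.LSTM(input_size=d, hidden_size=n, num_layers=2, dropout=0.1)

x = torch.randn(seq_len, batch, d)          # (time, batch, features)
out, (h_n, c_n) = lstm(x)

print(out.shape)   # torch.Size([10, 8, 32]): top layer's state at every step
print(h_n.shape)   # torch.Size([2, 8, 32]): final hidden state of each layer
```

Only the top layer's per-step states appear in `out`; the per-layer final states come back separately in `(h_n, c_n)`.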
Bidirectional RNNs: looking forward and backward
Some tasks need access to both past and future context. In named entity recognition, knowing the next word helps disambiguate the current one. Bidirectional RNNs run two independent recurrent passes — one forward in time, one backward — and concatenate their hidden states at each position:

$$h_t = \left[\overrightarrow{h}_t \,;\, \overleftarrow{h}_t\right]$$
Bidirectional RNNs are useful for tagging, classification, and any task with the full sequence available at inference. They are not suitable for autoregressive generation, where future tokens do not exist yet at decoding time.
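To show the mechanics without extra machinery, here is a toy NumPy sketch: a simple tanh recurrence stands in for the GRU or LSTM layer, run once left-to-right and once right-to-left with separate weights, and the two states are concatenated at each position:

```python
import numpy as np

def run_direction(xs, W, U, b, reverse=False):
    """Run a toy tanh recurrence over a sequence; stand-in for a gated cell."""
    order = range(len(xs) - 1, -1, -1) if reverse else range(len(xs))
    h = np.zeros(U.shape[0])
    hs = [None] * len(xs)
    for t in order:
        h = np.tanh(W @ xs[t] + U @ h + b)
        hs[t] = h                       # state aligned with position t
    return hs

rng = np.random.default_rng(2)
d, n, T = 4, 3, 6
xs = [rng.standard_normal(d) for _ in range(T)]

# Independent parameters for the two directions.
Wf, Uf, bf = rng.standard_normal((n, d)), rng.standard_normal((n, n)) * 0.1, np.zeros(n)
Wb, Ub, bb = rng.standard_normal((n, d)), rng.standard_normal((n, n)) * 0.1, np.zeros(n)

fwd = run_direction(xs, Wf, Uf, bf)                  # left-to-right pass
bwd = run_direction(xs, Wb, Ub, bb, reverse=True)    # right-to-left pass
h_bi = [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]  # concat per position

print(len(h_bi), h_bi[0].shape)  # 6 (6,)
```

Each position's combined state has twice the hidden dimension, which is why downstream layers after a bidirectional RNN expect `2 * hidden_size` features.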
A practical recipe
Three rules cover most situations:
- Default to GRU for most language and time-series tasks. It trains faster and reaches similar performance.
- Use LSTM when the task has very long dependencies (hundreds of steps) and benchmarks show LSTM matters, or when the literature for your specific problem uses LSTM and you want to compare directly.
- Add bidirectionality for tagging and classification tasks where the full sequence is available at inference. Skip it for generation.
Across all variants, the implementation pattern is the same: replace the vanilla recurrence with a gated cell, keep gradient clipping, keep truncated BPTT, and let the network learn how to manage memory.
The main takeaway
Gated cells solve the vanishing-gradient problem of vanilla RNNs by introducing element-wise gates that control information flow. GRU uses two gates (reset and update) and a single hidden state. LSTM uses three gates (input, forget, output) and adds a separate cell state that acts as a long-term memory highway. The shared insight is the same: replace the multiplicative cascade through tanh with an additive update path, and gradients can flow across hundreds of time steps.
In practice, GRU is the modern default for most sequence tasks, with LSTM held in reserve for the hardest long-range problems. Both are usually stacked in two- or three-layer configurations, and bidirectional variants extend them to non-autoregressive tasks. Understanding the gating math also makes the next architectural step — attention and Transformers — easier to motivate, because attention is essentially what happens when you take the "weighted sum of past states" intuition behind gating and let it operate over arbitrary positions instead of just the immediately previous step.