Module 03 · Section 3.2

The Attention Mechanism

Teaching neural networks where to look: from fixed context vectors to dynamic, learned alignment

I finally learned to pay attention, and now I can't stop staring at every token in the sequence. My therapist calls it hypervigilance; Bahdanau calls it alignment.

Attentive Anya, a decoder with boundary issues
★ Big Picture

Why attention changed everything. In Section 3.1, we saw that the encoder-decoder architecture forces the entire source sentence through a single fixed-size bottleneck vector. Attention eliminates this bottleneck by allowing the decoder to dynamically "look back" at every encoder hidden state at every generation step. Instead of asking "what does the whole sentence mean?", the decoder can ask "which parts of the sentence are relevant right now?" This single idea, introduced in 2014 by Bahdanau et al., improved machine translation dramatically and became the foundational building block of the Transformer architecture that powers GPT, BERT, and every modern LLM.

1. The "Where to Look" Intuition

Consider a human translator converting an English sentence to French. They do not read the entire English sentence, memorize it as a single compressed thought, and then produce the French sentence from memory. Instead, they glance back at specific parts of the source sentence as they write each word of the translation. When writing the French verb, they look at the English verb. When writing the French adjective, they look at the English adjective (which might be in a different position due to word order differences between the two languages).

Attention gives neural networks this same ability. At each decoder step, the model computes a set of attention weights over all encoder positions. These weights determine how much "focus" to place on each source token. The model then takes a weighted sum of the encoder hidden states to produce a context vector that is specific to the current decoding step.

Crucially, these weights are not hardcoded. They are computed dynamically based on the decoder's current state and each encoder state. Different decoder steps attend to different parts of the source, creating a soft, differentiable alignment between source and target.
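To make the weighted-sum idea concrete, here is a toy sketch (made-up scores and tiny 3-dimensional encoder states, purely for illustration) of how softmax weights combine encoder states into a step-specific context vector:

```python
import torch
import torch.nn.functional as F

# Four hypothetical encoder states for a 4-token source (dim 3 for readability)
encoder_states = torch.tensor([[1.0, 0.0, 0.0],
                               [0.0, 1.0, 0.0],
                               [0.0, 0.0, 1.0],
                               [1.0, 1.0, 0.0]])

# Pretend the score function produced these raw compatibility scores
scores = torch.tensor([0.2, 2.8, 0.1, 1.5])

weights = F.softmax(scores, dim=0)   # non-negative, sums to 1
context = weights @ encoder_states   # weighted sum: one context vector

print("weights:", weights)           # most mass on token 1, some on token 3
print("context:", context)
```

At a different decoder step the scores, and therefore the mixture, would be different; that is all "dynamic" means here.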

2. Bahdanau Additive Attention

The first attention mechanism for seq2seq, proposed by Bahdanau, Cho, and Bengio (2014), uses a small feedforward network (often called an alignment model) to compute the compatibility between the decoder state and each encoder state.

The Computation

Let si be the decoder hidden state at step i, and let hj be the encoder hidden state at position j. Bahdanau attention computes:

eij = vT tanh(W1 si + W2 hj)
αij = softmaxj(eij) = exp(eij) / Σk exp(eik)
contexti = Σj αij hj

Here is what each piece does:

  1. Score function eij: Projects both the decoder state and encoder state into a common space, adds them, applies tanh, then reduces to a scalar with v. This is called "additive" attention because the two projections are added together.
  2. Softmax normalization: Converts raw scores into a probability distribution over source positions. The weights αij sum to 1 and are all non-negative.
  3. Weighted sum: Combines encoder states according to the attention weights, producing a context vector tailored to the current decoding step.
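The three steps can be traced with raw tensors (random toy dimensions, no nn.Module), a sketch of the same computation that the implementation in part 5 below wraps in a class:

```python
import torch

torch.manual_seed(0)
d_enc, d_dec, d_attn = 4, 4, 3
W1 = torch.randn(d_attn, d_dec)   # projects the decoder state s_i
W2 = torch.randn(d_attn, d_enc)   # projects each encoder state h_j
v  = torch.randn(d_attn)          # reduces the tanh output to a scalar

s = torch.randn(d_dec)            # decoder state at step i
H = torch.randn(5, d_enc)         # 5 encoder states h_1..h_5

# Step 1: additive scores e_ij = v^T tanh(W1 s_i + W2 h_j), one per position
e = torch.tanh(s @ W1.T + H @ W2.T) @ v      # shape (5,)

# Step 2: softmax normalization over source positions
alpha = torch.softmax(e, dim=0)              # sums to 1

# Step 3: weighted sum of encoder states -> context vector
context = alpha @ H                          # shape (d_enc,)
print(alpha, context.shape)
```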
[Figure: four encoder states h1..h4 over the source "The cat is small", with example scores e = (0.2, 2.8, 0.1, 1.5); after softmax, α = (0.05, 0.68, 0.04, 0.23); the weighted sum contexti = Σαh, computed against the decoder state si used in scoring, is fed to the decoder at step i.]
Figure 3.4: Bahdanau additive attention. At each decoder step, scores are computed between the decoder state si and every encoder state hj. After softmax normalization, the attention weights form a probability distribution that determines how much each source position contributes to the context vector.

Attention as Soft Alignment

In traditional machine translation, an alignment specifies which source word(s) correspond to each target word. Classic statistical MT systems learned hard alignments (each target word aligns to exactly one or a few source words). Attention produces a soft alignment: every target word is connected to every source word, but with varying strength.

This soft alignment has several advantages. It can handle one-to-many and many-to-one relationships naturally. It is fully differentiable, allowing end-to-end training with backpropagation. And it provides interpretability: by visualizing the attention weights as a matrix (source positions on one axis, target positions on the other), we get an alignment map that shows which source words the model "looked at" when generating each target word.
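As a sketch of such a visualization (with a hand-picked score matrix standing in for a trained model's output), the soft alignment between a source and target sentence can be printed as a text heatmap:

```python
import torch
import torch.nn.functional as F

src = ["The", "cat", "is", "small"]
tgt = ["Le", "chat", "est", "petit"]

# Made-up score matrix (target positions x source positions); in practice
# these would come from the model's score function
scores = torch.tensor([[3.0, 0.1, 0.0, 0.2],
                       [0.2, 3.1, 0.1, 0.0],
                       [0.0, 0.3, 2.9, 0.4],
                       [0.1, 0.0, 0.2, 3.2]])
attn = F.softmax(scores, dim=-1)  # each target row sums to 1

# Print the alignment map: rows = target words, columns = source words
print("        " + "  ".join(f"{w:>5}" for w in src))
for i, word in enumerate(tgt):
    row = "  ".join(f"{attn[i, j]:.2f}" for j in range(len(src)))
    print(f"{word:>6}  {row}")
```

Each row is a probability distribution; a near-diagonal pattern like this one indicates mostly monotone word-by-word alignment.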

3. Luong Dot-Product Attention

Shortly after Bahdanau's work, Luong et al. (2015) proposed several alternative score functions. The most widely used is dot-product attention, which replaces the feedforward network with a simple dot product:

eij = siT hj

Luong also proposed a "general" variant that inserts a learnable matrix:

eij = siT W hj
| Variant | Score function | Parameters | Computation |
| --- | --- | --- | --- |
| Bahdanau (additive) | vT tanh(W1s + W2h) | O(d²) | Two projections + tanh + dot |
| Luong (dot) | sTh | 0 | Single dot product |
| Luong (general) | sTWh | O(d²) | One projection + dot |

The dot-product score is computationally cheaper and can be computed as a single matrix multiplication across all source positions simultaneously. This efficiency advantage becomes critical as we scale attention to the Transformer architecture in Section 3.3. However, note that dot-product attention requires the decoder and encoder states to have the same dimensionality, while additive attention can handle different sizes through its projection matrices.
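The efficiency claim is easy to verify: with all decoder states stacked into a matrix, every dot-product score falls out of one batched matrix multiplication, equivalent to the position-by-position loop (toy shapes, random tensors):

```python
import torch

batch, src_len, tgt_len, dim = 2, 6, 4, 8
enc = torch.randn(batch, src_len, dim)   # encoder states h_j
dec = torch.randn(batch, tgt_len, dim)   # decoder states s_i, all steps at once

# All tgt_len x src_len dot-product scores in one batched matmul
scores = torch.bmm(dec, enc.transpose(1, 2))   # (batch, tgt_len, src_len)

# Equivalent loop, one score at a time
loop_scores = torch.zeros(batch, tgt_len, src_len)
for i in range(tgt_len):
    for j in range(src_len):
        loop_scores[:, i, j] = (dec[:, i] * enc[:, j]).sum(-1)

print(torch.allclose(scores, loop_scores, atol=1e-5))  # True
```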

4. Attention as a Differentiable Dictionary Lookup

There is a powerful analogy that helps unify all forms of attention: think of it as a soft dictionary lookup.

In a traditional Python dictionary, you provide a key and retrieve the exact matching value. If the key is not present, you get nothing. Attention works similarly but in a "soft" way:

Instead of exact matching, the query is compared against all keys simultaneously using a similarity function. The result is not a single value but a weighted combination of all values, where the weights reflect how well each key matches the query.
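Here is that analogy as code, a minimal sketch contrasting a hard Python dict lookup with its soft counterpart (a toy two-entry "dictionary" with a dot product as the similarity function):

```python
import torch
import torch.nn.functional as F

# Hard lookup: exact key match returns exactly one value (or nothing)
hard = {"cat": "chat", "dog": "chien"}
print(hard["cat"])                        # exact match only

# Soft lookup: the query is compared against ALL keys by similarity,
# and the result is a weighted mix of ALL values
keys   = torch.tensor([[1.0, 0.0],        # key for "cat"
                       [0.0, 1.0]])       # key for "dog"
values = torch.tensor([[10.0, 20.0],      # value stored under "cat"
                       [30.0, 40.0]])     # value stored under "dog"
query  = torch.tensor([0.9, 0.1])         # "mostly cat"

weights = F.softmax(keys @ query, dim=0)  # similarities -> distribution
result  = weights @ values                # blend of both values

print(weights)   # more weight on the "cat" entry
print(result)    # close to the "cat" value, nudged toward "dog"
```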

💡 Key Insight

In Bahdanau and Luong attention, the keys and values are the same (both are encoder hidden states). The Transformer generalizes this by using separate linear projections to create distinct key and value representations from the same source, which is much more expressive. We will formalize this query/key/value framework in Section 3.3.

5. Implementing Attention from Scratch

Let us build both Bahdanau and Luong attention in PyTorch, step by step.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BahdanauAttention(nn.Module):
    """Additive (Bahdanau) attention mechanism."""
    def __init__(self, enc_dim, dec_dim, attn_dim):
        super().__init__()
        self.W1 = nn.Linear(dec_dim, attn_dim, bias=False)
        self.W2 = nn.Linear(enc_dim, attn_dim, bias=False)
        self.v  = nn.Linear(attn_dim, 1, bias=False)

    def forward(self, decoder_state, encoder_outputs):
        """
        decoder_state:   (batch, dec_dim)
        encoder_outputs: (batch, src_len, enc_dim)
        Returns: context (batch, enc_dim), weights (batch, src_len)
        """
        # Expand decoder state to match encoder sequence length
        dec_proj = self.W1(decoder_state).unsqueeze(1)  # (batch, 1, attn_dim)
        enc_proj = self.W2(encoder_outputs)              # (batch, src_len, attn_dim)

        # Additive scoring: v^T tanh(W1*s + W2*h)
        scores = self.v(torch.tanh(dec_proj + enc_proj)) # (batch, src_len, 1)
        scores = scores.squeeze(-1)                      # (batch, src_len)

        # Normalize to get attention weights
        weights = F.softmax(scores, dim=-1)              # (batch, src_len)

        # Weighted sum of encoder outputs
        context = torch.bmm(weights.unsqueeze(1),       # (batch, 1, src_len)
                            encoder_outputs)              # x (batch, src_len, enc_dim)
        context = context.squeeze(1)                     # (batch, enc_dim)
        return context, weights

# Test it
attn = BahdanauAttention(enc_dim=128, dec_dim=128, attn_dim=64)
enc_out = torch.randn(2, 6, 128)   # batch=2, 6 source tokens
dec_s   = torch.randn(2, 128)      # decoder state

ctx, wts = attn(dec_s, enc_out)
print(f"Context shape: {ctx.shape}")
print(f"Weights shape: {wts.shape}")
print(f"Weights sum:   {wts[0].sum().item():.4f}")
print(f"Weights[0]:    {wts[0].detach().numpy().round(3)}")
Context shape: torch.Size([2, 128])
Weights shape: torch.Size([2, 6])
Weights sum:   1.0000
Weights[0]:    [0.042 0.531 0.018 0.003 0.376 0.03 ]

The weights sum to 1 (as guaranteed by softmax). With randomly initialized, untrained parameters, the concentration on positions 1 and 4 is arbitrary for this particular query, but the mechanics are exactly those a trained model uses. Now the Luong dot-product variant:

class LuongDotAttention(nn.Module):
    """Dot-product (Luong) attention mechanism."""
    def forward(self, decoder_state, encoder_outputs):
        """
        decoder_state:   (batch, dim)
        encoder_outputs: (batch, src_len, dim)   -- same dim required!
        """
        # Dot product: s^T h for each encoder position
        scores = torch.bmm(
            encoder_outputs,                          # (batch, src_len, dim)
            decoder_state.unsqueeze(-1)               # (batch, dim, 1)
        ).squeeze(-1)                                  # (batch, src_len)

        weights = F.softmax(scores, dim=-1)
        context = torch.bmm(weights.unsqueeze(1), encoder_outputs).squeeze(1)
        return context, weights

# Compare both mechanisms
luong_attn = LuongDotAttention()
ctx_l, wts_l = luong_attn(dec_s, enc_out)

print(f"Bahdanau weights: {wts[0].detach().numpy().round(3)}")
print(f"Luong weights:    {wts_l[0].detach().numpy().round(3)}")
print(f"Both produce context of shape: {ctx_l.shape}")
Bahdanau weights: [0.042 0.531 0.018 0.003 0.376 0.03 ]
Luong weights:    [0.001 0.894 0.002 0.000 0.102 0.001]
Both produce context of shape: torch.Size([2, 128])

Notice that Luong attention produces sharper weights (more concentrated on a single position). This is because the raw dot product can produce larger magnitude scores than the bounded tanh in Bahdanau attention, leading to more peaked softmax outputs.
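The effect of score magnitude on peakedness can be seen directly by rescaling the same raw scores before the softmax (the scale factor is just an illustrative knob here, not part of either mechanism):

```python
import torch
import torch.nn.functional as F

scores = torch.tensor([0.2, 2.8, 0.1, 1.5])

# Larger-magnitude scores -> more peaked softmax. Additive attention's tanh
# keeps scores in a bounded range; raw dot products have no such bound.
for scale in [0.5, 1.0, 4.0]:
    w = F.softmax(scale * scores, dim=0)
    print(f"scale={scale}: max weight={w.max().item():.3f}")
```

This same sensitivity to score magnitude is what motivates the 1/√dk scaling in Transformer attention, which appears in Section 3.3.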

6. Backpropagation Through Attention

Attention is fully differentiable, which means gradients flow through it naturally during backpropagation. The key takeaway: positions that receive high attention weights also receive strong gradient signals, creating a virtuous cycle where the model learns to attend to the right places.

Advanced Deep Dive (Optional): Matrix calculus of gradient flow through attention

The Jacobian of Softmax

The attention weights are computed as α = softmax(e). The Jacobian of softmax with respect to its input is:

∂αi / ∂ej = αi (δij - αj)

where δij is the Kronecker delta (1 if i = j, 0 otherwise). This has an important consequence: increasing score ej increases weight αj but decreases all other weights (since they must sum to 1). This competitive dynamic is what makes attention selective.
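This Jacobian formula can be checked numerically against autograd (a quick verification sketch, not part of any model):

```python
import torch
import torch.nn.functional as F

e = torch.randn(4, requires_grad=True)
alpha = F.softmax(e, dim=0)

# Jacobian via autograd, one output row at a time
J = torch.stack([torch.autograd.grad(alpha[i], e, retain_graph=True)[0]
                 for i in range(4)])

# Analytic Jacobian: alpha_i * (delta_ij - alpha_j)
a = alpha.detach()
J_analytic = torch.diag(a) - torch.outer(a, a)

print(torch.allclose(J, J_analytic, atol=1e-6))  # True
print(J.sum(dim=1))  # each row sums to ~0: weights trade off against each other
```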

Gradient Flow Through the Context Vector

The context vector is c = Σj αj hj. The gradient of the loss L with respect to encoder state hj has two paths:

  1. Direct path: ∂L/∂c · αj. The gradient flows directly through the weighted sum, scaled by the attention weight. Positions with high attention receive more gradient.
  2. Indirect path: Through the score function. Changing hj changes the scores, which changes the weights, which changes the context. This path allows the network to learn better scoring functions.
📝 Note

The direct path is analogous to a residual connection: gradient flows directly from the output back to the relevant encoder states, scaled by the attention weight. This means that positions the model attends to receive strong gradient signals, making training much more effective than in a vanilla seq2seq model where gradients must traverse the entire encoder recurrence.

Let us verify this gradient flow empirically:

import torch
import torch.nn.functional as F

# Create encoder outputs with gradient tracking
enc = torch.randn(1, 5, 8, requires_grad=True)   # 5 positions, dim 8
query = torch.randn(1, 8)

# Compute Luong dot-product attention
scores = torch.bmm(enc, query.unsqueeze(-1)).squeeze(-1)
weights = F.softmax(scores, dim=-1)
context = torch.bmm(weights.unsqueeze(1), enc).squeeze(1)

# Compute a scalar loss and backpropagate
loss = context.sum()
loss.backward()

print("Attention weights:", weights[0].detach().numpy().round(4))
print("Gradient norms per position:")
for j in range(5):
    grad_norm = enc.grad[0, j].norm().item()
    print(f"  Position {j}: weight={weights[0,j]:.4f}, grad_norm={grad_norm:.4f}")
Attention weights: [0.0312 0.7841 0.0028 0.1753 0.0066]
Gradient norms per position:
  Position 0: weight=0.0312, grad_norm=0.2489
  Position 1: weight=0.7841, grad_norm=1.3572
  Position 2: weight=0.0028, grad_norm=0.0891
  Position 3: weight=0.1753, grad_norm=0.6734
  Position 4: weight=0.0066, grad_norm=0.1243

Observe the strong correlation: positions with higher attention weights receive larger gradients, roughly in proportion to their weights (the indirect path through the scores contributes a smaller correction). Position 1, which holds 78% of the attention weight, receives by far the largest gradient. This is the mechanism by which attention guides learning: the model receives the strongest training signal from the source positions it deems most relevant.

[Figure: gradient flow from the loss L through context = Σαh and α = softmax(scores) back to encoder states h1..h4; both the direct and indirect paths are drawn, with line thickness indicating gradient magnitude (proportional to the attention weight).]
Figure 3.5: Gradient flow through attention. The direct path scales gradients by αj, giving highly-attended positions stronger learning signals. The indirect path flows through the score function, allowing the network to improve its attention distribution.

7. Integrating Attention into Seq2Seq

Now let us see how attention fits into a complete encoder-decoder model. The decoder uses the context vector (alongside its hidden state) to make predictions at each step:

import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionDecoder(nn.Module):
    """Decoder with Bahdanau attention."""
    def __init__(self, vocab_size, emb_dim, hidden_dim, enc_dim):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim + enc_dim, hidden_dim, batch_first=True)
        self.attention = BahdanauAttention(enc_dim, hidden_dim, attn_dim=64)
        self.fc = nn.Linear(hidden_dim + enc_dim, vocab_size)

    def forward_step(self, token, hidden, encoder_outputs):
        """One decoding step with attention."""
        # Compute attention over encoder outputs
        context, weights = self.attention(
            hidden.squeeze(0),   # (batch, hidden_dim)
            encoder_outputs       # (batch, src_len, enc_dim)
        )

        # Embed current token and concatenate with context
        emb = self.embedding(token)                     # (batch, 1, emb_dim)
        rnn_input = torch.cat([emb, context.unsqueeze(1)], dim=-1)

        # RNN step
        output, hidden = self.rnn(rnn_input, hidden)    # output: (batch, 1, hidden)

        # Combine RNN output with context for prediction
        combined = torch.cat([output.squeeze(1), context], dim=-1)
        logits = self.fc(combined)
        return logits, hidden, weights

# Demo: decode 4 steps with attention
dec = AttentionDecoder(vocab_size=6000, emb_dim=64, hidden_dim=128, enc_dim=128)
enc_outputs = torch.randn(1, 8, 128)  # 8 source tokens
h = torch.randn(1, 1, 128)             # initial decoder hidden
tok = torch.tensor([[1]])              # <SOS>

print("Attention patterns during decoding:")
for step in range(4):
    logits, h, weights = dec.forward_step(tok, h, enc_outputs)
    tok = logits.argmax(dim=-1, keepdim=True)
    w = weights[0].detach().numpy()
    peak = w.argmax()
    print(f"  Step {step}: peak attention at source pos {peak} ({w[peak]:.2%})")
Attention patterns during decoding:
  Step 0: peak attention at source pos 3 (34.21%)
  Step 1: peak attention at source pos 6 (28.57%)
  Step 2: peak attention at source pos 1 (41.83%)
  Step 3: peak attention at source pos 5 (22.94%)

Each decoding step focuses on a different source position, demonstrating the dynamic nature of attention (here the pattern comes from random, untrained weights, but the same machinery operates after training). The decoder is no longer stuck with a single context vector; it constructs a fresh, step-specific context at every position.

⚠ Historical Context

Bahdanau attention was originally called the "alignment model" because the attention weights directly correspond to soft word alignments in translation. This connection to alignment is why the score function parameters are sometimes denoted with alignment-related variable names in the literature. Keep in mind that when the Transformer paper (Vaswani et al., 2017) uses the term "attention," it refers to the same fundamental concept but in a more general query/key/value framework that we develop in Section 3.3.

✍ Self-Check Quiz

1. What problem does attention solve that LSTMs and GRUs do not?
Show Answer
Attention solves the information bottleneck in the encoder-decoder architecture. LSTMs and GRUs help with the vanishing gradient problem during training, but the encoder still compresses the entire source into a single fixed-size vector. Attention allows the decoder to access all encoder hidden states at every generation step, eliminating the need for compression into a single vector.
2. In Bahdanau attention, why is the score function called "additive"?
Show Answer
Because the decoder state and encoder state are projected into a common space and then added together: eij = vT tanh(W1si + W2hj). The two projections are summed before the nonlinearity, in contrast to dot-product attention where the two vectors interact through multiplication.
3. Why does Luong dot-product attention require the encoder and decoder states to have the same dimensionality?
Show Answer
Because the score is computed as a dot product siT hj, which requires both vectors to have the same number of dimensions. The Luong "general" variant overcomes this by inserting a matrix W that maps from one space to the other: siT W hj. Bahdanau attention handles dimension mismatches naturally through its separate projection matrices.
4. Explain the two paths through which gradients reach an encoder hidden state hj in an attention mechanism.
Show Answer
Direct path: Through the weighted sum c = Σ αj hj. The gradient is ∂L/∂c · αj, scaled by the attention weight. Indirect path: Through the score function. Changing hj changes the attention scores, which changes the weights via softmax, which changes the context vector. This allows the network to learn to produce better attention distributions.
5. Why does attention make training more effective, beyond just solving the information bottleneck?
Show Answer
Attention creates direct gradient paths from the decoder output to each encoder position, bypassing the sequential recurrence. In a vanilla seq2seq model, gradients from the decoder loss must flow backward through the entire encoder RNN to reach early positions (suffering from vanishing gradients). With attention, the gradient reaches encoder position j directly through the attention weight αj, providing a shortcut similar to residual connections. This makes it much easier to train the encoder to produce useful representations at every position.

✓ Key Takeaways

  1. Attention eliminates the information bottleneck by allowing the decoder to access all encoder hidden states at every step, rather than relying on a single compressed vector.
  2. Bahdanau (additive) attention uses a learned feedforward network to score the compatibility between decoder and encoder states. It can handle different dimensionalities but is computationally heavier.
  3. Luong (dot-product) attention uses a simple dot product for scoring, which is faster and can be computed as a single matrix multiplication. It requires matching dimensions.
  4. Attention is a soft dictionary lookup: a query (decoder state) is compared against keys (encoder states) to produce weights that determine how to combine values (also encoder states).
  5. Gradients flow through attention via two paths: directly through the weighted sum (scaled by α) and indirectly through the score function. This creates shortcut gradient paths that improve training.
  6. Attention weights are interpretable, forming soft alignment matrices that reveal which source tokens influence each target token.