LLM Construction
Transformer Architecture
The mathematical formulation of the transformer block: self-attention, multi-head attention, layer normalization, FFN blocks, positional encoding, and parameter counting.
Why This Matters
The transformer is the architecture behind every modern large language model: GPT-4, Claude, Gemini, Llama. It replaced recurrent and convolutional architectures for sequence modeling because of one key property: self-attention allows every token to attend to every other token in parallel, enabling the model to learn long-range dependencies without the vanishing gradient problem of RNNs.
Understanding the transformer mathematically (not just as a diagram, but as a sequence of matrix operations with specific dimensions, costs, and properties) is essential for understanding everything built on top of it: RLHF, mechanistic interpretability, scaling laws, and efficiency research.
Mental Model
A transformer processes a sequence of tokens by passing them through a stack of identical blocks. Each block has two sub-layers: a self-attention layer (which lets tokens communicate with each other) and a feed-forward network (which processes each token independently). Residual connections and layer normalization stabilize training.
Self-attention is the key innovation. Each token creates a query ("what am I looking for?"), a key ("what do I contain?"), and a value ("what do I contribute?"). Tokens attend to each other based on query-key similarity, and the output is a weighted sum of values.
Formal Setup and Notation
Let the input sequence have $n$ tokens, each represented as a $d_{\text{model}}$-dimensional vector. The input is a matrix $X \in \mathbb{R}^{n \times d_{\text{model}}}$.
Self-Attention
Scaled Dot-Product Attention
Given an input $X \in \mathbb{R}^{n \times d_{\text{model}}}$, compute queries, keys, and values:

$$Q = XW^Q, \quad K = XW^K, \quad V = XW^V$$

where $W^Q, W^K \in \mathbb{R}^{d_{\text{model}} \times d_k}$ and $W^V \in \mathbb{R}^{d_{\text{model}} \times d_v}$ are learned weight matrices.

The attention output is:

$$\text{Attention}(Q, K, V) = \text{softmax}\!\left(\frac{QK^\top}{\sqrt{d_k}}\right)V$$

where the softmax is applied row-wise (each row sums to 1).
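The formula above can be sketched in a few lines of NumPy (a toy illustration with a single head and no masking; the shift by the row maximum inside the softmax is the standard numerical-stability trick):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                       # (n, n) similarity scores
    scores = scores - scores.max(axis=-1, keepdims=True)  # stabilize the softmax
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)  # each row sums to 1
    return weights @ V                                    # (n, d_v) weighted sum of values

rng = np.random.default_rng(0)
n, d_model, d_k = 4, 8, 8
X = rng.standard_normal((n, d_model))
W_Q, W_K, W_V = (rng.standard_normal((d_model, d_k)) for _ in range(3))
out = attention(X @ W_Q, X @ W_K, X @ W_V)
print(out.shape)  # (4, 8)
```
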
Attention Dimensions
Statement
For input $X \in \mathbb{R}^{n \times d_{\text{model}}}$:
- $Q \in \mathbb{R}^{n \times d_k}$, $K \in \mathbb{R}^{n \times d_k}$, $V \in \mathbb{R}^{n \times d_v}$
- $QK^\top \in \mathbb{R}^{n \times n}$: the attention matrix
- $\text{softmax}(QK^\top/\sqrt{d_k}) \in \mathbb{R}^{n \times n}$: each row is a probability distribution
- $\text{Attention}(Q, K, V) \in \mathbb{R}^{n \times d_v}$: the output

The output of attention for token $i$ is a weighted average of value vectors:

$$o_i = \sum_{j=1}^{n} A_{ij} v_j, \quad A = \text{softmax}\!\left(\frac{QK^\top}{\sqrt{d_k}}\right)$$
Intuition
Each token computes a query $q_i$ and compares it against all keys $k_j$ via dot products. The softmax converts these similarity scores into attention weights $A_{ij}$. The output is a weighted sum of value vectors, where tokens with similar query-key pairs get higher weight. The scaling by $1/\sqrt{d_k}$ prevents the dot products from becoming too large (which would cause the softmax to saturate).
Why It Matters
Tracking dimensions through the transformer is the single most useful exercise for understanding the architecture. Every research paper assumes you can do this fluently. The $n \times n$ attention matrix is both the source of the transformer's power (global context) and its main computational bottleneck.
Why scale by $\sqrt{d_k}$? If the entries of $q$ and $k$ are independent with zero mean and unit variance, then $q \cdot k = \sum_{i=1}^{d_k} q_i k_i$ has variance $d_k$. Without scaling, large $d_k$ causes the dot products to have large magnitude, pushing the softmax into regions with near-zero gradients. Dividing by $\sqrt{d_k}$ normalizes the variance to 1, keeping the softmax in a useful range.
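A quick simulation confirms the variance argument (a sketch; the sample sizes are arbitrary):

```python
import numpy as np

# Empirical check: dot products of i.i.d. unit-variance vectors have variance d_k,
# and dividing by sqrt(d_k) brings the variance back to ~1.
rng = np.random.default_rng(0)
d_k, trials = 512, 10_000
q = rng.standard_normal((trials, d_k))
k = rng.standard_normal((trials, d_k))
raw = (q * k).sum(axis=1)        # q . k without scaling
scaled = raw / np.sqrt(d_k)
print(raw.var())     # close to d_k = 512
print(scaled.var())  # close to 1
```
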
Multi-Head Attention
Multi-Head Attention
Instead of computing a single attention function, use $h$ heads in parallel:

$$\text{head}_i = \text{Attention}(XW_i^Q, XW_i^K, XW_i^V)$$

where $W_i^Q, W_i^K \in \mathbb{R}^{d_{\text{model}} \times d_k}$ and $W_i^V \in \mathbb{R}^{d_{\text{model}} \times d_v}$ with $d_k = d_v = d_{\text{model}}/h$.

Concatenate the heads and project:

$$\text{MultiHead}(X) = \text{Concat}(\text{head}_1, \ldots, \text{head}_h)\,W^O$$

where $W^O \in \mathbb{R}^{h d_v \times d_{\text{model}}}$.
Why multiple heads? Each head can attend to different aspects of the input: one head might focus on syntactic relationships, another on semantic similarity, another on positional proximity. Multi-head attention allows the model to jointly attend to information from different representation subspaces. Mechanistic interpretability work gives concrete examples of specialization: previous-token heads that copy information from position $i-1$ to position $i$ (Elhage et al., 2021), induction heads that implement in-context pattern completion (Olsson et al., 2022), and name mover heads that move subject tokens to the final position in the indirect object identification circuit (Wang et al., 2022).
Parameter count for MHA: Each head has $W_i^Q, W_i^K, W_i^V$, each of size $d_{\text{model}} \times d_{\text{model}}/h$. With $h$ heads, total QKV parameters are $3d_{\text{model}}^2$. The output projection adds $d_{\text{model}}^2$. Total: $4d_{\text{model}}^2$ parameters (ignoring biases).
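The count is easy to verify in code (function and variable names are illustrative):

```python
def mha_params(d_model, h):
    """QKV projections across h heads plus the output projection (no biases)."""
    d_k = d_model // h
    qkv = 3 * h * d_model * d_k   # h heads, each with W_Q, W_K, W_V of size d_model x d_k
    w_o = d_model * d_model       # output projection W^O
    return qkv + w_o

print(mha_params(512, 8))         # 1048576, i.e. exactly 4 * 512**2
```
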
Residual Connections and Layer Normalization
Transformer Sub-Layer with Residual Connection
Each sub-layer (attention or FFN) is wrapped with a residual connection:

$$y = x + \text{Sublayer}(x)$$
This allows gradients to flow directly through the network and enables training of deep transformers.
Layer Normalization
For a vector $x \in \mathbb{R}^d$, layer normalization computes:

$$\text{LayerNorm}(x) = \gamma \odot \frac{x - \mu}{\sqrt{\sigma^2 + \epsilon}} + \beta$$

where $\mu = \frac{1}{d}\sum_{i=1}^{d} x_i$, $\sigma^2 = \frac{1}{d}\sum_{i=1}^{d} (x_i - \mu)^2$, and $\gamma, \beta \in \mathbb{R}^d$ are learned scale and shift parameters.
Pre-norm vs. post-norm. The original transformer (Vaswani et al., 2017) uses post-norm: $y = \text{LayerNorm}(x + \text{Sublayer}(x))$. Most modern LLMs use pre-norm: $y = x + \text{Sublayer}(\text{LayerNorm}(x))$. Pre-norm is more stable for training deep networks because the residual path is unobstructed.
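A minimal NumPy sketch of the layer norm formula (the `eps` value follows common practice; names are illustrative):

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    """Normalize each row to zero mean / unit variance, then scale and shift."""
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return gamma * (x - mu) / np.sqrt(var + eps) + beta

x = np.array([[1.0, 2.0, 3.0, 4.0]])
y = layer_norm(x, gamma=np.ones(4), beta=np.zeros(4))
print(np.round(y.mean(), 6), np.round(y.std(), 4))  # 0.0 1.0
```
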
Feed-Forward Network
Position-wise Feed-Forward Network
The original 2017 transformer FFN applies two linear transformations with a nonlinearity:

$$\text{FFN}(x) = \sigma(xW_1 + b_1)W_2 + b_2$$

where $W_1 \in \mathbb{R}^{d_{\text{model}} \times d_{\text{ff}}}$, $W_2 \in \mathbb{R}^{d_{\text{ff}} \times d_{\text{model}}}$, and $\sigma$ is ReLU (Vaswani et al., 2017) or GeLU in later variants. The standard choice is $d_{\text{ff}} = 4d_{\text{model}}$.

Parameter count for the 2017 FFN. $W_1$ has $d_{\text{model}} \cdot d_{\text{ff}}$ parameters, $W_2$ has $d_{\text{ff}} \cdot d_{\text{model}}$ parameters. With $d_{\text{ff}} = 4d_{\text{model}}$: total is $8d_{\text{model}}^2$ parameters (ignoring biases).
Gated FFN (SwiGLU / GeGLU)
Since 2023, essentially every frontier LLM (Llama 2, Llama 3, Mistral, Mixtral, DeepSeek, Qwen, Gemma) replaces the two-matrix FFN with a gated variant that has three projection matrices $W_1, W_2, W_3$:

$$\text{FFN}(x) = \left(\sigma(xW_1) \odot xW_3\right)W_2$$

where $\odot$ is elementwise product, $W_1, W_3 \in \mathbb{R}^{d_{\text{model}} \times d_{\text{ff}}}$, $W_2 \in \mathbb{R}^{d_{\text{ff}} \times d_{\text{model}}}$, and $\sigma$ is SiLU (SwiGLU) or GeLU (GeGLU). The gate $\sigma(xW_1)$ modulates the activation $xW_3$ elementwise.

Parameter count for gated FFN. Three projections give $3 d_{\text{model}} d_{\text{ff}}$ parameters. To keep the parameter budget comparable to the 2017 FFN, Shazeer (2020) recommends $d_{\text{ff}} = \frac{2}{3} \cdot 4 d_{\text{model}} = \frac{8}{3} d_{\text{model}}$, which yields $8 d_{\text{model}}^2$ parameters. This matches the legacy $8 d_{\text{model}}^2$ per-layer formula used below. Llama 2 7B ($d_{\text{model}} = 4096$) picks $d_{\text{ff}} = 11008$, rounded up to a multiple of 256 for kernel alignment.
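A minimal sketch of the gated FFN (dimensions are illustrative; `d_ff = 170` roughly follows the $\frac{8}{3}d_{\text{model}}$ rule at `d_model = 64`):

```python
import numpy as np

def silu(x):
    """SiLU / swish activation: x * sigmoid(x)."""
    return x / (1.0 + np.exp(-x))

def swiglu_ffn(x, W1, W3, W2):
    """Gated FFN: (SiLU(x W1) elementwise-times (x W3)) W2."""
    return (silu(x @ W1) * (x @ W3)) @ W2

rng = np.random.default_rng(0)
d_model, d_ff = 64, 170
W1 = rng.standard_normal((d_model, d_ff))   # gate projection
W3 = rng.standard_normal((d_model, d_ff))   # value projection
W2 = rng.standard_normal((d_ff, d_model))   # down projection
x = rng.standard_normal((4, d_model))
print(swiglu_ffn(x, W1, W3, W2).shape)  # (4, 64)
```
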
The role of the FFN. Geva et al. (2021) proposed that FFN layers act as key-value memories: maps inputs to a high-dimensional space where patterns are detected, and maps back to the residual stream with the associated information. Under this view, the FFN is where factual knowledge is primarily stored. This is an empirical interpretation from mechanistic interpretability, not a settled fact. Hase et al. (2023) show that the location where Causal Tracing localizes a fact is not a reliable predictor of which layer is best to edit, which complicates the simple "localization equals storage" reading.
Positional Encoding
Self-attention is permutation-equivariant: shuffling the input tokens shuffles the output tokens identically. Without positional information, the model cannot distinguish "the dog bit the man" from "the man bit the dog."
Sinusoidal Positional Encoding
The original transformer uses fixed sinusoidal encodings added to the input embeddings:

$$PE_{(pos,\,2i)} = \sin\!\left(\frac{pos}{10000^{2i/d_{\text{model}}}}\right), \quad PE_{(pos,\,2i+1)} = \cos\!\left(\frac{pos}{10000^{2i/d_{\text{model}}}}\right)$$

for position $pos$ and dimension index $i$. This allows the model to attend to relative positions because $PE_{pos+k}$ is a linear function of $PE_{pos}$.
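The encoding table can be built in a few lines (a sketch assuming even `d_model`):

```python
import numpy as np

def sinusoidal_pe(n, d_model):
    """PE[pos, 2i] = sin(pos / 10000^(2i/d)), PE[pos, 2i+1] = cos(same angle)."""
    pos = np.arange(n)[:, None]                      # (n, 1) positions
    i = np.arange(d_model // 2)[None, :]             # (1, d/2) dimension indices
    angle = pos / (10000.0 ** (2 * i / d_model))     # (n, d/2) angles
    pe = np.zeros((n, d_model))
    pe[:, 0::2] = np.sin(angle)                      # even dims
    pe[:, 1::2] = np.cos(angle)                      # odd dims
    return pe

pe = sinusoidal_pe(128, 64)
print(pe.shape)    # (128, 64)
print(pe[0, :4])   # [0. 1. 0. 1.]  (position 0: sin(0)=0, cos(0)=1)
```
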
Rotary Position Embedding (RoPE)
RoPE (Su et al., 2021) encodes position by rotating the query and key vectors in independent 2D subspaces:

$$q_m = R_m q, \quad k_n = R_n k$$

where $R_m \in \mathbb{R}^{d \times d}$ is a block-diagonal matrix of planar rotations, with the $i$-th block a $2 \times 2$ rotation by angle $m\theta_i$ using the base frequencies

$$\theta_i = 10000^{-2i/d}, \quad i = 0, 1, \ldots, d/2 - 1.$$

Rotations that act on the same 2D subspace commute and satisfy $R_m^\top R_n = R_{n-m}$ block-by-block. Since $R_m$ is block-diagonal, the full matrix product also satisfies $R_m^\top R_n = R_{n-m}$, so the attention score becomes:

$$q_m^\top k_n = q^\top R_m^\top R_n k = q^\top R_{n-m} k$$

This depends only on the relative position $n - m$, giving the model translation-invariant attention.
Why RoPE dominates. RoPE naturally encodes relative positions (not absolute), extrapolates better to longer sequences than seen during training, and does not add parameters. It is used in Llama, Mistral, and most modern open-source LLMs.
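A NumPy sketch of RoPE that checks the relative-position property (the even/odd pairing of dimensions is one common convention; real implementations differ in how they interleave the pairs):

```python
import numpy as np

def rope(x, pos, base=10000.0):
    """Rotate each pair (x[2i], x[2i+1]) by angle pos * base^(-2i/d)."""
    d = x.shape[-1]
    theta = base ** (-2 * np.arange(d // 2) / d)   # per-pair base frequencies
    ang = pos * theta
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = np.empty_like(x)
    out[..., 0::2] = x1 * np.cos(ang) - x2 * np.sin(ang)
    out[..., 1::2] = x1 * np.sin(ang) + x2 * np.cos(ang)
    return out

rng = np.random.default_rng(0)
q, k = rng.standard_normal(64), rng.standard_normal(64)
s1 = rope(q, 3) @ rope(k, 10)   # positions (3, 10): relative offset 7
s2 = rope(q, 8) @ rope(k, 15)   # positions (8, 15): same offset 7
print(np.isclose(s1, s2))       # True: the score depends only on n - m
```
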
Computational Complexity
Attention is Quadratic in Sequence Length
Statement
The computational cost of self-attention is:

$$O(n^2 d_{\text{model}})$$

The $n^2$ factor comes from computing the attention matrix $QK^\top \in \mathbb{R}^{n \times n}$. The memory cost for storing attention weights is $O(n^2)$ per head.

For a full transformer with $L$ layers and $h$ heads:
- Attention cost per layer: $O(n^2 d_{\text{model}} + n d_{\text{model}}^2)$
- FFN cost per layer: $O(n d_{\text{model}} d_{\text{ff}}) = O(n d_{\text{model}}^2)$ with $d_{\text{ff}} = 4d_{\text{model}}$
- Total cost: $O(L(n^2 d_{\text{model}} + n d_{\text{model}}^2))$
Intuition
Every token must attend to every other token, producing an $n \times n$ matrix. For short sequences ($n \ll d_{\text{model}}$), the FFN dominates. For long sequences ($n \gg d_{\text{model}}$), attention dominates. This is why extending context length is hard: doubling $n$ quadruples the attention cost.
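The crossover can be checked with rough per-layer multiply-add counts (a sketch; the constants ignore softmax, normalization, and embedding costs):

```python
def layer_flops(n, d, d_ff=None):
    """Rough multiply-add counts for one transformer layer at sequence length n."""
    d_ff = d_ff or 4 * d
    attn_proj = 4 * n * d * d    # Q, K, V, and output projections
    attn_core = 2 * n * n * d    # Q K^T plus weights @ V (the quadratic terms)
    ffn = 2 * n * d * d_ff       # up and down projections
    return attn_proj + attn_core, ffn

# At d_model = 4096: FFN dominates at n = 1024, attention dominates at n = 32768.
for n in (1024, 32768):
    attn, ffn = layer_flops(n, 4096)
    print(n, attn / ffn)  # 1024 0.5625, then 32768 2.5
```
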
Why It Matters
The quadratic cost is the fundamental bottleneck for long-context models. A model processing 100K tokens needs attention matrices with $10^{10}$ entries per layer. This has motivated extensive research into efficient attention: sparse attention, linear attention, FlashAttention (which reduces memory but not FLOPs), and sub-quadratic architectures like Mamba.
Failure Mode
The $O(n^2)$ scaling is for standard dense attention. Methods like FlashAttention reduce the memory cost from $O(n^2)$ to $O(n)$ by computing attention in tiles, but the compute cost remains $O(n^2 d)$. True sub-quadratic compute requires architectural changes (sparse or linear attention), which can reduce model quality.
Parameter Counting
Transformer Parameter Count
Statement
A decoder-only transformer with $L$ layers has approximately:

$$N \approx 12 L d_{\text{model}}^2 + 2 V d_{\text{model}}$$
Breaking this down:
- Token embedding: $V d_{\text{model}}$ parameters
- Per layer:
  - Multi-head attention (QKV + output): $4d_{\text{model}}^2$
  - FFN (two linear layers): $8d_{\text{model}}^2$
  - Layer norm (2 per layer): $4d_{\text{model}}$ (negligible)
  - Subtotal: $\approx 12d_{\text{model}}^2$ per layer
- Output projection (often tied with embedding): $V d_{\text{model}}$
For GPT-3 scale ($L = 96$, $d_{\text{model}} = 12288$, $V \approx 50{,}000$): approximately 175B parameters.
This is an order-of-magnitude approximation. It ignores biases, layer norm scale and shift, the final output projection head, and absolute positional embeddings, and it double-counts in the presence of weight tying. Many modern implementations tie the input embedding with the output unembedding matrix, so only one $V d_{\text{model}}$ block is counted. The headline 175B figure works out because the omitted and double-counted terms approximately cancel against the neglected layer norm and bias parameters, not because $12 L d_{\text{model}}^2$ plus two embedding copies is exact.
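The approximation is easy to check in code (a sketch; the `tied` flag controls whether the unembedding is counted separately, and the GPT-3 shapes are the published ones):

```python
def transformer_params(L, d_model, V, tied=False):
    """N ~ 12 L d_model^2 + (1 or 2) V d_model; ignores biases and layer norms."""
    layers = 12 * L * d_model**2              # 4 d^2 attention + 8 d^2 FFN per layer
    embeddings = V * d_model * (1 if tied else 2)
    return layers + embeddings

n = transformer_params(96, 12288, 50257)      # GPT-3 scale
print(round(n / 1e9, 1))  # 175.2
```
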
Intuition
The vast majority of parameters are in the transformer layers, not the embeddings (unless the vocabulary is very large). Within each layer, the FFN contains $2/3$ of the parameters ($8d_{\text{model}}^2$ vs. $4d_{\text{model}}^2$ for attention). This is why the FFN layers are where most of the model's knowledge capacity resides.
Why It Matters
Parameter counting is essential for: (1) estimating compute costs for training and inference, (2) understanding scaling laws, (3) comparing architectures, and (4) estimating memory requirements. A model with $N$ parameters in fp16 requires $2N$ bytes of memory just for weights, plus additional memory for activations and optimizer states. Techniques like speculative decoding and quantization reduce these costs at serving time.
A Complete Transformer Block
Putting it all together, one transformer block computes (using pre-norm):

$$h = x + \text{MultiHead}(\text{LayerNorm}(x))$$
$$y = h + \text{FFN}(\text{LayerNorm}(h))$$
The full model stacks $L$ such blocks, preceded by token embedding + positional encoding and followed by a final layer norm and linear output projection to vocabulary logits.
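One pre-norm block can be sketched end-to-end in NumPy (a toy: single-head attention, no masking, untrained random weights, layer norm without learned scale and shift):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def ln(x, eps=1e-5):
    """Layer norm with gamma = 1, beta = 0, for brevity."""
    return (x - x.mean(axis=-1, keepdims=True)) / np.sqrt(x.var(axis=-1, keepdims=True) + eps)

def block(x, p):
    """One pre-norm block: x + Attn(LN(x)), then h + FFN(LN(h))."""
    h = ln(x)
    q, k, v = h @ p["Wq"], h @ p["Wk"], h @ p["Wv"]
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1])) @ v @ p["Wo"]
    x = x + attn                                        # first residual
    h = ln(x)
    return x + np.maximum(0, h @ p["W1"]) @ p["W2"]     # second residual (ReLU FFN)

d = 16
rng = np.random.default_rng(0)
p = {name: rng.standard_normal(shape) * 0.1
     for name, shape in [("Wq", (d, d)), ("Wk", (d, d)), ("Wv", (d, d)),
                         ("Wo", (d, d)), ("W1", (d, 4 * d)), ("W2", (4 * d, d))]}
x = rng.standard_normal((5, d))
print(block(x, p).shape)  # (5, 16): same shape in, same shape out
```
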
Common Confusions
Attention is not a learned weight matrix
The attention weights $\text{softmax}(QK^\top/\sqrt{d_k})$ are computed dynamically from the input. They change for every input sequence. The learned parameters are $W^Q, W^K, W^V, W^O$, which determine how attention is computed. This input-dependence is what gives transformers their flexibility compared to fixed-weight architectures.
Multi-head attention does not multiply the cost by h
Each head operates on $d_{\text{model}}/h$ dimensions, so the total computation across all heads is the same as a single head with the full $d_{\text{model}}$ dimensions. Multi-head attention is a reorganization of computation, not a multiplication of cost.
FlashAttention reduces memory, not FLOPs
FlashAttention computes the same mathematical operation as standard attention. It reduces memory from $O(n^2)$ to $O(n)$ by computing attention in blocks and never materializing the full $n \times n$ matrix. But the number of floating-point operations is unchanged. True compute savings require architectural changes.
Summary
- Self-attention: $\text{Attention}(Q, K, V) = \text{softmax}(QK^\top/\sqrt{d_k})V$
- Multi-head attention: $h$ parallel heads with $d_k = d_{\text{model}}/h$, concatenated and projected
- Each transformer block: attention + residual + LayerNorm + FFN + residual + LayerNorm
- Attention cost is $O(n^2 d_{\text{model}})$: quadratic in sequence length
- FFN cost is $O(n d_{\text{model}}^2)$: dominates for short sequences
- Per-layer parameters: $\approx 12 d_{\text{model}}^2$ (attention $4d_{\text{model}}^2$ + FFN $8d_{\text{model}}^2$). Modern LLMs replace the two-matrix FFN with a SwiGLU or GeGLU gated FFN that has three projections and $d_{\text{ff}} \approx \frac{8}{3} d_{\text{model}}$, which preserves the $8d_{\text{model}}^2$ budget. Mixture-of-experts variants sparsely activate a subset of FFN parameters.
- RoPE gives relative position encoding via rotation of Q and K
- Pre-norm (LayerNorm before sub-layer) is standard in modern LLMs
Exercises
Problem
For a transformer with $d_{\text{model}} = 512$, $h = 8$ heads, and $d_{\text{ff}} = 2048$, compute the number of parameters in one transformer block (ignoring biases and layer norm parameters).
Problem
If the sequence length doubles from $n$ to $2n$, by what factor does the attention computation cost increase? By what factor does the FFN computation cost increase?
Problem
Show that without positional encoding, self-attention is permutation-equivariant: if you permute the input tokens by a permutation $\pi$, the output tokens are permuted by the same $\pi$.
Problem
A transformer model has $L = 32$ layers, $d_{\text{model}} = 4096$, $h = 32$ heads, $d_{\text{ff}} = 11008$ (as in Llama 2 7B, which uses a SwiGLU-gated FFN with $d_{\text{ff}} \approx \frac{8}{3} d_{\text{model}}$, rounded to a multiple of 256 for kernel alignment), and vocabulary $V = 32{,}000$. Estimate the total parameter count and the memory required to store weights in fp16.
Related Comparisons
- Autoregressive Models vs. Diffusion Models
- Autoregressive Models vs. JEPA
- Dense Transformers vs. Mixture-of-Experts
- Transformer vs. Mamba vs. TTT
References
Canonical:
- Vaswani et al., "Attention Is All You Need" (2017). The original transformer paper, Sections 3.1-3.3 and 3.5 for architecture and positional encoding.
Current:
- Su et al., "RoFormer: Enhanced Transformer with Rotary Position Embedding" (2021). RoPE, Sections 3.1-3.4 for the construction.
- Shazeer, "GLU Variants Improve Transformer" (2020). Motivates SwiGLU and GeGLU, and recommends shrinking $d_{\text{ff}}$ by a factor of $2/3$ to match the 2017 parameter budget.
- Touvron et al., "Llama 2: Open Foundation and Fine-Tuned Chat Models" (2023). SwiGLU-gated FFN with $d_{\text{ff}} = 11008$ at $d_{\text{model}} = 4096$, Section 2.
- Dao et al., "FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness" (2022).
Mechanistic interpretability:
- Elhage et al., "A Mathematical Framework for Transformer Circuits" (Anthropic, 2021). Previous-token heads and the residual-stream view.
- Olsson et al., "In-context Learning and Induction Heads" (Anthropic, 2022). Induction heads as a mechanism for in-context learning.
- Wang et al., "Interpretability in the Wild: a Circuit for Indirect Object Identification in GPT-2 Small" (2022). Name mover heads.
- Geva et al., "Transformer Feed-Forward Layers Are Key-Value Memories" (EMNLP 2021). FFN-as-memory hypothesis.
- Hase et al., "Does Localization Inform Editing? Surprising Differences in Causality-Based Localization vs. Knowledge Editing in Language Models" (NeurIPS 2023). Evidence that Causal Tracing localizations do not predict editable layers.
Textbooks:
- Jurafsky & Martin, Speech and Language Processing (3rd ed., draft), Chapters 7-12.
- Goodfellow, Bengio, Courville, Deep Learning (2016), Chapters 10-12.
Next Topics
The natural next steps from transformer architecture:
- Mechanistic interpretability: what do the attention heads and FFN layers actually compute?
- Hallucination theory: why the next-token prediction objective leads to confabulation
- RLHF and alignment: fine-tuning the transformer for human preferences
- Vision transformer lineage: how the transformer was adapted for computer vision (ViT, Swin, DINO, CLIP)
Last reviewed: April 2026
Prerequisites
Foundations this topic depends on.
- Attention Mechanism Theory (Layer 4)
- Matrix Operations and Properties (Layer 0A)
- Sets, Functions, and Relations (Layer 0A)
- Basic Logic and Proof Techniques (Layer 0A)
- Softmax and Numerical Stability (Layer 1)
- Feedforward Networks and Backpropagation (Layer 2)
- Differentiation in Rn (Layer 0A)
- Matrix Calculus (Layer 1)
- The Jacobian Matrix (Layer 0A)
- The Hessian Matrix (Layer 0A)
- Activation Functions (Layer 1)
- Convex Optimization Basics (Layer 1)
Builds on This
- Attention Is All You Need (Paper) (Layer 4)
- Audio Language Models (Layer 5)
- BERT and the Pretrain-Finetune Paradigm (Layer 4)
- Claude Model Family (Layer 5)
- Decoding Strategies (Layer 3)
- DeepSeek Models (Layer 5)
- Donut and OCR-Free Document Understanding (Layer 5)
- Fine-Tuning and Adaptation (Layer 3)
- Forgetting Transformer (FoX) (Layer 4)
- Gemini and Google Models (Layer 5)
- Hallucination Theory (Layer 4)
- Induction Heads (Layer 4)
- LLaMA and Open Weight Models (Layer 5)
- Mechanistic Interpretability (Layer 4)
- Mixture of Experts (Layer 4)
- Model Comparison Table (Layer 5)
- Model Merging and Weight Averaging (Layer 5)
- Multi-Token Prediction (Layer 5)
- Plan-then-Generate (Layer 5)
- Post-Training Overview (Layer 5)
- Prompt Engineering and In-Context Learning (Layer 5)
- Qwen and Chinese Models (Layer 5)
- Residual Stream and Transformer Internals (Layer 4)
- Speculative Decoding and Quantization (Layer 5)
- Structured Output and Constrained Generation (Layer 5)
- Vision Transformer Lineage (Layer 4)