Paper breakdown
QTIP: Quantization with Trellises and Incoherence Processing
Albert Tseng, Qingyao Sun, David Hou, and Christopher De Sa · 2024 · NeurIPS 2024
Post-training quantization that decouples codebook size from bitrate by replacing vector quantization with trellis-coded quantization. Uses random Hadamard incoherence processing to make LLM weights approximately i.i.d. Gaussian, then quantizes 256-dimensional Gaussian sequences via a Viterbi-decoded bitshift trellis. Two-bit Llama-2-70B reaches within 0.7 perplexity of FP16.
Overview
Tseng et al. (2024) introduce QTIP, a post-training quantization (PTQ) method for large language model weights that combines two ideas: trellis-coded quantization (TCQ) in place of vector quantization (VQ), and incoherence processing via random orthogonal transforms to standardize the weight distribution before coding. The motivation is simple. LLM inference is memory-bandwidth bound: a 70B-parameter model in FP16 needs 140 GB just to load weights, and decoding throughput is throttled by how fast the weight tile can stream from HBM into SRAM. Halving the precision halves the memory traffic and roughly doubles the throughput. Two-bit weights would multiply throughput by eight, at the cost of leaving only $2^2 = 4$ representable levels per scalar — far too coarse for direct scalar quantization.
Vector quantization mitigates this by quantizing groups of $d$ scalars at a time to one of $2^{kd}$ codebook vectors, where $k$ is the bitrate in bits per scalar. The codebook is a learned set of points in $\mathbb{R}^d$, chosen to fit the weight distribution. Higher $d$ means better packing density, but the codebook size explodes exponentially in $d$; the prior state of the art (QuIP#, AQLM) is stuck at $d = 8$ because the codebook must fit in L1 cache for fast inference. QTIP replaces the unstructured codebook with a stateful trellis decoder, so the effective codebook is the set of length-$T$ walks on a small directed graph. This decouples the bitrate $k$ from the codebook size and from the effective dimension, allowing $T = 256$ at no cache cost.
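To see the cache constraint concretely, here is a back-of-the-envelope sizing sketch. The arithmetic is mine, not the paper's, and it assumes raw fp16 codebook entries; structured codebooks like QuIP#'s E8P compress well below this naive size, which is exactly why they were needed at $d = 8$.

```python
# Back-of-the-envelope: why unstructured VQ dimension is capped by cache
# while TCQ is not. Assumes fp16 (2-byte) entries; illustrative only.

def vq_codebook_bytes(d: int, k: int) -> int:
    # Unstructured VQ at k bits/scalar: 2^(k*d) codewords, each a d-vector.
    return 2 ** (k * d) * d * 2

def tcq_value_table_bytes(L: int) -> int:
    # Bitshift TCQ: one scalar per trellis node, 2^L nodes, and no
    # adjacency table (successors are computed by bit shifts).
    return 2 ** L * 2

print(f"VQ  d=8,  k=2: {vq_codebook_bytes(8, 2) / 2**20:.0f} MiB")     # 1 MiB
print(f"VQ  d=16, k=2: {vq_codebook_bytes(16, 2) / 2**30:.0f} GiB")    # 128 GiB
print(f"TCQ L=16, any T: {tcq_value_table_bytes(16) / 2**10:.0f} KiB") # 128 KiB
```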
The result is that two-bit Llama-2-70B with QTIP lands within 0.7 Wikitext2 perplexity of FP16 — about half the gap left by QuIP# or AQLM (the latter with a much slower decoder). Three- and four-bit variants land within a small fraction of a point of FP16 across model sizes. At batch size one on an RTX 6000 Ada, the decoder is strictly faster than QuIP# at the same bitrate and dramatically faster than AQLM.
Mathematical Contributions
The per-layer proxy loss
Quantizing the entire network end-to-end is intractable for a 70B model. Following the standard PTQ template (Nagel et al., 2020), QTIP minimizes the per-layer reconstruction error in input-activated form:

$$\ell(\hat{W}) \;=\; \mathbb{E}_x \left\| (\hat{W} - W)\,x \right\|^2 \;=\; \operatorname{tr}\!\big((\hat{W} - W)\, H \,(\hat{W} - W)^\top\big), \qquad H = \mathbb{E}\big[x x^\top\big],$$
where $W$ is the FP16 weight matrix, $\hat{W}$ is its quantized counterpart, and $H = \mathbb{E}[xx^\top]$ is the input second-moment matrix, computed from a calibration set of a few hundred forward passes. The proxy Hessian $H$ tells the quantizer which directions matter: rounding error along eigen-directions of $H$ with large eigenvalues hurts the loss most, while error in the null space of $H$ is free.
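A minimal numpy sketch of this objective, where the shapes, names, and random stand-ins for the calibration data are mine, chosen only to show that the trace form and the average output error agree:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((512, 64))       # stand-in calibration activations
W = rng.standard_normal((128, 64))       # stand-in FP16 weight matrix
W_hat = W + 0.01 * rng.standard_normal(W.shape)  # stand-in quantized weights

H = X.T @ X / len(X)                     # input second moment E[x x^T]
E = W_hat - W
proxy = np.trace(E @ H @ E.T)            # tr((W_hat - W) H (W_hat - W)^T)

# Same quantity as the average squared output error on the calibration set.
direct = np.mean(np.sum((X @ E.T) ** 2, axis=1))
assert np.isclose(proxy, direct)
```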
Incoherence processing
The Hessian and the weight matrix are not Gaussian or even rotationally symmetric; they have outliers, heavy tails, and structured spikes that defeat vector-quantization codebooks built for i.i.d. data. Chee et al. (2023, QuIP) proposed a fix: conjugate $W$ and $H$ by random orthogonal matrices to flatten the distribution.
Definition 2.1 ($\mu$-incoherence). A symmetric matrix $H \in \mathbb{R}^{n \times n}$ with eigendecomposition $H = Q \Lambda Q^\top$ is $\mu$-incoherent if $\max_{i,j} |Q_{ij}| \le \mu / \sqrt{n}$. A weight matrix $W \in \mathbb{R}^{m \times n}$ is $\mu$-incoherent if $\max_{i,j} |W_{ij}| \le \mu\, \|W\|_F / \sqrt{mn}$.
Small $\mu$ means no entry of $Q$ or $W$ is much larger than the average. The random Hadamard transform (RHT) achieves this with high probability. Let $\mathbb{H}_n$ be the (deterministic) $n \times n$ Hadamard matrix and $s \in \{\pm 1\}^n$ a random sign vector. Then $U = \mathbb{H}_n \operatorname{diag}(s) / \sqrt{n}$ is orthogonal and makes the transformed matrices $\mu$-incoherent with $\mu = O(\sqrt{\log(mn/\delta)})$, with probability at least $1 - \delta$. Because $U$ and $V$ are orthogonal, the per-layer loss is invariant: $\operatorname{tr}((\hat{W} - W) H (\hat{W} - W)^\top)$ is unchanged when $W$, $\hat{W}$, and $H$ are conjugated together. Quantization is then performed in the rotated domain, where $VWU^\top$ has entries that are approximately independent and Gaussian-distributed by a CLT-style argument over the random signs. This reduces the quantization problem to: code an i.i.d. Gaussian source.
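A small numpy demonstration of the rotation. This is a sketch: it uses explicit matrix multiplies where production code uses $O(n \log n)$ fast Hadamard transforms, and the planted outlier is mine.

```python
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(0)
m, n = 256, 256
W = rng.standard_normal((m, n))
W[3, 7] = 50.0                                 # plant an outlier

def rht(dim: int, rng) -> np.ndarray:
    # Orthogonal random Hadamard transform: (H_dim / sqrt(dim)) diag(signs).
    s = rng.choice([-1.0, 1.0], size=dim)
    return hadamard(dim) / np.sqrt(dim) * s

U, V = rht(n, rng), rht(m, rng)
W_rot = V @ W @ U.T                            # quantize in this domain

print(np.abs(W).max() / np.abs(W).mean())      # large: the outlier dominates
print(np.abs(W_rot).max() / np.abs(W_rot).mean())  # small: entries flattened
print(np.isclose(np.linalg.norm(W_rot), np.linalg.norm(W)))  # True: orthogonal
```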
Trellis-coded quantization
A length-$T$ scalar sequence $x \in \mathbb{R}^T$ is quantized by walking a directed graph $G$ — the trellis — with $2^L$ nodes, each carrying outgoing edges to $2^k$ neighbors and a node value in $\mathbb{R}$. The reconstruction $\hat{x}$ is the concatenated sequence of node values along the chosen walk. Each step records which of the $2^k$ outgoing edges was taken, which costs $kT$ bits per length-$T$ output block, i.e. $k$ bits per scalar. The codebook is implicit: it is the set of all length-$T$ walks on $G$, which has size $2^{L + k(T-1)}$. Storing this explicitly would be infeasible; the trellis structure stores it in $O(2^L)$ space.
For the squared-error distortion metric, the optimal walk is found by Viterbi dynamic programming. Let $D_t(v)$ be the minimum cumulative error among all length-$t$ walks ending at node $v$. The recurrence is $D_t(v) = \min_{u : u \to v} D_{t-1}(u) + (x_t - c(v))^2$, where $c(v)$ is the value of node $v$. With $2^L$ nodes, $2^k$ incoming edges per node, and $T$ time steps, total work is $O(2^{L+k}\,T)$ — linear in $T$, not exponential. This is the central asymptotic difference from VQ: brute-force searching a $2^{kT}$-entry codebook (the $T$-dimensional VQ analogue) is exponential in $T$.
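A compact, runnable sketch of this dynamic program, at toy sizes with random node values; it is not the paper's tuned code or its fused GPU kernel. The predecessor structure uses the bitshift rule described in the next subsection.

```python
import numpy as np

L, k, T = 8, 2, 64                    # toy sizes: 2^8 states, 2 bits/scalar
S = 1 << L
rng = np.random.default_rng(0)
values = rng.standard_normal(S)       # node values c(v)
x = rng.standard_normal(T)            # sequence to quantize

# Predecessors under the bitshift rule: u -> v iff the top L-k bits of v
# equal the bottom L-k bits of u, i.e. u = (v >> k) | (j << (L-k)).
v = np.arange(S)
preds = (v >> k)[:, None] | (np.arange(1 << k)[None, :] << (L - k))

D = (x[0] - values) ** 2              # D_1(v); start state unconstrained
back = np.zeros((T, S), dtype=np.int64)
for t in range(1, T):
    best = np.argmin(D[preds], axis=1)      # best predecessor for each node
    back[t] = preds[v, best]
    D = D[back[t]] + (x[t] - values) ** 2   # the Viterbi recurrence

node = int(np.argmin(D))              # best final node; backtrack the walk
walk = [node]
for t in range(T - 1, 0, -1):
    node = int(back[t, node])
    walk.append(node)
x_hat = values[walk[::-1]]
print("MSE:", np.mean((x - x_hat) ** 2))
```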
The benchmark for a $k$-bit code on an i.i.d. Gaussian source is the distortion-rate function $D(R) = 2^{-2R}$, which Marcellin and Fischer (1990) showed TCQ approaches as $L \to \infty$. At $R = 2$ bits on a unit-variance Gaussian, the scalar Lloyd-Max quantizer has MSE $\approx 0.118$; QuIP#'s 8-dimensional codebook and QTIP's 256-dimensional ($T = 256$) TCQ land progressively closer to the information-theoretic lower bound $2^{-4} = 0.0625$, with the trellis recovering most of the remaining gap (Table 1 of the paper).
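A quick numerical check of the scalar reference point, using Lloyd iterations on samples rather than the closed-form optimal quantizer; a sketch, not a benchmark.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(200_000)
R = 2
centers = np.linspace(-2.0, 2.0, 2 ** R)     # 4 levels for a 2-bit scalar code
for _ in range(100):                         # Lloyd iterations
    assign = np.argmin(np.abs(x[:, None] - centers), axis=1)
    centers = np.array([x[assign == j].mean() for j in range(2 ** R)])
assign = np.argmin(np.abs(x[:, None] - centers), axis=1)
print(f"Lloyd-Max MSE (d=1): {np.mean((x - centers[assign]) ** 2):.4f}")  # ~0.118
print(f"D(R) = 2^(-2R):      {2.0 ** (-2 * R):.4f}")                      # 0.0625
```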
The bitshift trellis
A general trellis with $2^L$ nodes requires storing a $2^L \times 2^k$ adjacency table and a $2^L$-entry node-value codebook in inference cache, which is prohibitive for $L = 16$. QTIP uses the bitshift trellis (Mao & Gray, 2010) to remove the adjacency table entirely. In the bitshift trellis, node $u$ has an edge to node $v$ iff $v \equiv 2^k u + m \pmod{2^L}$ for some $m \in \{0, \dots, 2^k - 1\}$; equivalently, the top $L - k$ bits of $v$ equal the bottom $L - k$ bits of $u$. The encoded walk is a single bitstring; advancing one step is a left shift by $k$ bits with the $k$ new bits packed in. Decoding the $i$-th block of weights depends only on a contiguous window of bits of that bitstring, so blocks can be decoded in parallel with no inter-block sequentiality. There is no adjacency table to store; the structure is implicit in the bit layout.
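A sketch of the decode-side mechanics, with the compressed stream held in a Python int for clarity. The `decode_block` name and the toy `value` function are mine; in QTIP the value function is one of the compute-based Gaussian codes described next.

```python
L, k = 16, 2

def decode_block(bits: int, nbits: int, T: int, value):
    # The state at step t is an L-bit window of the stream that slides k
    # bits per step; decoding needs only this block's L + k*(T-1) bits.
    out = []
    for t in range(T):
        shift = nbits - L - k * t
        state = (bits >> shift) & ((1 << L) - 1)
        out.append(value(state))
    return out

# Toy value function standing in for a compute-based Gaussian code.
toy_value = lambda s: (s - 2 ** (L - 1)) / 2 ** (L - 1)
print(decode_block(0b1011001110001111010101, nbits=22, T=4, value=toy_value))
```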
Lookup-free Gaussian codes
Even with the bitshift trellis, a vanilla TCQ still needs the size-$2^L$ codebook of node values $c(v)$. For $L = 16$, a $2^{16}$-entry codebook of 16-bit floats takes 128 KB — a tight fit in L1 at best, and a lost cause once it spills to L2. QTIP introduces compute-based Gaussian codes that produce a pseudorandom approximately Gaussian value from an $L$-bit input in a handful of GPU instructions, with no lookup at all.
The "1MAD" code is the simplest. Given the $L$-bit input, run one linear congruential generator step to scramble the bits, then sum the four bytes of the resulting 32-bit word. Each byte is a roughly uniform value in $\{0, \dots, 255\}$; the sum is approximately $\mathcal{N}(510,\; 256^2/3)$ by the CLT, so after centering and rescaling the output is approximately $\mathcal{N}(0, 1)$. The LCG constants are chosen so neighboring trellis nodes — which share many bit positions — produce near-uncorrelated outputs (the empirical correlation structure is shown in Figure 3 of the paper). The "3INST" code is a three-instruction variant that uses floating-point bit tricks to the same effect.
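A numpy sketch of a 1MAD-style code. The LCG constants below are illustrative placeholders, not the paper's tuned values, and the vectorized form stands in for what is a few scalar ALU instructions per weight on the GPU.

```python
import numpy as np

def one_mad(state: np.ndarray) -> np.ndarray:
    # One LCG step to scramble the bits (constants are illustrative), then
    # sum the four bytes: approx N(510, 256^2/3) by the CLT. Standardize.
    x = (state.astype(np.uint64) * 2654435761 + 97) & 0xFFFFFFFF
    s = sum((x >> shift) & 0xFF for shift in (0, 8, 16, 24)).astype(np.float64)
    return (s - 510.0) / np.sqrt(256.0 ** 2 / 3.0)

g = one_mad(np.arange(2 ** 16, dtype=np.uint64))   # every L=16-bit state
print(g.mean(), g.std())                           # roughly 0 and 1
```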
The MSE numbers in Table 1 of the paper show 1MAD and 3INST matching the RPTC random-permutation code (a true random Gaussian codebook stored as a lookup table) to within rounding noise. Inference now requires zero codebook reads per weight; the decoder is pure ALU.
Tail-biting trellises
A length-$T$ encoded walk costs $kT + (L - k)$ bits, because the starting state takes an extra $L - k$ bits beyond the per-step edge choices. To hit exactly $kT$ bits and match GPU word boundaries, QTIP requires that the walk's start and end states share those $L - k$ bits — a tail-biting trellis. Exact tail-biting via dynamic programming needs one constrained Viterbi pass per candidate shared state, which is intractable at $L = 16$. Algorithm 4 in the paper gives an approximation: rotate the sequence by half its length, run Viterbi, read off the overlap bits at the seam between the decoded end and start, then re-run Viterbi on the original sequence with that overlap fixed. Two Viterbi calls instead of one per each of the $2^{L-k}$ candidate overlaps. Empirically the MSE is within a fraction of a percent of the unconstrained optimum (Table 2 of the paper).
Why this works at 2 bits where VQ stalls
The VQ quality ceiling at fixed codebook size comes from finite-dimension shape distortion: at $d = 8$, the codebook lives in $\mathbb{R}^8$ and cannot perfectly tile the 8-dimensional Gaussian. Adding more dimensions improves the tiling (achievable distortion at a fixed rate is non-increasing in dimension), but the codebook size blows up as $2^{kd}$. QTIP gets $T = 256$ dimensions effectively, with a $2^{16}$-state trellis that does fit in cache, and uses the trellis edges instead of unstructured codebook entries to capture the high-dimensional shape. The quantization error roughly halves the gap to FP16 for 2-bit Llama-2-70B (Table 5 in the paper) precisely because high-dimensional Gaussian source coding gets closer to the $D(R) = 2^{-2R}$ bound.
What QTIP does not do
It is a weight-only PTQ method. Activations stay in FP16 or BF16. It is orthogonal to KV-cache quantization, attention quantization, and quantization-aware training; it composes with any of them. It assumes the layer-wise proxy loss is a good objective; for layers where it isn't (e.g., the embedding table or output head, which see different input distributions), the paper recommends keeping those at higher precision. It also does not eliminate calibration data — the Hessian is computed from a few hundred forward passes on a representative corpus.
Connections to TheoremPath Topics
- Quantization theory — the rate-distortion framing QTIP optimizes against.
- Model compression and pruning — broader landscape; QTIP is specifically weight-only PTQ.
- Speculative decoding and quantization — the inference-time companion technique.
- Inner product spaces and orthogonality — the orthogonal Hadamard transform that does the incoherence processing.
- Dimensionality reduction theory — the Johnson-Lindenstrauss line of work that proves random projections approximately preserve norms.
- Measure concentration and geometric functional analysis — the concentration arguments that make the rotated weights approximately Gaussian.
- Llama and open-weight models — the empirical target; QTIP tables report Llama-2-7B/13B/70B and Llama-3 numbers.
Why It Matters Now
The systems landscape for LLM inference is bandwidth-bound. Half-precision Llama-3-405B needs 810 GB just for weights; even with 8-way tensor parallelism on H200s, the per-step cost is dominated by streaming weights from HBM into on-chip SRAM. Two-bit weights mean the same model fits in roughly 100 GB and the per-step bandwidth cost drops 8×. The question is not whether to quantize — every production deployment does — but how aggressively and at what quality cost.
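The arithmetic behind those numbers, as a quick sanity check (weights only; runtime overheads like the KV cache and quantization metadata are ignored):

```python
def weight_gb(params_billions: float, bits: float) -> float:
    # Weight memory in GB: params * bits / 8 bytes each.
    return params_billions * bits / 8

for bits in (16, 4, 2):
    print(f"Llama-3-405B @ {bits:>2}-bit: {weight_gb(405, bits):.0f} GB")
# 16-bit: 810 GB, 4-bit: 203 GB, 2-bit: 101 GB -> 8x less HBM traffic at 2-bit
```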
Before QTIP, the answer was "down to 4 bits cleanly, 3 bits if you can tolerate a modest perplexity hit, 2 bits with a real quality cliff." QTIP closes most of the 2-bit gap. Llama-3-70B at 2 bits now lands close enough to FP16 on Wikitext2 that the remaining gap is roughly the quality step from one Llama generation to the next. Two-bit serving is suddenly competitive.
The methodological lesson is broader. The dominant prior on weight distributions was wrong: VQ-style methods spent their representational budget shape-fitting an idiosyncratic empirical distribution, and the codebook size was the binding constraint. Pre-rotating to make the distribution i.i.d. Gaussian moves the problem into a regime where 60 years of source-coding theory directly applies. The right reference points are Marcellin-Fischer 1990 and Cover-Thomas Chapter 10, not the LLM-PTQ literature. This is a clean case of "apply the textbook result once you have set up the assumptions."
The trellis design choices — bitshift structure, compute-based codes, tail-biting approximation — are GPU-specific. The paper is honest that the engineering wins come from matching the code to the hardware: 4 ALU instructions per weight matters because that is what an H100 SM can execute alongside the matrix-multiply. Codes designed for AMD MI300, Apple Silicon, or future TPU generations would look different. The general framework — incoherence processing plus trellis coding — is hardware-agnostic; the specific 1MAD and 3INST instantiations are not.
References
Canonical:
- Tseng, A., Sun, Q., Hou, D., & De Sa, C. (2024). "QTIP: Quantization with Trellises and Incoherence Processing." NeurIPS 2024. arXiv:2406.11235.
Direct precursors (incoherence processing):
- Chee, J., Cai, Y., Kuleshov, V., & De Sa, C. (2023). "QuIP: 2-Bit Quantization of Large Language Models With Guarantees." NeurIPS 2023. arXiv:2307.13304. Introduces incoherence processing for LLM PTQ.
- Tseng, A., Chee, J., Sun, Q., Kuleshov, V., & De Sa, C. (2024). "QuIP#: Even Better LLM Quantization with Hadamard Incoherence and Lattice Codebooks." ICML 2024. arXiv:2402.04396. The 8-dimensional $E_8$-lattice predecessor.
Direct precursors (trellis coding):
- Marcellin, M. W., & Fischer, T. R. (1990). "Trellis coded quantization of memoryless and Gauss-Markov sources." IEEE Transactions on Communications, 38(1). The original TCQ paper.
- Mao, X., & Gray, R. M. (2010). "On a normal-form for the random permutation trellis coder." IEEE Transactions on Information Theory, 56(10). The bitshift trellis structure QTIP uses.
Competing PTQ methods:
- Frantar, E., Ashkboos, S., Hoefler, T., & Alistarh, D. (2023). "GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers." ICLR 2023. arXiv:2210.17323. The Hessian-aware sequential rounding baseline.
- Lin, J., Tang, J., Tang, H., Yang, S., Dang, X., Gan, C., & Han, S. (2024). "AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration." MLSys 2024. arXiv:2306.00978. Outlier-aware scaling baseline.
- Egiazarian, V., Panferov, A., Kuznedelev, D., Frantar, E., Babenko, A., & Alistarh, D. (2024). "Extreme Compression of Large Language Models via Additive Quantization." ICML 2024. arXiv:2401.06118. AQLM, the high-quality VQ baseline QTIP outperforms.
The Hessian-loss framing:
- Nagel, M., Amjad, R. A., van Baalen, M., Louizos, C., & Blankevoort, T. (2020). "Up or Down? Adaptive Rounding for Post-Training Quantization." ICML. arXiv:2004.10568. The per-layer proxy loss QTIP minimizes.
Standard textbook:
- Cover, T. M., & Thomas, J. A. (2006). Elements of Information Theory, 2nd ed. Wiley. Chapter 10 — rate-distortion theory; the $D(R) = 2^{-2R}$ bound that QTIP approaches.
Last reviewed: May 6, 2026