Prerequisite chain
Prerequisites for CLIP, OpenCLIP, and SigLIP: Contrastive Language-Image Pretraining
Topics you need before working through CLIP, OpenCLIP, and SigLIP: Contrastive Language-Image Pretraining. Direct prerequisites are listed first; transitive prerequisites (the chain reachable through them) follow.
Direct prerequisites (3)
- Contrastive Learning (layer 3, tier 2); see the loss sketch after this list
- Vision Transformer Lineage: ViT, DeiT, Swin, MAE, DINOv2, SAM (layer 4, tier 1)
- Information Theory Foundations (layer 0B, tier 2)
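To make the first prerequisite concrete: CLIP-style models are trained with a symmetric contrastive objective over a batch of image-text pairs. The snippet below is a minimal PyTorch sketch of that InfoNCE-style loss; the embedding size, batch size, and fixed temperature are illustrative assumptions, not any particular model's settings.

```python
# Minimal sketch of a CLIP-style symmetric contrastive (InfoNCE) loss.
# The temperature value and tensor shapes below are illustrative assumptions.
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb: torch.Tensor,
                          text_emb: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    """Symmetric cross-entropy over cosine-similarity logits.

    image_emb, text_emb: (batch, dim) embeddings where row i of each
    tensor comes from the same image-text pair.
    """
    # L2-normalize so the dot product equals cosine similarity.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # (batch, batch) similarity matrix, scaled by the temperature.
    logits = image_emb @ text_emb.t() / temperature

    # The positive pair for row i sits in column i; all others are negatives.
    targets = torch.arange(logits.size(0), device=logits.device)

    # Average the image-to-text and text-to-image cross-entropy terms.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2

if __name__ == "__main__":
    # Toy usage with random embeddings standing in for encoder outputs.
    imgs = torch.randn(8, 512)
    txts = torch.randn(8, 512)
    print(clip_contrastive_loss(imgs, txts))
```

A sigmoid-based counterpart, the variant SigLIP is known for, is sketched after the transitive list below.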
Reachable through the chain (19)
These topics are not directly cited as prerequisites but are reached transitively by following the chain upward. Working through the direct prerequisites pulls these in.
- Feedforward Networks and Backpropagation (layer 2, tier 1)
- Differentiation in Rⁿ (layer 0A, tier 1)
- Sets, Functions, and Relations (layer 0A, tier 1)
- Basic Logic and Proof Techniques (layer 0A, tier 2)
- Vectors, Matrices, and Linear Maps (layer 0A, tier 1)
- Continuity in Rⁿ (layer 0A, tier 1)
- Metric Spaces, Convergence, and Completeness (layer 0A, tier 1)
- Matrix Calculus (layer 1, tier 1)
- The Jacobian Matrix (layer 0A, tier 1)
- The Hessian Matrix (layer 0A, tier 1)
- Matrix Operations and Properties (layer 0A, tier 1)
- Eigenvalues and Eigenvectors (layer 0A, tier 1)
- Activation Functions (layer 1, tier 1)
- Convex Optimization Basics (layer 1, tier 1)
- Transformer Architecture (layer 4, tier 2)
- Attention Mechanism Theory (layer 4, tier 2)
- Softmax and Numerical Stability (layer 1, tier 1)
- Convolutional Neural Networks (layer 3, tier 2)
- Self-Supervised Vision (layer 4, tier 2)
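For contrast with the softmax-based sketch above, SigLIP scores every image-text pair independently with a sigmoid, which removes the batch-wise softmax normalization. The sketch below is again illustrative: the fixed temperature and bias values and the simple mean reduction are simplifying assumptions (SigLIP learns both parameters and normalizes by batch size).

```python
# Minimal sketch of a SigLIP-style pairwise sigmoid loss.
# Fixed temperature/bias and the mean reduction are illustrative assumptions.
import torch
import torch.nn.functional as F

def siglip_sigmoid_loss(image_emb: torch.Tensor,
                        text_emb: torch.Tensor,
                        temperature: float = 10.0,
                        bias: float = -10.0) -> torch.Tensor:
    """Binary log-loss over all image-text pairs in the batch.

    Matched pairs (the diagonal) get label +1, all other pairs -1,
    so each pair is judged independently without a row-wise softmax.
    """
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # Scaled and shifted cosine-similarity logits, shape (batch, batch).
    logits = image_emb @ text_emb.t() * temperature + bias

    n = logits.size(0)
    # +1 on the diagonal (positives), -1 elsewhere (negatives).
    labels = 2 * torch.eye(n, device=logits.device) - 1

    # -log sigmoid(label * logit), reduced by a simple mean here.
    return -F.logsigmoid(labels * logits).mean()

if __name__ == "__main__":
    imgs = torch.randn(8, 512)
    txts = torch.randn(8, 512)
    print(siglip_sigmoid_loss(imgs, txts))
```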