Learning Track
ML Research Interview Prep
The theory you'll actually be asked at top ML labs. Five phases, from foundations assumed cold through the 2026 research frontier. Every topic linked to its exact theorem statements, proofs, and failure modes. No hand-waving.
Phase 1: Foundations assumed cold
Often asked as warm-ups. Weak answers here usually end the round early, no matter how strong your later material is.
Phase 2: Learning theory classics
The "why do models generalize" thread. Expect precise statements of VC, Rademacher, PAC, and uniform convergence, not verbal summaries.
Phase 3: Optimization and training
Practical questions with theoretical answers. Can you reason about convergence rates and failure modes?
Phase 4: Modern deep learning
What research teams care about right now: transformer internals, generalization puzzles, and scaling behavior.
Phase 5: Research frontier
Asked at alignment, interpretability, and frontier-lab research roles. Know the questions; know what's open.
Representative questions
A sample of what gets asked. Each links to the page that answers it.
- Derive the bias-variance decomposition.
- What is VC dimension? Give an example where it's infinite.
- Why does batch normalization work?
- State Hoeffding's inequality precisely.
- Why the 1/sqrt(d) scaling in attention?
- What is double descent? Why does it not contradict bias-variance?
- Derive backprop on a 2-layer MLP.
- Compare PPO and DPO for RLHF.
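For the first question above, a sketch of the standard derivation under the usual assumptions: squared loss at a fixed input x, targets y = f(x) + ε with zero-mean noise of variance σ², and expectation taken over the training set D and the noise.

```latex
% Bias-variance decomposition of squared-error risk at a fixed x,
% assuming y = f(x) + \varepsilon, \mathbb{E}[\varepsilon] = 0,
% \operatorname{Var}(\varepsilon) = \sigma^2, \hat f trained on D:
\mathbb{E}_{D,\varepsilon}\big[(y - \hat f(x))^2\big]
  = \underbrace{\big(f(x) - \mathbb{E}_D[\hat f(x)]\big)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}_D\Big[\big(\hat f(x) - \mathbb{E}_D[\hat f(x)]\big)^2\Big]}_{\text{variance}}
  + \underbrace{\sigma^2}_{\text{irreducible noise}}
```

The derivation is two add-and-subtract steps: insert E_D[f̂(x)] inside the square, expand, and note the cross terms vanish because ε is independent of D and E_D[f̂(x) − E_D f̂(x)] = 0.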
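A minimal numeric sketch of the 1/sqrt(d) question (my own demo, not a linked page): for q, k with i.i.d. zero-mean unit-variance entries, the dot product q·k has variance d, so unscaled logits blow up with dimension and saturate the softmax; dividing by sqrt(d) keeps the variance near 1 at every width.

```python
# Demo: variance of attention logits with and without 1/sqrt(d) scaling.
# Entries of q and k are i.i.d. standard normal, so Var(q . k) = d.
import numpy as np

rng = np.random.default_rng(0)
for d in (16, 64, 1024):
    q = rng.standard_normal((10_000, d))
    k = rng.standard_normal((10_000, d))
    logits = (q * k).sum(axis=1)      # raw dot products: variance grows like d
    scaled = logits / np.sqrt(d)      # scaled logits: variance stays near 1
    print(f"d={d:5d}  raw var={logits.var():8.1f}  scaled var={scaled.var():.2f}")
```

The interview follow-up is usually the consequence: large-variance logits push the softmax toward a one-hot distribution, so gradients through it vanish early in training.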
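And for the backprop question, a hedged sketch (names and shapes are mine): a 2-layer ReLU MLP with squared loss, with the hand-derived gradients checked against a finite difference.

```python
# Backprop through a 2-layer MLP (ReLU hidden layer, 0.5 * ||y - t||^2 loss),
# verified against a one-sided finite-difference estimate.
import numpy as np

def forward(x, W1, b1, W2, b2):
    h = np.maximum(0.0, W1 @ x + b1)   # hidden layer: ReLU(W1 x + b1)
    y = W2 @ h + b2                    # linear output layer
    return h, y

def loss_and_grads(x, t, W1, b1, W2, b2):
    h, y = forward(x, W1, b1, W2, b2)
    loss = 0.5 * np.sum((y - t) ** 2)
    dy = y - t                         # dL/dy
    dW2, db2 = np.outer(dy, h), dy     # output-layer gradients
    dh = W2.T @ dy                     # backprop into hidden activations
    dz = dh * (h > 0)                  # ReLU gate: gradient passes where h > 0
    dW1, db1 = np.outer(dz, x), dz     # hidden-layer gradients
    return loss, (dW1, db1, dW2, db2)

rng = np.random.default_rng(1)
x, t = rng.standard_normal(4), rng.standard_normal(2)
W1, b1 = rng.standard_normal((5, 4)), rng.standard_normal(5)
W2, b2 = rng.standard_normal((2, 5)), rng.standard_normal(2)
loss, (dW1, db1, dW2, db2) = loss_and_grads(x, t, W1, b1, W2, b2)

# Finite-difference check on a single entry of W1.
eps = 1e-6
Wp = W1.copy(); Wp[0, 0] += eps
lp, _ = loss_and_grads(x, t, Wp, b1, W2, b2)
numeric = (lp - loss) / eps
print(f"analytic {dW1[0, 0]:.6f}  numeric {numeric:.6f}")
```

Being able to produce the gradient-check step unprompted is itself a common interview signal: it shows you know how to validate a derivation, not just recite it.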
Strategy
- Breadth before depth. Know every Phase 1-3 topic well enough to state the theorem and sketch the proof. Interviewers follow up on what you state.
- Pick one frontier area. You cannot cover everything in Phase 5. Pick scaling, interpretability, or alignment and go deep.
- Expect failure-mode questions. "When does X fail?" is the most common follow-up. Every page here has a FailureMode section for a reason.
- Start with the gap finder. Choose the topic you're aiming for; it walks the prerequisites backward and builds you a reading list.