Run a focused checkpoint.
Answer what you know, skip what you do not, and let the run turn weak signals into a short review path.
Scope
Run at a glance
The run starts with lower-friction anchor questions, then checks the bridge from concentration inequalities to learning theory.
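Here, the "concentration-to-learning-theory bridge" means arguments that turn a concentration inequality into a generalization bound. One standard instance, shown for orientation only (the checkpoint's questions need not use this exact form), is Hoeffding's inequality plus a union bound over a finite hypothesis class:

```latex
% Hoeffding for i.i.d. losses in [0,1]:
%   P( |\hat{R}_n(h) - R(h)| \ge \epsilon ) \le 2 \exp(-2 n \epsilon^2).
% A union bound over a finite class H gives, with probability at least 1 - \delta,
\[
  \sup_{h \in \mathcal{H}} \left| \hat{R}_n(h) - R(h) \right|
  \le \sqrt{\frac{\log\left(2|\mathcal{H}|/\delta\right)}{2n}},
\]
% which converts a pointwise concentration statement into a uniform generalization bound.
```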
Focused checkpoint
Ready for the checkpoint run?
Answer what you know and skip what you do not. Misses adjust the order of the remaining questions, and the result becomes a short review path for this checkpoint.
Optimization foundations
A curated 10-question diagnostic spine for gradients, convexity, SGD, batch-size noise, step-size failure, Hessian checks, Newton updates, and Adam.
The learner can explain descent directions, convexity, mini-batch SGD, batch-size variance, step-size caveats, Hessian-based convexity checks, Newton updates, Robbins-Monro schedules, and common Adam errors.
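For orientation, here is a minimal sketch of two updates the questions probe: mini-batch SGD under a Robbins-Monro step-size schedule, and one bias-corrected Adam step. The quadratic objective, data shapes, and hyperparameters below are illustrative assumptions, not checkpoint material.

```python
# Illustrative sketch only; names and values are assumptions made here.
import numpy as np

rng = np.random.default_rng(0)

def grad_batch(w, X, y):
    """Mini-batch gradient of the mean squared error 0.5 * ||Xw - y||^2 / n."""
    return X.T @ (X @ w - y) / len(y)

# Mini-batch SGD with eta_t = eta0 / (1 + t): the steps sum to infinity while
# their squares sum to a finite value, satisfying the Robbins-Monro conditions.
X, y = rng.normal(size=(256, 4)), rng.normal(size=256)
w = np.zeros(4)
eta0, batch = 0.5, 32
for t in range(200):
    idx = rng.choice(len(y), size=batch, replace=False)
    w -= eta0 / (1 + t) * grad_batch(w, X[idx], y[idx])

# One Adam step: exponential moving averages of the gradient and its square,
# each bias-corrected before the update.
m = np.zeros(4)
v = np.zeros(4)
beta1, beta2, eps, lr, t = 0.9, 0.999, 1e-8, 1e-3, 1
g = grad_batch(w, X, y)
m = beta1 * m + (1 - beta1) * g
v = beta2 * v + (1 - beta2) * g**2
w -= lr * (m / (1 - beta1**t)) / (np.sqrt(v / (1 - beta2**t)) + eps)
```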
How this run adapts
Review focus
- gradient direction
- convexity assumptions
- stochastic-gradient noise
- optimizer update mechanics (see the sketch after this list)
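A minimal sketch, assuming a toy quadratic objective, of how three of these focus areas are checked in practice: descent direction via the sign of the directional derivative, convexity via Hessian eigenvalues, and update mechanics via a Newton step. All names and values are illustrative, not checkpoint content.

```python
# Toy quadratic f(w) = 0.5 w^T A w - b^T w, so grad f = A w - b and Hessian = A.
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])   # symmetric positive definite Hessian
b = np.array([1.0, -1.0])
w = np.array([0.5, 0.5])

g = A @ w - b                  # gradient at w
d = -g                         # steepest-descent direction
assert g @ d < 0               # negative directional derivative => descent direction

assert np.all(np.linalg.eigvalsh(A) >= 0)   # PSD Hessian everywhere => f is convex

w_new = w + np.linalg.solve(A, -g)          # Newton step: solve H d = -g
assert np.allclose(A @ w_new - b, 0)        # on a quadratic, one Newton step reaches grad = 0
```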
Could unlock
- Optimizer Debug Checklist
- Learning Rate Failure Diagnoser
Optional self-check
Leave blank if you want the neutral ramp. Pick only the areas where you have a strong signal.
Vectors, matrices, derivatives, notation.
Distributions, estimators, likelihood, concentration.
Gradients, convexity, SGD, Adam, proximal ideas.
ERM, VC, PAC, Rademacher, generalization.
Deep nets, attention, transformers, value functions.
Moving ideas across topics and checking assumptions.
This focused run uses the gold questions attached to the checkpoint. Sign in to attach the run to your profile.
Browser-only run
This diagnostic saves in this browser only and will not create account-level learning events. Sign in first if you want the run to count toward your profile.