Math Foundations

Master Linear Algebra

A focused route through the linear algebra that powers PCA, optimization, embeddings, attention, and neural networks.

Time

~14 hours

Core loop

object → shape → operation → geometric meaning → ML use

Topics

8 ordered topics

End state

You can read matrix-heavy ML pages without stopping at every symbol, and you know which algebraic object is doing the work.

Checkpoint 1

Vectors and Linear Maps

Treat vectors as objects and matrices as transformations, not just tables of numbers.

Step 1

Vectors, Matrices, and Linear Maps

Know what the objects are and how a matrix acts on a vector.

Open Vectors, Matrices, and Linear Maps
Practice: Given a batch matrix, state what each axis means before multiplying.
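
A minimal NumPy sketch of this step, not taken from the page (the matrix, vector, and batch shapes are illustrative assumptions): a matrix is a linear map acting on vectors, and each axis of the batch is named before multiplying.

```python
import numpy as np

# A linear map from R^3 to R^2, stored as a (2, 3) matrix:
# axis 0 = output dimension, axis 1 = input dimension.
W = np.array([[1.0, 0.0, -1.0],
              [2.0, 1.0,  0.0]])

x = np.array([1.0, 2.0, 3.0])   # shape (3,): one input vector
y = W @ x                       # shape (2,): the image of x under W
print(y)                        # [-2.  4.]

# A batch of 5 vectors: axis 0 = examples, axis 1 = features.
X = np.random.default_rng(0).normal(size=(5, 3))
Y = X @ W.T                     # (5, 3) @ (3, 2) -> (5, 2): map each row
print(Y.shape)                  # (5, 2)
```
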
Step 2

Matrix Operations and Properties

Multiply, transpose, invert when valid, and check dimensions before computing.

Open Matrix Operations and Properties
Practice: Predict the output shape of three chained matrix products.
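
A small sketch of that shape discipline, with made-up dimensions: predict each output shape first, then let the asserts confirm it.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 3))
B = rng.normal(size=(3, 5))
C = rng.normal(size=(5, 2))

# Predict before computing: (4,3)@(3,5) -> (4,5); (4,5)@(5,2) -> (4,2).
out = A @ B @ C
assert out.shape == (4, 2)

# Transpose reverses the order of the factors.
assert np.allclose((A @ B).T, B.T @ A.T)

# Invert only when valid: square and nonsingular.
M = rng.normal(size=(3, 3))
M_inv = np.linalg.inv(M)        # raises LinAlgError if M is singular
assert np.allclose(M @ M_inv, np.eye(3))
```
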
Checkpoint 2

Eigenvectors, SVD, and Geometry

Explain directions, variance, rank, projections, and low-dimensional structure.

Step 1

Matrix Norms

Measure vector and matrix size in ways that match ML stability arguments.

Open Matrix Norms
Practice: Compare Frobenius and spectral norm interpretations for a weight matrix.
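
One way to compare the two norms numerically; the weight-matrix size here is a placeholder assumption, not from the page.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 32))      # a hypothetical weight matrix

fro = np.linalg.norm(W, 'fro')     # sqrt of the sum of squared entries
spec = np.linalg.norm(W, 2)        # largest singular value

# The Frobenius norm is the l2 norm of all singular values together;
# the spectral norm is the worst-case stretch of any unit input.
s = np.linalg.svd(W, compute_uv=False)
assert np.isclose(fro, np.sqrt(np.sum(s**2)))
assert np.isclose(spec, s[0])

# The spectral norm bounds how much W can amplify a vector,
# which is what ML stability arguments usually need.
x = rng.normal(size=32)
assert np.linalg.norm(W @ x) <= spec * np.linalg.norm(x) + 1e-10
```
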
Step 2

Eigenvalues and Eigenvectors

Identify invariant directions and why they matter for covariance and dynamics.

Open Eigenvalues and Eigenvectors
Practice: Explain what a dominant eigenvector says about repeated multiplication.
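
A short power-iteration sketch (the symmetric matrix is an arbitrary example, not from the page): repeated multiplication pulls a generic vector toward the dominant eigenvector.

```python
import numpy as np

# Symmetric matrix, so eigenvectors are real and orthogonal.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
vals, vecs = np.linalg.eigh(A)    # eigh returns ascending eigenvalues
dominant = vecs[:, -1]            # eigenvector of the largest eigenvalue

# Power iteration: multiply repeatedly and renormalize.
v = np.array([1.0, 0.0])
for _ in range(50):
    v = A @ v
    v /= np.linalg.norm(v)

# v aligns (up to sign) with the dominant eigenvector.
assert np.isclose(abs(v @ dominant), 1.0)
print(vals[-1], v)                # 3.0, roughly [0.707, 0.707]
```
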
Step 3

Singular Value Decomposition

Decompose a matrix into rotations, scalings, and low-rank structure.

Open Singular Value Decomposition
Practice: Use singular values to reason about rank and reconstruction error.
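
A sketch of rank and reconstruction error via truncated SVD; the matrix sizes and noise level are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# A matrix that is exactly rank 2, plus small noise.
A = rng.normal(size=(20, 2)) @ rng.normal(size=(2, 15))
A += 0.01 * rng.normal(size=A.shape)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
print(s[:4])                      # two large values, then near zero

# Rank-k truncation: keep only the top k singular triples.
k = 2
A_k = (U[:, :k] * s[:k]) @ Vt[:k, :]

# Eckart-Young: the spectral-norm reconstruction error equals the
# first singular value that was dropped.
err = np.linalg.norm(A - A_k, 2)
assert np.isclose(err, s[k])
```
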
Step 4

Principal Component Analysis

See how covariance, eigenvectors, and projection become a learning method.

Open Principal Component Analysis
Practice: Describe why PCA keeps directions with high variance.
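
A from-scratch PCA sketch under the usual assumptions (centered data, sample covariance); the per-axis variances are invented so one direction clearly dominates.

```python
import numpy as np

rng = np.random.default_rng(0)
# 200 points in R^3 with most variance along the first axis.
X = rng.normal(size=(200, 3)) * np.array([5.0, 1.0, 0.2])
X = X - X.mean(axis=0)            # center first: PCA assumes zero mean

cov = (X.T @ X) / (len(X) - 1)    # sample covariance, shape (3, 3)
vals, vecs = np.linalg.eigh(cov)  # ascending eigenvalues

# Keep the two highest-variance directions and project onto them.
W = vecs[:, ::-1][:, :2]          # (3, 2): top principal components
Z = X @ W                         # (200, 2): low-dimensional codes

# The kept eigenvalues are exactly the variances of the projections,
# which is why PCA keeps the high-variance directions.
assert np.allclose(vals[::-1][:2], Z.var(axis=0, ddof=1))
```
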
Checkpoint 3

Matrix Calculus for ML

Connect Jacobians, Hessians, and matrix derivatives to optimization and backprop.

Step 1

The Jacobian Matrix

Track how vector-valued functions change with respect to vector inputs.

Open The Jacobian Matrix
Practice: State the Jacobian shape for a function from R^d to R^k.
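
A finite-difference sketch of that shape rule; the function f and the helper jacobian_fd are hypothetical, not from the page.

```python
import numpy as np

# f: R^3 -> R^2, so its Jacobian at any point has shape (2, 3):
# one row per output, one column per input.
def f(x):
    return np.array([x[0] * x[1], np.sin(x[2])])

def jacobian_fd(f, x, eps=1e-6):
    """Finite-difference Jacobian: column j approximates df/dx_j."""
    fx = f(x)
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        x_step = x.copy()
        x_step[j] += eps
        J[:, j] = (f(x_step) - fx) / eps
    return J

x = np.array([1.0, 2.0, 0.5])
J = jacobian_fd(f, x)
print(J.shape)                    # (2, 3) = (outputs k, inputs d)
# Analytic check: row 0 is [x1, x0, 0], row 1 is [0, 0, cos(x2)].
assert np.allclose(J, [[2.0, 1.0, 0.0],
                       [0.0, 0.0, np.cos(0.5)]], atol=1e-4)
```
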
Step 2

Matrix Calculus

Use gradients and matrix derivatives without losing shape discipline.

Open Matrix Calculus
Practice: Derive the shape of a gradient before writing the formula.
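
A sketch of that shape-first habit on a least-squares loss; the data, dimensions, and loss are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))      # data: (n, d)
y = rng.normal(size=50)           # targets: (n,)
w = rng.normal(size=4)            # parameters: (d,)

# L(w) = (1/n) ||X w - y||^2 is a scalar, so dL/dw must have the
# same shape as w, namely (d,). Fix the shape, then write the formula.
n = len(y)
r = X @ w - y                     # residual: (n,)
grad = (2.0 / n) * (X.T @ r)      # (d, n) @ (n,) -> (d,)
assert grad.shape == w.shape

# Finite-difference check of one coordinate.
L = lambda w: np.mean((X @ w - y) ** 2)
eps = 1e-6
w2 = w.copy()
w2[0] += eps
assert np.isclose((L(w2) - L(w)) / eps, grad[0], atol=1e-4)
```
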

How to use this path

Do not just read the pages. For each step, write out the shape ledger, answer the practice prompt, and then run a small quiz or diagnostic. The goal is operational fluency: you should be able to predict what changes before code or algebra tells you.