High-Dimensional Probability Lab
Random vectors concentrate, random covariance spectra spread, and weak signals disappear into noise. This lab ties those facts together.
From thin shells to spectral spikes
Build the bridge from Vershynin-style concentration to random matrices, sample covariance, and PCA failure modes.
Most random vectors in high dimension have almost the same length. The central picture is the thin shell: the coordinates stay random, but the relative variation of the norm shrinks as the dimension grows.
Drag the slider and watch this board change.
This lab is the visual side of measure concentration and geometric functional analysis.
Norm concentration says ‖X‖₂/√d stays near 1.
Lévy-style geometry says most of the measure lies near any level set of a Lipschitz function.
Distance preservation (Johnson–Lindenstrauss style) is one sub-Gaussian tail bound applied to every pair of points via a union bound.
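The distance-preservation fact can be checked numerically. This is a minimal NumPy sketch, with the point cloud, dimensions, and seed all chosen for illustration: project with a scaled Gaussian matrix and compare pairwise distances before and after.

```python
import numpy as np

# Minimal sketch of distance preservation: project a point cloud with a
# scaled Gaussian matrix and check that all pairwise distances are
# nearly preserved. Dimensions and seed are illustrative choices.
rng = np.random.default_rng(0)

n_points, d, k = 50, 1000, 200                # ambient dim d, target dim k
X = rng.standard_normal((n_points, d))        # arbitrary point cloud
P = rng.standard_normal((k, d)) / np.sqrt(k)  # random projection, scaled

def pairwise(A):
    diff = A[:, None, :] - A[None, :, :]
    return np.linalg.norm(diff, axis=-1)

orig, proj = pairwise(X), pairwise(X @ P.T)
iu = np.triu_indices(n_points, k=1)           # each unordered pair once
ratios = proj[iu] / orig[iu]
print(ratios.min(), ratios.max())             # both close to 1
```

Every ratio stays close to 1 because each one obeys the same chi-squared tail bound, and a union bound over the pairs controls them all at once.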
Length becomes predictable
At d = 96, a Gaussian vector still has random coordinates, but its normalized length is mostly trapped within a band of about 0.118 around 1.
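The d = 96 picture can be reproduced in a few lines. This is a hedged sketch (sample size and seed are my choices): normalized lengths of standard Gaussian vectors cluster around 1 with spread about 1/√(2d) ≈ 0.072, so most samples land inside the 0.118 band.

```python
import numpy as np

# Minimal sketch of the thin shell at d = 96: normalized lengths
# ||x||_2 / sqrt(d) of standard Gaussian vectors cluster around 1 with
# spread about 1/sqrt(2d) ~ 0.072. Sample size and seed are illustrative.
rng = np.random.default_rng(1)

d, n = 96, 10_000
X = rng.standard_normal((n, d))
norms = np.linalg.norm(X, axis=1) / np.sqrt(d)

print(norms.mean())                          # close to 1
print(norms.std())                           # close to 0.072
print(np.mean(np.abs(norms - 1) < 0.118))    # most samples in the band
```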
Vershynin Chapter 3 turns this picture into norm concentration and random projection results.
ML translation: high dimension turns randomness into geometry, and random-matrix spectra decide when covariance, PCA, and signal recovery are trustworthy.
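That spectral story can be sketched with a spiked-covariance toy model. All of the parameters below (dimensions, spike strengths, the Marchenko-Pastur comparison) are my illustrative choices, not the lab's: pure-noise eigenvalues already spread up to the bulk edge, a weak planted direction hides inside that spread, and a strong one separates from it.

```python
import numpy as np

# Sketch: sample-covariance eigenvalues spread under pure noise, and a
# weak planted direction hides inside that spread while a strong one
# pops out. Spiked-covariance toy model; all parameters illustrative.
rng = np.random.default_rng(2)

n, d = 500, 250                        # samples n, dimension d
gamma = d / n                          # aspect ratio
noise = rng.standard_normal((n, d))
S = noise.T @ noise / n                # sample covariance of pure noise
mp_edge = (1 + np.sqrt(gamma)) ** 2    # Marchenko-Pastur upper edge
print(np.linalg.eigvalsh(S).max(), mp_edge)

u = np.zeros(d)
u[0] = 1.0                             # planted signal direction
tops = {}
for s in (0.2, 2.0):                   # below vs above sqrt(gamma) ~ 0.71
    X = noise + np.sqrt(s) * rng.standard_normal((n, 1)) * u
    tops[s] = np.linalg.eigvalsh(X.T @ X / n).max()
print(tops)                            # weak spike near the edge, strong one above
```

The crossover at spike strength √γ is the reason PCA can silently fail: below it, the signal eigenvalue is indistinguishable from the noise bulk.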
Vershynin: Ch. 2 for Chernoff and sub-Gaussian tails, Ch. 3 for random vectors, Ch. 4 for random matrices, Ch. 5 for nets.