Bernstein Inequality
A variance-sensitive concentration inequality for independent bounded random variables. Bernstein sharpens Hoeffding when the variance is much smaller than the worst-case range.
Why This Matters
Hoeffding's inequality uses only the range of each summand. That is safe, but it can be loose when the summands are usually small. Bernstein's inequality adds one more piece of information: the variance. The reward is a tighter tail bound in low-variance or sparse-loss regimes.
This is the concentration inequality behind many variance-sensitive learning bounds. If a classifier makes rare errors, or a loss is usually near zero, a range-only bound treats the problem as if worst-case losses happen often. Bernstein separates two facts:
- the variables are bounded, so very large jumps cannot happen;
- the variance is small, so moderate deviations are rarer than Hoeffding predicts.
Mental Model
Bernstein has two regimes.
| Deviation size | Dominant term | Tail shape | Interpretation |
|---|---|---|---|
| Small | variance | Gaussian-like | concentration with the true variance |
| Large | range | sub-exponential | tail caused by the bounded but nonzero jump size |
Hoeffding behaves as if the variance were always near the worst-case range variance. Bernstein uses the actual variance proxy.
Formal Setup
Let $X_1, \dots, X_n$ be independent real-valued random variables. Write $\mu_i = \mathbb{E}[X_i]$, and assume each centered variable is bounded by $b > 0$:

$$|X_i - \mu_i| \le b \quad \text{almost surely, for all } i,$$

and define the variance proxy

$$v = \sum_{i=1}^{n} \operatorname{Var}(X_i).$$

The quantity of interest is the centered sum

$$S_n = \sum_{i=1}^{n} \left(X_i - \mu_i\right).$$
Scalar Bernstein Inequality
Statement
If $X_1, \dots, X_n$ are independent, $|X_i - \mu_i| \le b$ almost surely, and $v = \sum_{i=1}^{n} \operatorname{Var}(X_i)$, then for every $t > 0$:

$$\mathbb{P}\left(S_n \ge t\right) \le \exp\left(-\frac{t^2}{2\left(v + bt/3\right)}\right).$$

A two-sided version follows by applying the same bound to $-S_n$ and using a union bound:

$$\mathbb{P}\left(|S_n| \ge t\right) \le 2\exp\left(-\frac{t^2}{2\left(v + bt/3\right)}\right).$$
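As a sanity check, here is a minimal numerical sketch, assuming numpy; the Bernoulli parameters are illustrative choices, not from the text. It implements the one-sided bound and compares it against Monte Carlo tail frequencies for centered Bernoulli sums:

```python
import numpy as np

rng = np.random.default_rng(0)

def bernstein_tail(t, v, b):
    """One-sided scalar Bernstein bound: P(S_n >= t) <= exp(-t^2 / (2*(v + b*t/3)))."""
    return np.exp(-t**2 / (2 * (v + b * t / 3)))

# Centered Bernoulli(p) summands satisfy |X_i - p| <= b = max(p, 1 - p)
# and Var(X_i) = p*(1 - p), so the variance proxy is v = n*p*(1 - p).
n, p, trials = 1000, 0.05, 20000
b = max(p, 1 - p)
v = n * p * (1 - p)
S = (rng.binomial(1, p, size=(trials, n)) - p).sum(axis=1)  # centered sums S_n
for t in [10.0, 20.0, 30.0]:
    print(f"t={t:4.0f}: empirical={np.mean(S >= t):.5f}  bound={bernstein_tail(t, v, b):.5f}")
```

The empirical frequencies should sit below the bound at every $t$; the gap reflects the slack in the Chernoff argument.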
Intuition
The denominator $2(v + bt/3)$ has two terms. The variance term $v$ controls ordinary fluctuations near the mean. The bounded-jump term $bt/3$ prevents the bound from pretending the tail is Gaussian forever. Far enough from the mean, a single large-but-bounded jump becomes the limiting obstruction.
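A small plain-Python sketch of the two regimes; the values of $v$ and $b$ are arbitrary demonstration choices. It evaluates the exact exponent alongside its Gaussian-like and sub-exponential approximations around the crossover $t^\ast = 3v/b$, where the two denominator terms are equal:

```python
def bernstein_exponent(t, v, b):
    """Exponent in the one-sided Bernstein bound exp(-t^2 / (2*(v + b*t/3)))."""
    return t**2 / (2 * (v + b * t / 3))

# The two denominator terms are equal at t* = 3v/b: below t*, the bound is
# roughly Gaussian with variance v; above t*, it decays sub-exponentially.
v, b = 0.5, 1.0
t_star = 3 * v / b
for t in [0.1 * t_star, t_star, 10 * t_star]:
    gaussian_approx = t**2 / (2 * v)      # dominates for small deviations
    subexp_approx = 3 * t / (2 * b)       # dominates for large deviations
    print(f"t={t:5.2f}  exact={bernstein_exponent(t, v, b):7.3f}  "
          f"gaussian~{gaussian_approx:7.3f}  subexp~{subexp_approx:7.3f}")
```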
Proof Sketch
Use the Chernoff method. For $\lambda > 0$:

$$\mathbb{P}\left(S_n \ge t\right) \le e^{-\lambda t}\,\mathbb{E}\!\left[e^{\lambda S_n}\right].$$

Independence factors the moment generating function into a product. A Bernstein MGF lemma bounds each centered bounded summand:

$$\mathbb{E}\!\left[e^{\lambda (X_i - \mu_i)}\right] \le \exp\!\left(\frac{\lambda^2 \operatorname{Var}(X_i)}{2\left(1 - \lambda b/3\right)}\right)$$

for $0 < \lambda < 3/b$. Multiplying over $i$ gives:

$$\mathbb{E}\!\left[e^{\lambda S_n}\right] \le \exp\!\left(\frac{\lambda^2 v}{2\left(1 - \lambda b/3\right)}\right).$$

Optimizing the resulting exponent $-\lambda t + \lambda^2 v / \left(2(1 - \lambda b/3)\right)$ over $\lambda$ gives the displayed Bernstein bound; the optimal choice is $\lambda = t/(v + bt/3)$.
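If sympy is available, the final optimization step can be checked symbolically. This sketch substitutes the claimed optimizer into the Chernoff exponent:

```python
import sympy as sp

t, v, b, lam = sp.symbols("t v b lam", positive=True)

# Chernoff exponent after the MGF lemma, valid for 0 < lam < 3/b.
exponent = -lam * t + lam**2 * v / (2 * (1 - lam * b / 3))

# Substitute the claimed optimizer lam* = t / (v + b*t/3) and simplify;
# the result is -3*t**2 / (2*(b*t + 3*v)), i.e. -t^2 / (2*(v + b*t/3)).
lam_star = t / (v + b * t / 3)
print(sp.simplify(exponent.subs(lam, lam_star)))
```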
Failure Mode
The bounded-deviation assumption is doing real work. If the variables are unbounded but merely have finite variance, Bernstein does not apply. Use Chebyshev, a sub-exponential condition, or a truncation argument instead.
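A small simulation makes the point, assuming numpy; the Pareto shape 2.5 is an arbitrary choice with finite variance but unbounded support, so no finite $b$ can satisfy the assumption:

```python
import numpy as np

rng = np.random.default_rng(1)

# Pareto with shape alpha = 2.5 and x_min = 1 has finite variance but
# unbounded support, so no finite b satisfies |X_i - mu| <= b: the observed
# maximum deviation keeps growing with n instead of leveling off.
alpha = 2.5
mu = alpha / (alpha - 1)                  # mean of Pareto(alpha) with x_min = 1
for n in [10**3, 10**5, 10**7]:
    x = rng.pareto(alpha, size=n) + 1     # numpy draws Lomax; +1 shifts to x_min = 1
    print(f"n={n:>8}: max |X - mu| = {np.abs(x - mu).max():9.2f}")
```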
Bernstein Bound for a Sample Mean
Statement
If $X_1, \dots, X_n$ are i.i.d., $|X_i - \mu| \le b$ almost surely, and $\operatorname{Var}(X_i) = \sigma^2$, then:

$$\mathbb{P}\!\left(\left|\frac{1}{n}\sum_{i=1}^{n} X_i - \mu\right| \ge t\right) \le 2\exp\!\left(-\frac{n t^2}{2\left(\sigma^2 + bt/3\right)}\right).$$
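One common use of this form is to invert it for a sample-size requirement. A minimal sketch, assuming the two-sided bound above and solving $2\exp\!\left(-nt^2/(2(\sigma^2 + bt/3))\right) \le \delta$ for $n$; the helper name and parameter values are illustrative:

```python
import math

def bernstein_sample_size(t, sigma2, b, delta):
    """Smallest n for which the two-sided sample-mean Bernstein bound gives
    P(|mean - mu| >= t) <= delta: n >= 2*(sigma2 + b*t/3)*log(2/delta) / t**2."""
    return math.ceil(2 * (sigma2 + b * t / 3) * math.log(2 / delta) / t**2)

# Illustrative numbers only: tolerance 0.01, variance 0.05, range bound 1.
print(bernstein_sample_size(t=0.01, sigma2=0.05, b=1.0, delta=0.05))  # ~3935
```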
Why It Matters
This is the form used in statistics and learning theory. It says the sample mean concentrates at a rate controlled by the true variance $\sigma^2$ for small $t$, while still respecting the bounded range for larger deviations.
Bernstein vs. Hoeffding
For $X_i \in [\mu - b, \mu + b]$, Hoeffding gives:

$$\mathbb{P}\!\left(\left|\frac{1}{n}\sum_{i=1}^{n} X_i - \mu\right| \ge t\right) \le 2\exp\!\left(-\frac{n t^2}{2 b^2}\right).$$

Bernstein gives:

$$\mathbb{P}\!\left(\left|\frac{1}{n}\sum_{i=1}^{n} X_i - \mu\right| \ge t\right) \le 2\exp\!\left(-\frac{n t^2}{2\left(\sigma^2 + bt/3\right)}\right).$$

If $\sigma^2$ is near $b^2$, the two bounds are comparable up to constants. If $\sigma^2 \ll b^2$, Bernstein can be much tighter.
Suppose $b = 1$, $\sigma^2 = 0.05$, and $t = 0.01$. To bound $\mathbb{P}\!\left(|\bar{X}_n - \mu| \ge t\right)$, Hoeffding uses only the range and behaves like $2\exp(-n/20000)$. Bernstein uses the variance and behaves like $2\exp(-n/1067)$ under the common two-sided sample-mean form. The exponent is about 19 times larger, so the required sample size is far smaller.
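The arithmetic of this example can be reproduced directly, using the same illustrative parameters as above:

```python
b, sigma2, t = 1.0, 0.05, 0.01

# Per-sample exponent rates: each bound has the form 2*exp(-n * rate).
hoeffding_rate = t**2 / (2 * b**2)                  # = 1/20000
bernstein_rate = t**2 / (2 * (sigma2 + b * t / 3))  # ~ 1/1067

print(f"Hoeffding rate: {hoeffding_rate:.2e}")      # 5.00e-05
print(f"Bernstein rate: {bernstein_rate:.2e}")      # 9.37e-04
print(f"ratio: {bernstein_rate / hoeffding_rate:.1f}")  # ~18.8, 'about 19'
```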
Common Confusions
Bernstein is not always smaller than Hoeffding
Bernstein is variance-sensitive, not magic. Constants matter. At small sample sizes or when the variance is not much smaller than the worst-case range variance, Hoeffding can be just as useful.
The variance must be controlled
The bound needs a variance proxy. If you estimate variance from the same sample, that estimate needs its own concentration argument. Plugging in an empirical variance without accounting for its error is a different theorem.
Bounded range and sub-exponential are related but not identical
The scalar Bernstein inequality above assumes bounded centered deviations. The sub-exponential Bernstein bound replaces boundedness with an MGF condition that holds near zero. The two statements have similar two-regime shapes, but their assumptions are not the same.
Exercises
Problem
Let $X_1, \dots, X_n$ be i.i.d. with $|X_i - \mu| \le b$ almost surely and $\operatorname{Var}(X_i) = \sigma^2$. Use the sample-mean Bernstein bound to upper-bound $\mathbb{P}\!\left(\bar{X}_n - \mu \ge t\right)$.
Problem
Explain why Bernstein is often the right bound for finite-class generalization when the loss is bounded and usually near zero.
References
Canonical:
- Bernstein, S. N. (1924). Original paper introducing Bernstein-type probability inequalities.
- Boucheron, Lugosi, Massart, Concentration Inequalities: A Nonasymptotic Theory of Independence (2013), Chapters 2-3
- Massart, Concentration Inequalities and Model Selection (2007), Chapter 2
Current:
- Vershynin, High-Dimensional Probability (2018), Chapter 2
- Wainwright, High-Dimensional Statistics (2019), Chapter 2
- Mohri, Rostamizadeh, Talwalkar, Foundations of Machine Learning (2018), Chapters 2-3
Next Topics
- Sub-exponential random variables: the distributional class behind Bernstein-type tails
- Matrix concentration: Bernstein bounds for random matrices
- Uniform convergence: where Hoeffding and Bernstein feed learning bounds
Last reviewed: April 30, 2026
Canonical graph
Required before and derived from this topic
These links come from prerequisite edges in the curriculum graph. Editorial suggestions are shown here only when the target page also cites this page as a prerequisite.
Required prerequisites
- Expectation, Variance, Covariance, and Moments (layer 0A · tier 1)
- Concentration Inequalities (layer 1 · tier 1)
- Moment Generating Functions (layer 0A · tier 2)
Derived topics
No published topic currently declares this as a prerequisite.