Statistical Estimation
Basu's Theorem
A complete sufficient statistic is independent of every ancillary statistic. This provides the cleanest method for proving independence between statistics without computing joint distributions.
Why This Matters
Proving that two statistics are independent usually requires computing their joint distribution and factoring it. This can be painful. Basu's theorem gives a shortcut: if one statistic is complete sufficient and the other is ancillary, they are independent. No joint distribution computation needed.
The classic application: in a normal sample, the sample mean $\bar{X}$ is independent of the sample variance $S^2$. This fact underpins the derivation of the t-test. Basu's theorem proves it in two lines.
Formal Setup
Ancillary Statistic
A statistic $A(X)$ is ancillary for a parameter $\theta$ if its distribution does not depend on $\theta$. It carries no information about $\theta$ by itself, but may carry information in combination with other statistics.
Complete Statistic
A statistic $T$ is complete if for every measurable function $g$:

$$E_\theta[g(T)] = 0 \text{ for all } \theta \quad\Longrightarrow\quad g(T) = 0 \text{ a.s. for all } \theta.$$

Completeness means there are no nontrivial unbiased estimators of zero based on $T$. Informally, $T$ contains no "wasted" information.
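Completeness is easiest to see in a concrete family. A standard illustration (not worked in the source) is the binomial count $T \sim \text{Bin}(n, p)$, where the defining condition reduces to a polynomial identity:

```latex
% Suppose E_p[g(T)] = 0 for every p in (0, 1). Then
\mathbb{E}_p[g(T)]
  = \sum_{t=0}^{n} g(t) \binom{n}{t} p^{t} (1-p)^{n-t}
  = (1-p)^{n} \sum_{t=0}^{n} g(t) \binom{n}{t} \left(\frac{p}{1-p}\right)^{t}
  = 0 .
% The sum is a polynomial in r = p/(1-p) that vanishes for all r > 0,
% so every coefficient g(t) * binom(n, t) is zero, hence g(t) = 0 for
% t = 0, 1, ..., n. Thus T is complete.
```

The same coefficient-matching argument works throughout full-rank exponential families, which is why completeness of the natural sufficient statistic comes for free there.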
Main Theorems
Basu's Theorem
Statement
If $T$ is a complete sufficient statistic for $\theta$ and $A$ is ancillary for $\theta$, then $T$ and $A$ are independent (under every $P_\theta$).
Intuition
Sufficiency means that the conditional distribution of the data given $T$ does not depend on $\theta$. Ancillarity means the marginal distribution of $A$ does not depend on $\theta$. Completeness forces these two facts to combine into independence: the conditional distribution of $A$ given $T$ must equal the marginal distribution of $A$.
Proof Sketch
Let $B$ be any measurable set. Define $g(t) = P(A \in B \mid T = t) - P(A \in B)$. By sufficiency, $P(A \in B \mid T = t)$ does not depend on $\theta$. By ancillarity, $P(A \in B)$ does not depend on $\theta$. So $g$ does not depend on $\theta$, and $E_\theta[g(T)] = P_\theta(A \in B) - P(A \in B) = 0$ for all $\theta$ (using ancillarity again). By completeness, $g(T) = 0$ a.s., meaning $P(A \in B \mid T) = P(A \in B)$ a.s. This is independence.
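The key identity in the proof, $P(A \in B \mid T) = P(A \in B)$, can be checked numerically. The sketch below (our construction, not from the source) uses the normal setting: $T = \bar{X}$ is complete sufficient for $\mu$ with $\sigma^2$ known, $A = S^2$ is ancillary, and the event $B = \{S^2 < 0.8\}$ is arbitrary. The conditional probability of $B$ should be flat across quartile bins of $T$.

```python
import numpy as np

# Monte Carlo check of the proof sketch: with X_i ~ N(mu, 1),
# T = sample mean (complete sufficient for mu), A = sample variance
# (ancillary for mu). For B = {A < 0.8}, P(A in B | T in bin) should
# match the marginal P(A in B) up to simulation error.
rng = np.random.default_rng(0)
n, reps = 10, 200_000
x = rng.normal(loc=1.5, scale=1.0, size=(reps, n))
t = x.mean(axis=1)              # sufficient statistic T
a = x.var(axis=1, ddof=1)       # ancillary statistic A
in_b = a < 0.8                  # the event {A in B}

# Split replications into quartile bins of T; compare P(A in B) per bin.
edges = np.quantile(t, [0.25, 0.5, 0.75])
labels = np.digitize(t, edges)
cond = [in_b[labels == k].mean() for k in range(4)]
print("P(A in B | T bin):", np.round(cond, 3))
print("P(A in B) overall:", round(in_b.mean(), 3))
```

All four conditional frequencies agree with the marginal frequency to within Monte Carlo error, which is exactly the statement $g(T) = 0$ a.s. for this choice of $B$.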
Why It Matters
Without this theorem, proving independence of $\bar{X}$ and $S^2$ in normal sampling requires computing the joint density via a change of variables. With Basu's theorem, you only need three facts: (1) $\bar{X}$ is complete sufficient for $\mu$ when $\sigma^2$ is known, (2) $S^2$ is ancillary for $\mu$, (3) apply the theorem. This pattern extends to many other settings.
Failure Mode
If the sufficient statistic is not complete, the theorem fails. For example, for a uniform distribution on $(\theta, \theta + 1)$, the pair of extreme order statistics $(X_{(1)}, X_{(n)})$ is sufficient but not complete. The range $R = X_{(n)} - X_{(1)}$ is ancillary but not independent of the midrange $M = (X_{(1)} + X_{(n)})/2$.
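The dependence in this failure mode is easy to exhibit by simulation, with one subtlety: given the range $R = r$, the midrange is symmetric about the interval center, so $\mathrm{Cov}(R, M) = 0$ and a plain correlation check would miss the dependence. The conditional spread of $M$ exposes it instead. The setup below is our illustration, not the source's.

```python
import numpy as np

# Uniform(theta, theta + 1) sample: range R is ancillary, but given
# R = r the midrange M is uniform on an interval of length 1 - r, so
# Var(M | R) shrinks as R grows -- R and M are dependent even though
# Cov(R, M) = 0 by symmetry.
rng = np.random.default_rng(1)
theta, n, reps = 3.0, 5, 100_000
x = rng.uniform(theta, theta + 1.0, size=(reps, n))
lo, hi = x.min(axis=1), x.max(axis=1)
r = hi - lo               # ancillary range
m = (lo + hi) / 2.0       # midrange

small = r < np.median(r)  # split replications by range size
print("Var(M | R small):", round(m[small].var(), 4))
print("Var(M | R large):", round(m[~small].var(), 4))
```

The conditional variance of the midrange is several times larger in the small-range half, which is incompatible with independence.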
Canonical Examples
Normal sampling: mean and variance independence
Let $X_1, \dots, X_n \sim N(\mu, \sigma^2)$ with $\sigma^2$ known. The sample mean $\bar{X}$ is complete sufficient for $\mu$ (this follows from the normal distribution being an exponential family). The statistic $S^2 = \frac{1}{n-1}\sum_{i=1}^n (X_i - \bar{X})^2$ has a distribution that depends only on $\sigma^2$, not on $\mu$. So $S^2$ is ancillary for $\mu$. By Basu's theorem, $\bar{X}$ and $S^2$ are independent. This is the fact that makes the t-statistic have a t-distribution.
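A quick numerical sanity check (ours, not the source's; near-zero correlation is consistent with independence, not a proof of it) shows how special the normal case is: for skewed data such as exponential samples, the sample mean and sample variance are strongly correlated.

```python
import numpy as np

# Basu's theorem predicts corr(mean, variance) ~ 0 for normal samples.
# For Exp(1) samples the two statistics are dependent (positive third
# moment), so the correlation is far from zero.
rng = np.random.default_rng(2)
n, reps = 10, 200_000

samples = {
    "normal": rng.normal(5.0, 2.0, size=(reps, n)),
    "exponential": rng.exponential(1.0, size=(reps, n)),
}

corrs = {}
for name, data in samples.items():
    xbar = data.mean(axis=1)
    s2 = data.var(axis=1, ddof=1)
    corrs[name] = np.corrcoef(xbar, s2)[0, 1]
    print(f"{name}: corr(mean, variance) = {corrs[name]:+.3f}")
```

The normal correlation sits at simulation noise level, while the exponential one is large and positive.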
Exponential distribution: mean and coefficient of variation
Let $X_1, \dots, X_n \sim \text{Exp}(\lambda)$. The sample sum $T = \sum_{i=1}^n X_i$ is complete sufficient for $\lambda$. The vector of ratios $(X_1/T, \dots, X_n/T)$ is ancillary (its distribution is uniform on the simplex, independent of $\lambda$). By Basu's theorem, $T$ is independent of all the ratios $X_i/T$.
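This independence can also be checked by simulation. The sketch below (our construction) conditions on whether the total $T$ is small or large and verifies that the distribution of the scale-free ratio $X_1/T$ does not move; the event $\{X_1/T < 1/n\}$ is an arbitrary choice.

```python
import numpy as np

# With X_i ~ Exp(lambda), T = sum(X_i) should be independent of the
# ratio X_1 / T (whose marginal is Beta(1, n-1), free of lambda).
# Check: P(ratio < 1/n) is the same whether T is small or large.
rng = np.random.default_rng(3)
lam, n, reps = 0.5, 8, 200_000
x = rng.exponential(1.0 / lam, size=(reps, n))  # NumPy takes scale = 1/lambda
t = x.sum(axis=1)
ratio = x[:, 0] / t

low_t = t < np.median(t)
p_low = (ratio[low_t] < 1.0 / n).mean()
p_high = (ratio[~low_t] < 1.0 / n).mean()
print("P(ratio < 1/n | T small):", round(p_low, 3))
print("P(ratio < 1/n | T large):", round(p_high, 3))
```

The two conditional frequencies agree to within Monte Carlo error, as Basu's theorem requires.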
Common Confusions
Ancillary does not mean useless
An ancillary statistic carries no information about $\theta$ by itself. But conditionally, given the ancillary, the precision of estimation can change. This is the basis of conditional inference. Basu's theorem says: if you have a complete sufficient statistic, you cannot improve estimation by conditioning on the ancillary.
Completeness is doing the heavy lifting
Sufficiency alone does not imply independence from ancillary statistics. Completeness is the key condition. Think of completeness as saying the sufficient statistic has no redundancy: there is no nonconstant function of $T$ that is itself ancillary.
Summary
- Complete sufficient + ancillary implies independent
- The proof uses completeness to upgrade "same expectation" to "equal a.s."
- The main application is proving independence without computing joint distributions
- Fails without completeness: sufficiency alone is not enough
Exercises
Problem
Let $X_1, \dots, X_n \sim U(0, \theta)$. Identify a complete sufficient statistic and an ancillary statistic. State what Basu's theorem tells you.
Problem
Give an example where a sufficient statistic and an ancillary statistic are not independent. What condition of Basu's theorem fails?
References
Canonical:
- Casella & Berger, Statistical Inference, Chapter 6.2
- Lehmann & Casella, Theory of Point Estimation, Chapter 4
Current:
- Keener, Theoretical Statistics, Chapter 3
- van der Vaart, Asymptotic Statistics (1998), Chapters 2-8
Next Topics
- Fisher information: the natural next step in estimation theory
Last reviewed: April 2026
Prerequisites
Foundations this topic depends on.
- Sufficient Statistics and Exponential Families (Layer 0B)
- Maximum Likelihood Estimation (Layer 0B)
- Common Probability Distributions (Layer 0A)
- Sets, Functions, and Relations (Layer 0A)
- Basic Logic and Proof Techniques (Layer 0A)
- Differentiation in Rn (Layer 0A)