Slice Sampling
Slice sampling draws from a target distribution by sampling uniformly from the region under its density curve. By introducing an auxiliary variable, it avoids the proposal-distribution tuning that random-walk Metropolis-Hastings requires.
Why This Matters
Random-walk Metropolis-Hastings requires choosing a proposal distribution, and performance is sensitive to this choice. A proposal that is too narrow gives high acceptance but slow exploration. A proposal that is too wide gives low acceptance and wastes computation. Slice sampling sidesteps this: it introduces an auxiliary variable and samples uniformly from the region under the density curve, automatically adapting to the local scale of the target.
The algorithm has no tuning parameters in its idealized form. In practice, the "stepping-out" procedure introduces a width parameter, but performance is robust to this choice.
The Idea
To sample from a density f(x) (known only up to a normalizing constant), the key observation is:
If you can sample uniformly from the region under the curve of f, that is, from {(x, u) : 0 < u < f(x)}, then the x-marginal is exactly proportional to f.
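In symbols: the joint distribution that is uniform under the curve has density p(x, u) proportional to the indicator of the subgraph, and integrating out the auxiliary variable recovers f:

```latex
p(x, u) \;\propto\; \mathbf{1}\{0 < u < f(x)\},
\qquad
p(x) \;\propto\; \int_0^{\infty} \mathbf{1}\{0 < u < f(x)\}\, du \;=\; f(x).
```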
The Algorithm
Slice Sampler
Given current state x and (unnormalized) target density f:
- Draw auxiliary variable: sample u ~ Uniform(0, f(x)).
- Define the slice: S = {x : f(x) > u}.
- Sample from slice: draw the new state x' uniformly from S.
This defines a Markov chain with stationary distribution proportional to f.
Step 3 is the hard part. The slice can be a complicated, disconnected set. In one dimension, the "stepping-out" and "shrinking" procedure makes this practical.
Stepping-out procedure (univariate):
- Start with an interval of initial width w around x.
- Randomly position the interval: L = x − w·U, R = L + w, where U ~ Uniform(0, 1).
- Step out: while f(L) > u, set L ← L − w. While f(R) > u, set R ← R + w.
- The interval [L, R] now contains the slice (at least the connected component containing x).
- Sample x' ~ Uniform(L, R). If f(x') > u, accept x'. Otherwise, shrink the interval (replace L or R with x', depending on which side of x the rejected point fell) and repeat.
The shrinking step guarantees that the algorithm eventually finds a point in the slice.
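The procedure above can be sketched in code. This is a minimal univariate implementation; the function and parameter names (`slice_sample_step`, `log_f`, `w`) are illustrative choices, not from the text. Working with the log density avoids underflow for small density values.

```python
import math
import random

def slice_sample_step(x, log_f, w=1.0, max_steps=100):
    """One slice-sampling update with stepping-out and shrinking.

    x      -- current state
    log_f  -- log of the (unnormalized) target density
    w      -- initial interval width (affects efficiency, not correctness)
    """
    # Step 1: auxiliary variable. u ~ Uniform(0, f(x)) is equivalent to
    # log u = log f(x) + log(Uniform(0, 1)).
    log_u = log_f(x) + math.log(random.random())

    # Step 2: randomly position an interval of width w around x.
    left = x - w * random.random()
    right = left + w

    # Step 3: step out until both endpoints are outside the slice.
    for _ in range(max_steps):
        if log_f(left) <= log_u:
            break
        left -= w
    for _ in range(max_steps):
        if log_f(right) <= log_u:
            break
        right += w

    # Step 4: sample uniformly from [left, right], shrinking on rejection.
    while True:
        x_new = left + (right - left) * random.random()
        if log_f(x_new) > log_u:
            return x_new
        if x_new < x:
            left = x_new
        else:
            right = x_new
```

With w set near the target's typical scale, a few density evaluations per update usually suffice; the chain is correct for any choice of w.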
Correctness
Slice Sampler Preserves the Target Distribution
Statement
The joint distribution that is uniform on {(x, u) : 0 < u < f(x)} is invariant under the slice sampling Markov chain. The marginal distribution of x is proportional to f(x).
Intuition
Each conditional distribution is correct by construction. Given x, u is uniform on (0, f(x)). Given u, x is uniform on the slice {x : f(x) > u}. Since each Gibbs step preserves the joint, the overall chain preserves it.
Proof Sketch
The slice sampler is a Gibbs sampler on the joint space of (x, u). The joint density is p(x, u) ∝ 1{0 < u < f(x)}, which is the uniform distribution on the subgraph of f. The conditionals p(u | x) = Uniform(0, f(x)) and p(x | u) = Uniform on {x : f(x) > u} are both correct. Each step of the Gibbs sampler preserves the joint, so f is the correct x-marginal (up to normalization).
Why It Matters
This shows that slice sampling is exact (no approximation error in the stationary distribution), unlike methods that truncate or approximate the target. The only source of error is finite-time bias from not having mixed, which is common to all MCMC methods.
Failure Mode
The proof assumes exact uniform sampling from the slice. The stepping-out procedure finds a superset of the slice and uses rejection within it, which is exact. But in multivariate settings, the slice can be a complicated non-convex region, and uniform sampling becomes hard. Multivariate slice sampling typically updates one coordinate at a time (like Gibbs), which can be slow for correlated targets.
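The Gibbs view of the proof can be checked numerically on a target whose slice has a closed form. For f(x) = exp(−x) on x > 0, the slice {x : exp(−x) > u} is simply (0, −log u), so both conditionals can be sampled exactly, with no stepping-out needed. The target choice here is ours, for illustration:

```python
import math
import random

# Exact slice sampler (two-step Gibbs on the subgraph) for f(x) = exp(-x), x > 0.
random.seed(1)
x = 1.0
n = 50000
total = 0.0
tail = 0
for _ in range(n):
    u = random.random() * math.exp(-x)    # u | x ~ Uniform(0, f(x))
    x = random.random() * (-math.log(u))  # x | u ~ Uniform(0, -log u), the slice
    total += x
    tail += x > 1.0

mean = total / n       # Exp(1) has mean 1
tail_prob = tail / n   # P(X > 1) = exp(-1)
```

The empirical mean and tail probability should match the Exp(1) marginal, consistent with the invariance claim.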
Uniform Ergodicity of the Slice Sampler
Statement
If f is bounded (i.e., f(x) ≤ M < ∞ for all x) and its support is bounded, the univariate slice sampler is uniformly ergodic: there exist constants C < ∞ and ρ < 1 such that:
||P^n(x0, ·) − π||_TV ≤ C·ρ^n for all starting points x0.
Intuition
When f is bounded with bounded support, the slice at any height is contained in a fixed bounded region. This ensures that the chain can "reach" any part of the state space in a bounded number of steps, giving geometric convergence.
Proof Sketch
Mira and Roberts (2002) show that the slice sampler satisfies a minorization condition when f is bounded. For any starting point, with positive probability the auxiliary variable u falls below a threshold where the slice covers a fixed "regeneration set." From this set, the chain has a fixed probability of reaching any target region, giving the uniform ergodicity bound.
Why It Matters
Uniform ergodicity is a strong mixing guarantee: convergence is geometric and uniform over all starting points. Many Metropolis-Hastings chains are only geometrically ergodic (not uniformly), so the slice sampler has an advantage for bounded targets.
Failure Mode
For unbounded densities (e.g., those with heavy tails or singularities), uniform ergodicity can fail. The chain may get stuck in the tails where the density is very small and the slices are very wide. For heavy-tailed targets, the slice sampler may still be geometrically ergodic, but the rate depends on the tail behavior.
Advantages Over Metropolis-Hastings
- No proposal distribution to tune: the width parameter w in stepping-out affects efficiency but not correctness.
- No rejected moves: every iteration produces a new state (though the stepping-out procedure requires multiple density evaluations).
- Automatic scale adaptation: the slice width adjusts to the local scale of f.
Common Confusions
Slice sampling still requires density evaluations
Slice sampling is not "free." The stepping-out procedure requires evaluating f at each boundary extension, and the shrinking procedure evaluates f for each rejected proposal. The total number of density evaluations per step depends on the width w and the target geometry. For expensive densities, this cost can dominate.
The width parameter w is not a proposal variance
In Metropolis-Hastings, the proposal variance directly determines the acceptance rate and mixing. In slice sampling, w determines how many steps the stepping-out and shrinking procedures take. Too small: many stepping-out evaluations. Too large: many shrinking evaluations. Either way, the chain is correct. Performance degrades gracefully, not catastrophically.
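This trade-off can be made concrete by counting density evaluations per update for a standard normal target at three widths. The sketch below uses names of our choosing and is illustrative, not a benchmark:

```python
import math
import random

def slice_step_counting(x, f, w):
    """One slice-sampling update (stepping-out + shrinking) that also
    counts density evaluations, to expose the cost of a poor w."""
    calls = 0
    def fx(v):
        nonlocal calls
        calls += 1
        return f(v)
    u = random.random() * fx(x)           # u ~ Uniform(0, f(x))
    left = x - w * random.random()        # randomly positioned interval
    right = left + w
    while fx(left) > u:                   # step out left
        left -= w
    while fx(right) > u:                  # step out right
        right += w
    while True:                           # sample, shrinking on rejection
        x_new = left + (right - left) * random.random()
        if fx(x_new) > u:
            return x_new, calls
        if x_new < x:
            left = x_new
        else:
            right = x_new

random.seed(2)
f = lambda x: math.exp(-0.5 * x * x)      # standard normal, unnormalized

def avg_calls(w, n=2000):
    x, total = 0.0, 0
    for _ in range(n):
        x, c = slice_step_counting(x, f, w)
        total += c
    return total / n

# All three chains are correct; only the per-step cost differs.
cost_small = avg_calls(0.1)   # many stepping-out expansions
cost_large = avg_calls(50.0)  # many shrinking rejections
cost_good = avg_calls(2.0)    # roughly matched to the target scale
```

A width near the target's scale minimizes evaluations, but even an order-of-magnitude miss only costs extra density calls, never correctness.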
Canonical Examples
Slice sampling from a mixture of Gaussians
Target: f(x) = ½ φ(x; −3, 1) + ½ φ(x; 3, 1), where φ(·; μ, σ²) is the Gaussian density (modes at ±3 chosen for concreteness). At x = −3, compute f(−3) ≈ 0.2. Draw u ~ Uniform(0, f(−3)), say u = 0.001, which is below the density at the valley between the modes (f(0) ≈ 0.004). The slice {x : f(x) > u} then consists of two intervals (one around each mode). The stepping-out procedure finds bounds containing these intervals. Sampling uniformly, the chain can jump between modes in a single step, unlike random-walk MH, which must traverse the low-density region between modes.
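A quick simulation of this example (equal mixture with modes at ±3, standard stepping-out sampler; the helper names are ours) confirms that the chain crosses between modes regularly:

```python
import math
import random

def slice_step(x, f, w=1.0):
    """Univariate slice-sampling update with stepping-out and shrinking."""
    u = random.random() * f(x)
    left = x - w * random.random()
    right = left + w
    while f(left) > u:
        left -= w
    while f(right) > u:
        right += w
    while True:
        x_new = left + (right - left) * random.random()
        if f(x_new) > u:
            return x_new
        if x_new < x:
            left = x_new
        else:
            right = x_new

# Equal mixture of N(-3, 1) and N(3, 1), unnormalized.
def f(x):
    return math.exp(-0.5 * (x + 3) ** 2) + math.exp(-0.5 * (x - 3) ** 2)

random.seed(3)
x = -3.0
n = 20000
switches, right_count = 0, 0
prev_sign = -1
for _ in range(n):
    x = slice_step(x, f, w=1.0)
    sign = 1 if x > 0 else -1
    if sign != prev_sign:
        switches += 1
        prev_sign = sign
    if sign > 0:
        right_count += 1

frac_right = right_count / n  # should be near 0.5 for an equal mixture
```

Random-walk MH with a small proposal would switch modes far more rarely, since every crossing must step through the valley.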
Exercises
Problem
For the uniform distribution on an interval [a, b], describe what the slice sampler does at each step. What is the slice {x : f(x) > u} for any value of the auxiliary variable u?
Problem
Consider the target f(x) ∝ exp(−|x|) (a Laplace distribution). At x = 0, draw auxiliary u ~ Uniform(0, f(0)). What is the slice {x : f(x) > u}? Approximately how wide is it? Compare this to the slice at x = 3 with the same u value.
References
Canonical:
- Neal, "Slice Sampling" (2003), Annals of Statistics 31(3), 705-767
Current:
- Mira & Roberts, "Slice Sampling" (2002), in Highly Structured Stochastic Systems
- Murray, Adams & MacKay, "Elliptical Slice Sampling" (2010)
- Gelman et al., Bayesian Data Analysis (2013), Chapters 10-12
- Brooks et al., Handbook of MCMC (2011), Chapters 1-5
Next Topics
- Burn-in and convergence diagnostics: practical methods for assessing when MCMC has converged
Last reviewed: April 2026
Prerequisites
Foundations this topic depends on.
- Metropolis-Hastings Algorithm (Layer 2)
- Common Probability Distributions (Layer 0A)
- Sets, Functions, and Relations (Layer 0A)
- Basic Logic and Proof Techniques (Layer 0A)