
Numerical Optimization

Tabu Search

Local search with memory: maintain a list of recently visited solutions to prevent cycling, use aspiration criteria to override the tabu when a move leads to a new best, and balance intensification against diversification.


Prerequisites

None.

Why This Matters

Greedy algorithms and simple local search get stuck in local optima. The greedy approach accepts improving moves only, so once it reaches a local optimum it stops. Allowing non-improving moves helps escape local optima, but creates a new problem: the search can cycle between the same few solutions indefinitely.

Tabu search (Glover, 1986) solves the cycling problem by adding memory. It maintains a list of recently visited solutions (or recently made moves) and forbids the search from revisiting them. This forces exploration of new regions. Combined with aspiration criteria and diversification strategies, tabu search is one of the most effective metaheuristics for combinatorial optimization.

Mental Model

Think of local search as walking through a landscape of solutions. At each step, you look at all neighboring solutions and move to the best one, even if it is worse than where you are. To prevent walking in circles, you keep a short-term memory (the tabu list) of where you have been recently, and you refuse to go back there. If a forbidden move would lead to the best solution you have ever seen, you override the prohibition (aspiration criterion).

Formal Setup

Definition

Tabu Search Framework

Let S be a finite set of feasible solutions, f: S \to \mathbb{R} an objective to minimize, and N(s) \subseteq S the neighborhood of solution s.

A tabu search proceeds as follows:

  1. Start at an initial solution s_0. Set s^* = s_0 (best known).
  2. At iteration k, let T_k \subseteq S be the tabu list.
  3. Choose s_{k+1} = \arg\min_{s \in N(s_k) \setminus T_k} f(s), with aspiration override.
  4. Update T_{k+1}: add s_k (or the move that produced s_{k+1}), remove old entries.
  5. If f(s_{k+1}) < f(s^*), set s^* = s_{k+1}.
  6. Repeat until a stopping criterion is met.
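The six steps above can be sketched as a short Python loop. This is a minimal illustration, not a reference implementation: the bitstring toy problem, the function names, and the tenure value are all assumptions for the sake of the example.

```python
from collections import deque

def tabu_search(s0, f, neighbors, tenure=7, max_iters=200):
    """Generic tabu search skeleton following steps 1-6 above.

    `neighbors(s)` yields (move, candidate) pairs; `move` is the
    attribute stored on the tabu list.
    """
    s, best = s0, s0
    tabu = deque(maxlen=tenure)              # forgets moves older than the tenure
    for _ in range(max_iters):
        candidates = [
            (m, c) for m, c in neighbors(s)
            if m not in tabu or f(c) < f(best)   # aspiration override
        ]
        if not candidates:
            break
        move, s = min(candidates, key=lambda mc: f(mc[1]))  # best non-tabu neighbor
        tabu.append(move)                    # step 4: record the move just made
        if f(s) < f(best):                   # step 5: update the incumbent
            best = s
    return best

# Toy instance: minimize the number of 1-bits in a 12-bit string.
# Neighborhood = single bit flips; move attribute = flipped index.
def flip_neighbors(s):
    for i in range(len(s)):
        yield i, s[:i] + (1 - s[i],) + s[i + 1:]

best = tabu_search((1,) * 12, f=sum, neighbors=flip_neighbors)
print(best)  # reaches the all-zeros optimum
```

Note that the move attribute (the flipped index), not the full solution, goes on the tabu list, which keeps membership tests cheap.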
Definition

Tabu List

The tabu list T_k stores attributes of recently visited solutions or recently performed moves. A common implementation stores the last L moves (e.g., in TSP, the last L edge swaps), where L is called the tabu tenure. A move is forbidden if it reverses a recent move on the list.
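One convenient realization of a fixed-tenure list, assuming hashable move attributes, is a bounded deque whose maximum length plays the role of L (the move labels here are illustrative):

```python
from collections import deque

L = 3                       # tabu tenure (illustrative value)
tabu = deque(maxlen=L)      # the oldest move falls off automatically

for move in ["swap(1,4)", "swap(2,7)", "swap(3,5)", "swap(6,8)"]:
    tabu.append(move)

print(list(tabu))           # ['swap(2,7)', 'swap(3,5)', 'swap(6,8)']
print("swap(1,4)" in tabu)  # False: expired after L newer moves were recorded
```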

Definition

Aspiration Criterion

An aspiration criterion overrides the tabu status of a move. The most common: if the move leads to a solution with objective value strictly better than f(s^*), accept it regardless of tabu status. This prevents the tabu list from blocking moves that would improve the best known solution.

Main Theorems

Theorem

Finite Convergence of Tabu Search

Statement

If the solution space S is finite and the tabu tenure L \geq \max_{s \in S} |N(s)|, then tabu search with an aspiration criterion visits every reachable solution in finite time and therefore finds the global optimum in at most |S| iterations after leaving a local basin.

Intuition

With a tabu tenure at least as large as the neighborhood size, the search cannot revisit any neighbor of the current solution within L steps. Since S is finite and all neighbors are eventually forbidden, the search is forced to move to new regions. It cannot cycle, so it must eventually visit every reachable solution.

Proof Sketch

At each step, the search moves to a non-tabu neighbor. With tenure L \geq |N(s)| for all s, every neighbor of s stays tabu for at least one step after being visited. The search therefore visits at least one new solution every L steps. Since |S| is finite, all reachable solutions are visited in at most L \cdot |S| steps.

Why It Matters

The theorem provides a theoretical guarantee, but it is impractical. Setting L to the maximum neighborhood size makes the tabu list too long and the search too restrictive. In practice, L is set much smaller (e.g., \sqrt{|N(s)|}) and the search trades the global guarantee for practical efficiency.

Failure Mode

If the tabu tenure is too short, the search can cycle. If it is too long, the search rejects too many good moves and explores slowly. The optimal tenure depends on the problem structure and is typically tuned empirically. There is no practical formula for the optimal LL.

Intensification vs Diversification

Intensification means concentrating search near the best known solution. Strategies include reducing the tabu tenure, restricting the neighborhood to moves similar to those in the best solution, or restarting from s^*.

Diversification means forcing the search into unexplored regions. Strategies include increasing the tabu tenure, penalizing frequently visited solution attributes, or introducing random perturbations.

Effective tabu search balances these two forces. A common scheme: intensify when the best solution has not improved for a moderate number of iterations, diversify when it has not improved for a large number.
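A scheme of that kind can be expressed as a small tenure-adjustment rule. The thresholds and bounds below are assumptions chosen for illustration, not values from the text:

```python
def adjust_tenure(tenure, iters_without_improvement,
                  intensify_after=50, diversify_after=200,
                  min_tenure=5, max_tenure=50):
    """Shrink the tenure after moderate stagnation (intensify),
    grow it after long stagnation (diversify)."""
    if iters_without_improvement >= diversify_after:
        return min(2 * tenure, max_tenure)   # diversify: longer memory
    if iters_without_improvement >= intensify_after:
        return max(tenure // 2, min_tenure)  # intensify: shorter memory
    return tenure

print(adjust_tenure(20, 10))   # 20: recent improvement, no change
print(adjust_tenure(20, 60))   # 10: moderate stagnation, intensify
print(adjust_tenure(20, 250))  # 40: long stagnation, diversify
```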

Canonical Examples

Example

Tabu search for TSP

Solution: a tour visiting all n cities. Neighborhood: all tours reachable by a 2-opt swap (remove two edges, reconnect the tour). Tabu list: store the last L reversed edges. Aspiration: accept any move that produces a shorter tour than the current best.

For n = 100, the neighborhood size is O(n^2) \approx 5000. A typical tabu tenure is L = 10 to 20. Each iteration evaluates all non-tabu neighbors, picks the best, and moves. This is much faster than exact methods and produces tours within a few percent of optimal on benchmark instances.
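The TSP example can be sketched end to end. This is a small illustrative implementation under stated assumptions: the tabu list stores edges removed by recent 2-opt moves, a move is tabu if it would re-create such an edge, and the instance (12 cities on a circle, where the optimal tour visits them in order) is chosen so that the example stays tiny.

```python
import math
import random
from collections import deque

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def tabu_search_tsp(pts, tenure=10, max_iters=300, seed=0):
    n = len(pts)
    tour = list(range(n))
    random.Random(seed).shuffle(tour)
    best, best_len = tour[:], tour_length(tour, pts)
    tabu = deque(maxlen=tenure)          # edges removed by recent 2-opt moves
    for _ in range(max_iters):
        best_move, best_move_len = None, float("inf")
        for i in range(n - 1):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue             # reversing the whole tour changes nothing
                # 2-opt: reverse the segment between positions i+1 and j
                cand = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
                added = {frozenset((cand[i], cand[i + 1])),
                         frozenset((cand[j], cand[(j + 1) % n]))}
                cand_len = tour_length(cand, pts)
                if added & set(tabu) and cand_len >= best_len:
                    continue             # tabu, and aspiration does not apply
                if cand_len < best_move_len:
                    removed = {frozenset((tour[i], tour[i + 1])),
                               frozenset((tour[j], tour[(j + 1) % n]))}
                    best_move, best_move_len = (cand, removed), cand_len
        if best_move is None:
            break                        # every neighbor is tabu
        tour, removed = best_move
        tabu.extend(removed)             # forbid re-adding the removed edges
        if best_move_len < best_len:     # step 5: update the incumbent
            best, best_len = tour[:], best_move_len
    return best, best_len

# 12 cities evenly spaced on a unit circle; the optimum visits them in order.
pts = [(math.cos(2 * math.pi * k / 12), math.sin(2 * math.pi * k / 12))
       for k in range(12)]
tour, length = tabu_search_tsp(pts)
print(length)  # matches the circumscribing polygon length, 12 * 2 * sin(pi/12)
```

Evaluating each candidate from scratch costs O(n) per neighbor; production implementations instead compute the O(1) length delta of a 2-opt move.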

Common Confusions

Watch Out

Tabu search is not simulated annealing

Both accept non-improving moves, but the mechanisms differ. Simulated annealing accepts worse moves with a probability that decreases over time (temperature schedule). Tabu search accepts the best non-tabu neighbor deterministically and uses memory to prevent cycling. Simulated annealing is memoryless; tabu search is memory-based.
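The two acceptance mechanisms can be contrasted in a few lines. Both functions here are illustrative sketches, not library APIs:

```python
import math
import random

rng = random.Random(0)

# Simulated annealing: probabilistic, memoryless acceptance of one candidate.
# `delta` is the objective change of the proposed move (negative = improving).
def sa_accepts(delta, temperature):
    return delta < 0 or rng.random() < math.exp(-delta / temperature)

# Tabu search: deterministic choice of the best non-tabu neighbor.
def ts_choice(neighbors, f, tabu):
    allowed = [s for move, s in neighbors if move not in tabu]
    return min(allowed, key=f)

print(sa_accepts(-1.0, temperature=1.0))                         # True: improving moves always pass
print(ts_choice([("a", 3), ("b", 1), ("c", 2)], f=lambda s: s,
                tabu={"b"}))                                     # 2: best move is tabu, next best wins
```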

Watch Out

The tabu list stores moves, not solutions

Storing entire solutions would require O(L \cdot |s|) space and O(|s|) lookup time. In practice, the tabu list stores move attributes (e.g., which edge was swapped in TSP) rather than full solutions. This makes lookup fast and storage compact, at the cost of occasionally forbidding a move that leads to a genuinely new solution (false tabu). The aspiration criterion mitigates this.
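One way to get O(1) lookup on move attributes, assuming hashable attributes, is to map each attribute to the iteration at which its tabu status expires (the attribute tuple and names below are illustrative):

```python
tabu_until = {}   # move attribute -> iteration at which it stops being tabu
tenure = 10

def make_tabu(attr, current_iter):
    tabu_until[attr] = current_iter + tenure

def is_tabu(attr, current_iter):
    return tabu_until.get(attr, -1) > current_iter

make_tabu(("edge", 3, 7), current_iter=42)
print(is_tabu(("edge", 3, 7), 45))   # True: still within the tenure
print(is_tabu(("edge", 3, 7), 60))   # False: expired at iteration 52
```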

Watch Out

Tabu tenure is not the tabu list size

The tabu tenure L is the number of iterations a move stays on the tabu list. The tabu list at any given time contains at most L entries. Some implementations use an adaptive tenure that changes during the search based on performance.

Summary

  • Local search plus memory to prevent cycling
  • Tabu list stores recent moves (not full solutions) with a fixed tenure
  • Aspiration criterion overrides tabu when a move yields a new best
  • Balance intensification (search near best) and diversification (explore new regions)
  • Theoretical guarantee requires impractically large tenure; practical tenure is tuned empirically
  • Effective for TSP, scheduling, graph coloring, and other combinatorial problems

Exercises

Exercise (Core)

Problem

In a graph coloring problem with n = 50 vertices and k = 3 colors, the neighborhood is defined by changing the color of one vertex. What is the neighborhood size |N(s)|? If the tabu tenure is L = 7, what fraction of the neighborhood is forbidden at each step (in the worst case)?

Exercise (Advanced)

Problem

Suppose tabu search with tenure L = 5 on a problem with neighborhood size 10 revisits a solution after 6 steps. Explain how this can happen despite the tabu list, and describe how an adaptive tenure strategy could prevent it.

References

Canonical:

  • Glover, "Future Paths for Integer Programming and Links to Artificial Intelligence" (1986), Computers and Operations Research
  • Glover & Laguna, Tabu Search (1997), Chapters 1-6

Current:

  • Gendreau & Potvin (eds.), Handbook of Metaheuristics (3rd ed., 2019), Chapter 2

Next Topics

  • Search for related metaheuristics in the numerical-optimization module

Last reviewed: April 2026