Simple modular hashing (`hash(connection) mod N`) distributes connections evenly, but when a single backend fails (N becomes N-1) it remaps roughly (N-1)/N of all existing connections. Consistent hashing (Karger et al., 1997) provides a mapping that remains largely stable when backends are added or removed, minimizing disruption to existing connections.
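The contrast can be measured with a minimal sketch in Python (the ring with 100 virtual nodes per backend, the `md5` hash, and the key/backend names are illustrative choices, not from the source):

```python
import bisect
import hashlib

def stable_hash(s: str) -> int:
    # md5 used only as a stable, uniform hash for the demo
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class ConsistentHashRing:
    """Minimal consistent-hash ring with virtual nodes."""
    def __init__(self, nodes, vnodes=100):
        self.vnodes = vnodes
        self.ring = []  # sorted (point, node) pairs on the hash circle
        for n in nodes:
            self.add(n)

    def add(self, node):
        for i in range(self.vnodes):
            bisect.insort(self.ring, (stable_hash(f"{node}#{i}"), node))

    def remove(self, node):
        self.ring = [(p, n) for p, n in self.ring if n != node]

    def lookup(self, key):
        # First virtual node clockwise from the key's hash point
        i = bisect.bisect(self.ring, (stable_hash(key), "")) % len(self.ring)
        return self.ring[i][1]

keys = [f"conn-{i}" for i in range(10_000)]
nodes = [f"backend-{i}" for i in range(10)]

# Modular hashing: backend-9 fails, so N drops from 10 to 9
moved_mod = sum(stable_hash(k) % 10 != stable_hash(k) % 9 for k in keys)

# Consistent hashing: remove the same backend from the ring
ring = ConsistentHashRing(nodes)
before = {k: ring.lookup(k) for k in keys}
ring.remove("backend-9")
moved_ch = sum(before[k] != ring.lookup(k) for k in keys)

print(f"modular:    {moved_mod / len(keys):.0%} of connections remapped")
print(f"consistent: {moved_ch / len(keys):.0%} of connections remapped")
```

Run against 10 backends, the modular scheme remaps around 90% of connections while the ring remaps only the share that the failed backend owned, roughly 1/N.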
## Why It Matters
The failure mode of modular hashing is invisible during normal operation: everything works until a backend goes down, and then nearly all connections reset simultaneously. Consistent hashing converts a catastrophic failure into a proportional one: only ~1/N of connections, those mapped to the failed backend, are affected.
## Application Beyond Load Balancing
| Domain | Use Case |
|--------|----------|
| **Distributed caches** | Memcached/Redis cluster — adding a node only invalidates ~1/N of keys |
| **Database sharding** | Adding a shard doesn't force full data migration |
| **CDN routing** | Cache node failures don't flush entire content cache |
| **Load balancing** | VIP backends — connection tracking falls back to consistent hashing under pressure |
## Trade-off
Google's approach: use simple connection tracking normally, fall back to consistent hashing under pressure (e.g., DDoS). This gives optimal performance in the common case while maintaining stability during failures.
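A hypothetical sketch of that two-mode pattern follows; the class name, the `max_tracked` threshold, and the simple hash stand-in for a full consistent-hash ring are illustrative assumptions, not Google's implementation:

```python
import hashlib

def stable_hash(s: str) -> int:
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class FallbackBalancer:
    """Track connections in a table while there is room; once the table
    fills (e.g., under a SYN flood), route by hashing alone, which stays
    stable as long as the backend set does."""
    def __init__(self, backends, max_tracked):
        self.backends = backends
        self.table = {}          # conn_id -> backend (the tracked state)
        self.max_tracked = max_tracked

    def hash_route(self, conn_id):
        # Stand-in for a consistent-hash ring lookup over self.backends
        return self.backends[stable_hash(conn_id) % len(self.backends)]

    def route(self, conn_id):
        if conn_id in self.table:            # common case: exact tracked match
            return self.table[conn_id]
        backend = self.hash_route(conn_id)
        if len(self.table) < self.max_tracked:
            self.table[conn_id] = backend    # normal load: remember the choice
        return backend                       # under pressure: hash-only routing
```

Because the fallback is deterministic, packets for untracked connections still reach a consistent backend even after the tracking table is exhausted.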
## Source
- [[SRE Book Ch 19 - Load Balancing at the Frontend]]
- Original paper: Karger et al., "Consistent Hashing and Random Trees: Distributed Caching Protocols for Relieving Hot Spots on the World Wide Web" (STOC 1997)
## Related Concepts
- [[Load Balancer Function]]
- [[Horizontal Scaling Foundation]]
- [[Hotspot Key Problem]]