Introduction
Hash tables are popular data structures for storing key-value pairs. A hash function is used to map the key (usually a string) to an array index. These functions differ from cryptographic hash functions: they must be much faster and do not need to resist preimage attacks. There are two classes of hash functions used in hash tables:
- Multiplicative hash functions, which are simple and fast, but have a high number of collisions;
- More complex functions, which have better quality, but take more time to calculate.
Hash function benchmarks usually report theoretical metrics such as the number of collisions or the uniformity of the distribution (see, for example, the hash function comparison in the Red Dragon book). Naturally, more complex functions produce a better distribution, so they win these benchmarks.
The question is whether using a complex function gives you a faster program. Complex functions require more operations per key, so they can be slower. Is the cost of collisions high enough to justify the extra operations?
Multiplicative Hash Functions
Any multiplicative hash function is a special case of the following algorithm:
unsigned HashMultiplicative(const char *key, size_t len) {
    unsigned hash = INITIAL_VALUE;
    for (size_t i = 0; i < len; ++i)
        hash = M * hash + key[i];    /* multiply the running hash, then add the next character */
    return hash % TABLE_SIZE;        /* reduce the result to a table index */
}
(Sometimes an XOR operation is used instead of addition, but it does not make much difference.) The hash functions differ only in the values of INITIAL_VALUE and the multiplier (M). For example, the popular Bernstein function uses an INITIAL_VALUE of 5381 and an M of 33; Kernighan and Ritchie's function uses an INITIAL_VALUE of 0 and an M of 31.
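For concreteness, here are those two functions written out with their constants; a minimal sketch using the same TABLE_SIZE placeholder as in the skeleton above (the function names and the unsigned char cast are mine):
unsigned HashBernstein(const char *key, size_t len) {
    unsigned hash = 5381;                              /* INITIAL_VALUE */
    for (size_t i = 0; i < len; ++i)
        hash = 33 * hash + (unsigned char)key[i];      /* M = 33 */
    return hash % TABLE_SIZE;
}

unsigned HashKernighanRitchie(const char *key, size_t len) {
    unsigned hash = 0;                                 /* INITIAL_VALUE */
    for (size_t i = 0; i < len; ++i)
        hash = 31 * hash + (unsigned char)key[i];      /* M = 31 */
    return hash % TABLE_SIZE;
}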
A multiplicative function works by adding together the letters weighted by powers of the multiplier. For example, the hash for the word TONE will be:
INITIAL_VALUE * M^4 + 'T' * M^3 + 'O' * M^2 + 'N' * M + 'E'
Let's enter several similar strings and watch the output of the functions:
| Key | Bernstein (M=33) | Kernighan & Ritchie (M=31) |
|------|------------------|----------------------------|
| too | b88af17 | 1c154 |
| top | b88af18 | 1c155 |
| tor | b88af1a | 1c157 |
| tpp | b88af39 | 1c174 |
| a000 | 7c9312d6 | 2cd22f |
| a001 | 7c9312d7 | 2cd230 |
| a002 | 7c9312d8 | 2cd231 |
| a003 | 7c9312d9 | 2cd232 |
| a004 | 7c9312da | 2cd233 |
| a005 | 7c9312db | 2cd234 |
| a006 | 7c9312dc | 2cd235 |
| a007 | 7c9312dd | 2cd236 |
| a008 | 7c9312de | 2cd237 |
| a009 | 7c9312df | 2cd238 |
| a010 | 7c9312f7 | 2cd24e |
| a | 2b606 | 61 |
| aa | 597727 | c20 |
| aaa | b885c68 | 17841 |
Too and top differ only in the last letter. The letter P is the next one after O, so the hash values differ by 1 (1c154 and 1c155, b88af17 and b88af18). The same holds for a000..a009.
Now let's compare top with tpp. Their hashes will be:
INITIAL_VALUE * M^3 + 'T' * M^2 + 'O' * M + 'P'
INITIAL_VALUE * M^3 + 'T' * M^2 + 'P' * M + 'P'
The hashes will differ by M * ('P' - 'O') = M. Indeed, in the table above, the Kernighan and Ritchie hashes of top and tpp differ by 0x1c174 - 0x1c155 = 0x1F = 31, and the Bernstein hashes differ by 0xb88af39 - 0xb88af18 = 0x21 = 33. Similarly, when the first letters differ by x, the hashes of these three-letter keys will differ by x * M^2.
When there are fewer than 33 possible letters, Bernstein's function will pack them into a number (similar to the Radix40 packing scheme). For example, a hash table of size 33^3 = 35,937 will provide perfect hashing (without any collisions) for all three-letter English words written in lowercase. In practice, the words are longer and hash tables are smaller, so there will be some collisions (situations when different strings have the same hash value).
If the string is too long to fit into a 32-bit number, the first letters will still affect the value of the hash function, because the multiplication is done modulo 2^32 (in a 32-bit register) and the multiplier is chosen to have no common divisors with 2^32 (in other words, it must be odd), so the bits are not simply shifted away.
There are no exact rules for choosing the multiplier, only some heuristics:
- The multiplier should be large enough to accommodate most of the possible letters (e.g., 3 or 5 is too small).
- The multiplier should be fast to calculate with shifts and additions [e.g., 33 * hash can be calculated as (hash << 5) + hash; see the sketch after this list].
- The multiplier should be odd for the reason explained above.
- Prime numbers are good multipliers.
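As a small illustration of the shift-and-add heuristic, here is a sketch for the two multipliers mentioned above (the helper names are mine; modern compilers usually perform this strength reduction on their own):
unsigned MultiplyBy33(unsigned hash) { return (hash << 5) + hash; }  /* 32 * hash + hash = 33 * hash */
unsigned MultiplyBy31(unsigned hash) { return (hash << 5) - hash; }  /* 32 * hash - hash = 31 * hash */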
Complex Hash Functions
These functions do a good job of mixing together the bits of the source word. A change in one input bit changes half of the bits in the output (the avalanche effect), so the result looks completely random:
| Key | Paul Hsieh | One At Time |
|------|------------|-------------|
| too | 3ad11d33 | 3a9fad1e |
| top | 78b5a877 | 4c5dd09a |
| tor | c09e2021 | f2aa9d35 |
| tpp | 3058996d | d5e9e480 |
| a000 | 7552599f | ed3859d8 |
| a001 | 3cc1d896 | fef7fd57 |
| a002 | c6ff5c9b | 08a610b3 |
| a003 | dcab7b0c | 1a88b478 |
| a004 | 780c7202 | 3621ebaa |
| a005 | 7eb63e3a | 47db8f1d |
| a006 | 6b0a7a17 | b901717b |
| a007 | cb5cb1ab | caec1550 |
| a008 | 5c2a15c0 | e58d4a92 |
| a009 | 33339829 | f75aee2d |
| a010 | eb1f336e | bd097a6b |
| a | 115ea782 | ca2e9442 |
| aa | 008ad357 | 7081738e |
| aaa | 7dfdc310 | ae4f22ec |
To achieve this behavior, the hash functions perform a lot of shifts, XORs, and additions. But do we need a complex function? What is faster: tolerating the collisions and resolving them with chaining, or avoiding them with a more complex function?
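Before turning to the benchmark, here is what such mixing looks like concretely: Bob Jenkins' one-at-a-time hash (the "One At Time" column above), shown in its commonly published form; the version used in the benchmark may differ in small details such as how the result is reduced to a table index.
#include <stddef.h>
#include <stdint.h>

/* Bob Jenkins' one-at-a-time hash in its commonly published form. Each input
   byte is mixed in with an addition, a shift-add, and a shift-XOR; the final
   three lines avalanche the remaining bits before the value is used. */
uint32_t HashOneAtATime(const char *key, size_t len) {
    uint32_t hash = 0;
    for (size_t i = 0; i < len; ++i) {
        hash += (unsigned char)key[i];
        hash += hash << 10;
        hash ^= hash >> 6;
    }
    hash += hash << 3;
    hash ^= hash >> 11;
    hash += hash << 15;
    return hash;
}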
Test Conditions
The benchmark uses the separate chaining algorithm for collision resolution. Memory allocation and other "heavy" functions were excluded from the benchmarked code. The RDTSC instruction was used for timing. The test was performed on a Pentium M processor.
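As a rough sketch of the timing method (assuming a compiler that provides the __rdtsc intrinsic; the actual benchmark harness, with its warm-up and repetitions, is not shown in this article):
#include <stdint.h>
#include <x86intrin.h>   /* __rdtsc() on GCC/Clang; MSVC declares it in <intrin.h> */

/* Time one benchmarked region in clock cycles. Serializing instructions and
   averaging over repetitions are omitted for brevity; memory allocation and
   other "heavy" work stays outside the measured region, as described above. */
uint64_t TimeRegion(void (*region)(void)) {
    uint64_t start = __rdtsc();
    region();
    return __rdtsc() - start;
}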
The benchmark inserts some keys in the table, then looks them up in the same order as they were inserted. The test data include:
- The list of common words from Wiktionary (500 items)
- The list of Win32 functions from Colorer syntax highlight scheme (1992 items)
- 500 names from a000 to a499 (imitates the names in auto-generated source code)
- The list of common words with a long prefix and postfix
- All variable names from WordPress 2.3.2 source code in wp-includes folder (1842 names)
- List of all words in Sonnets by W. Shakespeare (imitates a word counting program; 3228 words)
Results
| Function | Words | Win32 | Numbers | Prefix | Postfix | Variables | Shakespeare |
|----------|-------|-------|---------|--------|---------|-----------|-------------|
| Bernstein | 145 [135] | 889 [478] | 92 [500] | 325 [116] | 320 [121] | 659 [391] | 895 [646] |
| K&R | 145 [117] | 883 [511] | 88 [500] | 321 [113] | 316 [117] | 659 [411] | 897 [641] |
| x17 unrolled | 136 [99] | 842 [491] | 73 [24] | 303 [107] | 298 [113] | 636 [434] | 856 [638] |
| x65599 | 138 [129] | 857 [432] | 82 [258] | 319 [119] | 314 [146] | 642 [440] | 861 [628] |
| FNV-1a | 157 [154] | 975 [528] | 88 [124] | 366 [106] | 362 [118] | 711 [443] | 955 [640] |
| Sedgewick | 153 [126] | 963 [477] | 84 [48] | 366 [113] | 361 [119] | 704 [404] | 948 [627] |
| Weinberger | 169 [125] | 1214 [495] | 73 [100] | 474 [138] | 472 [152] | 834 [421] | 1052 [892] |
| Paul Larson | 139 [117] | 879 [470] | 64 [16] | 323 [119] | 317 [117] | 649 [421] | 858 [656] |
| Two chars | 126 [305] | 738 [3458] | 500 [12250] | 297 [1323] | 223 [877] | 1109 [12431] | 1693 [14425] |
| Paul Hsieh | 157 [130] | 851 [476] | 107 [138] | 282 [120] | 275 [113] | 678 [399] | 991 [676] |
| One At Time | 160 [118] | 1018 [481] | 99 [131] | 382 [120] | 376 [125] | 745 [422] | 990 [601] |
| lookup3 | 150 [114] | 837 [474] | 95 [108] | 290 [118] | 282 [106] | 656 [412] | 958 [619] |
| Arash Partow | 154 [120] | 1009 [510] | 149 [1530] | 376 [123] | 367 [97] | 731 [402] | 951 [643] |
| CRC-32 | 154 [114] | 986 [519] | 84 [64] | 371 [125] | 362 [108] | 720 [385] | 958 [632] |
| Ramakrishna | 151 [130] | 971 [476] | 80 [100] | 370 [153] | 360 [124] | 705 [414] | 925 [603] |
| Fletcher | 129 [158] | 691 [467] | 190 [2880] | 233 [155] | 221 [124] | 581 [732] | 874 [1453] |
| Murmur2 | 140 [122] | 795 [469] | 84 [119] | 262 [133] | 257 [131] | 632 [445] | 875 [654] |
Each cell shows the execution time followed by the number of collisions in square brackets. Execution time is measured in thousands of clock cycles (lower is better).
The function by Kernighan and Ritchie is from their famous book "The C Programming Language", 2nd edition; Weinberger's hash and the hash with multiplier 65599 are from the Red Dragon book. The latter function is used in gawk, sdbm, and other Linux programs. x17 is my own function (multiplier = 17; 32 is subtracted from each letter code).
As you can see from the table, the function with the lowest number of collisions is not always the fastest one. For example, compare CRC-32 and Larson's hash in the "numbers" test.
Conclusion
The complex functions by Paul Hsieh and Bob Jenkins are tuned for long keys, such as the ones in the prefix and postfix tests. Note that they do not have the lowest number of collisions in these tests, yet they do have the best times, which means their speed comes from loop unrolling rather than from fewer collisions. At the same time, they are suboptimal for short keys (the "common words" and "Shakespeare" tests).
For a word counting program, a compiler, or another application that typically handles short keys, it's often advantageous to use a simple multiplicative function such as x17 or Larson's hash. However, these functions perform badly on long keys.
Murmur2 is the only complex hash function that provides good performance for all kinds of keys. It can be recommended as a general-purpose hashing function.
Variations
XORing High and Low Part
For table sizes less than 2^16, we can improve the quality of the hash function by XORing the high and low words, so that more letters will be taken into account:
return hash ^ (hash >> 16);
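A minimal sketch of where this line fits, reusing the INITIAL_VALUE, M, and TABLE_SIZE placeholders from the skeleton at the beginning of the article (the function name is mine, and TABLE_SIZE is assumed to be at most 2^16):
unsigned HashMultiplicativeFolded(const char *key, size_t len) {
    unsigned hash = INITIAL_VALUE;
    for (size_t i = 0; i < len; ++i)
        hash = M * hash + key[i];
    hash ^= hash >> 16;        /* fold the high word into the low word */
    return hash % TABLE_SIZE;  /* TABLE_SIZE <= 2^16, so the upper bits now influence the index */
}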
Subtracting a Constant
My x17 hash function subtracts a space from each letter to cut off the control characters in the range 0x00..0x1F. If the hash keys are long and contain only Latin letters and numbers, the letters will be less frequently shifted out, and the overall number of collisions will be lower. You can even subtract 'A' when you know that the keys will be only English words.
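A sketch of the idea under the same placeholders (the exact initial value of the benchmarked x17 is not given in this article, so INITIAL_VALUE remains a placeholder here):
unsigned HashX17Style(const char *key, size_t len) {
    unsigned hash = INITIAL_VALUE;
    for (size_t i = 0; i < len; ++i)
        hash = 17 * hash + (key[i] - ' ');   /* subtract 0x20 to cut off the control characters */
    return hash % TABLE_SIZE;
}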
Using Larger Multipliers for a Compiler
Paul Hsieh noted that large multipliers may provide better results for the hash table in a compiler, because a typical source code contains a lot of one-letter variable names (i, j, s, etc.), and they will collide if the multiplier is less than the number of letters in the alphabet.
The test confirms this assumption: the function by Kernighan & Ritchie (M = 31) has a lower number of collisions than x17 (M = 17), but the latter is still faster (see the Variables column in the table above).
Setting Hash Table Size to a Prime Number
A test showed that the number of collisions will usually be lower if you use a prime table size, but calculations modulo a prime take much more time than calculations modulo a power of 2, so this method is impractical. Even replacing the division with multiplication by a reciprocal value does not help here:
| Function | Words | Win32 | Numbers | Prefix | Postfix | Variables | Shakespeare |
|----------|-------|-------|---------|--------|---------|-----------|-------------|
| Bernstein % 2^K | 145 [261] | 880 [889] | 426 [8030] | 326 [214] | 316 [226] | 649 [697] | 874 [1131] |
| Bernstein % prime | 186 [221] | 1049 [995] | 445 [5621] | 364 [194] | 357 [217] | 805 [800] | 1123 [1051] |
| Bernstein optimized mod | 160 [221] | 960 [995] | 416 [5621] | 341 [194] | 334 [217] | 722 [800] | 969 [1051] |
| x17 % 2^K | 137 [193] | 847 [1002] | 81 [340] | 314 [244] | 300 [228] | 641 [863] | 832 [1012] |
| x17 % prime | 173 [256] | 1010 [1026] | 104 [324] | 356 [246] | 339 [216] | 760 [760] | 1046 [1064] |
| x17 optimized mod | 155 [256] | 915 [1026] | 96 [324] | 330 [246] | 315 [216] | 691 [760] | 930 [1064] |
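For reference, this is the difference between the two reductions (a sketch with placeholder sizes; the reciprocal-multiplication variant from the table is not shown):
#define TABLE_SIZE_POW2  1024u   /* 2^10; any power of two works the same way */
#define TABLE_SIZE_PRIME 1021u   /* a prime near 1024, chosen only for illustration */

unsigned IndexPowerOfTwo(unsigned hash) { return hash & (TABLE_SIZE_POW2 - 1); }  /* a single AND */
unsigned IndexPrime(unsigned hash)      { return hash % TABLE_SIZE_PRIME; }       /* a real division */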
Implementing Open Addressing vs. Separate Chaining
With open addressing, most hash functions show awkward clustering behavior in the Numbers test:
| Method | Bernst. | K&R | x17 | x17 unroll | x65599 | FNV | Univ | Weinb. | Hsieh | One-at | Lookup3 | Partow | CRC |
|--------|---------|-----|-----|------------|--------|-----|------|--------|-------|--------|---------|--------|-----|
| OA | 426 [8030] | 866 [20810] | 81 [340] | 84 [340] | 207 [3158] | 88 [207] | 91 [480] | 273 [4360] | 110 [342] | 103 [267] | 92 [205] | 1042 [20860] | 79 [96] |
| h32 | 179 [8030] | 320 [20810] | 69 [340] | 74 [340] | 114 [3158] | 86 [207] | 80 [480] | 125 [4360] | 105 [342] | 99 [267] | 92 [205] | 347 [20860] | 82 [96] |
| C | 92 [500] | 88 [500] | 68 [24] | 73 [24] | 82 [258] | 88 [124] | 84 [48] | 73 [100] | 107 [138] | 99 [131] | 95 [108] | 149 [1530] | 84 [64] |
(OA = open addressing; h32 = open addressing with the full 32-bit hash stored with each item; C = separate chaining. Collision counts are in square brackets.)
You can avoid the worst case by using chaining for collision resolution. However, chaining requires more memory for the next-item pointers, so the performance improvement does not come for free. A custom memory allocator usually has to be written, because calling malloc() for a large number of small structures is suboptimal.
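A minimal sketch of the chained layout being discussed (the structure and function names are mine; the benchmark's own node layout and its pooled allocator are not shown):
#include <stddef.h>
#include <string.h>

/* One entry in a separately chained bucket. The `next` pointer is the extra
   memory cost mentioned above; a real implementation would allocate these
   nodes from a pool instead of calling malloc() once per node. */
struct ChainNode {
    const char       *key;
    size_t            key_len;
    void             *value;
    struct ChainNode *next;     /* next entry with the same table index */
};

/* Walk the chain of one bucket and return the matching node, if any. */
struct ChainNode *ChainFind(struct ChainNode *head, const char *key, size_t len) {
    for (; head != NULL; head = head->next)
        if (head->key_len == len && memcmp(head->key, key, len) == 0)
            return head;
    return NULL;
}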
Some implementations (e.g., the hash table in the Python interpreter) store the full 32-bit hash with the item to speed up string comparison, but this is less effective than chaining.
Credits
Many thanks to Nils, Ace, and Won for their ideas and advice, which helped me make this article better.