Efficient Implementation
The population count of a bitstring is often needed in cryptography and other applications. The Hamming distance of two words A and B can be calculated as the Hamming weight of A xor B.
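As a small illustration of that relationship, here is a minimal C sketch (the names naive_popcount and hamming_distance are only illustrative and are not part of the routines discussed below); any of the population-count functions given later could replace the simple loop:

#include <stdint.h>

//Hamming distance computed as the Hamming weight of the XOR of two words.
static int naive_popcount(uint64_t x) {
    int count = 0;
    for (; x; x >>= 1)
        count += x & 1;            //add the lowest bit, then shift it out
    return count;
}

static int hamming_distance(uint64_t a, uint64_t b) {
    return naive_popcount(a ^ b);  //set bits of a ^ b mark the positions where a and b differ
}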
The problem of how to implement it efficiently has been widely studied. Some processors have a single instruction to calculate it, and some have parallel operations on bit vectors. For processors lacking those features, the best known solutions are based on adding counts in a tree pattern. For example, to count the number of 1 bits in the 16-bit binary number A = 0110110010111010, these operations can be done:
Expression | Binary | Decimal | Comment |
A | 01 10 11 00 10 11 10 10 | | The original number |
B = A & 01 01 01 01 01 01 01 01 | 01 00 01 00 00 01 00 00 | 1,0,1,0,0,1,0,0 | every other bit from A |
C = (A >> 1) & 01 01 01 01 01 01 01 01 | 00 01 01 00 01 01 01 01 | 0,1,1,0,1,1,1,1 | the remaining bits from A |
D = B + C | 01 01 10 00 01 10 01 01 | 1,1,2,0,1,2,1,1 | list giving # of 1s in each 2-bit piece of A |
E = D & 0011 0011 0011 0011 | 0001 0000 0010 0001 | 1,0,2,1 | every other count from D |
F = (D >> 2) & 0011 0011 0011 0011 | 0001 0010 0001 0001 | 1,2,1,1 | the remaining counts from D |
G = E + F | 0010 0010 0011 0010 | 2,2,3,2 | list giving # of 1s in each 4-bit piece of A |
H = G & 00001111 00001111 | 00000010 00000010 | 2,2 | every other count from G |
I = (G >> 4) & 00001111 00001111 | 00000010 00000011 | 2,3 | the remaining counts from G |
J = H + I | 00000100 00000101 | 4,5 | list giving # of 1s in each 8-bit piece of A |
K = J & 0000000011111111 | 0000000000000101 | 5 | every other count from J |
L = (J >> 8) & 0000000011111111 | 0000000000000100 | 4 | the remaining counts from J |
M = K + L | 0000000000001001 | 9 | the final answer |
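As a concrete check, the table's steps can be reproduced with a short C program; this is only a sketch of the worked example above, where 0x6CBA is the value A written in hexadecimal and the variables mirror the table's labels:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint16_t a = 0x6CBA;             //0110110010111010, the A from the table
    uint16_t b = a & 0x5555;         //every other bit from a
    uint16_t c = (a >> 1) & 0x5555;  //the remaining bits from a
    uint16_t d = b + c;              //# of 1s in each 2-bit piece
    uint16_t e = d & 0x3333;
    uint16_t f = (d >> 2) & 0x3333;
    uint16_t g = e + f;              //# of 1s in each 4-bit piece
    uint16_t h = g & 0x0F0F;
    uint16_t i = (g >> 4) & 0x0F0F;
    uint16_t j = h + i;              //# of 1s in each 8-bit piece
    uint16_t k = j & 0x00FF;
    uint16_t l = (j >> 8) & 0x00FF;
    uint16_t m = k + l;              //the final answer
    printf("%d\n", m);               //prints 9
    return 0;
}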
In the table above and in the code that follows, the operations are written as in C: X >> Y shifts X right by Y bits, X & Y is the bitwise AND of X and Y, and + is ordinary addition. The best algorithms known for this problem are based on the concept illustrated above, generalized to 64-bit words, and are given here:
//types and constants used in the functions below
typedef unsigned __int64 uint64;        //assume this gives 64 bits
const uint64 m1  = 0x5555555555555555;  //binary: 0101...
const uint64 m2  = 0x3333333333333333;  //binary: 00110011..
const uint64 m4  = 0x0f0f0f0f0f0f0f0f;  //binary: 4 zeros, 4 ones ...
const uint64 m8  = 0x00ff00ff00ff00ff;  //binary: 8 zeros, 8 ones ...
const uint64 m16 = 0x0000ffff0000ffff;  //binary: 16 zeros, 16 ones ...
const uint64 m32 = 0x00000000ffffffff;  //binary: 32 zeros, 32 ones
const uint64 hff = 0xffffffffffffffff;  //binary: all ones
const uint64 h01 = 0x0101010101010101;  //the sum of 256 to the power of 0, 1, 2, 3...

//This is a naive implementation, shown for comparison,
//and to help in understanding the better functions.
//It uses 24 arithmetic operations (shift, add, and).
int popcount_1(uint64 x) {
    x = (x & m1 ) + ((x >>  1) & m1 );  //put count of each  2 bits into those  2 bits
    x = (x & m2 ) + ((x >>  2) & m2 );  //put count of each  4 bits into those  4 bits
    x = (x & m4 ) + ((x >>  4) & m4 );  //put count of each  8 bits into those  8 bits
    x = (x & m8 ) + ((x >>  8) & m8 );  //put count of each 16 bits into those 16 bits
    x = (x & m16) + ((x >> 16) & m16);  //put count of each 32 bits into those 32 bits
    x = (x & m32) + ((x >> 32) & m32);  //put count of each 64 bits into those 64 bits
    return x;
}

//This uses fewer arithmetic operations than any other known
//implementation on machines with slow multiplication.
//It uses 17 arithmetic operations.
int popcount_2(uint64 x) {
    x -= (x >> 1) & m1;              //put count of each 2 bits into those 2 bits
    x = (x & m2) + ((x >> 2) & m2);  //put count of each 4 bits into those 4 bits
    x = (x + (x >> 4)) & m4;         //put count of each 8 bits into those 8 bits
    x += x >>  8;                    //put count of each 16 bits into their lowest 8 bits
    x += x >> 16;                    //put count of each 32 bits into their lowest 8 bits
    x += x >> 32;                    //put count of each 64 bits into their lowest 8 bits
    return x & 0x7f;
}

//This uses fewer arithmetic operations than any other known
//implementation on machines with fast multiplication.
//It uses 12 arithmetic operations, one of which is a multiply.
int popcount_3(uint64 x) {
    x -= (x >> 1) & m1;              //put count of each 2 bits into those 2 bits
    x = (x & m2) + ((x >> 2) & m2);  //put count of each 4 bits into those 4 bits
    x = (x + (x >> 4)) & m4;         //put count of each 8 bits into those 8 bits
    return (x * h01) >> 56;          //returns left 8 bits of x + (x << 8) + (x << 16) + (x << 24) + ...
}

The above implementations have the best worst-case behavior of any known algorithm. However, when a value is expected to have few nonzero bits, it may instead be more efficient to use algorithms that count these bits one at a time. As Wegner (1960) described, the bitwise AND of x with x − 1 differs from x only in zeroing out the least significant nonzero bit: subtracting 1 changes the rightmost string of 0s to 1s and changes the rightmost 1 to a 0. If x originally had n bits that were 1, then after only n iterations of this operation, x will be reduced to zero. The following implementation is based on this principle.
//This is better when most bits in x are 0.
//It uses 3 arithmetic operations and one comparison/branch per "1" bit in x.
int popcount_4(uint64 x) {
    int count;
    for (count = 0; x; count++)
        x &= x - 1;  //clear the least significant nonzero bit
    return count;
}

If we are allowed greater memory usage, we can calculate the Hamming weight faster than the above methods. With unlimited memory, we could simply create a large lookup table of the Hamming weight of every 64-bit integer. If we can store a lookup table of the Hamming weight of every 16-bit integer, we can do the following to compute the Hamming weight of every 32-bit integer.
typedef unsigned int uint32;  //assume this gives 32 bits

static unsigned char wordbits[65536] = { /* bitcounts of ints between 0 and 65535 */ };

static int popcount(uint32 i) {
    return (wordbits[i & 0xFFFF] + wordbits[i >> 16]);  //sum the counts of the low and high 16-bit halves
}
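The contents of wordbits are left unspecified above. As a sketch of one way to populate the table at program start (the name init_wordbits is not from the original text), it can be filled using the recurrence that the bit count of i equals the bit count of i >> 1 plus the lowest bit of i:

//Fill the lookup table; wordbits[0] stays 0 because static arrays are zero-initialized.
static void init_wordbits(void) {
    for (unsigned int i = 1; i < 65536; i++)
        wordbits[i] = wordbits[i >> 1] + (i & 1);  //count of i = count of i >> 1, plus its lowest bit
}

The same two-lookup idea extends to 64-bit words by summing four 16-bit table lookups.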