Use a custom kernel / image filter to find a specific pattern in a 2d array - image-processing

Given an image im,
>>> np.random.seed(0)
>>> im = np.random.randint(0, 100, (10,5))
>>> im
array([[44, 47, 64, 67, 67],
[ 9, 83, 21, 36, 87],
[70, 88, 88, 12, 58],
[65, 39, 87, 46, 88],
[81, 37, 25, 77, 72],
[ 9, 20, 80, 69, 79],
[47, 64, 82, 99, 88],
[49, 29, 19, 19, 14],
[39, 32, 65, 9, 57],
[32, 31, 74, 23, 35]])
what is the best way to find a specific segment of this image, for instance
>>> im[6:9, 2:5]
array([[82, 99, 88],
[19, 19, 14],
[65, 9, 57]])
If the specific combination does not exist (maybe due to noise), I would like to have a similarity measure that searches for segments with a similar distribution and tells me, for each pixel of im, how good the agreement is. For instance something like
array([[0.03726647, 0.14738364, 0.04331007, 0.02704363, 0.0648282 ],
[0.02993497, 0.04446428, 0.0772978 , 0.1805197 , 0.08999 ],
[0.12261269, 0.18046972, 0.01985607, 0.19396181, 0.13062801],
[0.03418192, 0.07163043, 0.15013723, 0.12156613, 0.06500945],
[0.00768509, 0.12685481, 0.19178985, 0.13055806, 0.12701177],
[0.19905991, 0.11637007, 0.08287372, 0.0949395 , 0.12470202],
[0.06760152, 0.13495046, 0.06344035, 0.1556691 , 0.18991421],
[0.13250537, 0.00271433, 0.12456922, 0.97 , 0.194389 ],
[0.17563869, 0.10192488, 0.01114294, 0.09023184, 0.00399753],
[0.08834218, 0.19591735, 0.07188889, 0.09617871, 0.13773224]])
The example code is python.
I think there should be a solution that correlates a kernel with im. This has the issue, though, that a segment with the same pattern but scaled values will give a sharper response.

Template matching would be one of the ways to go about it. Of course, deep learning/ML can also be used for more complicated matching.
Most image processing libraries support some sort of matching function which compares a pair of images: a reference and the one to match. In OpenCV it returns a score which can be used to determine a match. The matching methods use various functions that support scale- and/or rotation-invariant matching. Beware of licensing constraints in the method you plan to use.
In case the images may not always be exact, you can use the standard deviation (StdDev) to allow for a permissible deviation and still classify them into buckets. Histogram matching may also be used, depending on the condition of the image to be matched (lighting and color can be important, unless you use specific channels). Using a histogram will avoid matching the template in its entirety.
Ref for Template Matching:
OpenCV - https://docs.opencv.org/master/d4/dc6/tutorial_py_template_matching.html
scikit-image - https://scikit-image.org/docs/dev/auto_examples/features_detection/plot_template.html
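For example, a minimal sketch with OpenCV's matchTemplate, assuming the cv2 Python bindings are available and using TM_CCOEFF_NORMED as just one reasonable choice of score:

import numpy as np
import cv2

np.random.seed(0)
im = np.random.randint(0, 100, (10, 5)).astype(np.float32)
t = im[6:9, 2:5]                                  # the target segment from the question

# normalized cross-correlation; the result map holds one score per template position
res = cv2.matchTemplate(im, t, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(res)
print(max_loc, max_val)                           # should be (2, 6), i.e. column 2 / row 6, with a score of ~1.0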

Thanks to banerjk for the great answer - template matching is exactly the solution!
some backup method
Considering my correlating-with-a-kernel idea, there is some progress:
When one correlates the image with the template (i.e. what I called the target segment in the question), chances are high that the most intense point in the correlated image (relative to the mean intensity) matches the template position (see im and m in the example). It seems I am not the first to come up with this idea, as can be seen in these lecture notes on page 39.
However, this is not always true. This method more or less just detects weight at the largest values in the template. In the example, im2 is constructed such that it tricks this concept.
Maybe it gets more reliable if one applies some filter (for instance a median filter) to the image beforehand.
I just wanted to mention it here, as it might have advantages in certain situations (it should be more performant than the Wikipedia implementation of template matching).
example
import numpy as np
from scipy import ndimage
np.random.seed(0)
im = np.random.randint(0, 100, (10,5))
t = im[6:9, 2:5]  # the target segment ("template")
print('t', t, sep='\n')
# correlate with the template, then divide by the local sum so that bright
# regions alone do not dominate the response
m = ndimage.correlate(im, t) / ndimage.correlate(im, np.ones(t.shape))
m /= np.amax(m)  # normalise to a maximum of 1
print('im', im, sep='\n')
print('m', m, sep='\n')
print("this can be 'tricked', however")
im2 = im.copy()
im2[6:9, :3] = 0
im2[6,1] = 1
m2 = ndimage.correlate(im2, t) / ndimage.correlate(im2, np.ones(t.shape))
m2 /= np.amax(m2)
print('im2', im2, sep='\n')
print('m2', m2, sep='\n')
output
t
[[82 99 88]
[19 19 14]
[65 9 57]]
im
[[44 47 64 67 67]
[ 9 83 21 36 87]
[70 88 88 12 58]
[65 39 87 46 88]
[81 37 25 77 72]
[ 9 20 80 69 79]
[47 64 82 99 88]
[49 29 19 19 14]
[39 32 65 9 57]
[32 31 74 23 35]]
m
[[0.73776208 0.62161208 0.74504705 0.71202601 0.66743979]
[0.70809611 0.70617161 0.70284942 0.80653741 0.67067733]
[0.55047727 0.61675268 0.5937487 0.70579195 0.74351706]
[0.7303857 0.77147963 0.74809273 0.59136392 0.61324214]
[0.70041161 0.7717032 0.69220064 0.72463532 0.6957257 ]
[0.89696894 0.69741108 0.64136612 0.64154719 0.68621613]
[0.48509474 0.60700037 0.65812918 0.68441118 0.68835903]
[0.73802038 0.83224745 0.87301124 1. 0.92272565]
[0.72708573 0.64909142 0.54540817 0.60859883 0.52663327]
[0.72061572 0.70357846 0.61626289 0.71932261 0.75028955]]
this can be 'tricked', however
im2
[[44 47 64 67 67]
[ 9 83 21 36 87]
[70 88 88 12 58]
[65 39 87 46 88]
[81 37 25 77 72]
[ 9 20 80 69 79]
[ 0 1 0 99 88]
[ 0 0 0 19 14]
[ 0 0 0 9 57]
[32 31 74 23 35]]
m2
[[0.53981867 0.45483201 0.54514907 0.52098765 0.48836403]
[0.51811216 0.51670401 0.51427317 0.59014141 0.49073293]
[0.40278285 0.4512764 0.43444444 0.51642621 0.54402958]
[0.5344214 0.56448972 0.54737758 0.43269951 0.44870774]
[0.51248943 0.56465331 0.50648148 0.53021386 0.50906076]
[0.78923691 0.56633529 0.51641414 0.44336403 0.50210263]
[0.88137788 0.89779614 0.63552189 0.55070797 0.50367059]
[0.88888889 1. 0.75544508 0.75694003 0.67515605]
[0.43965976 0.48492221 0.37490287 0.48511085 0.38533625]
[0.30754918 0.32478065 0.27066895 0.46685032 0.548985 ]]
Maybe someone can contribute background on the lecture notes.
update: It is discussed in J. P. Lewis, "Fast Normalized Cross-Correlation", Industrial Light and Magic, on the very first page.
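For reference, a sketch of that normalized cross-correlation using scikit-image's match_template (pad_input=True is assumed so the output has the same shape as im, similar to the per-pixel map asked for in the question):

import numpy as np
from skimage.feature import match_template

np.random.seed(0)
im = np.random.randint(0, 100, (10, 5)).astype(float)
t = im[6:9, 2:5]

# normalized cross-correlation; each value lies in [-1, 1] and says how well
# the template matches around that pixel
ncc = match_template(im, t, pad_input=True)
print(np.unravel_index(np.argmax(ncc), ncc.shape))   # should point at the template centre, (7, 3)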

Related

Q: sorting a table of numbers in Lua

I've looked around for a bit to find a solution to my problem, but I haven't gotten anything that completely fixes it. Essentially the function does sort, but it doesn't sort the numbers in the table, just the numbers 1 through 10.
local numbers = {18, 45, 90, 77, 65, 18, 3, 57, 81, 10}

local function selectionSort(t) -- t is the table to be sorted
    local t = {18, 45, 90, 77, 65, 18, 3, 57, 81, 10}
    local tkeys = {}
    for k in pairs(t) do table.insert(tkeys, k) end
    table.sort(tkeys)
    for _, k in ipairs(tkeys) do print(k, t[k]) end
    return t -- return the sorted table
end

list = selectionSort(list)
and this is what comes out
1 18
2 45
3 90
4 77
5 65
6 18
7 3
8 57
9 81
10 10
and what I want is
3 18
10 45
18 90
18 77
45 65
57 18
65 3
77 57
81 81
90 10
any solutions?
You are taking the key from your input, but you want the value.
You can change it to:
local numbers = {18, 45, 90, 77, 65, 18, 3, 57, 81, 10}

local function selectionSort(t) -- t is the table to be sorted
    local tSorted = {}
    for _, v in pairs(t) do
        table.insert(tSorted, v)
    end
    table.sort(tSorted)
    for i = 1, #t, 1 do
        print(tSorted[i], t[i])
    end
    return tSorted -- return the sorted table
end

list = selectionSort(numbers)
and you will get:
sorted original
3 18
10 45
18 90
18 77
45 65
57 18
65 3
77 57
81 81
90 10

iOS Swift Mi Scale 2 Bluetooth Get Weight

I am writing an app that can get weight measurements from a Xiaomi Mi Scale 2. After reading all available UUIDs, only the "181B" service, specifically the "2A9C" characteristic (Body weight measurement in Bluetooth GATT), gets notifications.
The value data is [2, 164, 178, 7, 1, 1, 2, 58, 56, 253, 255, 240, 60]. Only the last two values vary; the rest is time and date, which is not currently set (253, 255 are zeroes while the weight varies on the scale, until it stabilizes).
Can someone help me get only the person's weight? Should I be getting the data in a different way, from other UUIDs (like the custom ones: 00001530-0000-3512-2118-0009AF100700, 00001542-0000-3512-2118-0009AF100700), and how do I retrieve them?
Correct answer by Paulw11: You need to look at bit 0 of the first byte to determine whether the weight is in imperial or SI units; the bit is 0, so the data is SI. Then, to get the weight, convert the last two bytes to a 16-bit integer (60*256 + 240 = 15,600) and multiply by 0.005 = 78 kg.
In my case, it was a little different:
I got data like this [207, 0, 0, 178, 2, 0, 0, 0, 0, 0, 127] (6.9 KG) and the solution is:
let bytesArray = [207, 0, 0, 178, 2, 0, 0, 0, 0, 0, 127]
let weight = Double(bytesArray[4] * 256 + bytesArray[3]) * 10.0 / 1000   // (2 * 256 + 178) * 10 / 1000 = 6.9
And now I have my 6.9 kg.
I was using a Mi Smart Scale and I had the following byte arrays:
02-A4-B2-07-02-13-06-33-35-FD-FF-EC-09 - 12.7 KG
02-A4-B2-07-02-13-06-3B-17-FD-FF-C8-3C - 77.8 KG
I used the last two bytes to get the weights in KG:
(0x09*256 + 0xEC)/200 = 12.7
(0x3C*256 + 0xC8)/200 = 77.8
My byte array was 13 bytes long.
bytes 0 and 1: control bytes
bytes 2 and 3: year
byte 4: month
byte 5: day
byte 6: hours
byte 7: minutes
byte 8: seconds
bytes 9 and 10: impedance
bytes 11 and 12: weight (divide by 100 for pounds and catty, divide by 200 for kilograms)
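Putting that layout together, here is a small parsing sketch (Python for brevity; the little-endian byte order of the multi-byte fields is inferred from the weight examples above, so treat it as an assumption):

# parse the 12.7 kg packet quoted above using the byte layout described in this answer
packet = bytes.fromhex("02a4b2070213063335fdffec09")

year = packet[2] | (packet[3] << 8)            # bytes 2-3 (little-endian, assumed)
month, day = packet[4], packet[5]
hour, minute, second = packet[6], packet[7], packet[8]
impedance = packet[9] | (packet[10] << 8)      # bytes 9-10
raw_weight = packet[11] | (packet[12] << 8)    # bytes 11-12
print(raw_weight / 200.0)                      # divide by 200 for kilograms -> 12.7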

python smbus2 block read starting with 2nd memory address on EEPROM

I am interfacing with a 24LC512 EEPROM and a 47C16 EERAM.
I have stored multiple bytes starting at location 0 on both chips.
On the 24LC512 I get these results - notice that when I execute a block read using bus.read_i2c_block_data, it starts with the 2nd memory location. When I reset the pointer back to 0 and read with bus.read_byte, it reads the location at 0 and correctly increments the pointer by 1.
>>> bus.write_byte_data(0x54, 0, 0)
>>> bus.read_byte(0x54)
45
>>> bus.write_byte_data(0x54, 0, 0)
>>> bus.read_i2c_block_data(0x54, 0, 6)
[46, 71, 65, 32, 48, 255]
>>> bus.write_byte_data(0x54, 0, 0)
>>> bus.read_byte(0x54)
45
>>> bus.read_byte(0x54)
46
>>> bus.read_byte(0x54)
71
>>> bus.read_byte(0x54)
65
>>> bus.read_byte(0x54)
32
>>> bus.read_byte(0x54)
48
I get the same results on the 47C16
>>> bus.write_byte_data(0x50, 0, 0)
>>> bus.read_i2c_block_data(0x50, 0, 6)
[46, 71, 70, 32, 49, 255]
>>> bus.write_byte_data(0x50, 0, 0)
>>> bus.read_byte(0x50)
45
>>>
>>> bus.read_byte(0x50)
46
>>> bus.read_byte(0x50)
71
>>> bus.read_byte(0x50)
70
>>> bus.read_byte(0x50)
32
>>> bus.read_byte(0x50)
49
It appears I am not using read_i2c_block_data correctly, even though this is what I read in the documentation. What am I doing wrong?
Thank you for any pointers you can provide.
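For comparison, here is a sketch of the same set-pointer-then-read sequence written as raw I2C messages with smbus2's i2c_rdwr, assuming the smbus2 library, bus 1, and the 0x54 address from the question:

from smbus2 import SMBus, i2c_msg

ADDR = 0x54   # 24LC512 from the question; bus number below is an assumption

with SMBus(1) as bus:
    # write the two address-pointer bytes (0x0000), then read six bytes in one transaction
    set_pointer = i2c_msg.write(ADDR, [0x00, 0x00])
    read_block = i2c_msg.read(ADDR, 6)
    bus.i2c_rdwr(set_pointer, read_block)
    print(list(read_block))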

What is correct implementation of LDA (Linear Discriminant Analysis)?

I found that the result of LDA in OpenCV is different from other libraries. For example, the input data was
DATA (13 data samples with 4 dimensions)
7 26 6 60
1 29 15 52
11 56 8 20
11 31 8 47
7 52 6 33
11 55 9 22
3 71 17 6
1 31 22 44
2 54 18 22
21 47 4 26
1 40 23 34
11 66 9 12
10 68 8 12
LABEL
0 1 2 0 1 2 0 1 2 0 1 2 0
The OpenCV code is
Mat data = (Mat_<float>(13, 4) <<\
7, 26, 6, 60,\
1, 29, 15, 52,\
11, 56, 8, 20,\
11, 31, 8, 47,\
7, 52, 6, 33,\
11, 55, 9, 22,\
3, 71, 17, 6,\
1, 31, 22, 44,\
2, 54, 18, 22,\
21, 47, 4, 26,\
1, 40, 23, 34,\
11, 66, 9, 12,\
10, 68, 8, 12);
Mat mean;
reduce(data, mean, 0, CV_REDUCE_AVG);
mean.convertTo(mean, CV_64F);
Mat label(data.rows, 1, CV_32SC1);
for (int i=0; i<label.rows; i++)
label.at<int>(i) = i%3;
LDA lda(data, label);
Mat projection = lda.subspaceProject(lda.eigenvectors(), mean, data);
The matlab code is (used Matlab Toolbox for Dimensionality Reduction)
cd drtoolbox\techniques\
load hald
label=[0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2, 0]
[projection, trainedlda] = lda(ingredients, label)
The eigenvectors are
OpenCV (lda.eigenvectors())
0.4457 4.0132
0.4880 3.5703
0.5448 3.3466
0.5162 3.5794
Matlab Toolbox for Dimensionality Reduction (trainedlda.M)
0.5613 0.7159
0.6257 0.6203
0.6898 0.5884
0.6635 0.6262
Then the projections of data are
OpenCV
1.3261 7.1276
0.8892 -4.7569
-1.8092 -6.1947
-0.0720 1.1927
0.0768 3.3105
-0.7200 0.7405
-0.3788 -4.7388
1.5490 -2.8255
-0.3166 -8.8295
-0.8259 9.8953
1.3239 -3.1406
-0.5140 4.2194
-0.5285 4.0001
Matlab Toolbox for Dimensionality Reduction
1.8030 1.3171
1.2128 -0.8311
-2.3390 -1.0790
-0.0686 0.3192
0.1583 0.5392
-0.9479 0.1414
-0.5238 -0.9722
1.9852 -0.4809
-0.4173 -1.6266
-1.1358 1.9009
1.6719 -0.5711
-0.6996 0.7034
-0.6993 0.6397
The eigenvectors and projections are different even though these LDAs were given the same data. I believe there are two possibilities: either one of the libraries is wrong, or I am doing it wrong.
Thank you!
The difference is because eigenvectors are not normalized.
The normalized (L2 norm) eigenvectors are
OpenCV
0.44569 0.55196
0.48798 0.49105
0.54478 0.46028
0.51618 0.49230
Matlab Toolbox for Dimensionality Reduction
0.44064 0.55977
0.49120 0.48502
0.54152 0.46008
0.52087 0.48963
They look similar now, although they have quite different eigenvalues.
Even though the PCA in OpenCV returns normalized eigenvectors, LDA does not. My next question is: 'Is normalizing eigenvectors in LDA not necessary?'
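The normalization itself is a one-liner; a numpy sketch using the OpenCV eigenvector matrix quoted above (columns are the discriminant directions):

import numpy as np

# eigenvectors as returned by OpenCV's LDA
W = np.array([[0.4457, 4.0132],
              [0.4880, 3.5703],
              [0.5448, 3.3466],
              [0.5162, 3.5794]])

# divide each column by its L2 norm to make the two libraries comparable
W_normalized = W / np.linalg.norm(W, axis=0)
print(W_normalized)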

Best way to convert bit offset to an integer [duplicate]

I have a 64-bit unsigned integer with exactly 1 bit set. I’d like to assign a value to each of the possible 64 values (in this case, the odd primes, so 0x1 corresponds to 3, 0x2 corresponds to 5, …, 0x8000000000000000 corresponds to 313).
It seems like the best way would be to convert 1 → 0, 2 → 1, 4 → 2, 8 → 3, …, 2^63 → 63 and look up the values in an array. But even if that's so, I'm not sure what the fastest way to get at the binary exponent is. And there may be more efficient ways, still.
This operation will be used 10^14 to 10^16 times, so performance is a serious issue.
Finally an optimal solution. See the end of this section for what to do when the input is guaranteed to have exactly one non-zero bit: http://graphics.stanford.edu/~seander/bithacks.html#IntegerLogDeBruijn
Here's the code:
static const int MultiplyDeBruijnBitPosition2[32] =
{
0, 1, 28, 2, 29, 14, 24, 3, 30, 22, 20, 15, 25, 17, 4, 8,
31, 27, 13, 23, 21, 19, 16, 7, 26, 12, 18, 6, 11, 5, 10, 9
};
r = MultiplyDeBruijnBitPosition2[(uint32_t)(v * 0x077CB531U) >> 27];
You may be able to adapt this to a direct multiplication-based algorithm for 64-bit inputs; otherwise, simply add one conditional to see if the bit is in the upper 32 positions or the lower 32 positions, then use the 32-bit algorithm here.
Update: Here's at least one 64-bit version I just developed myself, but it uses division (actually modulo).
r = Table[v%67];
For each power of 2, v%67 has a distinct value, so just put your odd primes (or bit indices if you don't want the odd-prime thing) at the right positions in the table. 3 positions (0, 17, and 34) are not used, which might be convenient if you also want to accept all-bits-zero as an input.
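A quick sketch (in Python, since exact integer arithmetic is easy to check there) verifying that all 64 powers of two hit distinct slots mod 67 and finding the unused positions:

# every power of two gives a distinct residue mod 67, so a 67-entry table suffices
residues = [(1 << bit) % 67 for bit in range(64)]
assert len(set(residues)) == 64

table = [None] * 67
for bit, r in enumerate(residues):
    table[r] = bit                 # store the bit index (or the corresponding odd prime)

print([i for i, v in enumerate(table) if v is None])   # the three unused slots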
Update 2: 64-bit version.
r = Table[(uint64_t)(val * 0x022fdd63cc95386dull) >> 58];
This is my original work, but I got the B(2,6) De Bruijn sequence from this chess site so I can't take credit for anything but figuring out what a De Bruijn sequence is and using Google. ;-)
Some additional remarks on how this works:
The magic number is a B(2,6) De Bruijn sequence. It has the property that, if you look at a 6-consecutive-bit window, you can obtain any six-bit value in that window by rotating the number appropriately, and that each possible six-bit value is obtained by exactly one rotation.
We fix the window in question to be the top 6 bit positions, and choose a De Bruijn sequence with 0's in the top 6 bits. This makes it so we never have to deal with bit rotations, only shifts, since 0's will come into the bottom bits naturally (and we could never end up looking at more than 5 bits from the bottom in the top-6-bits window).
Now, the input value of this function is a power of 2. So multiplying the De Bruijn sequence by the input value performs a bitshift by log2(value) bits. We now have in the upper 6 bits a number which uniquely determines how many bits we shifted by, and can use that as an index into a table to get the actual length of the shift.
This same approach can be used for arbitrarily-large or arbitrarily-small integers, as long as you're willing to implement the multiplication. You simply have to find a B(2,k) De Bruijn sequence where k is the number of bits. The chess wiki link I provided above has De Bruijn sequences for values of k ranging from 1 to 6, and some quick Googling shows there are a few papers on optimal algorithms for generating them in the general case.
If performance is a serious issue, then you should use intrinsics/builtins to use CPU specific instructions, such as the ones found here for GCC:
http://gcc.gnu.org/onlinedocs/gcc-4.5.0/gcc/Other-Builtins.html
Built-in function int __builtin_ffs(unsigned int x).
Returns one plus the index of the least significant 1-bit of x, or if x is zero, returns zero.
Built-in function int __builtin_clz(unsigned int x).
Returns the number of leading 0-bits in x, starting at the most significant bit position. If x is 0, the result is undefined.
Built-in function int __builtin_ctz(unsigned int x).
Returns the number of trailing 0-bits in x, starting at the least significant bit position. If x is 0, the result is undefined.
Things like this are the core of many O(1) algorithms, such as kernel schedulers which need to find the first non-empty queue signified by an array of bits.
Note: I’ve listed the unsigned int versions, but GCC has unsigned long long versions, as well.
You could use a binary search technique:
int pos = 0;
if ((value & 0xffffffff) == 0) {
    pos += 32;
    value >>= 32;
}
if ((value & 0xffff) == 0) {
    pos += 16;
    value >>= 16;
}
if ((value & 0xff) == 0) {
    pos += 8;
    value >>= 8;
}
if ((value & 0xf) == 0) {
    pos += 4;
    value >>= 4;
}
if ((value & 0x3) == 0) {
    pos += 2;
    value >>= 2;
}
if ((value & 0x1) == 0) {
    pos += 1;
}
This has the advantage over loops that the loop is already unrolled. However, if this is really performance critical, you will want to test and measure every proposed solution.
Some architectures (a surprising number, actually) have a single instruction that can do the calculation you want. On ARM it would be the CLZ (count leading zeroes) instruction. For Intel, the BSF (bit-scan forward) or BSR (bit-scan reverse) instruction would help you out.
I guess this isn't really a C answer, but it will get you the speed you need!
precalculate 1 << i (for i = 0..63) and store them in an array
use a binary search to find the index into the array of a given value
look up the prime number in another array using this index
Compared to the other answer I posted here, this should take only 6 steps to find the index (as opposed to a maximum of 64). But it's not clear to me whether one step of this approach is cheaper than simply bit-shifting and incrementing a counter. You may want to try out both, though.
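A sketch of this precompute-and-binary-search idea (Python's bisect stands in for the binary search; the prime list is only a placeholder):

import bisect

powers = [1 << i for i in range(64)]     # precalculated, already sorted
primes = [3, 5, 7, 11]                   # placeholder: the full list would hold all 64 odd primes

def value_for(v):
    # binary search for the single-bit value, then use that index for the lookup
    return primes[bisect.bisect_left(powers, v)]

assert value_for(1 << 2) == 7            # bit 2 -> the third odd prime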
See http://graphics.stanford.edu/~seander/bithacks.html - specifically "Finding integer log base 2 of an integer (aka the position of the highest bit set)" - for some alternative algorithms. (If you're really serious about speed, you might consider ditching C if your CPU has a dedicated instruction.)
Since speed, presumably not memory usage, is important, here's a crazy idea:
w1 = 1st 16 bits
w2 = 2nd 16 bits
w3 = 3rd 16 bits
w4 = 4th 16 bits
result = array1[w1] + array2[w2] + array3[w3] + array4[w4]
where array1..4 are sparsely populated 64K arrays that contain the actual prime values (and zero in the positions that don't correspond to bit positions)
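A sketch of this four-table idea (Python, with the payload values left as hypothetical placeholders):

# four 64K tables, one per 16-bit word; only the 16 single-bit indices in each are non-zero
payload = list(range(3, 3 + 64))            # hypothetical: one value per bit position
tables = [[0] * 65536 for _ in range(4)]
for bit in range(64):
    word, offset = divmod(bit, 16)
    tables[word][1 << offset] = payload[bit]

def lookup(v):
    # sum the four lookups; the three words without the set bit contribute zero
    return (tables[0][v & 0xFFFF] +
            tables[1][(v >> 16) & 0xFFFF] +
            tables[2][(v >> 32) & 0xFFFF] +
            tables[3][(v >> 48) & 0xFFFF])

assert lookup(1 << 40) == payload[40]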
@R..'s solution above is excellent; this is just the 64-bit variant, with the table already calculated ...
static inline unsigned char bit_offset(unsigned long long self) {
static const unsigned char mapping[64] = {
[0]=0, [1]=1, [2]=2, [4]=3, [8]=4, [17]=5, [34]=6, [5]=7,
[11]=8, [23]=9, [47]=10, [31]=11, [63]=12, [62]=13, [61]=14, [59]=15,
[55]=16, [46]=17, [29]=18, [58]=19, [53]=20, [43]=21, [22]=22, [44]=23,
[24]=24, [49]=25, [35]=26, [7]=27, [15]=28, [30]=29, [60]=30, [57]=31,
[51]=32, [38]=33, [12]=34, [25]=35, [50]=36, [36]=37, [9]=38, [18]=39,
[37]=40, [10]=41, [21]=42, [42]=43, [20]=44, [41]=45, [19]=46, [39]=47,
[14]=48, [28]=49, [56]=50, [48]=51, [33]=52, [3]=53, [6]=54, [13]=55,
[27]=56, [54]=57, [45]=58, [26]=59, [52]=60, [40]=61, [16]=62, [32]=63
};
return mapping[((self & -self) * 0x022FDD63CC95386DULL) >> 58];
}
I built the table using the provided mask.
>>> ', '.join('[{0}]={1}'.format(((2**bit * 0x022fdd63cc95386d) % 2**64) >> 58, bit) for bit in xrange(64))
'[0]=0, [1]=1, [2]=2, [4]=3, [8]=4, [17]=5, [34]=6, [5]=7, [11]=8, [23]=9, [47]=10, [31]=11, [63]=12, [62]=13, [61]=14, [59]=15, [55]=16, [46]=17, [29]=18, [58]=19, [53]=20, [43]=21, [22]=22, [44]=23, [24]=24, [49]=25, [35]=26, [7]=27, [15]=28, [30]=29, [60]=30, [57]=31, [51]=32, [38]=33, [12]=34, [25]=35, [50]=36, [36]=37, [9]=38, [18]=39, [37]=40, [10]=41, [21]=42, [42]=43, [20]=44, [41]=45, [19]=46, [39]=47, [14]=48, [28]=49, [56]=50, [48]=51, [33]=52, [3]=53, [6]=54, [13]=55, [27]=56, [54]=57, [45]=58, [26]=59, [52]=60, [40]=61, [16]=62, [32]=63'
should the compiler complain:
>>> ', '.join(map(str, {((2**bit * 0x022fdd63cc95386d) % 2**64) >> 58: bit for bit in xrange(64)}.values()))
'0, 1, 2, 53, 3, 7, 54, 27, 4, 38, 41, 8, 34, 55, 48, 28, 62, 5, 39, 46, 44, 42, 22, 9, 24, 35, 59, 56, 49, 18, 29, 11, 63, 52, 6, 26, 37, 40, 33, 47, 61, 45, 43, 21, 23, 58, 17, 10, 51, 25, 36, 32, 60, 20, 57, 16, 50, 31, 19, 15, 30, 14, 13, 12'
^^^^ this assumes that we iterate over the keys in sorted order, which may not be the case in the future ...
unsigned char bit_offset(unsigned long long self) {
static const unsigned char table[64] = {
0, 1, 2, 53, 3, 7, 54, 27, 4, 38, 41, 8, 34, 55, 48,
28, 62, 5, 39, 46, 44, 42, 22, 9, 24, 35, 59, 56, 49,
18, 29, 11, 63, 52, 6, 26, 37, 40, 33, 47, 61, 45, 43,
21, 23, 58, 17, 10, 51, 25, 36, 32, 60, 20, 57, 16, 50,
31, 19, 15, 30, 14, 13, 12
};
return table[((self & -self) * 0x022FDD63CC95386DULL) >> 58];
}
simple test:
>>> table = {((2**bit * 0x022fdd63cc95386d) % 2**64) >> 58: bit for bit in xrange(64)}.values()
>>> assert all(i == table[(2**i * 0x022fdd63cc95386d % 2**64) >> 58] for i in xrange(64))
Short of using assembly or compiler-specific extensions to find the first/last bit that's set, the fastest algorithm is a binary search. First check if any of the first 32 bits are set. If so, check if any of the first 16 are set. If so, check if any of the first 8 are set. Etc. Your function to do this can directly return an odd prime at each leaf of the search, or it can return a bit index which you use as an array index into a table of odd primes.
Here's a loop implementation for the binary search, which the compiler could certainly unroll if that's deemed to be optimal:
uint32_t mask=0xffffffff;
int pos=0, shift=32, i;
for (i=6; i; i--) {
    if (!(val&mask)) {
        val>>=shift;
        pos+=shift;
    }
    shift>>=1;
    mask>>=shift;
}
val is assumed to be uint64_t, but to optimize this for 32-bit machines, you should special-case the first check, then perform the loop with a 32-bit val variable.
Call the GNU POSIX extension function ffsll, found in glibc. If the function isn't present, fall back on __builtin_ffsll. Both functions return the index + 1 of the first bit set, or zero. With Visual-C++, you can use _BitScanForward64.
unsigned bit_position = 0;
while ((value & 1) ==0)
{
++bit_position;
value >>= 1;
}
Then look up the primes based on bit_position as you say.
You may find that log(n) / log(2) gives you the 0, 1, 2, ... you're after in a reasonable timeframe. Otherwise, some form of hashtable based approach could be useful.
Another answer assuming IEEE float:
int get_bit_index(uint64_t val)
{
union { float f; uint32_t i; } u = { val };
return (u.i>>23)-127;
}
It works as specified for the input values you asked for (exactly 1 bit set) and also has useful behavior for other values (try to figure out exactly what that behavior is). No idea if it's fast or slow; that probably depends on your machine and compiler.
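The same reinterpretation can be checked from Python with struct (assuming, as in the C version, that the single set bit is low enough for the float32 conversion to be exact, which holds for all 64 positions here):

import struct

def bit_index(v):
    # reinterpret the bits of float(v) and read off the biased exponent
    (bits,) = struct.unpack('<I', struct.pack('<f', float(v)))
    return (bits >> 23) - 127

assert bit_index(1) == 0 and bit_index(1 << 40) == 40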
From the GnuChess source:
unsigned char leadz (BitBoard b)
/**************************************************************************
*
* Returns the leading bit in a bitboard. Leftmost bit is 0 and
* rightmost bit is 63. Thanks to Robert Hyatt for this algorithm.
*
***************************************************************************/
{
if (b >> 48) return lzArray[b >> 48];
if (b >> 32) return lzArray[b >> 32] + 16;
if (b >> 16) return lzArray[b >> 16] + 32;
return lzArray[b] + 48;
}
Here lzArray is a pregenerated array of size 2^16. This'll save you 50% of the operations compared to a full binary search.
This is for 32-bit Java, but it should be possible to adapt it to 64 bits.
I assume this will be the fastest, because there is no branching involved.
static public final int msb(int n) {
    n |= n >>> 1;
    n |= n >>> 2;
    n |= n >>> 4;
    n |= n >>> 8;
    n |= n >>> 16;
    n >>>= 1;
    n += 1;
    return n;
}

static public final int msb_index(int n) {
    final int[] multiply_de_bruijn_bit_position = {
        0, 1, 28, 2, 29, 14, 24, 3, 30, 22, 20, 15, 25, 17, 4, 8,
        31, 27, 13, 23, 21, 19, 16, 7, 26, 12, 18, 6, 11, 5, 10, 9
    };
    return multiply_de_bruijn_bit_position[(msb(n) * 0x077CB531) >>> 27];
}
Here is more information from: http://graphics.stanford.edu/~seander/bithacks.html#ZerosOnRightMultLookup
// Count the consecutive zero bits (trailing) on the right with multiply and lookup
unsigned int v; // find the number of trailing zeros in 32-bit v
int r; // result goes here
static const int MultiplyDeBruijnBitPosition[32] =
{
0, 1, 28, 2, 29, 14, 24, 3, 30, 22, 20, 15, 25, 17, 4, 8,
31, 27, 13, 23, 21, 19, 16, 7, 26, 12, 18, 6, 11, 5, 10, 9
};
r = MultiplyDeBruijnBitPosition[((uint32_t)((v & -v) * 0x077CB531U)) >> 27];
// Converting bit vectors to indices of set bits is an example use for this.
// It requires one more operation than the earlier one involving modulus
// division, but the multiply may be faster. The expression (v & -v) extracts
// the least significant 1 bit from v. The constant 0x077CB531UL is a de Bruijn
// sequence, which produces a unique pattern of bits into the high 5 bits for
// each possible bit position that it is multiplied against. When there are no
// bits set, it returns 0. More information can be found by reading the paper
// Using de Bruijn Sequences to Index a 1 in a Computer Word by
// Charles E. Leiserson, Harald Prokop, and Keith H. Randall.
And lastly:
http://supertech.csail.mit.edu/papers/debruijn.pdf
