Python smbus2 block read starting with 2nd memory address on EEPROM - smbus

I am interfacing with a 24LC512 EEPROM and a 47C16 EERAM.
I have stored multiple bytes starting at location 0 on both chips.
On the 24LC512 I get these results. Notice that when I execute a block read using
bus.read_i2c_block_data, it starts with the 2nd memory location. When I reset the pointer back to 0 and read with bus.read_byte, it reads the location at 0 and correctly increments the pointer by 1:
>>> bus.write_byte_data(0x54, 0, 0)
>>> bus.read_byte(0x54)
45
>>> bus.write_byte_data(0x54, 0, 0)
>>> bus.read_i2c_block_data(0x54, 0, 6)
[46, 71, 65, 32, 48, 255]
>>> bus.write_byte_data(0x54, 0, 0)
>>> bus.read_byte(0x54)
45
>>> bus.read_byte(0x54)
46
>>> bus.read_byte(0x54)
71
>>> bus.read_byte(0x54)
65
>>> bus.read_byte(0x54)
32
>>> bus.read_byte(0x54)
48
I get the same behavior on the 47C16:
>>> bus.write_byte_data(0x50, 0, 0)
>>> bus.read_i2c_block_data(0x50, 0, 6)
[46, 71, 70, 32, 49, 255]
>>> bus.write_byte_data(0x50, 0, 0)
>>> bus.read_byte(0x50)
45
>>> bus.read_byte(0x50)
46
>>> bus.read_byte(0x50)
71
>>> bus.read_byte(0x50)
70
>>> bus.read_byte(0x50)
32
>>> bus.read_byte(0x50)
49
It appears I am not using read_i2c_block_data correctly, even though this matches what I read in the documentation. What am I doing wrong?
Thank you for any pointers you can provide.
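For reference, here is the read I am trying to perform, spelled out explicitly with smbus2's i2c_msg (a sketch only, assuming the smbus2 package and bus 1): set the pointer to 0x0000 with both address bytes, then read six bytes under a repeated start.
from smbus2 import SMBus, i2c_msg

with SMBus(1) as bus:
    # Send both address bytes (high, low) to set the pointer to 0x0000,
    # then read 6 bytes in one combined transfer with a repeated start.
    set_addr = i2c_msg.write(0x54, [0x00, 0x00])
    read6 = i2c_msg.read(0x54, 6)
    bus.i2c_rdwr(set_addr, read6)
    print(list(read6))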

Related

Q: sorting a table of numbers in Lua

I've looked around for a bit to find a solution to my problem, but I haven't found anything that completely fixes it. Essentially the function does sort, but it sorts the keys 1 through 10 instead of the numbers in the table:
local numbers = {18, 45, 90, 77, 65, 18, 3, 57, 81, 10}
local function selectionSort(t) -- t is the table to be sorted
    local t = {18, 45, 90, 77, 65, 18, 3, 57, 81, 10}
    local tkeys = {}
    for k in pairs(t) do table.insert(tkeys, k) end
    table.sort(tkeys)
    for _, k in ipairs(tkeys) do print(k, t[k]) end
    return t -- return the sorted table
end
list = selectionSort(list)
and this is what comes out
1 18
2 45
3 90
4 77
5 65
6 18
7 3
8 57
9 81
10 10
and what I want is
3 18
10 45
18 90
18 77
45 65
57 18
65 3
77 57
81 81
90 10
any solutions?
You are taking the key from your input when you want the value.
You can change it to:
local list = {18, 45, 90, 77, 65, 18, 3, 57, 81, 10}
local function selectionSort(t) -- t is the table to be sorted
    local tSorted = {}
    for _, v in pairs(t) do
        table.insert(tSorted, v)
    end
    table.sort(tSorted)
    for i = 1, #t do
        print(tSorted[i], t[i])
    end
    return tSorted -- return the sorted table
end
list = selectionSort(list)
and you will get:
sorted original
3 18
10 45
18 90
18 77
45 65
57 18
65 3
77 57
81 81
90 10

Use a custom kernel / image filter to find a specific pattern in a 2d array

Given an image im,
>>> np.random.seed(0)
>>> im = np.random.randint(0, 100, (10,5))
>>> im
array([[44, 47, 64, 67, 67],
[ 9, 83, 21, 36, 87],
[70, 88, 88, 12, 58],
[65, 39, 87, 46, 88],
[81, 37, 25, 77, 72],
[ 9, 20, 80, 69, 79],
[47, 64, 82, 99, 88],
[49, 29, 19, 19, 14],
[39, 32, 65, 9, 57],
[32, 31, 74, 23, 35]])
what is the best way to find a specific segment of this image, for instance
>>> im[6:9, 2:5]
array([[82, 99, 88],
[19, 19, 14],
[65, 9, 57]])
If the specific combination does not exist (maybe due to noise), I would like to have a similarity measure that searches for segments with a similar distribution and tells me, for each pixel of im, how good the agreement is. For instance something like
array([[0.03726647, 0.14738364, 0.04331007, 0.02704363, 0.0648282 ],
[0.02993497, 0.04446428, 0.0772978 , 0.1805197 , 0.08999 ],
[0.12261269, 0.18046972, 0.01985607, 0.19396181, 0.13062801],
[0.03418192, 0.07163043, 0.15013723, 0.12156613, 0.06500945],
[0.00768509, 0.12685481, 0.19178985, 0.13055806, 0.12701177],
[0.19905991, 0.11637007, 0.08287372, 0.0949395 , 0.12470202],
[0.06760152, 0.13495046, 0.06344035, 0.1556691 , 0.18991421],
[0.13250537, 0.00271433, 0.12456922, 0.97 , 0.194389 ],
[0.17563869, 0.10192488, 0.01114294, 0.09023184, 0.00399753],
[0.08834218, 0.19591735, 0.07188889, 0.09617871, 0.13773224]])
The example code is Python.
I think there should be a solution based on correlating a kernel with im. This will have the issue, though, that a segment with the same pattern but scaled values will give a sharper response.
Template matching would be one of the ways to go about it. Of course, deep learning/ML can also be used for more complicated matching.
Most image processing libraries support some sort of matching function which compares two images: a reference and the one to match. In OpenCV it returns a score which can be used to determine a match. The matching methods use various functions that support scale- and/or rotation-invariant matching. Beware of licensing constraints in the method you plan to use.
In case the images may not always be exact, you can use the standard deviation (StdDev) to allow for permissible deviation and yet classify them into buckets. Histogram matching may also be used, depending on the condition of the image to be matched (lighting and color can be important, unless you use specific channels). Using a histogram will avoid matching the template in its entirety.
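As an illustration, a minimal sketch of such a histogram comparison with OpenCV (cv2 assumed to be installed; the patches here are random stand-ins):
import numpy as np
import cv2

np.random.seed(0)
a = np.random.randint(0, 100, (3, 3)).astype(np.float32)
b = np.random.randint(0, 100, (3, 3)).astype(np.float32)

# 10-bin intensity histograms over the range [0, 100)
ha = cv2.calcHist([a], [0], None, [10], [0, 100])
hb = cv2.calcHist([b], [0], None, [10], [0, 100])

print(cv2.compareHist(ha, ha, cv2.HISTCMP_CORREL))  # identical histograms: 1.0
print(cv2.compareHist(ha, hb, cv2.HISTCMP_CORREL))  # different patches: lower score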
Ref for Template Matching:
OpenCV - https://docs.opencv.org/master/d4/dc6/tutorial_py_template_matching.html
scikit-image - https://scikit-image.org/docs/dev/auto_examples/features_detection/plot_template.html
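For instance, a minimal template-matching sketch with OpenCV's matchTemplate, using the random image from the question (cv2 assumed to be installed):
import numpy as np
import cv2

np.random.seed(0)
im = np.random.randint(0, 100, (10, 5)).astype(np.float32)
t = im[6:9, 2:5]  # the target segment from the question

# TM_CCOEFF_NORMED yields a score in [-1, 1] for every valid placement
res = cv2.matchTemplate(im, t, cv2.TM_CCOEFF_NORMED)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
print(max_loc, max_val)  # best match at (x=2, y=6) with score ~1.0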
Thanks to banerjk for the great answer; template matching is exactly the solution!
Some backup method
Considering my correlating-with-a-kernel idea, there is some progress:
When one correlates the image with the template (i.e. what I called the target segment in the question), chances are high that the most intense point in the correlated image (relative to the mean intensity) matches the template position (see im and m in the example). It seems I am not the first to come up with this idea, as can be seen in these lecture notes on page 39.
However, this is not always true. This method more or less just detects weight at the largest values in the template. In the example, im2 is constructed such that it tricks this concept.
Maybe it gets more reliable if one applies some filter (for instance a median filter) to the image beforehand.
I just wanted to mention it here, as it might have advantages in certain situations (it should be more performant compared to the Wikipedia implementation of template matching).
example
import numpy as np
from scipy import ndimage
np.random.seed(0)
im = np.random.randint(0, 100, (10,5))
t = im[6:9, 2:5]
print('t', t, sep='\n')
m = ndimage.correlate(im, t) / ndimage.correlate(im, np.ones(t.shape))  # normalize by the local sum under the kernel
m /= np.amax(m)  # scale so the maximum response is 1
print('im', im, sep='\n')
print('m', m, sep='\n')
print("this can be 'tricked', however")
im2 = im.copy()
im2[6:9, :3] = 0
im2[6,1] = 1
m2 = ndimage.correlate(im2, t) / ndimage.correlate(im2, np.ones(t.shape))
m2 /= np.amax(m2)
print('im2', im2, sep='\n')
print('m2', m2, sep='\n')
output
t
[[82 99 88]
[19 19 14]
[65 9 57]]
im
[[44 47 64 67 67]
[ 9 83 21 36 87]
[70 88 88 12 58]
[65 39 87 46 88]
[81 37 25 77 72]
[ 9 20 80 69 79]
[47 64 82 99 88]
[49 29 19 19 14]
[39 32 65 9 57]
[32 31 74 23 35]]
m
[[0.73776208 0.62161208 0.74504705 0.71202601 0.66743979]
[0.70809611 0.70617161 0.70284942 0.80653741 0.67067733]
[0.55047727 0.61675268 0.5937487 0.70579195 0.74351706]
[0.7303857 0.77147963 0.74809273 0.59136392 0.61324214]
[0.70041161 0.7717032 0.69220064 0.72463532 0.6957257 ]
[0.89696894 0.69741108 0.64136612 0.64154719 0.68621613]
[0.48509474 0.60700037 0.65812918 0.68441118 0.68835903]
[0.73802038 0.83224745 0.87301124 1. 0.92272565]
[0.72708573 0.64909142 0.54540817 0.60859883 0.52663327]
[0.72061572 0.70357846 0.61626289 0.71932261 0.75028955]]
this can be 'tricked', however
im2
[[44 47 64 67 67]
[ 9 83 21 36 87]
[70 88 88 12 58]
[65 39 87 46 88]
[81 37 25 77 72]
[ 9 20 80 69 79]
[ 0 1 0 99 88]
[ 0 0 0 19 14]
[ 0 0 0 9 57]
[32 31 74 23 35]]
m2
[[0.53981867 0.45483201 0.54514907 0.52098765 0.48836403]
[0.51811216 0.51670401 0.51427317 0.59014141 0.49073293]
[0.40278285 0.4512764 0.43444444 0.51642621 0.54402958]
[0.5344214 0.56448972 0.54737758 0.43269951 0.44870774]
[0.51248943 0.56465331 0.50648148 0.53021386 0.50906076]
[0.78923691 0.56633529 0.51641414 0.44336403 0.50210263]
[0.88137788 0.89779614 0.63552189 0.55070797 0.50367059]
[0.88888889 1. 0.75544508 0.75694003 0.67515605]
[0.43965976 0.48492221 0.37490287 0.48511085 0.38533625]
[0.30754918 0.32478065 0.27066895 0.46685032 0.548985 ]]
Maybe someone can contribute on the background of the lecture notes.
Update: it is discussed in J. P. Lewis, “Fast Normalized Cross-Correlation”, Industrial Light and Magic, on the very first page.
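For reference, scikit-image's match_template implements the normalized cross-correlation from that paper; a minimal sketch (assuming scikit-image is installed):
import numpy as np
from skimage.feature import match_template

np.random.seed(0)
im = np.random.randint(0, 100, (10, 5))
t = im[6:9, 2:5]

# pad_input=True makes the response map the same shape as im,
# peaking at the centre of the matched region
ncc = match_template(im, t, pad_input=True)
print(np.unravel_index(np.argmax(ncc), ncc.shape))  # expect (7, 3)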

iOS Swift Mi Scale 2 Bluetooth Get Weight

I am writing an app that can get weight measurements from a Xiaomi Mi Scale 2. After reading all available UUIDs, only the "181B" service, specifically the "2A9C" characteristic (Body Weight Measurement in Bluetooth GATT), gets notifications.
The value data is [2, 164, 178, 7, 1, 1, 2, 58, 56, 253, 255, 240, 60]. Only the last two values vary; the rest is time and date, which is not set currently (253, 255 are zeroes while the weight varies on the scale, until it stabilizes).
Can someone help me get only the person's weight? Should I be getting the data in a different way, from other UUIDs (like the custom ones: 00001530-0000-3512-2118-0009AF100700, 00001542-0000-3512-2118-0009AF100700), and how do I retrieve them?
Correct answer by Paulw11: You need to look at bit 0 of the first byte to determine whether the weight is in imperial or SI units; the bit is 0, so the data is SI. Then, to get the weight, convert the last two bytes to a 16-bit integer (60 * 256 + 240 = 15,600) and multiply by 0.005: 15,600 * 0.005 = 78 kg.
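A quick sketch of that decoding (in Python for brevity, using the value data from the question):
data = [2, 164, 178, 7, 1, 1, 2, 58, 56, 253, 255, 240, 60]
is_imperial = data[0] & 0x01       # bit 0 of the first byte; 0 means SI units
raw = data[12] * 256 + data[11]    # last two bytes as a 16-bit integer: 15600
print(raw * 0.005)                 # 78.0 (kg)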
In my case, it was a little different:
I got data like this: [207, 0, 0, 178, 2, 0, 0, 0, 0, 0, 127] (6.9 kg), and the solution is:
let bytesArray = [207, 0, 0, 178, 2, 0, 0, 0, 0, 0, 127]
let weight = Double(bytesArray[4] * 256 + bytesArray[3]) * 10.0 / 1000.0
And now I have my 6.9 kg.
I was using a Mi Smart Scale and I had the following byte arrays:
02-A4-B2-07-02-13-06-33-35-FD-FF-EC-09 (12.7 kg)
02-A4-B2-07-02-13-06-3B-17-FD-FF-C8-3C (77.8 kg)
I used the last two bytes to get the weights in kg:
(0x09 * 256 + 0xEC) / 200 = 12.7
(0x3C * 256 + 0xC8) / 200 = 77.8
My byte array was 13 bytes long:
bytes 0 and 1: control bytes
bytes 2 and 3: year
byte 4: month
byte 5: day
byte 6: hours
byte 7: minutes
byte 8: seconds
bytes 9 and 10: impedance
bytes 11 and 12: weight (divide by 100 for pounds and catty, divide by 200 for kilograms)
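A minimal sketch of that layout as code (in Python for brevity; decode_frame is a hypothetical helper):
def decode_frame(b):
    # bytes 11 and 12: raw weight, little-endian; divide by 200 for kilograms
    weight_raw = b[11] + (b[12] << 8)
    # bytes 9 and 10: impedance, also little-endian
    impedance = b[9] + (b[10] << 8)
    return weight_raw / 200.0, impedance

frame = [0x02, 0xA4, 0xB2, 0x07, 0x02, 0x13, 0x06,
         0x3B, 0x17, 0xFD, 0xFF, 0xC8, 0x3C]
print(decode_frame(frame)[0])  # 77.8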

Categorical attributes to Sparse Matrix

First of all, I'm new to machine learning.
I am trying to predict the price of second-hand cars. These cars have makes and models, so I used a MultiLabelBinarizer to make a sparse matrix to handle the categorical attributes. Here's the code:
from sklearn.preprocessing import MultiLabelBinarizer
encoder = MultiLabelBinarizer()
make_cat_1hot = encoder.fit_transform(make_cat)
model_cat_1hot = encoder.fit_transform(model_cat)
type_cat_1hot = encoder.fit_transform(type_cat)
print(type(make_cat_1hot))
carInfoModHot = carsInfoMod.copy()
carInfoModHot["makeHot"] = make_cat_1hot.tolist()
carInfoModHot["modelHot"] = model_cat_1hot.tolist()
carInfoModHot["typeHot"] = type_cat_1hot.tolist()
doors km make year makeHot modelHot
5.0 78779 Mercedes 2012 [0, 0, 0, 0, 1, 0, 0, 0, ...[1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, ...
5.0 25463 Bmw 2015 [0, 1, 0, 0, 0, 0, 0, ... [1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, ...
Then I used it to make a prediction and get the mean squared error with a linear regression:
from sklearn import linear_model
from sklearn.metrics import mean_squared_error
import numpy as np

lr = linear_model.LinearRegression()
carsInfoTrainHot = carInfoModHot.drop(["price"], axis=1) # drop labels for training set
df1 = carsInfoTrainHot.iloc[:30000, :]
carsLabels1 = carsInfoMod.iloc[:30000, 3]
print(carsInfoTrainHot.head())
df2 = carsInfoTrainHot.iloc[30001:60000, :]
carsLabels2 = carsInfoMod.iloc[30001:60000, 3]
df3 = carsInfoTrainHot.iloc[60001:, :]
carsLabels3 = carsInfoMod.iloc[60001:, 3]
lr.fit(df1, carsLabels1)
print(carsInfoTrainHot.shape)
carPrediction = lr.predict(df2)
lin_mse = mean_squared_error(carsLabels2, carPrediction)
lin_rmse = np.sqrt(lin_mse)
But I get this error:
ValueError                                Traceback (most recent call last)
in ()
12 carsLabels3 = carsInfoMod.iloc[60001:, 3]
13
---> 14 lr.fit(df1, carsLabels1)
15 print(carsInfoTrainHot.shape)
16 carPrediction = lr.predict(df2)
/home/vagrant/anaconda3/lib/python3.6/site-packages/sklearn/linear_model/base.py
in fit(self, X, y, sample_weight)
510 n_jobs_ = self.n_jobs
511 X, y = check_X_y(X, y, accept_sparse=['csr', 'csc', 'coo'],
--> 512 y_numeric=True, multi_output=True)
513
514 if sample_weight is not None and np.atleast_1d(sample_weight).ndim > 1:
/home/vagrant/anaconda3/lib/python3.6/site-packages/sklearn/utils/validation.py
in check_X_y(X, y, accept_sparse, dtype, order, copy,
force_all_finite, ensure_2d, allow_nd, multi_output,
ensure_min_samples, ensure_min_features, y_numeric, warn_on_dtype,
estimator)
519 X = check_array(X, accept_sparse, dtype, order, copy, force_all_finite,
520 ensure_2d, allow_nd, ensure_min_samples,
--> 521 ensure_min_features, warn_on_dtype, estimator)
522 if multi_output:
523 y = check_array(y, 'csr', force_all_finite=True, ensure_2d=False,
/home/vagrant/anaconda3/lib/python3.6/site-packages/sklearn/utils/validation.py
in check_array(array, accept_sparse, dtype, order, copy,
force_all_finite, ensure_2d, allow_nd, ensure_min_samples,
ensure_min_features, warn_on_dtype, estimator)
400 # make sure we actually converted to numeric:
401 if dtype_numeric and array.dtype.kind == "O":
--> 402 array = array.astype(np.float64)
403 if not allow_nd and array.ndim >= 3:
404 raise ValueError("Found array with dim %d. %s expected <= 2."
ValueError: setting an array element with a sequence.
From what I understand, I'm inserting an array into the categorical attributes, but how else can I convert the categorical values to a sparse matrix?
Thanks.
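Presumably the error comes from storing whole Python lists inside single DataFrame cells; check_array cannot cast an object column of lists to float. One conventional alternative is to expand each category into its own numeric column, sketched here on a toy frame (the frame and column names are made up, not the actual data):
import pandas as pd
from sklearn import linear_model

df = pd.DataFrame({
    "km": [78779, 25463, 49000],
    "make": ["Mercedes", "Bmw", "Mercedes"],
    "price": [20000, 35000, 27000],
})
# get_dummies expands 'make' into one 0/1 column per category, so every
# cell holds a scalar and LinearRegression.fit accepts the matrix
X = pd.get_dummies(df.drop("price", axis=1), columns=["make"])
lr = linear_model.LinearRegression().fit(X, df["price"])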

What is correct implementation of LDA (Linear Discriminant Analysis)?

I found that the result of LDA in OpenCV is different from other libraries. For example, the input data was
DATA (13 data samples with 4 dimensions)
7 26 6 60
1 29 15 52
11 56 8 20
11 31 8 47
7 52 6 33
11 55 9 22
3 71 17 6
1 31 22 44
2 54 18 22
21 47 4 26
1 40 23 34
11 66 9 12
10 68 8 12
LABEL
0 1 2 0 1 2 0 1 2 0 1 2 0
The OpenCV code is
Mat data = (Mat_<float>(13, 4) <<
    7, 26, 6, 60,
    1, 29, 15, 52,
    11, 56, 8, 20,
    11, 31, 8, 47,
    7, 52, 6, 33,
    11, 55, 9, 22,
    3, 71, 17, 6,
    1, 31, 22, 44,
    2, 54, 18, 22,
    21, 47, 4, 26,
    1, 40, 23, 34,
    11, 66, 9, 12,
    10, 68, 8, 12);
Mat mean;
reduce(data, mean, 0, CV_REDUCE_AVG);
mean.convertTo(mean, CV_64F);
Mat label(data.rows, 1, CV_32SC1);
for (int i=0; i<label.rows; i++)
label.at<int>(i) = i%3;
LDA lda(data, label);
Mat projection = lda.subspaceProject(lda.eigenvectors(), mean, data);
The matlab code is (used Matlab Toolbox for Dimensionality Reduction)
cd drtoolbox\techniques\
load hald
label=[0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2, 0]
[projection, trainedlda] = lda(ingredients, label)
The eigenvectors are
OpenCV (lda.eigenvectors())
0.4457 4.0132
0.4880 3.5703
0.5448 3.3466
0.5162 3.5794
Matlab Toolbox for Dimensionality Reduction (trainedlda.M)
0.5613 0.7159
0.6257 0.6203
0.6898 0.5884
0.6635 0.6262
Then the projections of data are
OpenCV
1.3261 7.1276
0.8892 -4.7569
-1.8092 -6.1947
-0.0720 1.1927
0.0768 3.3105
-0.7200 0.7405
-0.3788 -4.7388
1.5490 -2.8255
-0.3166 -8.8295
-0.8259 9.8953
1.3239 -3.1406
-0.5140 4.2194
-0.5285 4.0001
Matlab Toolbox for Dimensionality Reduction
1.8030 1.3171
1.2128 -0.8311
-2.3390 -1.0790
-0.0686 0.3192
0.1583 0.5392
-0.9479 0.1414
-0.5238 -0.9722
1.9852 -0.4809
-0.4173 -1.6266
-1.1358 1.9009
1.6719 -0.5711
-0.6996 0.7034
-0.6993 0.6397
The eigenvectors and projections are different even though these LDAs use the same data. I believe there are two possibilities:
1. One of the libraries is wrong.
2. I am doing it wrong.
Thank you!
The difference is because the eigenvectors are not normalized.
The normalized (L2 norm) eigenvectors are:
OpenCV
0.44569 0.55196
0.48798 0.49105
0.54478 0.46028
0.51618 0.49230
Matlab Toolbox for Dimensionality Reduction
0.44064 0.55977
0.49120 0.48502
0.54152 0.46008
0.52087 0.48963
They look similar now, although they have quite different eigenvalues. Even though PCA in OpenCV returns normalized eigenvectors, LDA does not. My next question is: 'Is normalizing eigenvectors in LDA not necessary?'
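For reference, the normalization is just a column-wise division by the L2 norm; a quick NumPy sketch using the OpenCV eigenvectors quoted above:
import numpy as np

# OpenCV eigenvectors from above, one eigenvector per column
W = np.array([[0.4457, 4.0132],
              [0.4880, 3.5703],
              [0.5448, 3.3466],
              [0.5162, 3.5794]])
W /= np.linalg.norm(W, axis=0)  # divide each column by its L2 norm
print(W)  # matches the normalized values listed above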
