Esper - Aggregation by partitioning

I am new to Esper. I am stuck at the implementation of this problem.
The input sensor event data is of the following format: (id, time, value, intensity)
I want to do aggregation of value for each id and intensity.
So, the output data should be of the format: id, start_time, end_time, aggregated value, intensity (high, low, medium)
where start_time is the time when the analysis started, end_time is the time when the analysis ended, and the analysis is done on a sliding window of 11 seconds.
For the input data:
1, t1, x1, i1
1, t2, x1, i1
1, t3, x2, i1
1, t4, x3, i1
1, t5, x5, i2
1, t6, x6, i1
2, t7, x7, i1
The output will look like:
1, t1, t1, v1, i1
1, t1, t2, v2, i1
1, t1, t3, v3, i1
1, t1, t4, v4, i1
1, t5, t5, v5, i2
1, t6, t6, v6, i1
2, t7, t7, v7, i1
and so on.
In the result set we can see that the data are grouped by id and intensity, but as soon as a different (id, intensity) combination appears, the analysis for the previous group stops.
How can I obtain the result in this format?
I tried to use the prev() function, but that does not work because I have no idea how many events there will be.
Please suggest what I should try to solve this.

The "prev" function can tell you when a previous group stops. You could have "prev" generate an indicator that a sequence has ended. For example, an "insert into Trigger select field from Event where prev(field) <> field" would generate a Trigger event when "prev" encounters a changing field. The Trigger event can then be used to perform the analysis and maybe remove the rows. A named window could be used to store the relevant rows until the analysis is done. A rough sketch is shown below.
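A rough EPL sketch of that idea, using the field names from the question (the event type name SensorEvent, the output stream name IntensityChanged, and the grouped length(2) data window are assumptions made for illustration; prev() needs a data window to look back into, and grouping it by id keeps the comparison within the same sensor):

// emit an indicator event whenever the intensity of a sensor changes
insert into IntensityChanged
select id, intensity
from SensorEvent.std:groupwin(id).win:length(2)
where prev(1, intensity) is not null and prev(1, intensity) != intensity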

Related

How to find the R, T and Q matrices given the K1, K2, D1, D2, P1, P2, R1 and R2 matrices?

I downloaded a stereo database that contains the camera calibration parameters. I want to calculate the disparity map using cv::StereoBM and reproject with cv::reprojectImageTo3D, but the database does not provide all of the stereo matrices, so I need the R, T, and Q matrices.
How can I recover those matrices? I have the K1, K2, D1, D2, P1, P2, R1, and R2 matrices.
PS: If I call cv::stereoRectify like:
cv::stereoRectify(K1, D1, K2, D2, imageSize, R, T, R1, R2, P1, P2, Q,
cv::CALIB_ZERO_DISPARITY, 0, imageSize,
&validRoi[0], &validRoi[1]);
I got the error:
OpenCV Error: Assertion failed (src.size == dst.size && src.channels() == dst.channels()) in cvConvertScale
error: (-215) src.size == dst.size && src.channels() == dst.channels() in function cvConvertScale
I'm not quite sure what all the matrices mean. However, you are not using stereoRectify in the correct way: from the documentation you can see that R1, R2, P1, P2 are output arrays, while R and T are input arrays.
If you wish to rectify your two images, I guess that you want to call initUndistortRectifyMap instead, which directly takes in R1, P1.
Assuming K1 is the camera matrix of the first camera and D1 its distortion coefficients, you can call it as:
Mat map11, map12;
// compute the undistortion/rectification maps for the first camera
initUndistortRectifyMap(K1, D1, R1, P1, img_size, CV_16SC2, map11, map12);
Mat img1r;
// apply the maps to obtain the rectified image
remap(img1, img1r, map11, map12, INTER_LINEAR);
with img1r being your rectified image. This code is adapted from the sample in OpenCV: opencv/samples/cpp/stereo_match.cpp
For the matrix Q, which is needed for the function reprojectImageTo3D, you might be interested by this link or to look in the definition of stereoRectify in OpenCV code directly (opencv/modules/calib3d/src/calibration.cpp).
I hope this helps!
EDIT:
Actually, according to this, Q is just composed of cx, cy, Tx and f. So I guess that you have them all: cx, cy and f are in K (the camera matrix), and you can get Tx from the right-hand P matrix, which contains Tx * f. I think fx has to be used for f, but you should probably check that.
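For reference, the disparity-to-depth matrix Q produced by stereoRectify has, as far as I know from the OpenCV documentation, the form below, where (cx, cy) is the principal point and f the focal length of the first camera, Tx the baseline, and c'x the principal point of the second camera (equal to cx when CALIB_ZERO_DISPARITY is set):

Q = \begin{pmatrix}
1 & 0 & 0 & -c_x \\
0 & 1 & 0 & -c_y \\
0 & 0 & 0 & f \\
0 & 0 & -1/T_x & (c_x - c'_x)/T_x
\end{pmatrix}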

Find similar images using Geometric Min Hash: How to calculate theoretical matching probabilities?

I'm trying to match images based on visual words (labeled key points within images). When comparing the simulated results to my theoretical results I get significant deviations, therefore I guess there must be a mistake in my theoretical probability calculation.
You can imagine two images as set of visual words (visual word names range from A to Z):
S1=SetImage1={A, B, C, D, E, F, G, H, I, J, L, M, N, O, Y, Z}
S2=SetImage2={A, L, M, O, T, U, V, W, X, Y, Z}
You can already see that some visual words occur in both sets (e.g. A, Z, Y,...). Now we separate the visual words into primary words and secondary words (see the provided image). Each primary word has a neighborhood of secondary words. You can see the primary words (red rectangles) and their secondary words (words within ellipse). For our example the primary word sets are as follows:
SP1=SetPrimaryWordsImage1={A, J, L}
SP2=SetPrimaryWordsImage2={A, L}
We now randomly select a visual word img1VAL1 from the set SP1 and one word from the neighborhood of img1VAL1, i.e. img1VAL2=SelFromNeighborhood(img1VAL1) resulting into a pair PairImage1={img1VAL1, img1VAL2}. We do the same with the second image and get PairImage2={img2VAL1, img2VAL2}.
Example:
from Image1 we select A as primary visual word and C as secondary word since C is within the neighborhood of A. We get the pair {A, C}
from Image2 we select also A as primary visual word and Z as secondary word. We get the pair {A, Z}
{A,C} != {A,Z} and therefore we have no match. But what is the probability that randomly selected pairs are equal?
The probability can be worked out on a small example:
A = {1, 2, 3, 4}, B = {1, 2, 3}
intersection C = A ∩ B = {1, 2, 3}
Number of matching pairs that can be drawn from the intersection = |C|-choose-2 (binomial coefficient)
Number of all possible pair combinations = |A|-choose-2 * |B|-choose-2
Therefore the probability that both randomly selected pairs are equal is
|A ∩ B|-choose-2 / (|A|-choose-2 * |B|-choose-2)
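Evaluated on the example above (assuming both pairs are drawn uniformly from all 2-element subsets of their respective sets), this is:

P(\text{match}) = \frac{\binom{|A \cap B|}{2}}{\binom{|A|}{2}\binom{|B|}{2}}
= \frac{\binom{3}{2}}{\binom{4}{2}\binom{3}{2}}
= \frac{3}{6 \cdot 3} = \frac{1}{6}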

Comparing feature descriptor test?

I would like to test different descriptors (like SIFT, SURF, ORB, LATCH etc.) in terms of precision-recall and computation time for my image dataset in order to understand which one is more suitable.
Is there any pre-built tester in OpenCV for this purpose? Any other alternative or guideline?
You can use the code in the following link to compute recall vs. precision curves:
http://www.robots.ox.ac.uk/~vgg/research/affine/desc_evaluation.html#code
In order to plot them, you need to detect keypoints and extract descriptors in each image in the dataset. Next, you write the descriptors for each image in the following format (a small sketch of a writer for this format is given after the field description):
descriptor_size
nbr_of_regions
x1 y1 a1 b1 c1 d1 d2 d3 ...
x2 y2 a2 b2 c2 d1 d2 d3 ...
....
....
x, y - center coordinates
a, b, c - ellipse parameters ax^2+2bxy+cy^2=1
d1 d2 d3 ... - descriptor values, binary values in case of ORB and LATCH
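A minimal Python sketch of writing such a file, assuming OpenCV's Python bindings and ORB (the circular-region approximation a = c = 1/r^2, b = 0 and the byte-wise dump of the ORB descriptor are simplifications; depending on the evaluation code you may need to unpack the bytes into individual bits):

import cv2

def write_vgg_descriptor_file(image_path, out_path):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create()
    keypoints, descriptors = orb.detectAndCompute(img, None)
    with open(out_path, "w") as f:
        f.write("%d\n" % descriptors.shape[1])   # descriptor_size
        f.write("%d\n" % len(keypoints))         # nbr_of_regions
        for kp, desc in zip(keypoints, descriptors):
            r = kp.size / 2.0                    # treat the keypoint as a circle of radius r
            a = c = 1.0 / (r * r)
            b = 0.0
            values = " ".join(str(int(v)) for v in desc)
            f.write("%f %f %f %f %f %s\n" % (kp.pt[0], kp.pt[1], a, b, c, values))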

Maxima - what does the function chebyshev_t(n, t) return?

I have probably a very simple question.
What does the Maxima function chebyshev_t(n, t) return? I mean the exact mathematical formula.
Best Regards
When n is a literal integer (e.g. 2, 3, 5, etc.) then chebyshev_t evaluates to a polynomial.
When n is a symbol declared to be an integer (via declare(n, integer)) then chebyshev_t evaluates to a summation, as shown below.
(%i1) display2d : false $
(%i2) chebyshev_t (5, u);
(%o2) -25*(1-u)-16*(1-u)^5+80*(1-u)^4-140*(1-u)^3+100*(1-u)^2+1
(%i3) declare (m, integer);
(%o3) done
(%i4) chebyshev_t (m, u);
(%o4) 'sum(pochhammer(-m,i1)*pochhammer(m,i1)*(1-u)^i1
/(pochhammer(1/2,i1)*2^i1*i1!),i1,0,m)
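Written out in conventional notation, the summation in %o4 is (using the Pochhammer symbol (x)_k; this is just a transcription of the Maxima output above):

T_m(u) = \sum_{k=0}^{m} \frac{(-m)_k \,(m)_k}{\left(\tfrac{1}{2}\right)_k \, 2^k \, k!} \,(1-u)^k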

Estimating dependent variable as sum of functions of independent variables

I have a training data of 5 columns, where c1 is the dependent variable and columns c2, c3, c4, c5 are independent variables.
I want to estimate c1 as sum of functions of ci (where i = 2, 3, 4, 5) in a way which minimizes residual error.
c1 ~ ( a2 F2(c2) + a3 F3(c3) + a4 F4(c4) + a5 F5(c5) )
I wrote a Python script to read columns c2, c3, c4, c5 for training. Now if I input a new row with c2, c3, c4, c5, then my script can generate c1 for this row.
But what am I actually supposed to do by this statement, "estimate c1 as a sum of functions of ci (where i = 2, 3, 4, 5) in a way which minimizes residual error"? I have no idea of ML, and would appreciate it if somebody could shed light on what is meant by estimating c1.
Thank you.
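A minimal sketch of what this usually means in practice: choose some candidate functions Fi for each column (the quadratic basis used here is an arbitrary choice for illustration, not something implied by the problem statement) and solve for the coefficients by linear least squares, which by construction minimizes the sum of squared residuals between the predicted and observed c1:

import numpy as np

def fit_additive_model(X, y):
    # X: (n_samples, 4) array holding columns c2..c5; y: (n_samples,) array holding c1.
    # Design matrix [F(c2), F(c3), F(c4), F(c5)] with F(x) = (x, x^2) as example basis functions.
    features = np.hstack([np.column_stack([X[:, i], X[:, i] ** 2]) for i in range(X.shape[1])])
    coeffs, _, _, _ = np.linalg.lstsq(features, y, rcond=None)
    return coeffs

def predict_c1(coeffs, X):
    features = np.hstack([np.column_stack([X[:, i], X[:, i] ** 2]) for i in range(X.shape[1])])
    return features @ coeffs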
