What is the err parameter in OpenCV's Lucas Kanade method? - opencv

In the function cv::calcOpticalFlowPyrLK(..) there's a parameter cv::OutputArray err. What does this parameter specify? Is it the distance at which the corresponding match was found for a feature?
This question arose because I checked the difference between err[i] and the Euclidean distance between prevPts[i] and nextPts[i], and it turns out to be somewhere in the range -1 to +1, occasionally outside it.

Optical flow basically works by matching a patch around each input point from the first image to the second image.
The parameter err allows you to retrieve the matching error (e.g. you may think of that as the correlation error) for each input point. As said in the documentation, the actual error measure depends on what flags were specified.
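As a rough sketch (the image file names below are placeholders), err is returned per tracked feature alongside the new point positions, and it measures patch-matching quality rather than the displacement between prevPts[i] and nextPts[i]:

import cv2
import numpy as np

prev_gray = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
next_gray = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100, qualityLevel=0.01, minDistance=7)

# err[i] is the matching error of the patch around feature i
next_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, prev_pts, None)

# Compare with the Euclidean displacement to see that they measure different things
disp = np.linalg.norm(next_pts - prev_pts, axis=2).ravel()
for e, d, ok in zip(err.ravel(), disp, status.ravel()):
    if ok:
        print("matching error %.3f  vs  displacement %.3f" % (e, d))

According to the documentation, with the default flags the error is the average absolute intensity difference over the patch, while with cv2.OPTFLOW_LK_GET_MIN_EIGENVALS it is the minimum eigenvalue of the spatial gradient matrix.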

Related

If data has a linear relationship, won't linear regression lead to zero in-sample error?

I am using the Learning from Data textbook by Yaser Abu-Mostafa et al. I am curious about the following statement in the linear regression chapter and would like to verify that my understanding is correct.
After discussing the "pseudo-inverse" way to get the "best weights" (best in the sense of minimizing squared error), i.e. w_lin = (X^T X)^-1 X^T y,
the statement is: "The linear regression weight vector is an attempt to map the inputs X to the outputs y. However, w_lin does not produce y exactly, but produces an estimate X w_lin which differs from y due to in-sample error."
If the data lies on a hyperplane, won't X w_lin exactly match y (i.e. in-sample error = 0)? That is, is the above statement only talking about data that does not lie exactly on a hyperplane?
Here, 'w_lin' is a single vector shared by all data points (all pairs of (x, y)); it is not chosen separately for each point.
The linear regression model finds the best possible weight vector (the best possible 'w_lin') over all data points, such that X*w_lin gives a result very close to 'y' for every data point.
Hence the in-sample error will not be zero unless all data points lie exactly on a straight line (more generally, on a hyperplane).
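As a quick check of this point (a rough numpy sketch with made-up data), the pseudo-inverse solution has zero in-sample error exactly when the targets are generated by a single linear map, and a strictly positive error otherwise:

import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.uniform(-1, 1, size=(50, 2))])   # design matrix with bias column
true_w = np.array([0.5, 2.0, -1.0])

# Case 1: targets lie exactly on the hyperplane defined by true_w
y_exact = X @ true_w
w_lin = np.linalg.pinv(X) @ y_exact            # w_lin = (X^T X)^-1 X^T y
print(np.mean((X @ w_lin - y_exact) ** 2))     # ~0: in-sample error vanishes

# Case 2: targets carry noise, so no single weight vector fits every point
y_noisy = y_exact + 0.1 * rng.standard_normal(50)
w_lin = np.linalg.pinv(X) @ y_noisy
print(np.mean((X @ w_lin - y_noisy) ** 2))     # > 0: nonzero in-sample error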
The community might not get the whole context unless the book is opened, since not everything the author of the book says may have been covered in your post. But let me try to answer.
Whenever a model is formed, certain constants are used whose values are not known beforehand but are fitted so that the line/curve matches the data as well as possible. Also, the equations often contain an element of randomness; variables that take random values cause some error when the actual and predicted outputs are compared.
Suggested reading: Errors and residuals

Epipolar Geometry, Not Visually sane output in OpenCV

I've tried using the code given at https://docs.opencv.org/3.2.0/da/de9/tutorial_py_epipolar_geometry.html to find the epipolar lines, but instead of getting the output shown in the link, I get the following output.
But when I change the line F, mask = cv2.findFundamentalMat(pts1,pts2,cv2.FM_LMEDS) to
F, mask = cv2.findFundamentalMat(pts1,pts2,cv2.FM_RANSAC), i.e. use the RANSAC algorithm instead of LMEDS to find the fundamental matrix, this is the output.
When the same line is replaced with F, mask = cv2.findFundamentalMat(pts1,pts2,cv2.FM_8POINT), i.e. the eight-point algorithm, this is the output.
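For reference, the three runs above differ only in the method flag passed to cv2.findFundamentalMat. A self-contained sketch of looping over the flags (using synthetic correspondences, since the tutorial images are not included here):

import cv2
import numpy as np

# Synthetic two-view correspondences (arbitrary camera and points, for illustration only)
rng = np.random.default_rng(1)
pts3d = rng.uniform(-1, 1, (60, 3)) + np.array([0, 0, 5])
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=float)
R = cv2.Rodrigues(np.array([0.0, 0.1, 0.0]))[0]
t = np.array([[0.5], [0.0], [0.0]])

p1 = (K @ pts3d.T).T
pts1 = (p1[:, :2] / p1[:, 2:]).astype(np.float32)
p2 = (K @ (R @ pts3d.T + t)).T
pts2 = (p2[:, :2] / p2[:, 2:]).astype(np.float32)

for name, method in [("LMEDS", cv2.FM_LMEDS), ("RANSAC", cv2.FM_RANSAC), ("8POINT", cv2.FM_8POINT)]:
    F, mask = cv2.findFundamentalMat(pts1, pts2, method)
    # Sanity check: x2^T F x1 should be close to 0 for true correspondences
    x1 = np.hstack([pts1, np.ones((len(pts1), 1), np.float32)])
    x2 = np.hstack([pts2, np.ones((len(pts2), 1), np.float32)])
    residual = np.median(np.abs(np.sum(x2 * (x1 @ F.T), axis=1)))
    print(name, "median |x2^T F x1| =", residual)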
None of the above outputs is visually sane, nor anywhere close to the output given in the OpenCV documentation for finding epipolar lines. But ironically, if the same code is executed while changing the algorithm used to find the fundamental matrix in this particular sequence
FM_LMEDS
FM_8POINT
FM_7POINT
FM_LMEDS
the most accurate results are generated. This is the output.
I thought we were supposed to get the above output in a single run of any of the algorithms (with variations in the matrix values and error). Am I running the code incorrectly? What do I have to do to get the correct (i.e. visually sane) epipolar lines? I am using OpenCV version 3.3.0 and Python 2.7.
Looking forward to a reply.
Thank you.

Scilab Error: Mean, Variance not executing

I1 is an RGB image. The variable 'out' basically stores one colour channel of the whole image.
The built-in functions mean, variance and standard deviation, when computed on 'out', give an error asking for a real vector or matrix as input.
This can be seen in the image given below.
But when min or max is used, no error is reported, even though these built-in functions take the same kind of parameter described in the Scilab documentation, namely a vector or matrix of integers.
On further examination, it seems that the variable 'out' is of type matrix of graphic handles when it should be a matrix of integers.
I can't understand why the error occurs for mean and variance when the same input works for the min and max functions.
How can I solve this problem?
The output of imread() is a hypermatrix of integers, not of floating point numbers.
This is shown by the fact that min(out) is displayed as "4" (without decimal point), not as "4."
Now, mean() and stdev() do not work with integers, only with real or complex numbers.
The solution is to convert integers into decimal numbers:
mean(double(out))
https://help.scilab.org/docs/6.1.1/en_US/double.html

how to understand "interchange of filtering with compressor/expander"

In Sec. 4.7 of the classic textbook "Discrete-Time Signal Processing (3rd)", the efficient implementation of multi-rate processing is discussed in detail. The first method deals with the "interchange of filtering with compressor/expander", and the following figure shows the interchange in downsampling.
Since downsampling can cause aliasing, pre-filtering is necessary. In the figure, we see H(z) in (a) and H(z^M) in (b); however, if aliasing has occurred after downsampling in (a), can H(z) eliminate the aliasing? Thank you!
Yes, as long as the original filter was of the form H(z^M), meaning that only every Mth coefficient of the filter is non-zero.
The reason this is possible comes down to the fact that only every Mth input sample actually contributes to the output sequence in this configuration. It is a special case, since input samples at indices that are not multiples of M are always cancelled out either by the filter's zero coefficients or by the decimator. It is unnecessary to even consider input samples at indices other than multiples of M.
This means you can decimate the input first and then apply the filter with its zero coefficients removed.
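A quick numpy check of this identity (arbitrary random filter and signal, just for illustration): filtering with H(z^M) and then downsampling by M gives the same samples as downsampling first and then filtering with H(z).

import numpy as np

rng = np.random.default_rng(0)
M = 3                                    # decimation factor
h = rng.standard_normal(8)               # impulse response of H(z)
x = rng.standard_normal(200)             # arbitrary input signal

# H(z^M): insert M-1 zeros between the taps of h
h_up = np.zeros(len(h) * M - (M - 1))
h_up[::M] = h

y_b = np.convolve(x, h_up)[::M]          # filter with H(z^M), then keep every Mth sample
y_a = np.convolve(x[::M], h)             # keep every Mth sample, then filter with H(z)

print(np.allclose(y_a, y_b))             # True: the two structures produce the same output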

Artificial Neural Network for formula classification/calculation

I am trying to create an ANN for calculating/classifying an arbitrary formula.
I initially tried to replicate the Fibonacci sequence, using the inputs:
[1,2] output [3]
[2,3] output [5]
[3,5] output [8]
etc...
The issue I am trying to overcome is how to normalize data that could potentially be infinite or scale exponentially. I then tried to create an ANN to calculate the slope-intercept formula y = mx + b (here 2x + 2), with inputs
[1] output [4]
[2] output [6]
etc...
Again I do not know how to normalize the data. If I normalize only the training data, how would the network be able to calculate or classify inputs outside of what was used for normalization?
So would it be possible to create an ANN to calculate/classify the formula ((a + 2b + c^2 + 3d - 5e) modulo 2), where the formula is unknown, but (some of) the inputs a, b, c, d, and e are given as well as the output? Essentially classifying whether the calculation's output is odd or even, where the inputs range over ±infinity...
Okay, I think I understand what you're trying to do now. Basically, you are going to have a set of inputs representing the coefficients of a function. You want the ANN to tell you whether the function, with those coefficients, will produce an even or an odd output. Let me know if that's wrong. There are a few potential issues here:
First, while it is possible to use a neural network to do addition, it is not generally very efficient. You also need to set your ANN up in a very specific way, either by using a different node type than is usually used, or by setting up complicated recurrent topologies. This would explain your lack of success with the Fibonacci sequence and the line equation.
But there's a more fundamental problem. You might have heard that ANNs are general function approximators. However, in this case, the function that the ANN is learning won't be your formula. When you have an ANN that is learning to output either 0 or 1 in response to a set of inputs, it's actually trying to learn a function for a line (or set of lines, or hyperplane, depending on the topology) that separates all of the inputs for which the output should be 0 from all of the inputs for which the output should be 1. (see the answers to this question for a more thorough explanation, with pictures). So the question, then, is whether or not there is a hyperplane that separates coefficients that will result in an even output from coefficients that will result in an odd output.
I'm inclined to say that the answer to that question is no. If you consider the a coefficient in your example, for instance, you will see that every time you increment or decrement it by 1, the correct output switches. The same is true for the c, d, and e terms. This means that there aren't big clumps of relatively similar inputs that all return the same output.
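A tiny Python check of that claim, using the formula from the question on an arbitrary grid of inputs: every unit step in a flips the class label, so nearby inputs never share a label.

import itertools

def parity(a, b, c, d, e):
    # the formula from the question
    return (a + 2 * b + c ** 2 + 3 * d - 5 * e) % 2

flips = 0
total = 0
for a, b, c, d, e in itertools.product(range(-2, 3), repeat=5):
    total += 1
    # incrementing a by 1 changes the sum by 1, so the parity always flips
    flips += parity(a + 1, b, c, d, e) != parity(a, b, c, d, e)

print(flips, "of", total, "unit steps in a change the class")   # all 3125 of them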
Why do you need to know whether the output of an unknown function is even or odd? There might be other, more appropriate techniques.
