Lens distortion model vs correction model - opencv

The lens model in OpenCV is a kind of distortion model, which maps an ideal (undistorted) position to the corresponding real (distorted) position:
x_distorted = x_corrected ( 1 + k_1 * r^2 + k_2 * r^4 + ...),
y_distorted = y_corrected ( 1 + k_1 * r^2 + k_2 * r^4 + ...),
where r^2 = x_corrected^2 + y_corrected^2 in normalized image coordinates (the tangential distortion is omitted for simplicity). This model is also found in Z. Zhang: "A Flexible New Technique for Camera Calibration," TPAMI 2000, and in Bouguet's "Camera Calibration Toolbox for Matlab".
On the other hand, Bradski and Kaehler: "Learning OpenCV" introduces the lens model on p. 376 as a correction model, which maps a distorted position back to the ideal position:
x_corrected = x_distorted ( 1 + k'_1 * r'^2 + k'_2 * r'^4 + ...),
y_corrected = y_distorted ( 1 + k'_1 * r'^2 + k'_2 * r'^4 + ...),
where r'^2 = x_distorted^2 + y_distorted^2 in normalized image coordinates.
Hartley and Zisserman: "Multiple View Geometry in Computer Vision" also describes this model.
I understand that both the correction and the distortion models have advantages and disadvantages in practice. For example, the former makes correcting detected feature point locations easy, while the latter makes undistorting the entire image straightforward.
My question is: why do they share the same polynomial expression, when they are supposed to be the inverse of each other? I could find this document evaluating the invertibility, but its theoretical background is not clear to me.
Thank you for your help.

I think the short answer is: they are just different models, so they're not supposed to be each other's inverse. Like you already wrote, each has its own advantages and disadvantages.
As to invertibility, this depends on the order of the polynomial. A 2nd-order (quadratic) polynomial is easily inverted. A 4th-order polynomial requires some more work, but can still be inverted analytically. But as soon as you add a 6th-order term, you will probably have to resort to numerical methods to find the inverse, because a polynomial of degree five or higher is not analytically invertible in the general case (Abel-Ruffini theorem).
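To illustrate the numerical route, here is a minimal Python sketch (the function name and the two-coefficient radial-only model are my own assumptions, not OpenCV's actual implementation) that inverts the distortion model by fixed-point iteration:

def undistort_point(xd, yd, k1, k2, iters=20):
    # Invert x_d = x_u * (1 + k1*r^2 + k2*r^4), with r^2 = x_u^2 + y_u^2,
    # by fixed-point iteration, starting from the distorted coordinates.
    xu, yu = xd, yd
    for _ in range(iters):
        r2 = xu*xu + yu*yu
        factor = 1.0 + k1*r2 + k2*r2*r2
        xu, yu = xd / factor, yd / factor
    return xu, yu

print(undistort_point(0.3, 0.2, 0.1, 0.01))
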

According to Taylor's theorem, any sufficiently smooth function can be written as c0 + c1*x + c2*x^2 + c3*x^3 + c4*x^4 + ...
The goal is just to discover the constants.
In our particular case the radial factor must be the same for r and -r (an even function), so the coefficients of the odd powers r, r^3, r^5, r^7 are zero. That is why the distortion model and the correction model can share the same even-polynomial form while having different coefficient values.
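To make this concrete, here is a small numerical sketch (Python with NumPy; the coefficient values are made up for illustration) that fits the inverse map with the same even-polynomial form and shows the fitted coefficients differ from the forward ones:

import numpy as np

k1, k2 = 0.1, 0.01                              # forward (distortion) coefficients, chosen arbitrarily
r_u = np.linspace(0.0, 1.0, 200)                # undistorted radii
r_d = r_u * (1 + k1*r_u**2 + k2*r_u**4)         # forward model
# Fit r_u ~= r_d * (1 + k1p*r_d^2 + k2p*r_d^4) by least squares.
A = np.column_stack([r_d**3, r_d**5])
k1p, k2p = np.linalg.lstsq(A, r_u - r_d, rcond=None)[0]
print(k1p, k2p)   # same polynomial form, but different coefficient values (k1p is close to -k1)
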


Can you give me a short, step-by-step numerical example of the radial basis function (RBF) kernel trick? I would like to understand how to apply it to a perceptron.

I understand the perceptron well, so please put the accent only on the kernel; I am not familiar with mathematical expressions, so please give me a numerical example and a guide on the kernel.
For example:
My perceptron hyperplane is x1*w1 + x2*w2 + x3*w3 + b = 0. The RBF kernel formula is k(x,z) = exp(-|x-z|^2 / (2*variance^2)), and this is where the radial basis function kernel comes into play. Is x an input, and what is the z variable here?
Or from what do I have to calculate the variance, if it is a variance in the formula?
I have understood from somewhere that I have to plug this formula into the perceptron decision function x1*w1+x2*w2+x3*w3+b=0, but what does it look like if I plug it in?
I would like to ask for a numerical example to avoid confusion.
Linear Perceptron
As you know, linear perceptrons can be trained for binary classification. More precisely, if there are n features, x1, x2, ..., xn, in the n-dimensional space R^n, and you want to label them in 2 categories, y1 & y2 (usually -1 and +1), you can use a linear perceptron, which defines a hyperplane w1*x1 + ... + wn*xn + b = 0 to do so.
w1*x1 + ... + wn*xn + b > 0 or W.X + b > 0 ==> class = y1
w1*x1 + ... + wn*xn + b < 0 or W.X + b < 0 ==> class = y2
A linear perceptron will work well only if the problem is linearly separable in R^n. For example, in 2D space, this means that one line can separate the 2 sets of points.
Algorithm
One common algorithm to train the perceptron, i.e., to find the weights and bias, w's & b, based on N data points, X1, ..., XN, and their labels, Y1, ..., YN, is the following:
Initialize: W = zeros(n,1); b = 0
For i = 1 to N:
    Calculate F(Xi) = W.Xi + b
    If F(Xi)*Yi <= 0:
        W <--- W + Xi*Yi
        b <--- b + Yi
This will give the final values for W & b. Besides, based on the training, W will be a linear combination of the training points, Xi's, more precisely of the ones that were misclassified. So W = a1*X1 + ... + aN*XN, where the a's are in {0, y1, y2}.
Now, if there is a new point, let's say Z, to label, we check the sign of F(Z) = W.Z + b = a1*(X1.Z) + ... + aN*(XN.Z) + b. It is interesting that only the inner products of the new point with the training points take part in it.
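For reference, here is a minimal Python sketch of the algorithm above (the function name and the extra epochs loop are my own additions; the pseudocode describes a single pass):

import numpy as np

def train_perceptron(X, Y, epochs=10):
    # X: (N, n) data matrix, Y: labels in {-1, +1}
    N, n = X.shape
    W = np.zeros(n)
    b = 0.0
    for _ in range(epochs):
        for i in range(N):
            if (np.dot(W, X[i]) + b) * Y[i] <= 0:   # misclassified (or on the boundary)
                W += Y[i] * X[i]
                b += Y[i]
    return W, b
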
Kernel Perceptron
Now, if the problem is not linearly separable, one may try to go to a higher-dimensional space in which a hyperplane can do the classification. As an example, consider a circle in 2D space. The points inside and outside of the circle can't be separated by a line. However, if you find a transformation that takes the points to 3D space such that the first 2 coordinates remain the same for all points and the 3rd coordinate becomes +1 for the points inside the circle and -1 for the points outside, then a plane defined as 3rd coordinate = 0 can separate the points.
Finding such transformations can be difficult and computationally heavy, so the kernel trick is introduced. Notice that we only used the inner product of new points with the training points. Kernel trick employs this fact and defines the inner product of the transformed points without actually finding the transformation.
If the unknown transformation is P(X), then the kernel function will be:
K(Xi,Xj) = <P(Xi),P(Xj)>. So instead of finding P, kernel functions are defined which directly give the scalar result of the inner product in the high-dimensional space. There are also theorems about which functions can be kernel functions, i.e., correspond to an inner product in another space.
After choosing a kernel function, the algorithm will be modified as follows:
Initialize: F(X) = 0
For i = 1 to N:
    Calculate F(Xi)
    If F(Xi)*Yi <= 0:
        F(.) <--- F(.) + K(.,Xi)*Yi + Yi
At the end, F(.) = a1*K(.,X1) + ... + aN*K(.,XN) + b, where the a's are in {0, y1, y2}.
RBF Kernel
The radial basis function is one type of kernel function that actually computes the inner product in an infinite-dimensional space. It can be written as
K(Xi,Xj) = exp(- norm2(Xi-Xj)^2 / (2*sigma^2))
Sigma is a parameter that you can tune to find an optimal value. For example, you can train the model with different values of sigma and then pick the best one based on the performance. You can start with sigma = 1.
After training the model to find F(.), for a new data point Z, the sign of F(Z) = a1*K(Z,X1) + ... + aN*K(Z,XN) + b will determine the class.
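Putting the pieces together, here is a small Python sketch of the kernel perceptron with the RBF kernel above (the function names are my own; a single pass matches the algorithm as written, more passes are common in practice):

import numpy as np

def rbf(a, b, sigma=1.0):
    return np.exp(-np.linalg.norm(a - b)**2 / (2 * sigma**2))

def train_kernel_perceptron(X, Y, passes=1, sigma=1.0):
    N = len(X)
    alpha = np.zeros(N)     # the a_i coefficients of F(.) = sum_i a_i*K(., X_i) + b
    b = 0.0
    for _ in range(passes):
        for i in range(N):
            F_xi = sum(alpha[j] * rbf(X[j], X[i], sigma) for j in range(N)) + b
            if F_xi * Y[i] <= 0:
                alpha[i] += Y[i]
                b += Y[i]
    return alpha, b

def predict(Z, X, alpha, b, sigma=1.0):
    F_z = sum(alpha[j] * rbf(X[j], Z, sigma) for j in range(len(X))) + b
    return 1 if F_z > 0 else -1
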
Remarks:
Regarding your question about the variance: you don't need to find any variance.
About x and z in your question: in each iteration, you should compute the kernel output for the current data point and all the previously added points (the points that were misclassified and hence added to F).
I couldn't come up with a simple instructive numerical example.
References:
I borrowed some notation from
http://alex.smola.org/teaching/pune2007/pune_3.pdf

OpenCV: Essential Matrix Decomposition

I am trying to extract the rotation matrix and translation vector from the essential matrix.
SVD svd(E, SVD::MODIFY_A);
Mat svd_u = svd.u;
Mat svd_vt = svd.vt;
Mat svd_w = svd.w;
Matx33d W(0, -1, 0,
          1,  0, 0,
          0,  0, 1);
Mat_<double> R = svd_u * Mat(W).t() * svd_vt; // or svd_u * Mat(W) * svd_vt;
Mat_<double> t = svd_u.col(2);                // or -svd_u.col(2)
However, when I use R and T (e.g., to obtain rectified images), the result does not seem to be right (black images or some obviously wrong outputs), even though I tried the different combinations of possible R and T.
I suspect E. According to the textbooks, my calculation is right if we have:
E = U*diag(1, 1, 0)*Vt
In my case svd.w, which is supposed to be diag(1, 1, 0) (at least up to scale), is not. Here is an example of my output:
svd.w = [21.47903827647813; 20.28555196246256; 5.167099204708699e-010]
Also, two of the eigenvalues of E should be equal and the third one should be zero. In the same case the result is:
eigenvalues of E = 0.0000 + 0.0000i, 0.3143 +20.8610i, 0.3143 -20.8610i
As you see, two of them are complex conjugates.
Now, the questions are:
Is the decomposition of E and the calculation of R and T done the right way?
If the calculation is right, why are the internal constraints of the essential matrix not satisfied by the results?
If everything about E, R, and T is fine, why are the rectified images obtained from them not correct?
I get E from the fundamental matrix, which I suppose to be right. I draw epipolar lines on both the left and right images, and they all pass through the related points (for all 16 points used to calculate the fundamental matrix).
Any help would be appreciated.
Thanks!
I see two issues.
First, discounting the negligible value of the third diagonal term, your E is about 6% off the ideal one: err_percent = (21.48 - 20.29) / 20.29 * 100. That sounds small, but translated into pixel error it may be an altogether larger amount.
So I'd start by replacing E with the ideal one after SVD decomposition: Er = U * diag(1,1,0) * Vt.
Second, the textbook decomposition admits 4 solutions, only one of which is physically plausible (i.e., with the 3D points in front of the cameras). You may be hitting one of the non-physical ones. See http://en.wikipedia.org/wiki/Essential_matrix#Determining_R_and_t_from_E .
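For illustration, here is a minimal Python/NumPy sketch (the function name is mine) that enforces proper rotations and enumerates the four candidate decompositions of E; in recent OpenCV versions, cv2.recoverPose does the equivalent work, including the cheirality check:

import numpy as np

def decompose_essential(E):
    U, _, Vt = np.linalg.svd(E)
    # Keep proper rotations (det = +1); flipping a sign only rescales E.
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    R1 = U @ W @ Vt
    R2 = U @ W.T @ Vt
    t = U[:, 2]
    # Four candidates; only one puts the triangulated points in front of both cameras.
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
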

OpenCV DFT_INVERSE different from Matlab's ifft

I am trying to filter a signal using OpenCV's dft function. The way I try to do this is by taking the signal in the time domain:
x = [0.0201920000000000 -0.0514940000000000 0.0222140000000000 0.0142460000000000 -0.00313500000000000 0.00270600000000000 0.0111770000000000 0.0233470000000000 -0.00162700000000000 -0.0306280000000000 0.0239410000000000 -0.0225840000000000 0.0281410000000000 0.0265510000000000 -0.0272180000000000 0.0223850000000000 -0.0366850000000000 0.000515000000000000 0.0213440000000000 -0.0107180000000000 -0.0222150000000000 -0.0888300000000000 -0.178814000000000 -0.0279280000000000 -0.144982000000000 -0.199606000000000 -0.225617000000000 -0.188347000000000 0.00196200000000000 0.0830530000000000 0.0716730000000000 0.0723950000000000]
Convert it to the Fourier domain using:
cv::dft(x, x_fft, cv::DFT_COMPLEX_OUTPUT, 0);
Eliminate the unwanted frequencies:
for (int k = 0; k < 32; k++) {
    if (k == 0 || k > 6)
    {
        x_fft.ptr<float>(0)[2*k+0] = 0;
        x_fft.ptr<float>(0)[2*k+1] = 0;
    }
}
Convert it back to time domain:
cv::dft(x_fft, x_filt, cv::DFT_INVERSE, 0);
In order to check my results, I've compared them to Matlab. I took the same signal x and converted it to the Fourier domain using x_mfft = fft(x);. The results are similar to the ones I get from OpenCV, except that in OpenCV I only get the left side, while in Matlab I get the symmetric values too.
After this I set the values of x_mfft(0) and x_mfft(8:32) to 0 in Matlab, and now the signals look exactly the same, except that in Matlab they are in complex form, while in OpenCV they are separated: the real part in one channel, the imaginary part in the other.
The problem is that when I perform the inverse transform in Matlab using x_mfilt = ifft(x_mfft), the results are completely different from what I get using OpenCV.
Matlab:
0.0126024108604191 + 0.0100628178150509i 0.00278762121814893 - 0.00615997579216921i 0.0116716145588075 - 0.0150834711251450i 0.0204808089882897 - 0.00937680194210788i 0.0187164132302469 - 0.000843687942567208i 0.0132322795522116 - 0.000108642129381095i 0.0140282455278201 - 0.00325620843335947i 0.0190436542174946 - 0.000556561558544529i 0.0182379867325824 + 0.00764390022568001i 0.00964801276734883 + 0.0107158342431018i 0.00405220362962359 + 0.00339496875258604i 0.0108096973356501 - 0.00476499376334313i 0.0236507440224628 - 0.000415067678294738i 0.0266197220512826 + 0.0154626911663024i 0.0142805873081583 + 0.0267004219364679i 0.000314527358302778 + 0.0215255889620223i 0.00173512964620177 + 0.00865151513638104i 0.0169666351363477 + 0.00836162056544561i 0.0255915540012784 + 0.0277878383595920i 0.0118710562486680 + 0.0506446948330055i -0.0160165379892836 + 0.0553846122152651i -0.0354343989166415 + 0.0406080858067314i -0.0370261047451452 + 0.0261077990289579i -0.0365120038155127 + 0.0268311542287801i -0.0541841640123775 + 0.0312446266697320i -0.0854132555297956 + 0.0125342802025550i -0.0989182320365535 - 0.0377079727602073i -0.0686133217915410 - 0.0925138855355046i -0.00474198249025186 - 0.111728716441247i 0.0515933837210975 - 0.0814138940625859i 0.0663201317560107 - 0.0279433757588921i 0.0426055814586485 + 0.00821080477569232i
OpenCV after cv::dft(x_fft, x_filt, cv::DFT_INVERSE, 0);
Channel 1:
0.322008 -0.197121 -0.482671 -0.300055 -0.026996 -0.003475 -0.104199 -0.017810 0.244606 0.342909 0.108642 -0.152477 -0.013281 0.494806 0.854412 0.688818 0.276848 0.267571 0.889207 1.620622 1.772298 1.299452 0.835450 0.858602 0.999833 0.401098 -1.206658 -2.960446 -3.575316 -2.605239 -0.894184 0.262747
Channel 2:
0.403275 0.089205 0.373494 0.655387 0.598925 0.423432 0.448903 0.609397 0.583616 0.308737 0.129670 0.345907 0.756820 0.851827 0.456976 0.010063 0.055522 0.542928 0.818924 0.379870 -0.512527 -1.133893 -1.184826 -1.168379 -1.733893 -2.733226 -3.165383 -2.195622 -0.151738 1.650990 2.122242 1.363375
What am I missing? Shouldn't the results be similar? How can I check if the inverse transform in opencv is done correctly?
Later EDIT:
After struggling with the problem for a few hours, I decided to plot the results from Matlab and OpenCV, and to my surprise they were very similar.
[Plots comparing the imaginary and real parts of the Matlab and OpenCV results]
So obviously it is something about a scale factor. After dividing the results element by element, this factor turns out to be 32, the length of the signal. Can someone explain why this happens?
The obvious solution is to use cv::dft(x_fft, x_filt, cv::DFT_INVERSE + cv::DFT_SCALE, 0);, so I guess this topic is answered, but I'm still interested in why it is this way.
There is no standard scale factor used by all FFT libraries. Some use none, some include a scale factor of 1/N, and some 1/sqrt(N). You have to test or look in the documentation for each particular library.
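A quick way to see the factor of N, sketched here in NumPy rather than OpenCV: the unnormalized inverse DFT sum returns N times the signal (analogous to DFT_INVERSE without DFT_SCALE), while np.fft.ifft already divides by N:

import numpy as np

N = 32
x = np.random.randn(N)
X = np.fft.fft(x)                 # unnormalized forward DFT
n = np.arange(N)
# Unnormalized inverse DFT sum, i.e. without the 1/N factor:
x_unscaled = np.array([np.sum(X * np.exp(2j * np.pi * k * n / N)) for k in range(N)])
print(np.allclose(x_unscaled.real / N, x))   # True: off by exactly a factor of N
print(np.allclose(np.fft.ifft(X).real, x))   # True: NumPy's ifft already divides by N
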

In the digital image restoration field, what are shift-invariant blurs?

I am working in the digital image restoration field. There, the image degradation model is defined as:
g(x,y) = h(x,y)*f(x,y) + n(x,y)
Many times I have read that the blur kernel is shift-invariant. Can anyone please explain what that means? I have already searched Google, but I did not get a satisfactory answer that I can give during my presentation.
Shift invariant means that if some arbitrary value x is added to (or subtracted from) every element of a sample (in this case the pixels covered by the kernel), then the result of the kernel is shifted by that same value x.
It's most easily understood if you consider a blur kernel as a simple average (mean) rather than a Gaussian or whatever.
So if you have pixels with values v1, v2, v3 ... vn
with an average, A = (v1 + v2 + v3 + ... vn) / n,
then if you add some value x to each pixel (i.e. v1 + x, v2 + x, v3 + x ... vn + x),
the new average will simply be A + x.
So the output of the convolution is shifted by the same amount as each of the inputs. Hence shift invariant.
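A tiny numerical check of this argument (Python/NumPy; the numbers are arbitrary and the kernel is a normalized mean filter, as in the explanation above):

import numpy as np

v = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
kernel = np.ones(3) / 3.0                         # simple normalized averaging kernel
x = 10.0
blurred = np.convolve(v, kernel, mode='valid')
blurred_shifted = np.convolve(v + x, kernel, mode='valid')
print(np.allclose(blurred_shifted, blurred + x))  # True: the output is shifted by the same x
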

Implementing a linear, binary SVM (support vector machine)

I want to implement a simple SVM classifier, in the case of high-dimensional binary data (text), for which I think a simple linear SVM is best. The reason for implementing it myself is basically that I want to learn how it works, so using a library is not what I want.
The problem is that most tutorials go up to an equation that can be solved as a "quadratic problem", but they never show an actual algorithm! So could you point me either to a very simple implementation I could study, or (better) to a tutorial that goes all the way to the implementation details?
Thanks a lot!
Some pseudocode for the Sequential Minimal Optimization (SMO) method can be found in this paper by John C. Platt: Fast Training of Support Vector Machines using Sequential Minimal Optimization. There is also a Java implementation of the SMO algorithm, developed for research and educational purposes (SVM-JAVA).
Other commonly used methods to solve the QP optimization problem include:
constrained conjugate gradients
interior point methods
active set methods
But be aware that some math knowledge is needed to understand these things (Lagrange multipliers, Karush–Kuhn–Tucker conditions, etc.).
Are you interested in using kernels or not? Without kernels, the best way to solve these kinds of optimization problems is through various forms of stochastic gradient descent. A good version is described in http://ttic.uchicago.edu/~shai/papers/ShalevSiSr07.pdf and that has an explicit algorithm.
The explicit algorithm does not work with kernels but can be modified; however, it would be more complex, both in terms of code and runtime complexity.
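For concreteness, here is a minimal sketch of a Pegasos-style stochastic subgradient step for a linear SVM (Python; the hyper-parameter names and values are mine, and the optional projection step from the paper is omitted):

import numpy as np

def pegasos(X, Y, lam=0.01, T=10000):
    # X: (N, n) data, Y: labels in {-1, +1}; returns the weight vector w.
    N, n = X.shape
    w = np.zeros(n)
    for t in range(1, T + 1):
        i = np.random.randint(N)
        eta = 1.0 / (lam * t)                     # decreasing step size
        if Y[i] * np.dot(w, X[i]) < 1:            # hinge loss is active for this sample
            w = (1 - eta * lam) * w + eta * Y[i] * X[i]
        else:
            w = (1 - eta * lam) * w
    return w
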
Have a look at liblinear, and for non-linear SVMs at libsvm.
The paper "Pegasos: Primal Estimated sub-GrAdient SOlver for SVM" describes the Pegasos algorithm, also for kernels, at the top of page 11. It can be downloaded from http://ttic.uchicago.edu/~nati/Publications/PegasosMPB.pdf
It appears to be a hybrid of coordinate descent and subgradient descent. Also, line 6 of the algorithm is wrong: in the predicate, the second appearance of y_i_t should be replaced with y_j instead.
I would like to add a small supplement to the answer about Platt's original work.
There is a somewhat simplified version presented in the Stanford lecture notes, but the derivation of all the formulas has to be found elsewhere (e.g., these random notes I found on the Internet).
If it's OK to deviate from the original implementations, I can propose my own variation of the SMO algorithm, which follows.
import numpy as np

class SVM:
    def __init__(self, kernel='linear', C=10000.0, max_iter=100000, degree=3, gamma=1):
        self.kernel = {'poly'  : lambda x, y: np.dot(x, y.T)**degree,
                       'rbf'   : lambda x, y: np.exp(-gamma*np.sum((y - x[:, np.newaxis])**2, axis=-1)),
                       'linear': lambda x, y: np.dot(x, y.T)}[kernel]
        self.C = C
        self.max_iter = max_iter

    def restrict_to_square(self, t, v0, u):
        # Clip the step t so that both multipliers stay inside the box [0, C].
        t = (np.clip(v0 + t*u, 0, self.C) - v0)[1] / u[1]
        return (np.clip(v0 + t*u, 0, self.C) - v0)[0] / u[0]

    def fit(self, X, y):
        self.X = X.copy()
        self.y = y * 2 - 1                      # map labels {0, 1} -> {-1, +1}
        self.lambdas = np.zeros_like(self.y, dtype=float)
        # Gram matrix scaled by the label outer product.
        self.K = self.kernel(self.X, self.X) * self.y[:, np.newaxis] * self.y

        for _ in range(self.max_iter):
            for idxM in range(len(self.lambdas)):
                # Pick the second multiplier at random.
                idxL = np.random.randint(0, len(self.lambdas))
                Q = self.K[[[idxM, idxM], [idxL, idxL]], [[idxM, idxL], [idxM, idxL]]]
                v0 = self.lambdas[[idxM, idxL]]
                k0 = 1 - np.sum(self.lambdas * self.K[[idxM, idxL]], axis=1)
                u = np.array([-self.y[idxL], self.y[idxM]])
                # Unconstrained optimum along the direction u, then clipped to the box.
                t_max = np.dot(k0, u) / (np.dot(np.dot(Q, u), u) + 1E-15)
                self.lambdas[[idxM, idxL]] = v0 + u * self.restrict_to_square(t_max, v0, u)

        # Support vectors are the points with non-zero multipliers; recover the bias from them.
        idx, = np.nonzero(self.lambdas > 1E-15)
        self.b = np.sum((1.0 - np.sum(self.K[idx]*self.lambdas, axis=1)) * self.y[idx]) / len(idx)

    def decision_function(self, X):
        return np.sum(self.kernel(X, self.X) * self.y * self.lambdas, axis=1) + self.b
In simple cases it works not much worse than sklearn.svm.SVC; a comparison is shown below (I have posted the code that generates these images on GitHub).
I used quite a different approach to derive the formulas, so you may want to check my preprint on ResearchGate for details.
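For what it's worth, here is a small usage sketch of the class above on a made-up, non-linearly separable toy problem (labels in {0, 1}, as fit() expects; the dataset and hyper-parameters are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (np.linalg.norm(X, axis=1) < 1.0).astype(float)  # 1 inside the unit circle, 0 outside
clf = SVM(kernel='rbf', C=10.0, max_iter=200, gamma=1.0)
clf.fit(X, y)
pred = (clf.decision_function(X) > 0).astype(float)
print('training accuracy:', np.mean(pred == y))
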
