I'm looking for an elegant way to perform pixel-wise summation and subtraction in EmguCV.
I'm trying to calculate the Haar-like features of an image.
For a one-dimensional situation, it is done by multiplying the vector [x x x x x x x x] element-wise by each of the following vectors:
[ 1 -1 1 -1 1 -1 1 -1]
[ 1 1 -1 -1 1 1 -1 -1]
[ 1 1 1 1 -1 -1 -1 -1]
So I need to add or subtract the individual pixels of an image.
Say,
Bgr sum = new Bgr();
sum = sum + img[0,0] - img[0,1] + img[0,2] - img[0,3];
Obviously this won't compile, since there's no operator "+" in the Bgr class. I'd have to construct a new Bgr by specifying each of the B, G, R values, which is ugly.
Any idea on performing elegant pixel-wise operation?
You could probably use img.GetSum() if you first flip the sign of the pixels you want to be subtracted. You might be able to do that by multiplying the image element-wise with a matrix consisting of 1 and -1 at the appropriate places.
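For illustration, here is a minimal sketch of that idea using the OpenCV C++ API, which EmguCV wraps (the toy pixel values and the ±1 pattern are made up for the example):

#include <opencv2/core.hpp>
#include <iostream>

int main() {
    // Toy 1x4 BGR float image standing in for img[0,0..3].
    cv::Mat img = (cv::Mat_<cv::Vec3f>(1, 4) <<
        cv::Vec3f(1, 2, 3), cv::Vec3f(4, 5, 6),
        cv::Vec3f(7, 8, 9), cv::Vec3f(10, 11, 12));

    // Haar-like sign pattern [+1 -1 +1 -1], replicated across the 3 channels.
    cv::Mat signs = (cv::Mat_<cv::Vec3f>(1, 4) <<
        cv::Vec3f(1, 1, 1), cv::Vec3f(-1, -1, -1),
        cv::Vec3f(1, 1, 1), cv::Vec3f(-1, -1, -1));

    // Element-wise multiply, then sum over all pixels: one call per feature.
    cv::Scalar feature = cv::sum(img.mul(signs));
    std::cout << feature << std::endl; // per-channel (B, G, R) sums
    return 0;
}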
I modified equation 9.12 in http://www.deeplearningbook.org/contents/convnets.html to center the MxN convolution kernel.
That gives the following expression (take it on faith for now) for the gradient, assuming 1 input and 1 output channel (to simplify):
dK(krow, kcol) = sum(G(row, col) * V(row+krow-M/2, col+kcol-N/2); row, col)
To read the above: the element of dK at (krow, kcol) is the sum, over all rows and cols, of the product of G with a shifted V. Note that G and V have the same dimensions, and any read outside V is defined to be zero.
For example, in one dimension, if G is [a b c d], V is [w x y z], and M is 3, then the first sum is dot (G, [0 w x y]), the second sum is dot (G, [w x y z]), and the third sum is dot (G, [x y z 0]).
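To pin down the indexing, here is a naive reference implementation of that formula in plain C++ (a sketch; names follow the text, and reads outside V are treated as zero):

#include <vector>

// dK(krow,kcol) = sum_{row,col} G(row,col) * V(row+krow-M/2, col+kcol-N/2).
// G and V are rows x cols; the returned dK is M x N.
std::vector<std::vector<float>> gradK(const std::vector<std::vector<float>>& G,
                                      const std::vector<std::vector<float>>& V,
                                      int M, int N) {
    const int rows = (int)G.size(), cols = (int)G[0].size();
    std::vector<std::vector<float>> dK(M, std::vector<float>(N, 0.0f));
    for (int krow = 0; krow < M; ++krow)
        for (int kcol = 0; kcol < N; ++kcol)
            for (int row = 0; row < rows; ++row)
                for (int col = 0; col < cols; ++col) {
                    int r = row + krow - M / 2, c = col + kcol - N / 2;
                    if (r >= 0 && r < rows && c >= 0 && c < cols)
                        dK[krow][kcol] += G[row][col] * V[r][c];
                }
    return dK;
}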
ArrayFire has a shift operation, but it does a circular shift rather than a shift with zero insertion. Also, the kernel sizes MxN are typically small, e.g. 7x7, so it seems a more efficient implementation would read G and V in only once and accumulate over the kernel.
For that 1D example, we would read in a and w,x and start with [a*0 aw ax]. Then we read in b,y and add [bw bx by]. Then read in c,z and add [cx cy cz]. Then read in d and finally add [dy dz d*0].
Is there a direct way to compute dK in ArrayFire? I can't help but think this is some kind of convolution, but I've been unable to wrap my head around what the convolution would look like.
Ah so. For a 3x3 dK array, I could use unwrap to convert my MxN input arrays into two column vectors of length M*N, then do 9 dot products of shifted subsets of the two column vectors. No, that doesn't work, since the shift is in 2 dimensions.
So instead I need to create intermediate arrays of size 1 x (M*N) and (M*N) x 9, where each column of the latter is a shifted MxN window of the original with a zero border of size 1, and then do a matrix multiply.
Hmm, that requires too much memory (sometimes). So the final solution is to do a gfor over the 3x3 output and, in each iteration, take a dot product of the unwrapped-once G and the unwrapped-repeatedly V.
Agreed?
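Something like the matrix-multiply variant could be sketched in ArrayFire C++ as below (untested; it assumes a 3x3 dK, relies on unwrap's zero padding to supply the border, and the column ordering of the 9 windows should be double-checked against the intended (krow, kcol) layout):

#include <arrayfire.h>

// Sketch: G and V are rows x cols; the result is a 3x3 dK.
// unwrap with window = image size, stride 1, padding 1 yields one column per
// zero-padded 2D shift of V: a (rows*cols) x 9 matrix.
af::array gradK3x3(const af::array& G, const af::array& V) {
    dim_t rows = V.dims(0), cols = V.dims(1);
    af::array Vshifts = af::unwrap(V, rows, cols, 1, 1, 1, 1); // (rows*cols) x 9
    af::array g = af::moddims(G, rows * cols, 1);              // unwrapped-once G
    return af::moddims(af::matmul(g.T(), Vshifts), 3, 3);      // 1x9 -> 3x3
}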
Gx = [-1 0 1
-2 0 2
-1 0 1]
Gy = [-1 -2 -1
0 0 0
1 2 1]
I know these are a combination of a smoothing filter and a gradient filter, but how are they combined to get this output?
The Sobel kernel is the convolution of the derivative kernel [-1 0 1] with the smoothing kernel [1 2 1]'. The former is straightforward; the latter is rather arbitrary: you can see it as a discrete approximation of a 1D Gaussian of a certain sigma if you want.
I think the edge-detection (i.e. gradient) influence is obvious: if there is a vertical edge, the Sobel operator Gx will give large values relative to places where there is no edge, because you are subtracting two quite different values (the intensity on one side of an edge differs a lot from the intensity on the other side). The same reasoning applies to horizontal edges.
As for the smoothing: if you look at, e.g., the mask of a Gaussian for sigma = 1.0, which is what actually does the smoothing, you can catch the idea: we set a pixel to a value derived from the values of its neighbors. That is, we 'average' values around the pixel we are considering. In the case of Gx and Gy this smooths slightly less than the Gaussian, but the idea remains the same.
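For reference, a tiny sketch that samples such a 3x3 Gaussian mask at sigma = 1.0 and normalizes it to sum to 1 (the exact numbers depend on the sampling and normalization convention chosen):

#include <cmath>
#include <cstdio>

int main() {
    const double sigma = 1.0;
    double g[3][3], sum = 0.0;
    // Sample exp(-(x^2 + y^2) / (2*sigma^2)) at integer offsets -1..1.
    for (int y = -1; y <= 1; ++y)
        for (int x = -1; x <= 1; ++x) {
            g[y + 1][x + 1] = std::exp(-(x * x + y * y) / (2 * sigma * sigma));
            sum += g[y + 1][x + 1];
        }
    for (int y = 0; y < 3; ++y, std::puts(""))
        for (int x = 0; x < 3; ++x)
            std::printf("%8.4f ", g[y][x] / sum); // normalized to sum to 1
    return 0;
}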
Help me get the 3x3 matrix of coefficients of the Gaussian filter differentiated along X and Y.
Is it just
0 0 0
-1 0 1
0 0 0
or not?
You may use this 3x3 mask to compute the horizontal and vertical first derivatives:
Wolfram|Alpha also knows about that:
http://www.wolframalpha.com/input/?i=GaussianMatrix[1%2C{0%2C1}]
Transpose to get the mask corresponding to the vertical first derivative.
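If you prefer to generate the mask yourself, you can sample the x-derivative of a Gaussian on a 3x3 grid, as in this sketch (note that Wolfram's GaussianMatrix may use a different truncation and normalization, so the numbers need not match exactly):

#include <cmath>
#include <cstdio>

int main() {
    const double sigma = 1.0;
    // d/dx of a 2D Gaussian: -x / sigma^2 * exp(-(x^2 + y^2) / (2*sigma^2)).
    // The overall sign flips depending on whether you correlate or convolve;
    // transposing gives the vertical-derivative mask.
    for (int y = -1; y <= 1; ++y, std::puts(""))
        for (int x = -1; x <= 1; ++x)
            std::printf("%8.4f ", -x / (sigma * sigma)
                        * std::exp(-(x * x + y * y) / (2 * sigma * sigma)));
    return 0;
}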
For image derivative computation, the Sobel operator looks this way:
[-1 0 1]
[-2 0 2]
[-1 0 1]
I don't quite understand 2 things about it:
1. Why is the centre pixel 0? Can't I just use an operator like the one below?
[-1 1]
[-1 1]
[-1 1]
2. Why is the centre row 2 times the other rows?
I googled these questions but didn't find any answer that convinced me. Please help me.
In computer vision, there's very often no perfect, universal way of doing something. Most often, we just try an operator, see its results and check whether they fit our needs. It's true for gradient computation too: the Sobel operator is one of many ways of computing an image gradient, which has proved its usefulness in many use cases.
In fact, the simplest gradient operator we could think of is even simpler than the one you suggest above:
[-1 1]
Despite its simplicity, this operator has a first problem: when you use it, you compute the gradient between two positions and not at one position. If you apply it to 2 pixels (x,y) and (x+1,y), have you computed the gradient at position (x,y) or (x+1,y)? In fact, what you have computed is the gradient at position (x+0.5,y), and working with half pixels is not very handy. That's why we add a zero in the middle:
[-1 0 1]
Applying this one to pixels (x-1,y), (x,y) and (x+1,y) will clearly give you a gradient for the center pixel (x,y).
This one can also be seen as the convolution of two [-1 1] filters: [-1 1 0] that computes the gradient at position (x-0.5,y), at the left of the pixel, and [0 -1 1] that computes the gradient at the right of the pixel.
Now this filter still has another disadvantage: it's very sensitive to noise. That's why we decide not to apply it on a single row of pixels but on 3 rows: this gives an average gradient over these 3 rows, which softens possible noise:
[-1 0 1]
[-1 0 1]
[-1 0 1]
But this one tends to average things a little too much: when applied to one specific row, we lose much of the detail that makes up that row. To fix this, we give a little more weight to the center row, which lets us get rid of possible noise by taking into account what happens in the previous and next rows, while still preserving the specificity of the row itself. That's what gives the Sobel filter:
[-1 0 1]
[-2 0 2]
[-1 0 1]
Playing with the coefficients leads to other gradient operators, such as the Scharr operator, which gives just a little more weight to the center row:
[-3 0 3 ]
[-10 0 10]
[-3 0 3 ]
There are also mathematical reasons for this, such as the separability of these filters... but I prefer seeing it as an experimental discovery that proved to have interesting mathematical properties, as experiment is in my opinion at the heart of computer vision.
Only your imagination is the limit to create new ones, as long as it fits your needs...
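Both of these operators are exposed directly in OpenCV if you want to compare them on your own images; a quick sketch (the file name is a placeholder):

#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>

int main() {
    cv::Mat img = cv::imread("input.png", cv::IMREAD_GRAYSCALE); // placeholder path
    cv::Mat sobelX, scharrX;
    cv::Sobel(img, sobelX, CV_32F, 1, 0);   // 3x3 Sobel, d/dx
    cv::Scharr(img, scharrX, CV_32F, 1, 0); // Scharr d/dx
    return 0;
}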
EDIT: The true reason that the Sobel operator looks the way it does can be found by reading an interesting article by Sobel himself. My quick reading of this article indicates that Sobel's idea was to get an improved estimate of the gradient by averaging the horizontal, vertical and diagonal central differences. Now, when you break the gradient into vertical and horizontal components, the diagonal central differences are included in both, while the vertical and horizontal central differences are only included in one. To avoid double counting, the diagonals should therefore have half the weight of the vertical and horizontal differences. The actual weights of 1 and 2 are just convenient for fixed-point arithmetic (and actually include a scale factor of 16).
I agree with @mbrenon mostly, but there are a couple of points too hard to make in a comment.
Firstly, in computer vision, the "most often, we just try an operator" approach just wastes time and gives poor results compared to what might have been achieved. (That said, I like to experiment too.)
It is true that a good reason to use [-1 0 1] is that it centres the derivative estimate at the pixel. But another good reason is that it is the central difference formula, and you can prove mathematically that it gives a lower error in its estimate of the true derivative than [-1 1].
[1 2 1] is used to filter noise, as mbrenon said. The reason these particular numbers work well is that they are an approximation of a Gaussian, which is the only filter that does not introduce artifacts (although, judging from Sobel's article, this seems to be a coincidence). Now, if you want to reduce noise while finding a horizontal derivative, you want to filter in the vertical direction so as to affect the derivative estimate as little as possible. Convolving transpose([1 2 1]) with [-1 0 1], we get the Sobel operator, i.e.:
[1]                [-1 0 1]
[2] * [-1 0 1]  =  [-2 0 2]
[1]                [-1 0 1]
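That separability is easy to verify mechanically; this tiny sketch forms the outer product of [1 2 1]' and [-1 0 1] and prints the Sobel Gx kernel:

#include <cstdio>

int main() {
    const int smooth[3] = {1, 2, 1};   // column: smoothing perpendicular to the derivative
    const int deriv[3]  = {-1, 0, 1};  // row: central-difference derivative
    // Outer product smooth' * deriv reproduces the Sobel Gx kernel.
    for (int r = 0; r < 3; ++r, std::puts(""))
        for (int c = 0; c < 3; ++c)
            std::printf("%3d ", smooth[r] * deriv[c]);
    return 0;
}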
For a 2D image you need a mask. Say this mask is:
[ a11 a12 a13;
a21 a22 a23;
a31 a32 a33 ]
Df_x (the gradient along x) should be produced from Df_y (the gradient along y) by a rotation of 90 degrees, i.e. the mask should be:
[ a11 a12 a11;
a21 a22 a21;
a31 a32 a31 ]
Now, since differentiation in the discrete domain is subtraction, we want to subtract the signal on one side of the middle row from the signal on the other side, allocating the same weights to both sides of the subtraction, i.e. our mask becomes:
[ a11 a12 a11;
a21 a22 a21;
-a11 -a12 -a11 ]
Next, the sum of the weights should be zero, because for a flat image (e.g. all 255s) we want a zero response, i.e. we get:
[ a11 a12 a11;
a21 -2a21 a21;
-a11 -a12 -a11 ]
In the case of a smooth image, we also expect the differentiation along the X-axis to produce zero, which forces the middle row to zero, i.e.:
[ a11 a12 a11;
0 0 0;
-a11 -a12 -a11 ]
Finally, if we normalize, we get:
[ 1 A 1;
0 0 0;
-1 -A -1 ]
and you can set A to anything you want, experimentally. Setting A = 2 gives the original Sobel filter.
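If you want to experiment with A, one option (a sketch; the function name is mine) is to build the parametric mask and apply it with OpenCV's cv::filter2D; with A = 2 it matches the Sobel y-kernel up to sign (filter2D performs correlation):

#include <opencv2/imgproc.hpp>

// Apply the parametric derivative mask [1 A 1; 0 0 0; -1 -A -1] to an image.
cv::Mat paramDeriv(const cv::Mat& src, float A) {
    cv::Mat kernel = (cv::Mat_<float>(3, 3) <<
         1,  A,  1,
         0,  0,  0,
        -1, -A, -1);
    cv::Mat dst;
    cv::filter2D(src, dst, CV_32F, kernel);
    return dst;
}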
I am reading Learning OpenCV and came across the description of cvHoughLines2 in this book, but I can't understand one thing.
I read about the Hough transform and I think I understand it, so the parameters rho and theta are a bit puzzling to me. Given the equation rho = x*cos(theta) + y*sin(theta), once we decide on some set of discrete values of theta, the values of rho should be automatically known.
The book says that OpenCV creates a rho x theta accumulator array.
Does OpenCV just discretize the angle into multiples of 360/theta? But how does the rho parameter fit in? How are the values of rho discretized?
Your question isn't clear; it seems you are confused. Have a look at this page. Given a set of points (the x's and y's) belonging to a line, you can describe the same line by just two parameters, r and theta. These are the two independent parameters we want to find: the ones that best describe the line the points lie on.
At the beginning you decide on a vector of theta values, let's say 10 numbers:
[0 36 .. 360]
and also a radius vector:
[1 2 3 .. 10]
You round each computed result so that it falls into a cell of a matrix whose rows represent the radius and whose columns represent the angle; whenever some line produces the same angle and radius, it adds a vote to that accumulator cell. Then you create an M*N accumulator of all zeros, say for this example just:
[ 0 0 0
0 0 0
0 0 0]
Then you evaluate the formula you wrote for some radius and angle, and your matrix becomes:
[ 1 0 0
0 0 0
0 0 0]
then
[ 1 0 0
0 0 1
0 0 0]
then
[ 2 0 0
0 0 1
0 1 0]
and so on.
Then you can threshold the accumulator to find only the strongest lines, or lines at certain angles.
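To make the discretization concrete: both theta and rho are discretized. For every edge pixel and every theta bin you compute rho = x*cos(theta) + y*sin(theta), round it into the nearest rho bin, and increment that accumulator cell. A minimal sketch (bin counts and ranges are arbitrary choices for illustration):

#include <cmath>
#include <vector>

struct Pt { int x, y; };

// Minimal Hough accumulator: thetaBins x rhoBins vote array.
std::vector<std::vector<int>> houghVotes(const std::vector<Pt>& edges,
                                         int thetaBins, int rhoBins,
                                         double rhoMax) {
    const double pi = 3.14159265358979323846;
    std::vector<std::vector<int>> acc(thetaBins, std::vector<int>(rhoBins, 0));
    for (const Pt& p : edges)
        for (int t = 0; t < thetaBins; ++t) {
            double theta = pi * t / thetaBins; // theta in [0, pi)
            double rho = p.x * std::cos(theta) + p.y * std::sin(theta);
            // Map rho from [-rhoMax, rhoMax] onto a bin index and vote.
            int r = (int)std::lround((rho + rhoMax) / (2 * rhoMax) * (rhoBins - 1));
            if (r >= 0 && r < rhoBins)
                ++acc[t][r];
        }
    return acc; // threshold afterwards to keep only strong lines
}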