I don't really follow how they came up with the derivative equation. Could somebody please explain it in some detail, or link to somewhere with a sufficient math explanation?
The Laplacian filter looks like:

[ 0  1  0 ]
[ 1 -4  1 ]
[ 0  1  0 ]
Monsieur Laplace came up with this equation. This is simply the definition of the Laplace operator: the sum of second order derivatives (you can also see it as the trace of the Hessian matrix).
The second equation you show is the finite-difference approximation to a second derivative. It is the simplest approximation you can make for discrete (sampled) data. The derivative is defined as the slope:

f'(x) = lim[h -> 0] ( f(x+h) - f(x) ) / h
In a discrete grid, the smallest h is 1. Thus the derivative is f(x+1)-f(x). This derivative, because it uses the pixel at x and the one to the right, introduces a half-pixel shift (i.e. you compute the slope in between these two pixels). To get to the 2nd order derivative, simply compute the derivative on the result of the derivative:
f'(x) = f(x+1) - f(x)
f'(x+1) = f(x+2) - f(x+1)
f"(x) = f'(x+1) - f'(x)
= f(x+2) - f(x+1) - f(x+1) + f(x)
= f(x+2) - 2*f(x+1) + f(x)
Because each derivative introduces a half-pixel shift, the 2nd order derivative ends up with a 1-pixel shift. So we can shift the output left by one pixel, leading to no bias. This leads to the sequence f(x+1)-2*f(x)+f(x-1).
Computing this 2nd order derivative is the same as convolving with a filter [1,-2,1].
Applying this filter, and also its transposed, and adding the results, is equivalent to convolving with the kernel
[ 0, 1, 0 ]   [ 0, 0, 0 ]   [ 0, 1, 0 ]
[ 1,-4, 1 ] = [ 1,-2, 1 ] + [ 0,-2, 0 ]
[ 0, 1, 0 ]   [ 0, 0, 0 ]   [ 0, 1, 0 ]
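The whole derivation is easy to check numerically. Below is a small NumPy sketch (my own illustration, not part of the original answer) showing that the second difference f(x+1) - 2*f(x) + f(x-1) is the same as convolving with [1, -2, 1], and that the 2-D Laplacian kernel is the sum of the two 1-D second-derivative filters:

```python
import numpy as np

# 1-D signal f(x) = x**2, whose true second derivative is the constant 2.
f = np.array([0, 1, 4, 9, 16, 25], dtype=float)

# Direct second-order finite difference: f(x+1) - 2*f(x) + f(x-1)
direct = f[2:] - 2 * f[1:-1] + f[:-2]

# The same thing as a convolution with the [1, -2, 1] filter
# ('valid' keeps only positions where the filter fully overlaps f).
filtered = np.convolve(f, [1, -2, 1], mode='valid')

print(direct)    # [2. 2. 2. 2.]
print(filtered)  # [2. 2. 2. 2.]

# The 2-D Laplacian kernel is the sum of the horizontal and vertical
# 1-D second-derivative filters, each embedded in a 3x3 matrix.
horizontal = np.array([[0, 0, 0],
                       [1, -2, 1],
                       [0, 0, 0]])
vertical = horizontal.T
print(horizontal + vertical)
# [[ 0  1  0]
#  [ 1 -4  1]
#  [ 0  1  0]]
```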
Related
I learned from several articles that to compute the gradients for the filters, you just do a convolution with the input volume as input and the error matrix as the kernel. After that, you subtract the gradients (multiplied by the learning rate) from the filter weights. I implemented this process but it's not working.

I even tried doing the backpropagation process myself with pen and paper, but the gradients I calculated don't make the filters perform any better. So am I understanding the whole process wrong?
Edit:
I will provide an example of my understanding of the backpropagation in CNNs and the problem with it.
Consider a randomised input matrix for a convolutional layer:
1, 0, 1
0, 0, 1
1, 0, 0
And a randomised weight matrix:
1, 0
0, 1
The output (after applying the ReLU activation) would be:
1, 1
0, 0
The target for this layer is a 2x2 matrix filled with zeros. This way, we know the weight matrix should be filled with zeros also.
Error:
-1, -1
0, 0
By applying the process as stated above, the gradients are:
-1, -1
1, 0
So the new weight matrix is:
2, 1
-1, 1
This is not getting anywhere. If I repeat the process, the filter weights just grow to extremely large values. So I must have made a mistake somewhere; what is it that I'm doing wrong?
I'll give you a full example. It's not going to be short, but hopefully you'll get it. I'm omitting both bias and activation functions for simplicity, but once you get it, it's simple enough to add those too. Remember, backpropagation is essentially the SAME in a CNN as in a simple MLP, but instead of multiplications you'll have convolutions. So, here's my sample:
Input:
.7 -.3 -.7 .5
.9 -.5 -.2 .9
-.1 .8 -.3 -.5
0 .2 -.1 .6
Kernel:
.1 -.3
-.5 .7
Doing the convolution yields (the result of the 1st convolutional layer, and the input for the 2nd convolutional layer):
.32 .27 -.59
.99 -.52 -.55
-.45 .64 .13
L2 Kernel:
-.5 .1
.3 .9
L2 activation:
.73 .29
.37 -.63
Here you would have a flatten layer and a standard MLP or SVM to do the actual classification. During backpropagation you'll receive a delta, which for fun let's assume is the following:
-.07 .15
-.09 .02
This will always be the same size as your activation before the flatten layer. Now, to calculate the kernel's delta for the current L2, you'll convolve L1's activation with the above delta. I'm not writing the steps out again, but the result will be:
.17 .02
-.05 .13
Updating the kernel is done as L2.Kernel -= LR * ROT180(dL2.K), meaning you first rotate the above 2x2 matrix by 180° and then update the kernel. For our toy example this turns out to be:
-.51 .11
.3 .88
Now, to calculate the delta for the first convolutional layer, recall that in an MLP you had: current_delta * current_weight_matrix. Well, in a Conv layer you pretty much have the same. You have to convolve the original kernel (before the update) of the L2 layer with your delta for the current layer, but this will be a full convolution. The result turns out to be:
.04 -.08 .02
.02 -.13 .14
-.03 -.08 .01
With this you'll go to the 1st convolutional layer, and will convolve the original input with this 3x3 delta:
.16 .03
-.09 .16
And update your L1 kernel the same way as above:
.08 -.29
-.5 .68
Then you can start over from feeding forward. The above calculations were rounded to 2 decimal places and a learning rate of .1 was used for calculating the new kernel values.
TLDR:
You get a delta
You calculate the delta that will be passed to the next (earlier) layer as: FullConvolution(Li.W, delta)
Calculate the kernel delta that is used to update the kernel: Convolution(Li.Input, delta)
Go to next layer and repeat.
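The steps above can be condensed into a short NumPy sketch. This is my own illustration of the answer's procedure (function names like conv2d_valid are mine); it uses true convolution (kernel rotated 180°) throughout, which is the convention the worked example's numbers follow:

```python
import numpy as np

def conv2d_valid(x, k):
    """'Valid' 2-D convolution (kernel rotated 180°, as in the example)."""
    kf = np.rot90(k, 2)
    h, w = x.shape[0] - k.shape[0] + 1, x.shape[1] - k.shape[1] + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i + k.shape[0], j:j + k.shape[1]] * kf)
    return out

def conv2d_full(x, k):
    """'Full' 2-D convolution: zero-pad x so every overlap contributes."""
    ph, pw = k.shape[0] - 1, k.shape[1] - 1
    return conv2d_valid(np.pad(x, ((ph, ph), (pw, pw))), k)

# Input and kernels from the example above.
x = np.array([[ .7, -.3, -.7,  .5],
              [ .9, -.5, -.2,  .9],
              [-.1,  .8, -.3, -.5],
              [  0,  .2, -.1,  .6]])
k1 = np.array([[ .1, -.3],
               [-.5,  .7]])
k2 = np.array([[-.5, .1],
               [ .3, .9]])
lr = 0.1

act1 = conv2d_valid(x, k1)     # L1 activation: matches .32 .27 -.59 ...
act2 = conv2d_valid(act1, k2)  # L2 activation: matches .73 .29 / .37 -.63

delta2 = np.array([[-.07, .15],    # delta arriving from the classifier
                   [-.09, .02]])

dK2 = conv2d_valid(act1, delta2)     # kernel gradient for L2
k2_new = k2 - lr * np.rot90(dK2, 2)  # ROT180, then the gradient step
delta1 = conv2d_full(k2, delta2)     # 3x3 delta passed back to L1

dK1 = conv2d_valid(x, delta1)        # kernel gradient for L1
k1_new = k1 - lr * np.rot90(dK1, 2)

print(np.round(dK2, 2))  # [[ 0.17  0.02]
                         #  [-0.05  0.13]]
```

Note that most deep-learning frameworks implement the forward pass as cross-correlation rather than true convolution; in that convention the ROT180 steps disappear, but the structure (valid convolution for the kernel gradient, full convolution for the propagated delta) stays the same.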
I am trying to train a neural network to control a character's speed in 2 dimensions: x and y, between -1 and 1 m/s. Currently I split the range into 0.1 m/s intervals, so I end up with 400 output neurons (20 x values * 20 y values); if I increase the accuracy to 0.01, I end up with 40k output neurons. Is there a way to reduce the number of output neurons?
I assume you are treating the problem as a classification problem. At training time, you have input X and output Y. Since you are training the neural network for classification, your expected output is always of the form:
-1 -0.9 ... 0.3 0.4 0.5 ... 1.0 m/s
Y1 = [0, 0, ..., 1, 0, 0, ..., 0] // speed x component
Y2 = [0, 0, ..., 0, 0, 1, ..., 0] // speed y component
Y = [Y1, Y2]
That is: exactly one neuron outputs 1 for each speed component in the x and y directions; all other neurons output 0 (in the example above, the expected output is 0.3 m/s in the x direction and 0.5 m/s in the y direction for this training instance). This is actually probably easier to learn and has better prediction performance. But as you pointed out, it does not scale.
I think you can also treat the problem as a regression problem. In your network, you have one neuron for each of the speed component. Your expected output is just:
Y = [0.3, 0.5] // for the same training instance you have.
To get an output range of -1 to 1, you have different options for the activation function in the output layer. For example, you can use
f(x) = 2 * (Sigmoid(x) - 0.5)
Sigmoid(x) = 1 / (1 + exp(-x))
Since sigmoid(x) is in (0,1), 2*(sigmoid(x) - 0.5) is in (-1,1). This change (replacing the many output neurons with just two) greatly decreases the complexity of the model, so you might want to add more neurons in the hidden layer to avoid underfitting.
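A quick NumPy check (my own sketch) that this rescaled sigmoid stays inside (-1, 1). It is in fact identical to tanh(x/2), so using tanh for the output layer is an equally valid choice:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def scaled(x):
    # Rescale sigmoid's (0, 1) output range to (-1, 1).
    return 2.0 * (sigmoid(x) - 0.5)

xs = np.linspace(-10, 10, 101)
ys = scaled(xs)
print(ys.min(), ys.max())  # stays strictly inside (-1, 1)

# Algebraically, 2*(sigmoid(x) - 0.5) = (1 - exp(-x)) / (1 + exp(-x)) = tanh(x/2)
print(np.allclose(ys, np.tanh(xs / 2)))  # True
```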
Gx = [-1 0 1
-2 0 2
-1 0 1]
Gy = [-1 -2 -1
0 0 0
1 2 1]
I knew these are the combination of a smoothing filter and a gradient filter, but how are they combined to get this output?
The Sobel kernel is a convolution of the derivative kernel [-1 0 1] with a smoothing kernel [1 2 1]'. The former is straightforward; the latter is rather arbitrary: you can see it as a sort of discrete implementation of a 1D Gaussian of a certain sigma, if you want.
I think the edge-detection (i.e. gradient) influence is obvious: if there is a vertical edge, the Sobel operator Gx will give big values relative to places where there is no edge, because you just subtract two different values (the intensity on one side of the edge differs much from the intensity on the other side). The same reasoning applies to horizontal edges.
About smoothing: if you look at, e.g., the mask for a Gaussian with sigma = 1.0, which actually does smoothing, you can catch the idea: we set a pixel to a value derived from the values of its neighbors. That is, we 'average' values around the pixel we are considering. In the case of Gx and Gy this performs only slight smoothing in comparison to a Gaussian, but the idea remains the same.
For image derivative computation, Sobel operator looks this way:
[-1 0 1]
[-2 0 2]
[-1 0 1]
I don't quite understand 2 things about it:

1. Why is the centre pixel 0? Couldn't I just use an operator like the one below?
[-1 1]
[-1 1]
[-1 1]
2. Why is the centre row 2 times the other rows?

I googled my questions but didn't find any answer that could convince me. Please help me.
In computer vision, there's very often no perfect, universal way of doing something. Most often, we just try an operator, see its results and check whether they fit our needs. That's true for gradient computation too: the Sobel operator is one of many ways of computing an image gradient, and it has proved its usefulness in many use cases.
In fact, the simplest gradient operator we could think of is even simpler than the one you suggest above:
[-1 1]
Despite its simplicity, this operator has a first problem: when you use it, you compute the gradient between two positions and not at one position. If you apply it to 2 pixels (x,y) and (x+1,y), have you computed the gradient at position (x,y) or (x+1,y)? In fact, what you have computed is the gradient at position (x+0.5,y), and working with half pixels is not very handy. That's why we add a zero in the middle:
[-1 0 1]
Applying this one to pixels (x-1,y), (x,y) and (x+1,y) will clearly give you a gradient for the center pixel (x,y).
This one can also be seen as the sum of two shifted [-1 1] filters: [-1 1 0], which computes the gradient at position (x-0.5,y), at the left of the pixel, and [0 -1 1], which computes the gradient at the right of the pixel.
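This decomposition is easy to verify numerically. The snippet below (my own check) adds the two shifted one-sided differences, and also shows the equivalent view of [-1 0 1] as the one-sided difference [-1 1] convolved with the 2-tap average [1 1]:

```python
import numpy as np

left = np.array([-1, 1, 0])    # gradient at (x-0.5, y), left of the pixel
right = np.array([0, -1, 1])   # gradient at (x+0.5, y), right of the pixel

center = left + right
print(center)  # [-1  0  1]

# Equivalently: convolve the one-sided difference with a 2-tap average.
print(np.convolve([-1, 1], [1, 1]))  # [-1  0  1]
```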
Now this filter still has another disadvantage: it's very sensitive to noise. That's why we decide not to apply it on a single row of pixels, but on 3 rows: this allows us to get an average gradient over these 3 rows, which will soften possible noise:
[-1 0 1]
[-1 0 1]
[-1 0 1]
But this one tends to average things a little too much: when applied to one specific row, we lose much of what makes the detail of this specific row. To fix that, we want to give a little more weight to the center row, which will allow us to get rid of possible noise by taking into account what happens in the previous and next rows, but still keeping the specificity of that very row. That's what gives the Sobel filter:
[-1 0 1]
[-2 0 2]
[-1 0 1]
Tampering with the coefficients can lead to other gradient operators such as the Scharr operator, which gives just a little more weight to the center row:
[-3 0 3 ]
[-10 0 10]
[-3 0 3 ]
There are also mathematical reasons to this, such as the separability of these filters... but I prefer seeing it as an experimental discovery which proved to have interesting mathematical properties, as experiment is in my opinion at the heart of computer vision.
Only your imagination is the limit to create new ones, as long as it fits your needs...
EDIT: The true reason that the Sobel operator looks that way can be found by reading an interesting article by Sobel himself. My quick reading of this article indicates that Sobel's idea was to get an improved estimate of the gradient by averaging the horizontal, vertical and diagonal central differences. Now, when you break the gradient into vertical and horizontal components, the diagonal central differences are included in both, while the vertical and horizontal central differences are only included in one. To avoid double counting, the diagonals should therefore have half the weights of the vertical and horizontal ones. The actual weights of 1 and 2 are just convenient for fixed-point arithmetic (and actually include a scale factor of 16).
I mostly agree with @mbrenon, but there are a couple of points too hard to make in a comment.
Firstly, in computer vision the "Most often, we just try an operator" approach just wastes time and gives poor results compared to what might have been achieved. (That said, I like to experiment too.)
It is true that a good reason to use [-1 0 1] is that it centres the derivative estimate at the pixel. But another good reason is that it is the central difference formula, and you can prove mathematically that it gives a lower error in its estimate of the true derivative than [-1 1].
[1 2 1] is used to filter noise, as mbrenon said. The reason these particular numbers work well is that they are an approximation of a Gaussian, which is the only filter that does not introduce artifacts (although, from Sobel's article, this seems to be a coincidence). Now, if you want to reduce noise while finding a horizontal derivative, you want to filter in the vertical direction so as to least affect the derivative estimate. Convolving transpose([1 2 1]) with [-1 0 1] we get the Sobel operator, i.e.:
[1]               [-1 0 1]
[2] * [-1 0 1] =  [-2 0 2]
[1]               [-1 0 1]
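This separability is just an outer product of the smoothing column and the derivative row, easily checked in NumPy (my own snippet):

```python
import numpy as np

smooth = np.array([1, 2, 1])   # vertical smoothing column
deriv = np.array([-1, 0, 1])   # horizontal central difference

sobel_x = np.outer(smooth, deriv)  # column * row = the 3x3 Sobel kernel
print(sobel_x)
# [[-1  0  1]
#  [-2  0  2]
#  [-1  0  1]]
```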
For a 2D image you need a mask. Say this mask is:
[ a11 a12 a13;
a21 a22 a23;
a31 a32 a33 ]
Df_x (the gradient along x) should be produced from Df_y (the gradient along y) by a rotation of 90°, which forces the mask to be left-right symmetric:
[ a11 a12 a11;
a21 a22 a21;
a31 a32 a31 ]
Now, if we want to subtract the signal behind the middle pixel from the signal in front of it (that's what differentiation is in the discrete case: subtraction), we want to allocate the same weights to both sides of the subtraction, i.e. our mask becomes:
[ a11 a12 a11;
a21 a22 a21;
-a11 -a12 -a11 ]
Next, the sum of the weights should be zero, because when we have a smooth image (e.g. all 255s) we want a zero response, i.e. we get:

[ a11 a12 a11;
a21 -2a21 a21;
-a11 -a12 -a11 ]
In the case of a smooth image we also expect the differentiation along the X-axis to produce zero, i.e.:

[ a11 a12 a11;
0 0 0;
-a11 -a12 -a11 ]
Finally if we normalize we get:
[ 1 A 1;
0 0 0;
-1 -A -1 ]
and you can set A to anything you want experimentally. A factor of 2 gives the original Sobel filter.
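Here is a small sketch (my own; the function name is made up) of the resulting one-parameter family: A = 2 gives the Sobel y-mask (up to the sign convention used earlier), and A = 1 gives the Prewitt operator:

```python
import numpy as np

def vertical_gradient_mask(A):
    # One-parameter family derived above: A weights the centre column.
    return np.array([[ 1,  A,  1],
                     [ 0,  0,  0],
                     [-1, -A, -1]])

print(vertical_gradient_mask(2))  # Sobel Gy, up to the sign convention
print(vertical_gradient_mask(1))  # the Prewitt operator
```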
I am a beginner in neural networks and am learning about perceptrons.

My question is: why is the weight vector perpendicular to the decision boundary (hyperplane)?

I have referred to many books, but all of them mention that the weight vector is perpendicular to the decision boundary; none of them say why.
Can anyone give me an explanation or reference to a book?
The weights are simply the coefficients that define a separating plane. For the moment, forget about neurons and just consider the geometric definition of a plane in N dimensions:
w1*x1 + w2*x2 + ... + wN*xN - w0 = 0
You can also think of this as being a dot product:
w*x - w0 = 0
where w and x are both length-N vectors. This equation holds for all points on the plane. Recall that we can multiply the above equation by a constant and it still holds so we can define the constants such that the vector w has unit length. Now, take out a piece of paper and draw your x-y axes (x1 and x2 in the above equations). Next, draw a line (a plane in 2D) somewhere near the origin. w0 is simply the perpendicular distance from the origin to the plane and w is the unit vector that points from the origin along that perpendicular. If you now draw a vector from the origin to any point on the plane, the dot product of that vector with the unit vector w will always be equal to w0 so the equation above holds, right? This is simply the geometric definition of a plane: a unit vector defining the perpendicular to the plane (w) and the distance (w0) from the origin to the plane.
Now our neuron is simply representing the same plane as described above but we just describe the variables a little differently. We'll call the components of x our "inputs", the components of w our "weights", and we'll call the distance w0 a bias. That's all there is to it.
Getting a little beyond your actual question, we don't really care about points on the plane. We really want to know which side of the plane a point falls on. While w*x - w0 is exactly zero on the plane, it will have positive values for points on one side of the plane and negative values for points on the other side. That's where the neuron's activation function comes in but that's beyond your actual question.
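The geometric argument above can be verified numerically. The sketch below (with my own made-up numbers) builds a plane w·x = w0 with a unit normal w, checks that directions lying in the plane are orthogonal to w, and shows the signed distance that decides which side a point falls on:

```python
import numpy as np

# A sketch with made-up numbers: the plane w.x = w0 in 3-D.
w = np.array([1.0, 2.0, 2.0]) / 3.0   # unit normal vector (|w| = 1)
w0 = 0.5                              # distance from the origin to the plane

p1 = w0 * w                           # foot of the perpendicular: on the plane
p2 = p1 + np.array([2.0, -1.0, 0.0])  # move along a direction inside the plane

print(w @ p1, w @ p2)  # both equal w0: both points satisfy the plane equation

# Any direction lying in the plane is orthogonal to the weight vector:
print(w @ (p2 - p1))   # 0.0

# Points off the plane get a signed distance; its sign tells us the side:
q = p1 + 0.7 * w       # step off the plane along the normal
print(w @ q - w0)      # ~0.7, i.e. the positive side
```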
Intuitively, in a binary problem the weight vector points in the direction of the '1'-class, while the '0'-class is found when pointing away from the weight vector. The decision boundary should thus be drawn perpendicular to the weight vector.
See the image for a simplified example: You have a neural network with only 1 input which thus has 1 weight. If the weight is -1 (the blue vector), then all negative inputs will become positive, so the whole negative spectrum will be assigned to the '1'-class, while the positive spectrum will be the '0'-class. The decision boundary in a 2-axis plane is thus a vertical line through the origin (the red line). Simply said it is the line perpendicular to the weight vector.
Let's go through this example with a few values. The output of the perceptron is class 1 if the sum of all inputs * weights is larger than 0 (the default threshold); otherwise, if the output is smaller than the threshold of 0, the class is 0. Your input has value 1. The weight applied to this single input is -1, so 1 * -1 = -1, which is less than 0. The input is thus assigned class 0 (NOTE: class 0 and class 1 could just as well have been called class A and class B; don't confuse them with the input and weight values). Conversely, if the input is -1, then input * weight is -1 * -1 = 1, which is larger than 0, so the input is assigned to class 1. If you try every input value, you will see that all the negative values in this example have an output larger than 0, so all of them belong to class 1. All positive values will have an output smaller than 0 and will therefore be classified as class 0. Draw the line which separates all positive and negative input values (the red line) and you will see that this line is perpendicular to the weight vector.
Also note that the weight vector is only used to modify the inputs to fit the wanted output. What would happen without a weight vector? An input of 1, would result in an output of 1, which is larger than the threshold of 0, so the class is '1'.
The second image on this page shows a perceptron with 2 inputs and a bias. The first input has the same weight as my example, while the second input has a weight of 1. The corresponding weight vector together with the decision boundary are thus changed as seen in the image. Also the decision boundary has been translated to the right due to an added bias of 1.
Here is a viewpoint from a more fundamental linear algebra/calculus standpoint:
The general equation of a plane is Ax + By + Cz = D (can be extended for higher dimensions). The normal vector can be extracted from this equation: [A B C]; it is the vector orthogonal to every other vector that lies on the plane.
Now, if we have a weight vector [w1 w2 w3], we classify a point as positive when w^T * x >= 0 and negative when w^T * x < 0. WLOG, we can also use a threshold: w^T * x >= d. Now, do you see where I am going with this?
The weight vector is the same as the normal vector from the first section. And as we know, this normal vector (and a point) define a plane: which is exactly the decision boundary. Hence, because the normal vector is orthogonal to the plane, then so too is the weight vector orthogonal to the decision boundary.
Start with the simplest form, ax + by = 0, weight vector is [a, b], feature vector is [x, y]
Then y = (-a/b)x is the decision boundary with slope -a/b
The weight vector has slope b/a
If you multiply those two slopes together, result is -1
This proves decision boundary is perpendicular to weight vector
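The slope argument is easy to check with concrete numbers (a and b are my own arbitrary choice):

```python
# Weight vector w = [a, b]; the boundary is the line a*x + b*y = 0.
a, b = 2.0, 4.0

boundary_slope = -a / b  # solve a*x + b*y = 0 for y: slope is -a/b
weight_slope = b / a     # slope of the direction the weight vector points in

print(boundary_slope * weight_slope)  # -1.0, i.e. the lines are perpendicular
```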
Although the question was asked 2 years ago, I think many students will have the same doubts. I reached this answer because I asked the same question.

Now, just think of X and Y (a Cartesian coordinate system specifies each point uniquely in a plane by a pair of numerical coordinates, which are the signed distances from the point to two fixed perpendicular directed lines [from Wikipedia]).

If Y = 3X, we can let w = 3 so that Y = wX, i.e. w = Y/X. Just as the Y axis is drawn perpendicular to the X axis, if we draw the relation between X and w we again get two perpendicular axes. So you can always think of the w coefficient as sitting on an axis perpendicular to X and Y.