How do you calculate the gradient of the bias in a convolutional neural network? - machine-learning

I am having a hard time finding resources online about how to perform backpropagation with the bias in a convolutional neural network. By bias I mean the number added to every value resulting from a convolution.
Here is a picture further explaining:
I know how to calculate the gradient for the filter's weights, but I am not sure what to do about the biases. Right now I am just adjusting each bias by the average error for that layer. Is this correct?

It is similar to the bias gradient in standard neural networks, except that here we sum over all positions of the convolution output:
\frac{\partial L}{\partial b} = \sum_{i=1}^{w} \sum_{j=1}^{h} \frac{\partial L}{\partial x_{ij}}
where L is the loss function, w and h are the width and height of the conv output, x_{ij} is the conv output at position (i, j), and \frac{\partial L}{\partial x_{ij}} is the gradient of the loss with respect to that output.
Thus, the gradient of b is computed by summing the gradients of the loss L with respect to the convolution output over every position (i, j).
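As a minimal NumPy sketch of this (assuming a single filter whose output gradient is already available; d_out and its values here are purely hypothetical):
import numpy as np

# d_out: gradient of the loss w.r.t. the conv output, shape (h, w)
d_out = np.array([[0.1, -0.2],
                  [0.3,  0.05]])

# Gradient of the loss w.r.t. the single bias b: sum over all output positions
d_b = d_out.sum()

# With a batch and several filters, shape (N, h, w, C), sum over everything
# except the channel axis to get one bias gradient per filter:
# d_b = d_out.sum(axis=(0, 1, 2))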
Hope this helps.

Related

Why would I use a non-linear activation function in a CNN convolutional layer?

I was going through one of the deep learning lectures from MIT on CNNs. It said that when multiplying weights with pixel values, a non-linear activation function like ReLU can be applied to every pixel. I understand why it should be applied in a simple neural network, since it introduces non-linearity into our input data. But why would I want to apply it to a single pixel? Or am I getting it wrong?
You may have got it a little wrong.
When they say "multiplying weights with pixel values", they refer to the linear operation of multiplying the filter (weights + bias) with the pixels of the image. If you think about it, each filter in a CNN essentially represents a linear equation.
For example, if we're looking at a 2*2 filter, the filter is essentially computing x1 * w1 + x2 * w2 + x3 * w3 + x4 * w4 + b for every 2*2 patch of the image it goes over. (In the above equation, x1, x2, x3, x4 refer to pixels of the image, while w1, w2, w3, w4 refer to the weights present in the CNN filter.)
Now, hopefully it's fairly clear that the filter is essentially computing a linear equation. To be able to perform a task like, say, image classification, we require some amount of non-linearity. This is achieved by using, most popularly, the ReLU activation function.
So you aren't applying non-linearity to a "pixel" per se; you're still applying it to the result of a linear operation (as in a vanilla neural network), which consists of pixel values multiplied by the weights present in a filter.
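A minimal NumPy sketch of that idea (the 2*2 patch, weights, and bias below are hypothetical values, not from the lecture):
import numpy as np

# Hypothetical 2x2 image patch, 2x2 filter weights, and a bias
patch = np.array([[0.2, 0.8],
                  [0.5, 0.1]])
weights = np.array([[0.4, -0.3],
                    [0.7,  0.2]])
b = 0.1

# The filter computes a purely linear function of the patch:
# x1*w1 + x2*w2 + x3*w3 + x4*w4 + b
linear_out = np.sum(patch * weights) + b

# The non-linearity (ReLU) is applied to this linear result, not to a raw pixel
relu_out = np.maximum(0.0, linear_out)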
Hope this cleared your doubt, feel free to reach out for more help!

Do I need to include my scaled outputs in my back-propagation equation (SGD)?

Quick question: when I am backpropagating the loss function to my parameters and I use a scaled output (e.g. tanh(x) * 2), do I need to include the derivative of the scaled output w.r.t. the original output? Thank you!
Before we can backpropagate the errors, we have to compute the gradient of the loss function with respect to each of the parameters. This involves computing the gradients of the outputs first and then applying the chain rule repeatedly. When you do this, the scaling constant stays in the derivative. So yes, you have to scale the errors accordingly.
As an example, you might have seen the following L2-regularized loss, a.k.a. Ridge regression:
Loss = \frac{1}{2} \|T - Y\|^2 + \lambda \|w\|^2
Here, we are scaling down the squared error. So, when we compute the gradient, the 1/2 and the 2 from the power rule cancel out. If we had not multiplied by 0.5 in the first place, we would have to scale up that part of the gradient by 2; otherwise the gradient vector would point in some other direction instead of the direction that minimizes the loss.
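As a concrete check of the tanh(x) * 2 case from the question, here is a small NumPy sketch comparing the chain-rule derivative (which keeps the factor of 2) against finite differences (the test point x is arbitrary):
import numpy as np

x = 0.7  # arbitrary test point

def scaled_output(x):
    return 2.0 * np.tanh(x)

# Chain rule: d/dx [2 * tanh(x)] = 2 * (1 - tanh(x)**2)
analytic_grad = 2.0 * (1.0 - np.tanh(x) ** 2)

# Numerical check with central finite differences
eps = 1e-6
numeric_grad = (scaled_output(x + eps) - scaled_output(x - eps)) / (2 * eps)

print(analytic_grad, numeric_grad)  # the two values should agree closely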

Generalization of gradient calculation for multi channel convolutions

I have been trying to understand how backpropagation for conv nets is implemented at the mathematical level. I came across this article, which explains gradient calculation graphically for 2D convolution. The conv layer has a 3x3 input and the filter used is 2x2, which, on convolution, results in a 2x2 output that is then fully connected. The gradient for this fully connected layer will be of dimension 2x2.
According to the article:
Gradient of conv layer = convolution between gradient of next layer and weights of this layer
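For the 2D single-channel case described above, the quoted rule can be sketched as follows (assuming it refers to the gradient w.r.t. the layer's input and that the forward pass is a cross-correlation, as in most frameworks; d_out and W below are hypothetical values):
import numpy as np
from scipy.signal import convolve2d

# Hypothetical gradient flowing back from the 2x2 output, and the 2x2 filter
d_out = np.array([[0.1, -0.4],
                  [0.2,  0.3]])
W = np.array([[ 1.0, 0.5],
              [-0.5, 2.0]])

# "Convolution between the gradient of the next layer and the weights":
# a full convolution (which flips W) gives the 3x3 gradient w.r.t. the
# 3x3 input of the conv layer.
d_input = convolve2d(d_out, W, mode='full')
print(d_input.shape)  # (3, 3)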
But I cannot generalize this for 3 channel inputs.
Let's say our input layer is of dimension 3x3x3 and we use one filter of dimension 2x2x3; the resulting convolution will again be of dimension 2x2, which is then treated as the fully connected layer.
Now the gradient for the fully connected layer will be 2x2. So, to calculate the gradient for the conv layer, we again need to compute the convolution between the 2x2 gradient and the 2x2x3 weights, but their dimensions are incompatible.
So I don't understand how to use this formula for calculating the gradient for 3D convolutions. How can I proceed after this step?
A derivation (or an article) with respect to a 3D input would also be really helpful.

How to set local biases in caffe and torch?

When convolving a multi-channel image into a one-channel image, you usually have only one bias variable (since the output is one channel). If I want to set local biases, that is, a bias for each pixel of the output image, how can I do this in caffe and torch?
In TensorFlow, this is very simple: you just set a bias matrix, for example:
data is 25 (height) x 25 (width) x 48 (channels)
weights is 3 x 3 (kernel size) x 48 (input channels) x 1 (output channels)
biases is 25 x 25
then:
hidden = tf.nn.conv2d(data, weights, [1, 1, 1, 1], padding='SAME')
output = tf.nn.relu(hidden + biases)
Is there a similar solution in caffe or torch?
For caffe, here is a post about the Scale layer: Scale layer in Caffe. The Scale layer can only provide a single bias variable.
The answer is the Bias layer: a Bias layer can hold a full weight matrix, which can be treated as the per-pixel biases.
For torch, there is an nn.Add() layer, almost like TensorFlow's tf.add() function, so the nn.Add() layer is the solution.
All of this has been verified with actual models.
But still, thank you very much @Shai.

Finding the function approximated by a neural network

If I have a feed-forward multilayer perceptron with a sigmoid activation function, which is trained and has known weights, how can I find the equation of the curve approximated by the network (the curve that separates the two classes of data)?
In general, there is no closed-form solution for the input points where your NN output is 0.5 (or 0, in the case of -1/1 labels instead of 0/1).
What is usually done for visualization in a low-dimensional input space is gridding up the input space and computing the contours of the NN output. (The contours are a smooth estimate of what the NN response surface looks like.)
In MATLAB, one would do
[X, Y] = meshgrid(linspace(-1, 1), linspace(-1, 1));
Z = f(X, Y);       % evaluate the trained network over the grid
contour(X, Y, Z)   % contours of the network output
where f is your trained NN evaluated over the grid, and assuming [-1,1] x [-1,1] input space.
