Finding the function approximated by a neural network

If I have a feed-forward multilayer perceptron with a sigmoid activation function, which is trained and has known weights, how can I find the equation of the curve approximated by the network (the curve that separates the two types of data)?

In general, there is no closed-form solution for the input points where your NN output is 0.5 (or 0, in case of -1/1 outputs instead of 0/1).
What is usually done for visualization in a low-dimensional input space is gridding up the input space and computing the contours of the NN output. (The contours are a smooth estimate of what the NN response surface looks like.)
In MATLAB, one would do something like
[X, Y] = meshgrid(linspace(-1, 1), linspace(-1, 1));  % grid the input space
Z = f(X, Y);                                          % evaluate the trained NN on the grid
contour(X, Y, Z)
where f is your trained NN, and assuming a [-1, 1] x [-1, 1] input space.
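If you are working in Python, here is a rough equivalent sketch (the small 2-input MLP and its weights below are made-up stand-ins; substitute your trained network's forward pass):
import numpy as np
import matplotlib.pyplot as plt

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Stand-in for a trained 2-input MLP with one hidden layer of 3 sigmoid units.
W1 = np.array([[2.0, -1.5,  0.5],
               [1.0,  2.0, -2.0]])
b1 = np.array([0.1, -0.2, 0.3])
W2 = np.array([[1.5], [-2.0], [1.0]])
b2 = np.array([0.2])

def predict(points):
    # points has shape (N, 2); returns the sigmoid output for each point.
    h = sigmoid(points @ W1 + b1)
    return sigmoid(h @ W2 + b2).ravel()

# Grid the [-1, 1] x [-1, 1] input space and evaluate the network on it.
xs = np.linspace(-1, 1, 200)
X, Y = np.meshgrid(xs, xs)
Z = predict(np.column_stack([X.ravel(), Y.ravel()])).reshape(X.shape)

# The 0.5 level set of the output is the decision boundary.
plt.contour(X, Y, Z, levels=[0.5])
plt.show()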

Related

Why would I use a non-linear activation function in a CNN convolutional layer?

I was going through one of the deep learning lectures from MIT on CNNs. It said that when multiplying weights with pixel values, a non-linear activation function like ReLU can be applied to every pixel. I understand why it should be applied in a simple neural network, since it introduces non-linearity into our input data. But why would I want to apply it to a single pixel? Or am I getting it wrong?
You may have got it a little wrong.
When they say "multiplying weights with pixel values", they are referring to the linear operation of multiplying the filter (weights + bias) with the pixels of the image. If you think about it, each filter in a CNN essentially represents a linear equation.
For example, if we're looking at a 2x2 filter, the filter is essentially computing x1 * w1 + x2 * w2 + x3 * w3 + x4 * w4 + b for every 2x2 patch of the image it slides over. (In the above equation, x1, x2, x3, x4 refer to pixels of the image, while w1, w2, w3, w4 refer to the weights in the CNN filter.)
Now, hopefully it's fairly clear that the filter is essentially computing a linear equation. To be able to perform a task such as image classification, we require some amount of non-linearity. This is achieved, most popularly, by using the ReLU activation function.
So you aren't applying non-linearity to a "pixel" per se; you're still applying it to the result of a linear operation (as in a vanilla neural network), which consists of pixel values multiplied by the weights in a filter.
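A tiny NumPy sketch of this idea (the 2x2 filter, bias, and patch values below are made up for illustration): the response at one location is purely linear, and ReLU is then applied to that result, not to the raw pixels.
import numpy as np

# Made-up 2x2 filter (weights), bias, and one 2x2 image patch.
w = np.array([[0.5, -1.0],
              [2.0,  0.3]])
b = 0.1
patch = np.array([[0.2, 0.8],
                  [0.4, 0.6]])

# The convolution at this location is just the linear equation
# x1*w1 + x2*w2 + x3*w3 + x4*w4 + b.
linear_out = np.sum(patch * w) + b

# The non-linearity (ReLU) is applied to this linear output.
relu_out = max(0.0, linear_out)
print(linear_out, relu_out)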
Hope this cleared your doubt, feel free to reach out for more help!

Matmul input and weight matrices order?

I have seen many ML tutorials explain a fully connected network by constructing two matrices, a weight matrix and an input (or activation) matrix, and performing a matrix-matrix multiplication (matmul) to form the linear equations.
All the examples I saw place the input as the first argument to matmul and the weight tensor as the second argument. Why is that? Why can't I compute weights times input (assuming the weight matrix was created properly, with a column count equal to the input matrix's row count)?
For an (n x 1) input, to get an (n x 1) output you multiply the input by an (n x n) matrix from the left, or by a (1 x 1) matrix (a scalar) from the right.
If you multiply the input by a scalar, there is only a single connection from each input neuron to its output. If you multiply it by a matrix, each output cell is a weighted sum of the input neurons; in other words, every input neuron is connected to every output neuron, which is what "fully connected" means.
As long as you preserve this logic (the shapes line up so that each output is a weighted sum of all inputs), it doesn't matter how you arrange your weight matrices.
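A small NumPy sketch of this point (the shapes and values below are made up): both conventions compute the same weighted sums, only the layout of the result differs.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 3))   # one input sample as a row vector (1 x 3)
W = rng.normal(size=(3, 4))   # weights mapping 3 inputs to 4 outputs
b = rng.normal(size=(4,))

# Convention 1: input first (row-vector convention, as in most tutorials).
out_row = x @ W + b                       # shape (1, 4)

# Convention 2: weights first (column-vector convention); transpose everything.
out_col = W.T @ x.T + b.reshape(-1, 1)    # shape (4, 1)

print(np.allclose(out_row, out_col.T))    # True: same numbers, different layout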

How do you calculate the gradient of the bias in a convolutional neural network?

I am having a hard time finding resources online about how to perform backpropagation with the bias in a convolutional neural network. By bias I mean the number added to every value resulting from a convolution.
Here is a picture explaining it further.
I know how to calculate the gradient for the filter's weights, but I am not sure what to do about the biases. Right now I am just adjusting them by the average error for that layer. Is this correct?
It is similar to the bias gradient in standard neural networks, but here we sum the gradients over all positions of the convolution output:
∂L/∂b = Σ_{i=1..h} Σ_{j=1..w} ∂L/∂x_{ij}
where L is the loss function, w and h are the width and height of the conv output, and ∂L/∂x_{ij} is the gradient of the loss w.r.t. the conv output at position (i, j).
Thus, the gradient of b is computed by summing the gradients of the loss L w.r.t. the convolution output over all positions (i, j).
Hope this helps.
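A minimal NumPy sketch of that rule (the array below is a made-up upstream gradient dL/dx for a single filter's output):
import numpy as np

# Gradient of the loss w.r.t. this filter's conv output, shape (h, w).
dL_dout = np.array([[0.1, -0.2,  0.05],
                    [0.0,  0.3, -0.1],
                    [0.2, -0.05, 0.15]])

# The same bias b is added at every spatial position, so its gradient is
# the sum of the upstream gradients over all positions.
dL_db = dL_dout.sum()
print(dL_db)

# With a batch and several filters, shape (N, H, W, F), you would instead
# sum over the batch and spatial axes, keeping one value per filter:
# dL_db = dL_dout.sum(axis=(0, 1, 2))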

Loss function representing the Euclidean distance from a prediction to the nearest ground truth in images?

Is there a loss function that calculates the Euclidean distance between a prediction pixel and the nearest ground-truth pixel? Specifically, I mean the distance in location, not in intensity.
This would be on binary predictions and binary ground truth.
That's the root mean squared error (RMSE). Note that Keras has no built-in 'rmse' loss identifier, so you cannot simply write model.compile(loss='rmse', optimizer='adagrad'); you have to define the loss yourself.
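A minimal sketch of such a custom loss using the Keras backend (the function name rmse is my own; model is the model you are compiling, as in the snippets below):
from keras import backend as K

def rmse(y_true, y_pred):
    # Square root of the per-sample mean squared error; Keras then
    # averages this value over each batch.
    return K.sqrt(K.mean(K.square(y_pred - y_true), axis=-1))

# Pass the function itself rather than a string identifier.
model.compile(loss=rmse, optimizer='adagrad')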
But it might be better to use mean squared error anyway, because of what is discussed in https://github.com/fchollet/keras/issues/1170: Keras computes the loss batch by batch, and averaging per-batch square roots is not the same as the RMSE over the whole dataset. To avoid that inconsistency, I recommend using MSE instead.
As in:
model.compile(loss='mse', optimizer='adagrad')
But since your data is binary (binary predictions and binary ground truth), I would advise binary_crossentropy instead (https://keras.io/losses/#binary_crossentropy):
model.compile(loss='binary_crossentropy', optimizer='adagrad')

How many layers are in this neural network?

I am trying to make sure I'm using the correct terminology. The diagram below shows the MNIST example:
X is a 784-element row vector
W is a 784x10 matrix
b is a 10-element vector
The output of the linear box is fed into softmax
The output of softmax is fed into the distance function, cross-entropy
How many layers are in this NN? What are the input and hidden layers in that example?
Similarly, how many layers are in this answer? If my understanding is correct, it has 3 layers?
Edit
@lejlot Does the below represent a 3-layer NN with 1 hidden layer?
Take a look at this picture:
http://cs231n.github.io/assets/nn1/neural_net.jpeg
In your first picture you have only two layers:
Input layer -> 784 neurons
Output layer -> 10 neurons
Your model is too simple (w directly contains the connections between the input and the output, and b contains the bias terms).
With no hidden layer you obtain a linear classifier, because a linear combination of linear combinations is again a linear combination. The hidden layers are what introduce non-linear transformations into your model.
In your second picture you have 3 layers, but you have confused the notation:
The input layer is the vector x where you place the input data.
Then the operation -> w -> +b -> f() -> is the connection between the first layer and the second layer.
The second layer is the vector where you store the result z = f(xw1 + b1).
Then softmax(zw2 + b2) is the connection between the second and the third layer.
The third layer is the vector y where you store the final result y = softmax(zw2 + b2).
Cross-entropy is not a layer; it is the cost function used to train your neural network.
EDIT:
One more thing: if you want to obtain a non-linear classifier, you must add a non-linear transformation in every hidden layer. In the example I have described, f() must be a non-linear function (for example sigmoid, softsign, ...):
z=f(xw1+b1)
If you add a non-linear transformation only in the output layer (the softmax function you have at the end), your model is still a linear classifier.
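A compact NumPy sketch of that computation, with made-up sizes (784 inputs, a hypothetical 128-unit hidden layer, 10 outputs) and random weights, just to make the layer structure concrete:
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def softmax(a):
    e = np.exp(a - a.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 784))                        # input layer: one MNIST-sized example
w1, b1 = rng.normal(size=(784, 128)), np.zeros(128)  # connections input -> hidden
w2, b2 = rng.normal(size=(128, 10)), np.zeros(10)    # connections hidden -> output

z = sigmoid(x @ w1 + b1)   # hidden layer: z = f(x w1 + b1), f non-linear
y = softmax(z @ w2 + b2)   # output layer: y = softmax(z w2 + b2)
# Cross-entropy then compares y with the true label; it is the cost
# function used for training, not an extra layer.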
The network in your edit has 1 hidden layer.
The answer you link to, I would call a 2-hidden-layer NN.
Your input layer is the X vector.
Your layer Wx + b is the hidden layer, i.e. the box in your picture.
The output layer is the softmax.
The cross-entropy is your loss/cost function, and is not a layer at all.
