Implementing convolutional neural network backprop in ArrayFire (gradient calculation) - arrayfire

I modified equation 9.12 in http://www.deeplearningbook.org/contents/convnets.html to center the MxN convolution kernel.
That gives the following expression (take it on faith for now) for the gradient, assuming 1 input and 1 output channel (to simplify):
dK(krow, kcol) = sum(G(row, col) * V(row+krow-M/2, col+kcol-N/2); row, col)
To read the above, the single element of dK at krow, kcol is equal to the sum over all of the rows and cols of the product of G times a shifted V. Note G and V have the same dimensions. We will define going outside V to result in a zero.
For example, in one dimension, if G is [a b c d], V is [w x y z], and M is 3, then the first sum is dot (G, [0 w x y]), the second sum is dot (G, [w x y z]), and the third sum is dot (G, [x y z 0]).
ArrayFire has a shift operation, but it does a circular shift, rather than a shift with zero insertion. Also, the kernel sizes MxN are typically small, e.g., 7x7, so it seems a more optimal implementation would read in G and V once only, and accumulate over the kernel.
For that 1D example, we would read in a together with w and x and start with [a*0, aw, ax]. Then we read in b and y and add [bw, bx, by]. Then we read in c and z and add [cx, cy, cz]. Then we read in d and finally add [dy, dz, d*0].
Is there a direct way to compute dK in ArrayFire? I can't help but think this is some kind of convolution, but I've been unable to wrap my head around what the convolution would look like.
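For reference, the formula itself is easy to pin down in plain NumPy before worrying about an efficient ArrayFire version. A minimal sketch (my own function name and padding convention; M/2 and N/2 taken as integer division):

import numpy as np

def dK_reference(G, V, M, N):
    # Direct evaluation of dK(krow, kcol); values outside V are treated as zero.
    R, C = G.shape
    Vpad = np.pad(V, ((M // 2, M // 2), (N // 2, N // 2)))  # zero padding around V
    dK = np.zeros((M, N))
    for krow in range(M):
        for kcol in range(N):
            # V(row + krow - M/2, col + kcol - N/2) is a shifted window of the padded V
            dK[krow, kcol] = np.sum(G * Vpad[krow:krow + R, kcol:kcol + C])
    return dK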

Ah so. For a 3x3 dK array, I could use unwrap to convert my two input arrays into two column vectors, and then do 9 dot products of shifted subsets of those column vectors. No, that doesn't work, since the shift is in two dimensions.
So I need to create intermediate arrays of size 1 x (MxN) and (MxN) x 9, where each column of the latter is a shifted MxN window of the original, zero-padded with a border of width 1, and then do a matrix multiply.
Hmm, that (sometimes) requires too much memory. So the final solution is to do a gfor over the 3x3 output, and in each iteration do a dot product of the unwrapped-once G and the unwrapped-repeatedly V.
Agreed?
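For what it's worth, the unwrap-and-matmul version of that idea looks roughly like this in NumPy (a sketch only; the names are mine, and the 3x3 kernel with a zero border of width 1 is hard-coded). Whether it beats the gfor-plus-dot-products version depends on whether the (rows*cols) x 9 intermediate fits in memory:

import numpy as np

def dK_matmul(G, V):
    # G, V: same-shaped 2-D arrays; returns the 3x3 gradient dK.
    H, W = G.shape
    Vpad = np.pad(V, 1)                  # zero border of width 1
    cols = np.empty((H * W, 9))
    k = 0
    for dr in range(3):
        for dc in range(3):
            cols[:, k] = Vpad[dr:dr + H, dc:dc + W].ravel()  # one shifted window per column
            k += 1
    return (G.ravel() @ cols).reshape(3, 3)  # 1 x (H*W) times (H*W) x 9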

Related

Backpropagation example

I'm trying to implement my own neural network, but I'm not very confident that my math is correct.
I'm doing MNIST digit recognition, so I have a softmaxed output of 10 probabilities. I then compute my delta output thus:
delta_output = vector of outputs - one-hot encoded actual label
delta_output is a matrix of dimension 10 x 1.
Then I compute the delta for the weights of the last hidden layer thus:
delta_hidden = weight_hidden.transpose * delta_output * der_hidden_activation(output_hidden)
Assuming there are N nodes in the last hidden layer, weight_hidden is a matrix of dimension of N by 10, delta_output from above is 10x1, and the result of der_hidden_activation(output_hidden) is N x 1.
Now, my first question here is: should the multiplication of delta_output and der_hidden_activation(output_hidden) return a 10 x N matrix using an outer product? I think that I need to do a Hadamard product of this resulting matrix with the untouched weights to get delta_hidden to be N x 10 still.
Finally I multiply this delta_hidden by my learning rate and subtract it from the original weight of the last hidden layer to get my new weights.
My second and final question here is, did I miss anything?
Thanks in advance.
Assuming there are N nodes in the last hidden layer, weight_hidden is a matrix of dimension of N by 10, delta_output from above is 10x1, and the result of der_hidden_activation(output_hidden) is N x 1.
When going from a layer of N neurons (hidden layer), to a layer of M neurons (output layer), under matrix multiplication, the weight matrix dimensions should be M x N. So for the back propagation stage, going from layer of M neurons (output), to a layer of N neurons (hidden layer), use the transpose of the weight matrix which will give you a matrix of dimension (N x M).
Now, my first question here is: should the multiplication of delta_output and der_hidden_activation(output_hidden) return a 10 x N matrix using an outer product?
I think that I need to do a Hadamard product of this resulting matrix with the untouched weights to get delta_hidden to be N x 10 still.
Yes, you need to use the Hadamard product; however, you can't multiply delta_output and der_hidden_activation(output_hidden) directly, since these are matrices of different dimensions (10 x 1 and N x 1, respectively). Instead, you multiply the transpose of the hidden_weight matrix (N x 10) by delta_output (10 x 1) to get an N x 1 matrix, and then perform the Hadamard product with der_hidden_activation(output_hidden).
If I am translating this correctly, the pieces are: the transpose of the hidden_weight matrix (N x 10), delta_output (10 x 1), delta_hidden (N x 1), and der_hidden_activation(output_hidden) (N x 1).
Plugging these into the BP formula:
delta_hidden = (hidden_weight.transpose * delta_output) ∘ der_hidden_activation(output_hidden)
where * is a matrix multiplication and ∘ is the Hadamard (element-wise) product.
As you can see, you need to multiply the transpose of weight_hidden (N x 10) with delta_output (10 x 1) first to produce an N x 1 matrix, and then take the Hadamard product with der_hidden_activation(output_hidden).
Finally I multiply this delta_hidden by my learning rate and subtract it from the original weight of the last hidden layer to get my new weights.
You don't multiply delta_hidden by the learning rate directly. You apply the learning rate to the bias and to a delta weight matrix.
The delta weight matrix is a matrix of the same dimensions as your (hidden) weight matrix and is calculated as the outer product of the layer's delta with the activations feeding into it, i.e. delta_weight = delta_output * output_hidden.transpose (10 x 1 times 1 x N, giving 10 x N).
And then you can easily apply the learning rate: weight_hidden := weight_hidden - learning_rate * delta_weight.
Incidentally, I just answered a similar question on AISE which might help shed some light; it goes into more detail about the matrices to use during backpropagation.
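To make the shapes concrete, here is a small NumPy sketch of the update described above (the hidden-layer size, the sigmoid hidden activation, and all variable names are illustrative assumptions, not the asker's actual code):

import numpy as np

N = 32                                        # hypothetical hidden-layer size
rng = np.random.default_rng(0)
W = rng.normal(size=(10, N))                  # hidden -> output weights (M x N convention)
output_hidden = rng.random(N)                 # activations of the last hidden layer
probs = rng.random(10); probs /= probs.sum()  # stand-in for the softmax output
one_hot = np.eye(10)[3]                       # one-hot encoded true label
lr = 0.1

delta_output = probs - one_hot                        # 10 x 1
der_hidden = output_hidden * (1 - output_hidden)      # sigmoid derivative (assumed), N x 1
delta_hidden = (W.T @ delta_output) * der_hidden      # (N x 10)(10 x 1), then Hadamard product
delta_weight = np.outer(delta_output, output_hidden)  # 10 x N, same shape as W
W -= lr * delta_weight                                # apply the learning rate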

Why does fundamental matrix have 7 degrees of freedom?

There are 9 parameters in the fundamental matrix to relate the pixel co-ordinates of left and right images but only 7 degrees of freedom (DOF).
The reasoning for this on several pages that I've searched says:
Homogeneous equations mean we lose a degree of freedom
The determinant of F = 0, therefore we lose another degree of freedom.
I don't understand why those 2 reasons mean we lose 2 DOF - can someone explain it?
We initially have 9 DOF because the fundamental matrix is composed of 9 parameters, which implies that we need 9 corresponding points to compute the fundamental matrix (F). But because of the following two reasons, we only need 7 corresponding points.
Reason 1
We lose 1 DOF because we are using homogeneous coordinates. This is basically a way to represent nD points in vector form by adding an extra dimension, i.e., a 2D point (0,2) can be represented as [0,2,1], in general [x,y,1]. There are useful properties when using homogeneous coordinates with 2D/3D transformations, but I'm going to assume you know them.
Now given the expression p and p' representing pixel coordinates:
p'=[u',v',1] and p=[u,v,1]
the fundamental matrix:
F = [f1, f2, f3]
    [f4, f5, f6]
    [f7, f8, f9]
and fundamental matrix equation:
(transposed p')Fp = 0
When we multiply this expression out in algebraic form, we get the following:
uu'f1 + vu'f2 + u'f3 + uv'f4 + vv'f5 + v'f6 + uf7 + vf8 + f9 = 0.
Writing this as a homogeneous system of linear equations, Af = 0 (basically a factorization of the formula above), we get two components, A and f.
A:
[uu',vu',u', uv',vv',v',u,v,1]
f (f is essentially the fundamental matrix in vector form):
[f1, f2, f3, f4, f5, f6, f7, f8, f9]
Now, because we are working in homogeneous coordinates, F is only defined up to scale, so one of the nine parameters can be fixed (e.g. set to 1). That leaves 8 unknowns, and therefore we only need 8 equations (i.e. 8 point correspondences) now.
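As an aside, this Af = 0 system is exactly what the linear (eight-point style) estimation solves. A bare-bones NumPy sketch, with pts1 and pts2 as (K, 2) arrays of matching pixel coordinates, and with no coordinate normalization or rank-2 enforcement (which real implementations add):

import numpy as np

def estimate_F_linear(pts1, pts2):
    u, v = pts1[:, 0], pts1[:, 1]    # p  = [u, v, 1]
    up, vp = pts2[:, 0], pts2[:, 1]  # p' = [u', v', 1]
    A = np.column_stack([u*up, v*up, up, u*vp, v*vp, vp, u, v, np.ones(len(u))])
    _, _, Vt = np.linalg.svd(A)      # least-squares solution of A f = 0
    return Vt[-1].reshape(3, 3)      # the f with the smallest singular value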
Reason 2
det F = 0.
A determinant is a value that can be obtained from a square matrix.
I'm not entirely sure about the mathematical details of this property but I can still infer the basic idea, and, hopefully, you can as well.
Basically given some matrix A
A = [a, b, c]
    [d, e, f]
    [g, h, i]
The determinant can be computed using this formula:
det A = aei+bfg+cdh-ceg-bdi-afh
If we write out the determinant of the fundamental matrix, the algebra looks like this:
F = [f1, f2, f3]
    [f4, f5, f6]
    [f7, f8, f9]
det F = (f1*f5*f9) + (f2*f6*f7) + (f3*f4*f8) - (f3*f5*f7) - (f2*f4*f9) - (f1*f6*f8)
Now we know the determinant of the fundamental matrix is zero:
det F = (f1*f5*f9) + (f2*f6*f7) + (f3*f4*f8) - (f3*f5*f7) - (f2*f4*f9) - (f1*f6*f8) = 0
So, if we work out only 7 of the 9 parameters of the fundamental matrix, we can work out the last parameter using the above determinant equation.
Therefore the fundamental matrix has 7DOF.
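To see that numerically: det F = 0 is linear in the last parameter, so given the other eight values (arbitrary numbers below, with f1*f5 != f2*f4) the ninth follows directly. A small check:

import numpy as np

f1, f2, f3, f4, f5, f6, f7, f8 = np.random.default_rng(1).random(8)
f9 = (f3*f5*f7 + f1*f6*f8 - f2*f6*f7 - f3*f4*f8) / (f1*f5 - f2*f4)  # from det F = 0
F = np.array([[f1, f2, f3], [f4, f5, f6], [f7, f8, f9]])
print(np.linalg.det(F))  # ~0 up to floating-point error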
The reasons why F has only 7 degrees of freedom are
F is a 3x3 homogeneous matrix. Homogeneous means there is a scale ambiguity in the matrix, so the scale doesn't matter (as shown in Curator Corpus's example above). This drops one degree of freedom.
F is a matrix with rank 2. It is not a full rank matrix, so it is singular and its determinant is zero (Proof here). The reason why F is a matrix with rank 2 is that it is mapping a 2D plane (image1) to all the lines (in image 2) that pass through the epipole (of image 2).
Hope it helps.
As for the highest-voted answer by nbro, I think it can be interpreted this way: from reason two, matrix F has rank 2, so its determinant being zero acts as a constraint on the parameters. With that constraint, we only need 7 points to determine the remaining variables (f1 to f8): 8 equations, 8 variables, leaving only one solution. So there are 7 DOF.

Programmatically performing gradient calculation

Let y = Relu(Wx) where W is a 2d matrix representing a linear transformation on x, a vector. Likewise, let m = Zy, where Z is a 2d matrix representing a linear transformation on y. How do I programmatically calculate the gradient of Loss = sum(m^2) with respect to W, where the power means take the element wise power of the resulting vector, and sum means adding all the elements together?
I can work this out slowly and mathematically by taking a hypothetical example, multiplying it all out, and then taking the derivative element by element to construct the gradient, but I can't figure out an efficient way to write a program once the network has more than one layer.
Say, for just one layer (m = Zy, take gradient wrt Z) I could just say
Loss = sum(m^2)
dLoss/dZ = 2m * y
where * is the outer product of the vectors, and I guess this is kind of like normal calculus and it works. Now for 2 layers + activation (gradient wrt W), if I try to do it like "normal" calculus and apply the chain rule I get:
dLoss/dW = 2m * Z * dRelu * x
where dRelu is the derivative of Relu(Wx) except here I have no idea what * means in this case to make it work.
Is there an easy way to calculate this gradient mathematically without basically multiplying it all out and deriving each separate element in the gradient? I'm really unfamiliar with matrix calculus, so if anyone could also give some mathematical intuition, if my attempt is completely wrong, that would be appreciated.
For the sake of convenience, let's ignore the ReLU for a moment. You have an input space X (of some size dimX) mapped to an intermediate space Y (of some size dimY), which is mapped to an output space m (of some size dimM). You then have W: X → Y, a matrix of shape [dimY, dimX], and Z: Y → m, a matrix of shape [dimM, dimY]. Finally, your loss is simply a function that maps the m space to a scalar value.
Let us walk the way backwards. As you correctly said, you want to compute the derivative of the loss w.r.t W and to do so you need to apply the chain rule all the way back. You then have:
dL/dW = dL/dm * dm/dY * dY/dW
dL/dm is of shape [dimM] (a scalar function with derivatives across dimM dimensions)
dm/dY is of shape [dimM, dimY] (an m-dimensional function with derivatives across dimY dimensions)
dY/dW is of shape [dimY, dimW] = [dimY, dimY, dimX] (a Y-dimensional function with derivatives across [dimY, dimX] dimensions)
Edit:
To make the last bit more clear, Y consists of dimY different values, so Y can be treated as dimY constituent functions. We need to apply the gradient operator on each of those mini-functions, all with respect to the basis vectors defined by W. More concretely, if W = [[w11, w12], [w21, w22], [w31, w32]] and x = [x1, x2], then Y = [y1, y2, y3] = [w11x1 + w12x2, w21x1 + w22x2, w31x1 + w32x2]. Then W defines a 6d space (3x2) across which we need to differentiate. We have dY/dW = [dy1/dW, dy2/dW, dy3/dW], and also dy1/dW = [[dy1/dw11, dy1/dw12], [dy1/dw21, dy1/dw22], [dy1/dw31, dy1/dw32]] = [[x1,x2],[0,0],[0,0]], a 3x2 matrix. So dY/dW is a [3,3,2] tensor.
As for the multiplication part: the operation here is tensor contraction (essentially matrix multiplication in higher-dimensional spaces). Practically, if you have a tensor A[[a1, a2, a3, ...], β] (i.e. a+1 dimensions, the last of which has size β) and a tensor B[β, [b1, b2, ...]] (i.e. b+1 dimensions, the first of which has size β), their tensor contraction is a tensor C[[a1, a2, ...], [b1, b2, ...]] (i.e. a+b dimensions, with the β dimension contracted), where C is obtained by summing element-wise products across the shared dimension β (refer to https://docs.scipy.org/doc/numpy/reference/generated/numpy.tensordot.html#numpy.tensordot).
The resulting tensor contraction then is a matrix of shape [dimY, dimX] which can be used to update your W weights. The ReLU which we ignored earlier can easily be thrown in the mix, since ReLU: 1 → 1 is a scalar function applied element-wise on Y.
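For instance, with made-up sizes dimX = 4, dimY = 3, dimM = 2, the chain of contractions looks like this in NumPy:

import numpy as np

dimX, dimY, dimM = 4, 3, 2
dL_dm = np.random.rand(dimM)              # shape [dimM]
dm_dY = np.random.rand(dimM, dimY)        # shape [dimM, dimY]
dY_dW = np.random.rand(dimY, dimY, dimX)  # shape [dimY, dimY, dimX]

# contract the shared dimM dimension, then the shared dimY dimension
dL_dY = np.tensordot(dL_dm, dm_dY, axes=1)  # shape [dimY]
dL_dW = np.tensordot(dL_dY, dY_dW, axes=1)  # shape [dimY, dimX], same shape as W
print(dL_dW.shape)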
To summarize, your code would be (writing the ReLU derivative as an indicator on Wx):
W_gradient = np.outer(Z.T.dot(2 * m) * (W.dot(x) > 0), x)
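As a sanity check, here is a self-contained version of that gradient compared against a finite-difference estimate (all sizes and names below are illustrative):

import numpy as np

rng = np.random.default_rng(0)
dimX, dimY, dimM = 4, 3, 2
W = rng.normal(size=(dimY, dimX))
Z = rng.normal(size=(dimM, dimY))
x = rng.normal(size=dimX)

def loss(W):
    y = np.maximum(W @ x, 0.0)  # ReLU(Wx)
    m = Z @ y
    return np.sum(m ** 2)

# analytic gradient via the chain rule described above
m = Z @ np.maximum(W @ x, 0.0)
W_grad = np.outer((Z.T @ (2 * m)) * (W @ x > 0), x)

# finite-difference check
eps = 1e-6
num = np.zeros_like(W)
for i in range(dimY):
    for j in range(dimX):
        Wp = W.copy()
        Wp[i, j] += eps
        num[i, j] = (loss(Wp) - loss(W)) / eps
print(np.abs(W_grad - num).max())  # tiny: the analytic and numeric gradients agree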
I just implemented several multilayer perceptrons (MLPs) from scratch in C++ [1], and I think I know your pain. And believe me, you don't even need any third-party matrix/tensor/automatic differentiation (AD) libraries to do the matrix multiplication or gradient calculation. There are three things you should pay attention to:
There are two kinds of multiplication in the equations, matrix multiplication and element-wise multiplication; you'll get confused if you denote them both with a single *.
Use concrete examples, especially concrete numbers as the dimensions of your data/matrices/vectors, to build intuition.
The most powerful tool for getting this right is dimension compatibility; never forget to check dimensions.
Suppose you want to do binary classification and the neural network is input -> h1 -> sigmoid -> h2 -> sigmoid -> loss, in which the input layer has 1 sample with 2 features, h1 has 7 neurons, and h2 has 2 neurons. Then:
forward pass:
Z1(1, 7) = X(1, 2) * W1(2, 7)
A1(1, 7) = sigmoid(Z1(1, 7))
Z2(1, 2) = A1(1, 7) * W2(7, 2)
A2(1, 2) = sigmoid(Z2(1, 2))
Loss = 1/2(A2 - label)^2
backward pass:
dA2(1, 2) = dL/dA2 = A2 - label
dZ2(1, 2) = dL/dZ2 = dA2 * dsigmoid(A2_i) -- element wise
dW2(7, 2) = A1(1, 7).T * dZ2(1, 2) -- matrix multiplication
Notice the last equation, the dimension of the gradient of W2 should match W2, which is (7, 2). And the only way to get a (7, 2) matrix is to transpose input A1 and multiply A1 with dZ2, that's dimension compatibility[2].
backward pass continued:
dA1(1, 7) = dZ2(1, 2) * W2.T(2, 7) -- matrix multiplication
dZ1(1, 7) = dA1(1, 7) * dsigmoid(A1_i) -- element wise
dW1(2, 7) = X.T(2, 1) * dZ1(1, 7) -- matrix multiplication
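Putting those equations into NumPy directly (one sample, made-up random numbers; the variable names mirror the equations above):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.random((1, 2))          # input: 1 sample, 2 features
label = np.array([[1.0, 0.0]])  # one-hot target, shape (1, 2)
W1 = rng.normal(size=(2, 7))
W2 = rng.normal(size=(7, 2))

# forward pass
Z1 = X @ W1
A1 = sigmoid(Z1)                # (1, 7)
Z2 = A1 @ W2
A2 = sigmoid(Z2)                # (1, 2)
loss = 0.5 * np.sum((A2 - label) ** 2)

# backward pass (element-wise products for the activation derivatives)
dA2 = A2 - label                # (1, 2)
dZ2 = dA2 * A2 * (1 - A2)       # (1, 2)
dW2 = A1.T @ dZ2                # (7, 2), matches W2
dA1 = dZ2 @ W2.T                # (1, 7)
dZ1 = dA1 * A1 * (1 - A1)       # (1, 7)
dW1 = X.T @ dZ1                 # (2, 7), matches W1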
[1] The code is here; you can see the hidden layer implementation, the naive matrix implementation, and the references listed there.
[2] I omit the matrix derivation part; it's actually simple, but hard to type the equations out. I strongly suggest you read this paper: every tiny detail you need to know about matrix derivatives in DL is listed in it.
[3] One sample is used as input in the above example (as a vector); you can substitute 1 with any batch size (making it a matrix), and the equations still hold.

Vectorization of multivariable gradient descent

I've been doing homework 1 in Andrew Ng's machine learning course, but I'm stuck on my understanding of what he was talking about when vectorizing multivariable gradient descent.
His equation is presented as follows:
theta := theta - alpha*f
f is supposed to be created by 1/m * sum over i of (h(xi) - yi) * Xi, where i is the index.
Now here is where I get confused. I know that h(xi) - y(i) can be rewritten as theta*xi, where xi represents a row of feature elements (1 x n) and theta represents a column (n x 1), producing a scalar which I then subtract from an individual value of y, and which I then multiply by Xi, where Xi represents a column of feature values?
So that would give me an m x 1 vector? Which then has to be subtracted from an n x 1 vector?
Is it that Xi represents a row of feature values? And if so, how can I do this without indexing over all of these rows?
I'm specifically referring to this image:
I'll explain it with the non-vectorized implementation.
So that would give me an m x 1 vector? Which then has to be subtracted from an n x 1 vector?
Yes, it will give you an m x 1 vector, but instead of being subtracted from an n x 1 vector, it has to be subtracted from an m x 1 vector too. How?
I know that h(xi) - y(i) can be rewritten as theta*xi, where xi represents a row of feature elements (1 x n) and theta represents a column (n x 1), producing a scalar
You have answered it yourself, actually: theta * xi produces a scalar, so when you have m samples it will give you an m x 1 vector. If you look carefully at the equation, the scalar result from h(xi) - y(i) is multiplied by a scalar too, namely x0 from sample i (x sup i sub 0), so it gives you a scalar result, or an m x 1 vector if you have m samples.
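In the vectorized form that indexing disappears entirely. A minimal NumPy sketch (my own variable names), assuming X is the m x n design matrix with a leading column of ones, y is m x 1, and theta is n x 1:

import numpy as np

def gradient_descent_step(theta, X, y, alpha):
    m = len(y)
    h = X @ theta                # predictions for all m samples at once
    grad = (X.T @ (h - y)) / m   # n x 1: entry j is (1/m) * sum_i (h(x_i) - y_i) * x_i_j
    return theta - alpha * grad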

visualizing hyperplane equation of SVM

I have been trying to understand the SVM algorithm and I cannot fully get the hyperplane equation. The equation is w.x - b = 0.
What I understand (with lots of confusion) is: x is the unknown set of all the vectors that constitute the hyperplane, and w is the normal vector to that hyperplane. We do not know w; we need to find the optimal w from the training set.
Now, we all know that if two vectors are perpendicular to each other then their dot product is zero. So, if w is normal to x, then that means it should be w.x = 0, but why does it say w.x - b = 0, or w.x = b? (Normal and perpendicular are the same thing, right?) In the usual sense, what I understand is that if w.x = b, then w and x are not perpendicular and the angle between them is more or less than 90 degrees.
Another thing is, in most tutorials (even the MIT professor in his lecture) it is said that x is projected onto w, but as far as I know, if I want to take the projection of x onto w it should be x.w/|w| (without the direction of w), not just w.x. Am I right on this point?
I think I am missing something or misunderstanding something. Can anybody help me with this?
First, to get the definitions straight:
The projection of x onto w is (x.w / |w|²) w, and x.w / |w| is the component of x in the direction of w (since w/|w| is a unit vector in the direction of w).
Then, you might be confusing two things:
If x is a vector in a hyperplane (through the origin), then x.w = 0 is the equation of that hyperplane. Unfortunately, we do not want any of your x to be on the hyperplane.
In the case of SVM, you do not know any vector x on the hyperplane. Instead, you have a training set {(x1,y1), ..., (xN, yN)} from which you want to find the normal vector w of the hyperplane (then you can describe any vector x of this hyperplane knowing that w.x = 0).
So let's inspect the second point, where you have a dataset {(x1, y1), ..., (xN, yN)} and you want to find the hyperplane equation, i.e. its normal vector w, thanks to some particular vectors (called the support vectors).
There is no reason why any of these xi should be normal to w. Moreover, it's impossible for all x to be normal to the hyperplane (If so, let's consider two vectors x1 != x2. Then w.x1 = 0 = w.x2 => w.(x1-x2) = 0 meaning that either w = 0 or x1=x2)
But what we want is that w.U >= C if U is a positive sample (one side of the hyperplane) and w.U < C if U is a negative sample (the other side of the hyperplane).
Instead of an arbitrary U, we can use the vectors of the dataset. We expect them to be at a certain distance D (called the gutter in the lecture) from this hyperplane. So we have w.xi >= C + D if yi is positive, and w.xi < C - D if yi is negative.
Let's put b = -C and D = 1 (without loss of generality). Then w.xi + b >= 1 if yi is positive, and w.xi + b < -1 if yi is negative.
Multiplying by yi (equal to 1 if xi is positive, or -1 otherwise), this leads to yi(w.xi + b) >= 1.
Finally, by taking the support vectors, i.e. those defining the gutter, we obtain yi(w.xi + b) - 1 = 0.
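A tiny numeric illustration of that last constraint (w, b, and the points below are made up): the points where yi(w.xi + b) is exactly 1 sit on the gutter and are the support vectors.

import numpy as np

w = np.array([1.0, 1.0])
b = -3.0
X = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0], [1.0, 3.0]])
y = np.array([-1, 1, 1, 1])

margins = y * (X @ w + b)
print(margins)  # [1. 1. 3. 1.] -> the points with margin 1 are on the gutter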

Resources