How do previous Activation Maps in CNNs affect the next Activation Map - machine-learning

Say I have a CNN with 2 layers. To consume a picture of 25x25 RGB pixels, the first layer has filter-size = 50, kernel size = 5x5, stride = 1x1, padding = 0x0, and the second layer has the same parameters except that filter-size = 100. Now I know that my activation map from the 1st layer's pass is 21x21x3 in dimension (the x3 is due to RGB), and this means I have 50 activation maps of 21x21x3, created by applying 50 different filters to the input picture.
My question is: for the second pass, since my filter-size = 100, does that mean that the 50 activation maps from layer 1 are each passed through the 100 filters of the second layer as receptive fields, so that after the second pass I have a total of 100x50 activation maps? Or are the 50 activation maps fused into a single unit before being passed, so that the 2nd layer still produces only 100 activation maps?

You have a slight misconception here. Instead of keeping the channel dimension of your input image (in your case 3, since RGB), the convolution convolves across all of the channels at once. This means the output of your convolution operator C for a specific image region x of size 5x5x3 is only a single value, not a vector of size 1x3.
The number of activation maps (I like to call them feature maps) then simply reflects how many different convolutional filters you have, and you get that many "output dimensions" stacked. In your example, the output would not be 21x21x3, but instead 21x21x50.
For the next layer, you would similarly take an input of 5x5x50 (I'm assuming you are using the same kernel size) and again produce only a single value per position. This time you have 100 output stacks, so the resulting size would be 17x17x100.
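If it helps to double-check the arithmetic, here is a minimal PyTorch sketch of the two layers described above (PyTorch and the variable names are my own choice, not part of the question; the layer sizes follow the numbers given):

import torch
import torch.nn as nn

# 50 filters of size 5x5x3, then 100 filters of size 5x5x50, on a 25x25 RGB image
layer1 = nn.Conv2d(in_channels=3, out_channels=50, kernel_size=5, stride=1, padding=0)
layer2 = nn.Conv2d(in_channels=50, out_channels=100, kernel_size=5, stride=1, padding=0)

x = torch.randn(1, 3, 25, 25)   # one 25x25 RGB image
a1 = layer1(x)
a2 = layer2(a1)
print(a1.shape)  # torch.Size([1, 50, 21, 21])  -> 50 activation maps of 21x21
print(a2.shape)  # torch.Size([1, 100, 17, 17]) -> 100 activation maps, not 100x50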

Related

About max-pooling?

Max-pooling is useful in vision for two reasons:
1. By eliminating non-maximal values, it reduces computation for upper layers.
2. It provides a form of translation invariance. Imagine cascading a max-pooling layer with a convolutional layer. There are 8 directions in which one can translate the input image by a single pixel. If max-pooling is done over a 2x2 region, 3 out of these 8 possible configurations will produce exactly the same output at the convolutional layer. For max-pooling over a 3x3 window, this jumps to 5/8.
Since it provides additional robustness to position, max-pooling is a “smart” way of reducing the dimensionality of intermediate representations.
I can't understand: what does "8 directions" mean? And what does "If max-pooling is done over a 2x2 region, 3 out of these 8 possible configurations will produce exactly the same output at the convolutional layer. For max-pooling over a 3x3 window, this jumps to 5/8." mean?
There are 8 directions in which one can translate the input image by a single pixel.
They are considering 2 horizontal, 2 vertical and 4 diagonal 1-pixel shifts. That gives 8 in total.
If max-pooling is done over a 2x2 region, 3 out of these 8 possible configurations will produce exactly the same output at the convolutional layer. For max-pooling over a 3x3 window, this jumps to 5/8.
Imagine we are taking the maximum value in a 2x2 region of an image. The image is pre-convolved, though it doesn't matter for the purpose of this explanation.
No matter where exactly in a 2x2 region the maximum value resides, there will be 3 possible 1-pixel translations of the image that result in the maximum value remaining in that particular 2x2 region. Of course an even greater value may be brought from a neighbouring region, but that's beside the point. The point is you get some translation invariance.
With a 3x3 region it gets more complex, as the number of 1-pixel translations that keep the maximum value within the region depends on where exactly in the region that maximum value resides. The 5 translations they mention correspond to a location in the middle of an edge in a 3x3 pixel block. A corner location will give 3 translations, while the center one will give all 8.
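A tiny numpy sketch of the counting argument (my own illustration, not from the quoted tutorial): place a single strong activation at a corner of a 2x2 pooling region and check how many of the 8 one-pixel shifts leave the pooled output unchanged.

import numpy as np

def max_pool_2x2(img):
    # non-overlapping 2x2 max-pooling
    h, w = img.shape
    return img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

img = np.zeros((6, 6))
img[2, 2] = 1.0  # one strong activation sitting at a pooling-region corner
base = max_pool_2x2(img)

shifts = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)]
unchanged = sum(np.array_equal(max_pool_2x2(np.roll(img, (dy, dx), axis=(0, 1))), base)
                for dy, dx in shifts)
print(unchanged, "of 8 one-pixel shifts leave the pooled output unchanged")  # 3 of 8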

How does the unpooling and deconvolution work in DeConvNet

I have been trying to understand how unpooling and deconvolution works in DeConvNets.
Unpooling
During the unpooling stage, the activations are restored to the locations of the maximum activations selected during pooling, which makes sense. But what about the remaining activations? Do those remaining activations need to be restored as well, interpolated in some way, or just filled with zeros in the unpooled map?
Deconvolution
After the convolution section (i.e., convolution layer, ReLU, pooling), it is common to have more than one feature map as output, and these would be treated as input channels to the successive layers (Deconv..). How could these feature maps be combined together in order to achieve an activation map with the same resolution as the original input?
Unpooling
As etoropov wrote, you can read about unpooling in Visualizing and Understanding Convolutional Networks by Zeiler and Fergus:
Unpooling: In the convnet, the max pooling operation is non-invertible, however we can obtain an approximate inverse by recording the locations of the maxima within each pooling region in a set of switch variables. In the deconvnet, the unpooling operation uses these switches to place the reconstructions from the layer above into appropriate locations, preserving the structure of the stimulus. See Fig. 1 (bottom) for an illustration of the procedure.
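As a concrete (hedged) illustration of these switch variables, PyTorch exposes exactly this mechanism through return_indices and MaxUnpool2d; the sizes below are arbitrary:

import torch
import torch.nn as nn

pool = nn.MaxPool2d(kernel_size=2, stride=2, return_indices=True)
unpool = nn.MaxUnpool2d(kernel_size=2, stride=2)

x = torch.arange(16.).reshape(1, 1, 4, 4)
pooled, switches = pool(x)           # switches record where each maximum came from
restored = unpool(pooled, switches)  # maxima go back to their locations, the rest is zero
print(restored)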
Deconvolution
Deconvolution works like this:
You add padding around each pixel
You apply a convolution
For example, in the illustration from the source below, the original blue image is padded with zeros (white), and the gray convolution filter is applied to get the green output.
Source: What are deconvolutional layers?
1. Unpooling
In the original paper on unpooling, the remaining activations are zeroed.
2. Deconvolution
A deconvolutional layer is just the transpose of its corresponding conv layer. E.g. if the conv layer's shape is [height, width, previous_layer_fms, next_layer_fms], then the deconv layer will have the shape [height, width, next_layer_fms, previous_layer_fms]. The weights of the conv and deconv layers are shared! (See this paper for instance.)
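A small PyTorch sketch of that shape relationship (the 50/100 channel counts and the 21x21 input are illustrative, borrowed from the first question above, not from the paper):

import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=50, out_channels=100, kernel_size=5)             # 21x21x50 -> 17x17x100
deconv = nn.ConvTranspose2d(in_channels=100, out_channels=50, kernel_size=5)  # 17x17x100 -> 21x21x50

x = torch.randn(1, 50, 21, 21)
y = conv(x)
print(y.shape)          # torch.Size([1, 100, 17, 17])
print(deconv(y).shape)  # torch.Size([1, 50, 21, 21])
# conv.weight and deconv.weight even have the same shape here (100, 50, 5, 5),
# so the weights could be tied, as the shared-weights remark above suggests.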

How to transform filter when using FFT to do 2d convolution?

I want to use the FFT to accelerate 2D convolution. The filter is 15x15 and the image is 300x300. The filter's size is different from the image's, so I cannot do an element-wise product after the FFT. How should I transform the filter before the FFT so that its size matches the image?
I use the convention that N is the kernel size.
Since the convolution is not defined (mathematically) on the edges (N//2 pixels at each end of each dimension), you would lose N - 1 pixels in total on each axis.
You need to make room for the convolution: pad the image with enough "neutral values" so that the edge cases (the junk values inserted there) disappear.
This would involve making your image a 314x314 px image (with suitable padding values, see the next paragraph), which after convolution gives back a 300x300 image.
Popular image processing libraries already have this embedded: when you ask for a convolution, you have extra arguments specifying the "mode".
Which values can we pad with ?
Stolen with no shame from Numpy's pad documentation
'constant': Pads with a constant value.
'edge': Pads with the edge values of the array.
'linear_ramp': Pads with the linear ramp between end_value and the array edge value.
'maximum': Pads with the maximum value of all or part of the vector along each axis.
'mean': Pads with the mean value of all or part of the vector along each axis.
'median': Pads with the median value of all or part of the vector along each axis.
'minimum': Pads with the minimum value of all or part of the vector along each axis.
'reflect': Pads with the reflection of the vector mirrored on the first and last values of the vector along each axis.
'symmetric': Pads with the reflection of the vector mirrored along the edge of the array.
'wrap': Pads with the wrap of the vector along the axis. The first values are used to pad the end and the end values are used to pad the beginning.
It's up to you, really, but the rule of thumb is "choose neutral values for the task at hand".
(For instance, padding with 0 when doing averaging makes little sense, because 0 is not neutral in an average of positive values)
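A quick numpy sketch of a few of these modes on a toy 1D array (numpy is assumed only because the list above comes from its pad documentation):

import numpy as np

a = np.array([1.0, 2.0, 3.0])
print(np.pad(a, 2, mode="constant", constant_values=0))  # [0. 0. 1. 2. 3. 0. 0.]
print(np.pad(a, 2, mode="edge"))                         # [1. 1. 1. 2. 3. 3. 3.]
print(np.pad(a, 2, mode="reflect"))                      # [3. 2. 1. 2. 3. 2. 1.]
print(np.pad(a, 2, mode="wrap"))                         # [2. 3. 1. 2. 3. 1. 2.]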
It depends on the algorithm you use for the FFT, because most of them need to work with images of dyadic dimensions (powers of 2).
Here is what you have to do:
1. Pad the image: center your image in a bigger one with dyadic dimensions.
2. Pad the kernel: center your convolution kernel in an image with the same dimensions as in step 1.
3. FFT the image from step 1.
4. FFT the kernel from step 2.
5. Complex multiplication (in Fourier space) of the results from steps 3 and 4.
6. Inverse FFT of the resulting image from step 5.
7. Unpad the resulting image from step 6.
8. Put all 4 blocks into the right order.
If the algorithm you use does not need dyadic dimensions, then step 1 is unnecessary and step 2 reduces to a simple padding to the image dimensions.
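Putting the steps above together, here is a minimal numpy/scipy sketch for the 300x300 image and 15x15 kernel from the question (numpy's FFT does not require dyadic sizes, so the image is only padded to the full linear-convolution size; the random arrays are placeholders):

import numpy as np
from scipy.signal import convolve2d  # only used as a reference check

rng = np.random.default_rng(0)
image = rng.random((300, 300))
kernel = rng.random((15, 15))

# pad both to the full linear-convolution size to avoid circular wrap-around
out_h = image.shape[0] + kernel.shape[0] - 1  # 314
out_w = image.shape[1] + kernel.shape[1] - 1  # 314

F_image = np.fft.fft2(image, s=(out_h, out_w))
F_kernel = np.fft.fft2(kernel, s=(out_h, out_w))
full = np.real(np.fft.ifft2(F_image * F_kernel))  # 314x314 "full" convolution

# crop the central 300x300 region to mimic "same" mode
start = (kernel.shape[0] - 1) // 2
same = full[start:start + 300, start:start + 300]

assert np.allclose(same, convolve2d(image, kernel, mode="same"))
print(same.shape)  # (300, 300)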

What is Depth of a convolutional neural network?

I was taking a look at Convolutional Neural Networks from CS231n Convolutional Neural Networks for Visual Recognition. In a Convolutional Neural Network, the neurons are arranged in 3 dimensions (height, width, depth). I am having trouble with the depth of the CNN. I can't visualize what it is.
In the link they said: The CONV layer's parameters consist of a set of learnable filters. Every filter is small spatially (along width and height), but extends through the full depth of the input volume.
For example, look at this picture. Sorry if the image is too crappy.
I can grasp the idea that we take a small area of the image and compare it with the "filters". So will the filters be collections of small images? They also said We will connect each neuron to only a local region of the input volume. The spatial extent of this connectivity is a hyperparameter called the receptive field of the neuron. So does the receptive field have the same dimensions as the filters? Also, what will the depth be here? And what do we signify by the depth of a CNN?
So, my question mainly is: if I take an image with dimensions [32*32*3] (let's say I have 50000 of these images, making the dataset [50000*32*32*3]), what should I choose as its depth and what would the depth mean? Also, what will the dimensions of the filters be?
Also, it would be very helpful if anyone could provide a link that gives some intuition on this.
EDIT:
So in one part of the tutorial(Real-world example part), it says The Krizhevsky et al. architecture that won the ImageNet challenge in 2012 accepted images of size [227x227x3]. On the first Convolutional Layer, it used neurons with receptive field size F=11, stride S=4 and no zero padding P=0. Since (227 - 11)/4 + 1 = 55, and since the Conv layer had a depth of K=96, the Conv layer output volume had size [55x55x96].
Here we see the depth is 96. So is depth something that I choose arbitrarily, or something I compute? Also, in the example above (Krizhevsky et al.) they had 96 depths. So what does it mean that it has 96 depths? The tutorial also stated Every filter is small spatially (along width and height), but extends through the full depth of the input volume.
So does that mean the depth will be like this? If so, can I assume Depth = Number of Filters?
In Deep Neural Networks the depth refers to how deep the network is, but in this context the depth is used for visual recognition and it translates to the 3rd dimension of an image.
In this case you have an image, and the size of this input is 32x32x3, which is (width, height, depth). The neural network should be able to learn based on these parameters, as depth translates to the different channels of the training images.
UPDATE:
In each layer of your CNN it learns regularities about the training images. In the very first layers, the regularities are curves and edges; as you go deeper through the layers, you start learning higher levels of regularities such as colors, shapes, objects, etc. This is the basic idea, but there are lots of technical details. Before going any further, give this a shot: http://www.datarobot.com/blog/a-primer-on-deep-learning/
UPDATE 2:
Have a look at the first figure in the link you provided. It says 'In this example, the red input layer holds the image, so its width and height would be the dimensions of the image, and the depth would be 3 (Red, Green, Blue channels).' It means that a ConvNet transforms the input image by arranging its neurons in three dimensions.
As an answer to your question, depth corresponds to the different color channels of an image.
Moreover, about the filter depth. The tutorial states this.
Every filter is small spatially (along width and height), but extends through the full depth of the input volume.
Which basically means that a filter covers only a small spatial part of the image but extends through its full depth, and moves across the image in order to learn its regularities.
UPDATE 3:
For the real world example I just browsed the original paper and it says this : The first convolutional layer filters the 224×224×3 input image with 96 kernels of size 11×11×3 with a stride of 4 pixels.
In the tutorial, depth refers to the channels, but in the real world you can design whatever depth you like. After all, that is your design.
The tutorial aims to give you a glimpse of how ConvNets work in theory, but if I design a ConvNet nobody can stop me from proposing one with a different depth.
Does this make any sense?
The depth of a CONV layer is the number of filters it is using.
The depth of a filter is equal to the depth of the image (or volume) it uses as input.
For example: let's say you are using an image of 227*227*3.
Now suppose you are using a filter of spatial size 11*11.
This 11*11 square will be slid along the whole image to produce a single 2-dimensional array as a response. But in order to do so, it must cover every channel inside the 11*11 area. Therefore the depth of the filter will be the depth of the image = 3.
Now suppose we have 96 such filters, each producing a different response. This is the depth of the convolutional layer. It is simply the number of filters used.
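A hedged PyTorch sketch of that exact example (the framework choice is mine; the numbers match the 227x227x3 input and 96 filters of 11x11 with stride 4 mentioned above):

import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=96, kernel_size=11, stride=4)
x = torch.randn(1, 3, 227, 227)
print(conv(x).shape)      # torch.Size([1, 96, 55, 55]); (227 - 11)/4 + 1 = 55
print(conv.weight.shape)  # torch.Size([96, 3, 11, 11]); each filter is 11x11 with depth 3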
I'm not sure why this is skimped over so heavily. I also had trouble understanding it at first, and very few outside of Andrej Karpathy (thanks d00d) have explained it. Although, in his writeup (http://cs231n.github.io/convolutional-networks/), he calculates the depth of the output volume using a different example than in the animation.
Start by reading the section titled 'Numpy examples'.
Here, we go through it iteratively.
In this case we have an 11x11x4 input volume. (Why we start with a depth of 4 is a bit peculiar, as it would be easier to grasp with a depth of 3.)
Really pay attention to this line:
A depth column (or a fibre) at position (x,y) would be the activations X[x,y,:].
A depth slice, or equivalently an activation map at depth d would be the activations X[:,:,d].
V[0,0,0] = np.sum(X[:5,:5,:] * W0) + b0
V is your output volume. In V[0,0,0], the first index is the column position (0 here, the first column of the output volume), the second index is the row position (also 0, the first row), and the third index is the depth (0, the first output activation map).
Now, here's where people get confused (at least I did). The input depth has absolutely nothing to do with your output depth. The input depth only controls the filter depth (W in Andrej's example).
Aside: a lot of people wonder why 3 is the standard input depth. For plain color input images, it will always be 3 (RGB).
np.sum(X[:5,:5,:] * W0) + b0 (convolution 1)
Here, we are computing an element-wise product between the image region and a weight tensor W0, which is 5x5x4. 5x5 is an arbitrary choice; 4 is the depth, since we need to match our input depth. This weight tensor is your filter, kernel, receptive field, or whatever obfuscated name people decide to call it down the road.
If you come at this from a non-Python background, that may be why there's more confusion, since array slicing notation is non-intuitive. The calculation is a dot product of your first convolution region (5x5x4) of your image with the weight tensor. The output is a single scalar value, which takes the position of your first filter output matrix. Imagine a 4x4 matrix representing the sum-product of each of these convolution operations across the entire input. Now stack them for each filter. That gives you your output volume. In Andrej's writeup, he starts moving along the x axis; the y axis remains the same.
Here's an example of what V[:,:,0] would look like in terms of convolutions. Remember here, the third value of our index is the depth of your output layer
[result of convolution 1, result of convolution 2, ..., ...]
[..., ..., ..., ..., ...]
[..., ..., ..., ..., ...]
[..., ..., ..., result of convolution n]
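To make the indexing concrete, here is a small numpy sketch of my own (not Andrej's code) with an 11x11x4 input, two 5x5x4 filters (W0 as in the quoted line, plus a second W1 for illustration) and stride 2, producing a 4x4x2 output volume:

import numpy as np

rng = np.random.default_rng(0)
X = rng.random((11, 11, 4))                  # input volume
W0, b0 = rng.random((5, 5, 4)), 0.1          # filter 0 (depth matches the input)
W1, b1 = rng.random((5, 5, 4)), 0.2          # filter 1

V = np.zeros((4, 4, 2))                      # output volume: depth = number of filters
for d, (W, b) in enumerate([(W0, b0), (W1, b1)]):
    for i in range(4):
        for j in range(4):
            patch = X[i*2:i*2+5, j*2:j*2+5, :]   # 5x5x4 receptive field, stride 2
            V[i, j, d] = np.sum(patch * W) + b   # one scalar per position
print(V.shape)  # (4, 4, 2)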
The animation is best for understanding this, but Andrej decided to swap it with an example that doesn't match the calculation above.
This took me a while, partly because numpy doesn't index the way Andrej does in his example (at least it didn't when I played around with it). Also, there's an assumption that the sum-product operation is clear. That's the key to understanding how your output layer is created, what each value represents and what the depth is.
Hopefully that helps!
The input volume, when we are doing an image classification problem, is N x N x 3. At the beginning it is not difficult to imagine what the depth means: it is just the number of channels, Red, Green and Blue. OK, so the meaning for the first layer is clear. But what about the next ones? Here is how I try to visualize the idea.
On each layer we apply a set of filters which convolve around the input. Let's imagine that we are currently at the first layer and we convolve around a volume V of size N x N x 3. As @Semih Yagcioglu mentioned at the very beginning, we are looking for some rough features: curves, edges, etc. Let's say we apply M filters of equal size (3x3) with stride 1. Then each of these filters is looking for a different curve or edge while convolving around V. Of course, each filter has the same depth as the input; we want to use the whole information, not just a grayscale representation.
Now, these M filters will look for M different curves or edges, and each of them will produce a feature map consisting of scalars (the meaning of a scalar is the filter saying: the probability of having this curve here is X%). When we convolve with the same filter around the volume, we obtain this map of scalars telling us where exactly we saw the curve.
Then comes feature map stacking. Imagine stacking as the following: we have information about where each filter detected a certain curve. When we stack these maps, we obtain information about which curves / edges are available at each small part of our input volume. And this is the output of our first convolutional layer.
It is easy to grasp the idea behind the non-linearity with this in mind. When we apply the ReLU function to some feature map, we say: remove all negative probabilities for curves or edges at this location. And this certainly makes sense.
Then the input for the next layer will be a volume $V_1$ carrying info about different curves and edges at different spatial locations (remember: each depth slice carries info about one curve or edge).
This means that the next layer will be able to extract information about more sophisticated shapes by combining these curves and edges. To combine them, again, the filters should have the same depth as the input volume.
From time to time we apply pooling. The meaning is exactly to shrink the volume, since with stride = 1 we usually look at a pixel (neuron) too many times for the same feature.
Hope this makes sense. Look at the amazing graphs provided by the famous CS231 course to check how exactly the probability for each feature at a certain location is computed.
In simple terms, it can be explained as below.
Let's say you have 10 filters, where each filter is of size 5x5x3. What does this mean? The depth of this layer is 10, which is equal to the number of filters. The size of each filter can be defined as we want, e.g. 5x5x3 in this case, where 3 is the depth of the previous layer. To be precise, the depth of each filter in the next layer should be 10 (n x n x 10), where n can be chosen as you want, like 5 or something else. I hope this makes everything clear.
The first thing you need to note is:
the receptive field of a neuron is 3D
i.e. if the receptive field is 5x5, the neuron will be connected to 5x5x(input depth) points. So whatever your input depth is, one layer of neurons will only produce one layer of output.
Now, the next thing to note is:
depth of output layer = depth of conv. layer
i.e. the output depth is independent of the input depth; it only depends on the number of filters (the depth). This should be pretty obvious from the previous point.
Note that the number of filters (the depth of the CNN layer) is a hyperparameter. You can take it to be whatever you want, independent of the image depth. Each filter has its own set of weights, enabling it to learn a different feature on the same local region covered by the filter.
The depth of the network is the number of layers in the network. In the Krizhevsky et al. paper, the depth is 8 learned layers (5 convolutional plus 3 fully connected, modulo a fencepost issue with how layers are counted).
If you are referring to the depth of the filter (I came to this question searching for that), then this diagram of LeNet is illustrative.
Source http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf
How do you create such a filter? Well, in Python, something like https://github.com/alexcpn/cnn_in_python/blob/main/main.py#L19-L27
which will give you a list of numpy arrays; the length of the list is the depth.
Example: as in the code above, but adding a depth of 3 for color (RGB), the network below results. The first convolutional layer is a filter of shape (5,5,3) and depth 6.
Input (R,G,B) = [32,32,3] * (5,5,3)x6 -> [28,28,6] * (5,5,6)x1 -> [24,24,1] * (5,5,1)x16 -> [20,20,16]
-> FC layer 1 (20, 120, 16) -> FC layer 2 (120, 1) -> FC layer 3 (20, 10) -> Softmax (10,) = (10,1) = Output
In Pytorch
import numpy as np
import torch
import torch.nn as nn

np.set_printoptions(formatter={'float': lambda x: "{0:0.2f}".format(x)})
# Generate a random image
image_size = 32
image_depth = 3
image = np.random.rand(image_size, image_size)
# to mimic RGB channel
image = np.stack([image, image, image], axis=image_depth-1)  # 0 to 2
image = np.moveaxis(image, [2, 0], [0, 2])  # HWC -> CHW
print("Image Shape=", image.shape)
input_tensor = torch.from_numpy(image)
# 3 input channels -> 6 filters of size 5x5x3 -> 6 output feature maps
m = nn.Conv2d(in_channels=3, out_channels=6, kernel_size=5, stride=1)
output = m(input_tensor.float())
print("Output Shape=", output.shape)
Image Shape= (3, 32, 32)
Output Shape= torch.Size([6, 28, 28])

Why the LeNet5 uses 32×32 image as input?

I know that the handwritten digit images in the MNIST dataset are 28×28, but why is the input to LeNet5 32×32?
Your question is answered in the original paper:
A convolution step always produces feature maps that are smaller than its input (and this holds true for the 1st layer, whose input is the image itself, as well):
Layer C1 is a convolutional layer with 6 feature maps. Each unit in each feature map is connected to a 5x5 neighborhood in the input. The size of the feature maps is 28x28, which prevents connection from the input from falling off the boundary.
This means that using a 5x5 neighborhood on a 32x32 input, you get 6 feature maps of size 28x28, because a 5x5 window cannot be centered on pixels near the image boundary: along each axis the output shrinks to 32 - 5 + 1 = 28.
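As a quick sanity check, here is a hedged PyTorch sketch (the framework choice is mine, not the paper's):

import torch
import torch.nn as nn

c1 = nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5)  # LeNet-5's C1: 6 maps, 5x5
print(c1(torch.randn(1, 1, 32, 32)).shape)  # torch.Size([1, 6, 28, 28]); (32 - 5)/1 + 1 = 28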
Of course they could have made an exception for the first layer. The reason they still use 32x32 images is:
The input is a 32x32 pixel image. This is significantly larger than the largest character in the database (at most 20x20 pixels centered in a 28x28 field). The reason is that it is desirable that potential distinctive features such as stroke end-points or corner can appear in the center of the receptive field of the highest-level feature detectors.

Resources