I've been reading about convolutional nets and I've programmed a few models myself. When I see visual diagrams of other models, each layer is shown as being smaller and deeper than the last one. Layers have three dimensions, like 256x256x32. What is this third number? I assume the first two numbers are the number of nodes, but I don't know what the depth is.
TLDR; 256x256x32 refers to the layer's output shape rather than the layer itself.
There are many articles and posts out there explaining how convolution layers work. I'll try to answer your question without going into too many details, just focusing on shapes.
Assuming you are working with 2D convolution layers, your input and output will both be three-dimensional (not counting the batch, which would correspond to a 4th axis). Therefore, the shape of a convolution layer's input will be (c, h, w) (or (h, w, c) depending on the framework), where c is the number of channels, h is the height of the input and w its width. You can see it as a c-channel hxw image.
The most intuitive example of such an input is the input of the first convolution layer of your convolutional neural network: most likely an image of size hxw with c channels, for example c=1 for greyscale or c=3 for RGB.
What's important is that for every pixel of that input, the values on each channel give additional information about that pixel. Having three channels gives each pixel ('pixel' as in position in the 2D input space) richer content than having a single one, since each pixel is then encoded with three values (three channels) instead of one (one channel). This intuition about what channels represent extrapolates to any number of channels; as we said, an input can have c channels.
Now, going back to convolution layers, here is a good visualization. Imagine having a 5x5 1-channel input and a convolution layer consisting of a single 3x3 filter (i.e. kernel_size=3):
|                | input     | filter | convolution output |
|----------------|-----------|--------|--------------------|
| shape          | (1, 5, 5) | (3, 3) | (3, 3)             |
| representation | *(illustrations from the original answer not reproduced)* | | |
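For concreteness, here is a minimal numpy sketch of that single-filter case (my own illustration, not part of the original answer; the input values and the all-ones filter are arbitrary):

import numpy as np

x = np.arange(25, dtype=float).reshape(5, 5)   # 1-channel 5x5 input
w = np.ones((3, 3))                            # a single 3x3 filter

# stride 1, no padding: the output is (5-3+1) x (5-3+1) = 3x3
out = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        out[i, j] = np.sum(x[i:i+3, j:j+3] * w)

print(out.shape)  # (3, 3)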
Now keep in mind that the dimensions of the output will depend on the stride and padding of the convolution layer. Here the shape of the output happens to be the same as the shape of the filter, but it does not have to be! Take an input shape of (1, 6, 6): with the same convolution settings, you would end up with an output of shape (4, 4) (which is different from the filter shape (3, 3)).
Also, note that if the input has more than one channel, i.e. shape (c, h, w), the filter has to have the same number of channels. Each channel of the input is convolved with the corresponding channel of the filter, and the results are summed into a single 2D feature map. So you would have an intermediate result of shape (c, 3, 3), which after summing over the channels leaves you with (1, 3, 3), i.e. effectively (3, 3). As a result, for a convolution with a single filter, however many input channels there are, the output always has a single channel.
From there, what you can do is assemble multiple filters in the same layer. This means you define your layer as having k 3x3 filters, so the layer consists of k filters. The computation of the output is simple: one filter gives a (3, 3) feature map, so k filters give k (3, 3) feature maps. These maps are then stacked into what becomes the channel dimension. Ultimately, you're left with an output shape of... (k, 3, 3).
Let k_h and k_w be the kernel height and kernel width respectively, and h', w' the height and width of one output feature map:
|             | input                     | layer                            | output                      |
|-------------|---------------------------|----------------------------------|-----------------------------|
| shape       | (c, h, w)                 | (k, c, k_h, k_w)                 | (k, h', w')                 |
| description | c-channel hxw feature map | k filters of shape (c, k_h, k_w) | k-channel h'xw' feature map |
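As a quick sanity check (a sketch I am adding, not part of the original answer), PyTorch's Conv2d stores its weights with exactly this (k, c, k_h, k_w) shape:

import torch
import torch.nn as nn

c, h, w = 3, 5, 5        # input: c-channel h x w feature map
k, k_h, k_w = 8, 3, 3    # layer: k filters of spatial size k_h x k_w

layer = nn.Conv2d(in_channels=c, out_channels=k, kernel_size=(k_h, k_w))
x = torch.randn(1, c, h, w)    # a batch of one input
print(layer.weight.shape)      # torch.Size([8, 3, 3, 3]), i.e. (k, c, k_h, k_w)
print(layer(x).shape)          # torch.Size([1, 8, 3, 3]), i.e. (batch, k, h', w')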
Back to your question:
Layers have 3 dimensions like 256x256x32. What is this third number? I assume the first two numbers are the number of nodes but I don't know what the depth is.
A convolution layer's weights have four dimensions, but one of them is imposed by your input channel count. You can choose the size of your convolution kernel and the number of filters; that number of filters determines the number of channels of the output.
256x256 seems extremely large and most likely corresponds to the spatial size of an output feature map. On the other hand, 32 would be the number of channels of the output, which, as I tried to explain, is the number of filters in that layer. Generally speaking, the dimensions shown in visual diagrams of convolutional networks correspond to the intermediate output shapes, not to the layer shapes.
As an example, take the VGG neural network:
Very Deep Convolutional Networks for Large-Scale Image Recognition
Input shape for VGG is (3, 224, 224), knowing that the result of the first convolution has shape (64, 224, 224) you can determine there is a total of 64 filters in that layer.
As it turns out the kernel size in VGG is 3x3. So, here is a question for you: knowing there is a single bias parameter per filter, how many total parameters are in VGG's first convolution layer?
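If you want to verify your answer, here is one way to count the parameters with PyTorch (a sketch of mine, not part of the original answer):

import torch.nn as nn

# VGG's first convolution: 3 input channels, 64 filters, 3x3 kernels
conv1 = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, padding=1)

# total parameters = weights + biases = 64*3*3*3 + 64
print(sum(p.numel() for p in conv1.parameters()))  # 1792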
Sorry for the short answer, but when you have a digital image, you have 2 spatial dimensions and then often a 3rd one for the colors. A convolutional filter looks at parts of the picture with smaller height/width but many more depth channels (in your case 32) to extract more information. This is then fed into the rest of the neural network to learn from.
I created the example in PyTorch to demonstrate the output you had:
import torch
import torch.nn as nn

bs = 16                                                    # batch size
x = torch.randn(bs, 3, 256, 256)                           # 3-channel 256x256 inputs
c = nn.Conv2d(3, 32, kernel_size=5, stride=1, padding=2)   # 32 filters of size 5x5
out = c(x)
print(out.shape, out.shape[1])                             # the channel dimension is 32
Out:
torch.Size([16, 32, 256, 256]) 32
There is a real tensor inside, which may help you see what is going on. You can also play with the various convolution parameters.
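For instance (my variation, not from the original answer), increasing the stride shrinks the spatial size of the output:

import torch
import torch.nn as nn

x = torch.randn(16, 3, 256, 256)
c2 = nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2)   # stride 2 instead of 1
print(c2(x).shape)   # torch.Size([16, 32, 128, 128]), the spatial dimensions are halved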
I'm training a HOG + SVM model, and my training data comes in various sizes and aspect ratios. The SVM model can't be trained on variable sized lists, so I'm looking to calculate a histogram of gradients that is the same length regardless of image size.
Is there a clever way to do that? Or is it better to resize the images or pad them?
What people usually do in such a case is one of the following two things:
Resize all images (or image patches) to a fixed size and extract the HOG features from those.
Use the "Bag of Words/Features" method and don't resize the images.
Method 1 is quite simple, but it has some problems which method 2 tries to solve.
First, think of what a HOG descriptor does. It divides an image into cells of a fixed size and calculates the gradients cell-wise to generate cell-wise histograms (based on voting). At the end, you'll have a concatenated histogram of all the cells, and that's your descriptor.
So there is a problem with it: the object (that you want to detect) has to cover the image in a similar manner in every sample. Otherwise your descriptor will look different depending on the location of the object inside the image.
Method 2. works as follows:
Extract the HOG features from both positive and negative images in your training set.
Use a clustering algorithm like k-means to define a fixed number of k centroids.
For each image in your dataset, extract the HOG features and compare them element-wise to the centroids to create a frequency histogram.
Use the frequency histograms for the training of your SVM and use them in the classification phase. This way, the location doesn't matter and you'll always have a fixed size of input. You'll also benefit from the reduction of dimensionality.
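Here is a rough sketch of method 2 (my own illustration, not the answerer's code), assuming scikit-image for the HOG descriptors and scikit-learn for k-means; the toy images and the choice of 10 clusters are arbitrary:

import numpy as np
from skimage.feature import hog
from sklearn.cluster import KMeans

def local_hog_descriptors(image):
    # one HOG descriptor per block: images of any size give a variable
    # number of fixed-length local descriptors
    h = hog(image, orientations=9, pixels_per_cell=(8, 8),
            cells_per_block=(2, 2), feature_vector=False)
    return h.reshape(-1, 2 * 2 * 9)   # (n_blocks, 36)

def to_frequency_histogram(image, kmeans):
    # assign each local descriptor to its nearest centroid and count
    words = kmeans.predict(local_hog_descriptors(image))
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)   # fixed length, whatever the image size

# toy stand-ins for training images of different sizes
training_images = [np.random.rand(64, 96), np.random.rand(128, 80)]
all_descriptors = np.vstack([local_hog_descriptors(im) for im in training_images])
kmeans = KMeans(n_clusters=10, n_init=10).fit(all_descriptors)
X = np.array([to_frequency_histogram(im, kmeans) for im in training_images])
print(X.shape)   # (2, 10): the same feature length for every image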
You can normalize the images to a given target shape using cv2.resize(), divide the image into the number of blocks you want, and calculate the histogram of orientations along with the magnitudes. Below is a simple implementation of the same:
import cv2
import numpy as np

img = cv2.imread(filename, 0)           # read as grayscale
img = cv2.resize(img, (16, 16))         # resize the image to a fixed shape
gx = cv2.Sobel(img, cv2.CV_32F, 1, 0)   # horizontal gradients
gy = cv2.Sobel(img, cv2.CV_32F, 0, 1)   # vertical gradients
mag, ang = cv2.cartToPolar(gx, gy)      # gradient magnitudes and angles
bin_n = 16                              # number of orientation bins
# quantize the angles into bin values in [0, 16)
bins = np.int32(bin_n * ang / (2 * np.pi))
# divide into 4 sub-squares
s = 8                                   # block size
bin_cells = bins[:s, :s], bins[s:, :s], bins[:s, s:], bins[s:, s:]
mag_cells = mag[:s, :s], mag[s:, :s], mag[:s, s:], mag[s:, s:]
hists = [np.bincount(b.ravel(), m.ravel(), bin_n) for b, m in zip(bin_cells, mag_cells)]
hist = np.hstack(hists)                 # histogram feature vector to feed to the SVM model
Hope that helps!
I'm following the TensorFlow tutorial
Initially x is defined as
x = tf.placeholder(tf.float32, shape=[None, 784])
Later on it reshapes x, and I'm trying to understand why.
To apply the layer, we first reshape x to a 4d tensor, with the second and third dimensions corresponding to image width and height, and the final dimension corresponding to the number of color channels.
x_image = tf.reshape(x, [-1,28,28,1])
What does -1 mean in the reshaping vector and why is x being reshaped?
1) What does -1 mean in the reshaping vector
From the documentation of reshape:
If one component of shape is the special value -1, the size of that dimension is computed so that the total size remains constant. In particular, a shape of [-1] flattens into 1-D. At most one component of shape can be -1.
This is a standard feature and is available in numpy as well. Basically it means: "I don't want to work out this dimension myself, so infer it for me." Here, each flattened image holds 28 * 28 * 1 = 784 values, so the -1 is inferred as the number of such images in x, i.e. the batch size.
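A tiny numpy illustration of the same inference (my example, not from the original answer):

import numpy as np

x = np.zeros((5, 784))               # a hypothetical batch of 5 flattened 28x28 images
x_image = x.reshape(-1, 28, 28, 1)
print(x_image.shape)                 # (5, 28, 28, 1): the -1 was inferred as the batch size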
2) Why is x being reshaped
They are planning to use convolutions for image classification, so they need some spatial information. The current data is 1-dimensional, so they transform it to 4 dimensions. I do not know the point of the fourth dimension; in my opinion they might have used only (x, y, color), or even (x, y). Try modifying their reshape and convolution and you will most probably get similar accuracy.
why 4 dimensions
TensorFlow's conv2d operation expects a 4-dimensional tensor with dimensions corresponding to batch, height, width and channel:
[batch, in_height, in_width, in_channels]
What is the correct meaning of normalization in image processing? I googled it but found different definitions. I'll try to explain each definition in detail.
Normalization of a kernel matrix
If normalization refers to a matrix (such as a kernel matrix for a convolution filter), usually each value of the matrix is divided by the sum of the values of the matrix so that the sum of the values equals one (if all values are greater than zero). This is useful because a convolution between an image matrix and such a kernel matrix gives an output image with values between 0 and the max value of the original image. But if we use a Sobel matrix (which has some negative values), this is not true anymore and we have to stretch the output image in order to have all values between 0 and the max value.
Normalization of an image
I basically found two definitions of normalization. The first one is to "clip" values that are too high or too low: i.e. if the image matrix has negative values one sets them to zero, and if the image matrix has values higher than the max value one sets them to the max value. The second one is to linearly stretch all the values in order to fit them into the interval [0, max value].
I will extend the answer from #metsburg a bit. There are several ways of normalizing an image (or, in general, a data vector), which are used at convenience in different cases:
Data normalization or data (re-)scaling: the data is projected into a predefined range (usually [0, 1] or [-1, 1]). This is useful when you have data from different formats (or datasets) and you want to normalize all of them so you can apply the same algorithms over them. It is usually performed as follows:
Inew = (I - I.min) * (newmax - newmin)/(I.max - I.min) + newmin
Data standardization is another way of normalizing the data (used a lot in machine learning), where the mean is subtracted from the image and the result is divided by its standard deviation. It is especially useful if you are going to use the image as an input for some machine learning algorithm, as many of them perform better when they can assume features to have a Gaussian form with mean=0, std=1. It can be performed easily as:
Inew = (I - I.mean) / I.std
Data stretching (or histogram stretching when you work with images) is what your option 2 refers to. Usually the image is clamped to minimum and maximum values, setting:
Inew = I
Inew[I < a] = a
Inew[I > b] = b
Here, image values that are lower than a are set to a, and the same happens inversely with b. Usually, the values of a and b are calculated as percentile thresholds: a = the threshold that separates the bottom 1% of the data and b = the threshold that separates the top 1% of the data. By doing this, you are removing outliers (noise) from the image.
This is similar to (and simpler than) histogram equalization, which is another common preprocessing step.
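A small numpy sketch of that percentile-based clamping (my own illustration; the function name and the 1%/99% defaults are arbitrary choices):

import numpy as np

def clip_percentiles(I, low=1, high=99):
    a, b = np.percentile(I, [low, high])   # bottom / top 1% thresholds
    return np.clip(I, a, b)                # values below a become a, values above b become b

I = np.random.randn(100, 100)
print(clip_percentiles(I).min(), clip_percentiles(I).max())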
Data normalization can also refer to normalizing a vector with respect to a norm (the l1 norm or the l2/Euclidean norm). This, in practice, translates to:
Inew = I / ||I||
where ||I|| refers to a norm of I.
If the norm is chosen to be the l1 norm, the image is divided by the sum of its absolute values, making the sum of the whole image equal to 1. If the norm is chosen to be l2 (Euclidean), the image is divided by the square root of the sum of its squared values, making the sum of the squared values of the result equal to 1.
The first 3 are widely used with images (not all 3 at once, since scaling and standardization are incompatible, but one of them alone, or scaling + stretching, or standardization + stretching); the last one is not that useful. It is usually applied as a preprocessing step for some statistical tools, but not if you plan to work with a single image.
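For completeness, a sketch of the norm-based variant (my own example; ord=1 gives the l1 norm, ord=2 the l2/Euclidean norm):

import numpy as np

def normalize_by_norm(I, order=2):
    norm = np.linalg.norm(I.ravel(), ord=order)   # l1 or l2 norm of the flattened image
    return I / norm if norm > 0 else I

I = np.random.rand(4, 4)
print(np.sum(normalize_by_norm(I, order=1)))        # 1.0 for a non-negative image
print(np.sum(normalize_by_norm(I, order=2) ** 2))   # 1.0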
The answer by #Imanol is great; I just want to add some examples:
Normalize the input either pixel-wise or dataset-wise. Three normalization schemes are often seen:
Normalizing the pixel values between 0 and 1:
img /= 255.0
Normalizing the pixel values between -1 and 1 (as Tensorflow does):
img /= 127.5
img -= 1.0
Normalizing according to the dataset mean & standard deviation (as Torch does):
img /= 255.0
mean = [0.485, 0.456, 0.406] # Here it's ImageNet statistics
std = [0.229, 0.224, 0.225]
for i in range(3):   # considering a CHW ordering (channel, height, width)
    img[i, :, :] -= mean[i]
    img[i, :, :] /= std[i]
In data science, there are two broadly used normalization types:
1) Where we try to shift the data so that their sum is a particular value, usually 1 (https://stats.stackexchange.com/questions/62353/what-does-it-mean-to-use-a-normalizing-factor-to-sum-to-unity)
2) Normalize data to fit it within a certain range (usually, 0 to 1): https://stats.stackexchange.com/questions/70801/how-to-normalize-data-to-0-1-range
I was taking a look at Convolutional Neural Networks from CS231n Convolutional Neural Networks for Visual Recognition. In a convolutional neural network, the neurons are arranged in 3 dimensions (height, width, depth). I am having trouble with the depth of the CNN; I can't visualize what it is.
In the link they said: "The CONV layer's parameters consist of a set of learnable filters. Every filter is small spatially (along width and height), but extends through the full depth of the input volume."
For example, look at this picture. Sorry if the image is too crappy.
I can grasp the idea that we take a small area of the image and compare it with the "filters". So will the filters be collections of small images? They also said: "We will connect each neuron to only a local region of the input volume. The spatial extent of this connectivity is a hyperparameter called the receptive field of the neuron." So does the receptive field have the same dimensions as the filters? Also, what will the depth be here? And what do we signify by the depth of a CNN?
So, my question mainly is: if I take an image with dimensions [32*32*3] (let's say I have 50000 of these images, making the dataset [50000*32*32*3]), what shall I choose as its depth, and what would the depth mean? Also, what will be the dimensions of the filters?
It would also be very helpful if anyone could provide some links that give intuition on this.
EDIT:
So in one part of the tutorial (the real-world example part), it says: "The Krizhevsky et al. architecture that won the ImageNet challenge in 2012 accepted images of size [227x227x3]. On the first Convolutional Layer, it used neurons with receptive field size F=11, stride S=4 and no zero padding P=0. Since (227 - 11)/4 + 1 = 55, and since the Conv layer had a depth of K=96, the Conv layer output volume had size [55x55x96]."
Here we see the depth is 96. So is depth something that I choose arbitrarily, or something I compute? Also, in the example above (Krizhevsky et al.) they had a depth of 96. So what does it mean that it has a depth of 96? The tutorial also stated: "Every filter is small spatially (along width and height), but extends through the full depth of the input volume."
So does that mean the depth will be like this? If so, can I assume that Depth = Number of Filters?
In deep neural networks, depth usually refers to how deep the network is, but in this context it is used for visual recognition and translates to the 3rd dimension of an image.
In this case you have an image, and the size of this input is 32x32x3, which is (width, height, depth). The neural network should be able to learn based on these parameters, as the depth translates to the different channels of the training images.
UPDATE:
In each layer your CNN learns regularities about the training images. In the very first layers, the regularities are curves and edges; as you go deeper through the layers, you start learning higher levels of regularities such as colors, shapes, objects etc. This is the basic idea, but there are lots of technical details. Before going any further, give this a shot: http://www.datarobot.com/blog/a-primer-on-deep-learning/
UPDATE 2:
Have a look at the first figure in the link you provided. It says 'In this example, the red input layer holds the image, so its width and height would be the dimensions of the image, and the depth would be 3 (Red, Green, Blue channels).' It means that a ConvNet transforms the input image by arranging its neurons in three dimensions.
As an answer to your question, depth corresponds to the different color channels of an image.
Moreover, about the filter depth. The tutorial states this.
Every filter is small spatially (along width and height), but extends through the full depth of the input volume.
Which basically means that a filter is a small spatial window that extends through the full depth of the image and moves around it in order to learn the regularities in the image.
UPDATE 3:
For the real-world example, I just browsed the original paper and it says this: The first convolutional layer filters the 224×224×3 input image with 96 kernels of size 11×11×3 with a stride of 4 pixels.
In the tutorial the depth refers to the channels, but in the real world you can design whatever dimension you like. After all, that is your design.
The tutorial aims to give you a glimpse of how ConvNets work in theory, but if I design a ConvNet, nobody can stop me from proposing one with a different depth.
Does this make any sense?
The depth of a CONV layer is the number of filters it uses.
The depth of a filter is equal to the depth of the image it uses as input.
For example: let's say you are using an image of 227*227*3.
Now suppose you are using a filter of size 11*11 (spatial size).
This 11*11 square will be slid along the whole image to produce a single 2-dimensional array as a response. But in order to do so, it must cover every value within the 11*11 area across all channels. Therefore the depth of the filter equals the depth of the image, which is 3.
Now suppose we have 96 such filters, each producing a different response. This is the depth of the convolutional layer: it is simply the number of filters used.
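To make that 227*227*3 example with 96 filters concrete, here is a small PyTorch sketch (my addition, matching the numbers above):

import torch
import torch.nn as nn

x = torch.randn(1, 3, 227, 227)                    # depth of the input image = 3
conv = nn.Conv2d(3, 96, kernel_size=11, stride=4)  # 96 filters, each of depth 3
print(conv.weight.shape)   # torch.Size([96, 3, 11, 11])
print(conv(x).shape)       # torch.Size([1, 96, 55, 55]): depth of the conv layer = 96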
I'm not sure why this is skimped over so heavily. I also had trouble understanding it at first, and very few outside of Andrej Karpathy (thanks d00d) have explained it. Although, in his writeup (http://cs231n.github.io/convolutional-networks/), he calculates the depth of the output volume using a different example than in the animation.
Start by reading the section titled 'Numpy examples'
Here, we go through iteratively.
In this case we have an 11x11x4 input volume. (Why the example starts with a depth of 4 is kind of peculiar, as it would be easier to grasp with a depth of 3.)
Really pay attention to this line:
A depth column (or a fibre) at position (x,y) would be the activations X[x,y,:]. A depth slice, or equivalently an activation map at depth d, would be the activations X[:,:,d].
V[0,0,0] = np.sum(X[:5,:5,:] * W0) + b0
V is your output volume. In V[0,0,0], the first index (0 here) is the column: this is the first column of your output volume. The second index (also 0) is the row: the first row of your output volume. The third index (0 again) is the depth: this selects the first depth slice, i.e. the first output layer.
Now, here's where people get confused (at least I did). The input depth has absolutely nothing to do with your output depth. The input depth only has control of the filter depth. W in Andrej's example.
Aside: A lot of people wonder why 3 is the standard input depth. For color input images, this will always be 3 for plain ole images.
np.sum(X[:5,:5,:] * W0) + b0 (convolution 1)
Here, we are calculating an elementwise product between a weight volume W0, which is 5x5x4, and the input patch. 5x5 is an arbitrary choice; 4 is the depth, since we need to match our input depth. The weight volume is your filter, kernel, receptive field or whatever obfuscated name people decide to call it down the road.
If you come at this from a non-Python background, that's maybe why there's more confusion, since array slicing notation is non-intuitive. The calculation is a dot product between the first 5x5x4 patch of your image and the weight volume. The output is a single scalar value which takes the position of your first filter output matrix. Imagine a 4x4 matrix representing the sum-product of each of these convolution operations across the entire input. Now stack them for each filter; that gives you your output volume. In Andrej's writeup, he starts moving along the x axis; the y axis remains the same.
Here's an example of what V[:,:,0] would look like in terms of convolutions. Remember, the third value of our index is the depth of your output layer:
[result of convolution 1, result of convolution 2, ..., ...]
[..., ..., ..., ...]
[..., ..., ..., ...]
[..., ..., ..., result of convolution n]
The animation is best for understanding this, but Andrej decided to swap it with an example that doesn't match the calculation above.
This took me a while, partly because numpy doesn't index the way Andrej does in his example, or at least it didn't when I played around with it. Also, there is an assumption that the sum-product operation is clear. That's the key to understanding how your output layer is created, what each value represents and what the depth is.
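Here is a minimal numpy sketch of that calculation (my own, just following the shapes above: an 11x11x4 input, one 5x5x4 filter, stride 2):

import numpy as np

X = np.random.randn(11, 11, 4)    # input volume, depth 4
W0 = np.random.randn(5, 5, 4)     # one filter: 5x5 spatially, depth matches the input
b0 = 0.0

V = np.zeros((4, 4, 1))           # output: 4x4 spatially, one depth slice per filter
for i in range(4):
    for j in range(4):
        patch = X[2*i:2*i+5, 2*j:2*j+5, :]       # stride 2 along both spatial axes
        V[i, j, 0] = np.sum(patch * W0) + b0     # sum-product gives one scalar

print(V[:, :, 0].shape)   # (4, 4)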
Hopefully that helps!
The input volume when we are doing an image classification problem is N x N x 3. At the beginning it is not difficult to imagine what the depth means: it is just the number of channels, Red, Green and Blue. OK, so the meaning for the first layer is clear. But what about the next ones? Here is how I try to visualize the idea.
On each layer we apply a set of filters which convolve around the input. Let's imagine that currently we are at the first layer and we convolve around a volume V of size N x N x 3. As #Semih Yagcioglu mentioned at the very beginning, we are looking for some rough features: curves, edges etc. Let's say we apply M filters of equal size (3x3) with stride 1. Then each of these filters is looking for a different curve or edge while convolving around V. Of course, each filter has the same depth as the input; we want to use the whole information, not just a greyscale representation.
Now, these M filters will look for M different curves or edges, and each of these filters will produce a feature map consisting of scalars (the meaning of a scalar is the filter saying: the probability of having this curve here is X%). When we convolve with the same filter around the volume, we obtain this map of scalars telling us where exactly we saw the curve.
Then comes feature map stacking. Imagine stacking as the following thing: we have information about where each filter detected a certain curve. When we stack the maps, we obtain information about what curves/edges are available at each small part of our input volume. And this is the output of our first convolutional layer.
It is easy to grasp the idea behind the non-linearity when taking the above into account. When we apply the ReLU function on some feature map, we say: remove all negative probabilities for curves or edges at this location. And this certainly makes sense.
Then the input for the next layer will be a volume $V_1$ carrying info about different curves and edges at different spatial locations (remember: each depth slice carries info about one curve or edge).
This means that the next layer will be able to extract information about more sophisticated shapes by combining these curves and edges. To combine them, again, the filters should have the same depth as the input volume.
From time to time we apply pooling. Its purpose is exactly to shrink the volume, since when we use stride 1 we usually look at a pixel (neuron) too many times for the same feature.
Hope this makes sense. Look at the amazing graphs provided by the famous CS231n course to check how exactly the probability for each feature at a certain location is computed.
In simple terms, it can be explained as below.
Let's say you have 10 filters where each filter is of size 5x5x3. What does this mean? The depth of this layer is 10, which is equal to the number of filters. The size of each filter can be defined as we want, e.g. 5x5x3 in this case, where 3 is the depth of the previous layer. To be precise, the depth of each filter in the next layer should be 10 (n x n x 10), where n can be defined as you want, like 5 or something else. Hope this makes everything clear.
The first thing you need to note is
receptive field of a neuron is 3D
i.e. if the receptive field is 5x5, the neuron will be connected to 5x5x(input depth) points. So whatever your input depth is, one layer of neurons will only produce one layer of output.
Now, the next thing to note is
depth of output layer = depth of conv. layer
i.e. the depth of the output volume is independent of the input depth; it only depends on the number of filters (the depth of the conv layer). This should be pretty obvious from the previous point.
Note that the number of filters (the depth of the CNN layer) is a hyperparameter. You can set it to whatever you want, independent of the image depth. Each filter has its own set of weights, enabling it to learn a different feature on the same local region covered by the filter.
The depth of the network is the number of layers in the network. In the Krizhevsky paper, the depth is 9 layers (modulo a fencepost issue with how layers are counted?).
If you are referring to the depth of the filter (I came to this question searching for that), then this diagram of LeNet illustrates it.
Source: http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf
How to create such a filter? Well, in Python, something like https://github.com/alexcpn/cnn_in_python/blob/main/main.py#L19-L27
which will give you a list of numpy arrays, where the length of the list is the depth.
Example as in the code above, but adding a depth of 3 for color (RGB); the network is below. The first convolutional layer is a filter of shape (5,5,3) with a depth of 6 (i.e. 6 such filters):
Input (R,G,B) = [32x32x3] → conv (5x5x3) x 6 → [28x28x6] → conv (5x5x6) x 1 → [24x24x1] → conv (5x5x1) x 16 → [20x20x16] →
FC layer 1 (20, 120, 16) → FC layer 2 (120, 1) → FC layer 3 (20, 10) → Softmax (10,) = (10,1) = Output
In PyTorch:
import numpy as np
import torch
import torch.nn as nn

np.set_printoptions(formatter={'float': lambda x: "{0:0.2f}".format(x)})
# Generate a random image
image_size = 32
image_depth = 3
image = np.random.rand(image_size, image_size)
# to mimic an RGB image, stack the array three times along the last axis
image = np.stack([image, image, image], axis=image_depth - 1)
image = np.moveaxis(image, [2, 0], [0, 2])   # reorder to channel-first (C, H, W)
print("Image Shape=", image.shape)
input_tensor = torch.from_numpy(image)
m = nn.Conv2d(in_channels=3, out_channels=6, kernel_size=5, stride=1)
output = m(input_tensor.float())
print("Output Shape=", output.shape)
Image Shape= (3, 32, 32)
Output Shape= torch.Size([6, 28, 28])