can't change embedding dimension to pass it through gpt2 - machine-learning

I'm practicing image captioning and have some problems with different dimensions of tensors. I have an image embedding of size [1, 512], but GPT2, which I use for caption generation, needs size [n, 768], where n is the number of tokens in the caption's beginning. I don't know how I should change the dimension of my image embedding to pass it through GPT2.
I thought it would be a good idea to pad the image embedding with zeros so it becomes size [1, 768], but I think it will negatively affect the resulting caption.
Thank you for your help!
I've tried padding the image embedding with zeros to reach size [1, 768], but I don't think it will help much.
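For reference, the zero-padding attempt described above might look like this in PyTorch (a minimal sketch; the embedding here is just a random placeholder):

import torch
import torch.nn.functional as F

image_embedding = torch.randn(1, 512)            # placeholder for the real image embedding
padded = F.pad(image_embedding, (0, 768 - 512))  # zero-pad the last dimension from 512 to 768
print(padded.shape)                              # torch.Size([1, 768])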

Related

Resizing a target segmentation map without converting individual pixel values to floats

I have a dataset with drone view images of size 4000x6000, grayscale. Each individual pixel value corresponds to a class (I have 20 classes in total), so a pixel value of 3 would mean "tree" for example. Using the original image, I can very easily create binary masks for all 20 of the classes by using equality operators in NumPy and I get pixel-perfect masks.
Here's an example of what one row would look like:
[[2, 2, 2, 2, ...... , 5, 5, 5]]
However, 4000x6000 is much too big for my purposes, and I want to resize these segmentation targets to something a bit more bearable, such as 400x400 or 400x600. Though I've tried a few different Python libraries, all of them convert my pixel values to different float values causing me to lose my segmentation map labels. Is there any method (not including cropping), where I can resize my segmentation target maps AND the original RGB input images without losing my labels?
When one resizes an image, one usually needs to interpolate pixel values (e.g., decide on the "intensity" at sub-pixel locations). Natural images tend to vary smoothly between pixels, which makes interpolation with large support very appealing (see detailed discussion here).
However, as you observed, interpolating between integer values of labels makes no sense at all.
Therefore, you can do one of the following:
1. Do not interpolate - use nearest-neighbor resizing for the label map. That is, use whatever interpolation method you like (LANCZOS, BICUBIC...) for the input image, but use the NEAREST method for the label map.
2. Interpolate the per-label probability maps - for each 4000x6000 label map, produce 20 per-class probability maps and interpolate them to the desired size (using the same interpolation method as used for the image: LANCZOS, BICUBIC...). Now, for each resized pixel you have a 20-dim target distribution. You can train with these "soft labels", or take the argmax and train with the most dominant label per pixel.
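For illustration, a minimal sketch of the first option using PIL (the file names here are hypothetical; the key point is NEAREST for the labels, anything you like for the image):

from PIL import Image
import numpy as np

rgb = Image.open("drone_view.png")       # natural RGB input image, 6000x4000
labels = Image.open("labels.png")        # single-channel label map with values 0..19

target_size = (600, 400)                 # (width, height) in PIL convention

rgb_small = rgb.resize(target_size, Image.LANCZOS)        # smooth interpolation is fine here
labels_small = labels.resize(target_size, Image.NEAREST)  # no interpolation between class ids

# The resized labels are still integers, so equality masks keep working
tree_mask = np.array(labels_small) == 3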

how to determine anchor sizes for RetinaNet model?

I am trying to implement a RetinaNet model in PyTorch for my custom dataset; however, I am a little confused about how some of the hyper-parameters are chosen. For example, the model uses anchor sizes of [32, 64, 128, 256, 512], anchor ratios of [0.5, 1, 2] and anchor scales of [1, 1.25, 1.58], but how do we determine these numbers for our own datasets? What are some of the ways we can go about selecting parameters for the anchor box generation part?
The anchor ratio can be determined as ratio = bounding box height / bounding box width.
The scale factor is used to enlarge the base anchor size at each feature map level, and the ratio factor is used in the same way. So for each feature map, a variety of different aspect ratios and sizes are tried in order to match the annotated bounding boxes of the target objects.
A good way to automatically obtain a list of optimized parameters is to use the Anchor Optimization repository by Martin Zlocha, Qi Dou and Ben Glocker. You can check it out to get an idea of how to automate the process.
Link to it:
https://github.com/martinzlocha/anchor-optimization/
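As a rough sketch of the manual route described above (deriving candidate ratios and sizes from your own annotated boxes; the box format and numbers are assumptions):

import numpy as np

# Hypothetical annotations: one (x_min, y_min, x_max, y_max) box per object
boxes = np.array([[10, 20, 110, 70],
                  [5, 5, 45, 125],
                  [0, 0, 60, 60]], dtype=float)

heights = boxes[:, 3] - boxes[:, 1]
widths = boxes[:, 2] - boxes[:, 0]

ratios = heights / widths                     # ratio = height / width, as above
print(np.percentile(ratios, [25, 50, 75]))    # a few representative aspect ratios

sizes = np.sqrt(heights * widths)             # rough anchor sizes covering the box-size spread
print(np.percentile(sizes, [10, 50, 90]))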

Can CNN reduce input size by a specified ratio

I have a fully convolutional CNN and I have no problem designing it so that the output has exactly the same size as the input, by using "same" padding and a stride of one.
However, I have an image-translation problem where I need the output resized to (W/26, H/15), where (W, H) is the size of the input. (Resizing the image beforehand is problematic; it won't be an option in our case.)
I understand that, by using the formula O = (I - F + 2P)/s + 1, where:
O: output size
I: input size
F: filter size
P: padding
s: stride
I may be able to use some really strange filter size to achieve this. But is there a systematic or organized way to construct such a network to reduce input size?
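For concreteness, a quick sanity check of that formula with hypothetical numbers: a layer whose filter size and stride are both 26, with no padding, divides the width by exactly 26 whenever the width divides evenly.

def conv_output_size(I, F, P, s):
    # O = (I - F + 2P) / s + 1
    return (I - F + 2 * P) // s + 1

print(conv_output_size(260, 26, 0, 26))  # 10 == 260 / 26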
Getting that precise output size by playing only with filter size and stride is going to give you some headaches.
My two cents: whenever the size of your output is particularly weird (like (W//26, H//15)), interpolation layers might be helpful to get you to that particular size.
For example:
In PyTorch you can use torch.nn.functional.interpolate.
In TensorFlow you can use tf.image.resize.
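A minimal sketch of the PyTorch option (the input size and bilinear mode are assumptions):

import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 450, 1300)   # hypothetical feature map with (H, W) = (450, 1300)
h, w = x.shape[-2], x.shape[-1]

# ... the fully convolutional, shape-preserving part of the network would run here ...

out = F.interpolate(x, size=(h // 15, w // 26), mode='bilinear', align_corners=False)
print(out.shape)                   # torch.Size([1, 3, 30, 50])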

Extracting original image format after adversarial attack with Cleverhans

Suppose I load up the MNIST dataset with Cleverhans and attack an image with FGM. Any image I load via the Cleverhans MNIST dataset already has its pixel values constrained to [0, 1], and the same is true after I attack the image (suppose I clip the image to [0, 1]). If I want to see the attack in this case, I would just multiply all the pixel values by 255 and create the adversarial image.
In this scenario, the original MNIST image with pixel values in [0, 255] has instead been modified to have pixel values in [0, 1] by dividing all values by 255. In order to get back the original "image properties" I just multiply again by 255.
Is there a way (in Cleverhans, or in general) to extract the original image properties when this preprocessing step (in MNIST's case, dividing by 255) is more complicated? For example, I am thinking of VGG16 where an ImageNet image is resized while its aspect ratio is preserved, and the process of bringing the image back to its original size is complicated and is unique for each image.
Is it possible to add this preprocessing step as a step in the model to directly get the noise on the original image? I imagine that this is likely not the case since not all preprocessing steps are differentiable?
Does this mean I am unable to view the noise as applied on the original image if the preprocessing step is too complicated?
That's correct. If your pipeline uses a pre-processing stage that is:
difficult to invert: it will be difficult to obtain the image that corresponds to the perturbed image in the original domain based on (a) the original image and (b) the perturbation in pre-processed space.
non-differentiable: it will not be possible for attacks that require gradients to compute the perturbed image in the original domain.
However, you can use attacks that do not require gradients, like SPSA, to operate in the original domain directly, even if the pre-processing stage is non-differentiable: https://github.com/tensorflow/cleverhans/blob/master/cleverhans/attacks/spsa.py
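As a small illustration of the easily invertible case from the question (MNIST-style divide-by-255 preprocessing; the arrays here are random placeholders):

import numpy as np

original = np.random.randint(0, 256, size=(28, 28)).astype(np.float32)  # [0, 255] image
x = original / 255.0                                                    # pre-processed to [0, 1]

# Suppose an attack produced x_adv in the pre-processed space, clipped to [0, 1]
x_adv = np.clip(x + np.random.uniform(-0.1, 0.1, size=x.shape), 0.0, 1.0)

# Because dividing by 255 is trivially invertible, both the adversarial image and the
# noise can be mapped back to the original [0, 255] domain
adv_original = x_adv * 255.0
noise_original = adv_original - original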

What is Depth of a convolutional neural network?

I was taking a look at Convolutional Neural Networks from CS231n Convolutional Neural Networks for Visual Recognition. In a convolutional neural network, the neurons are arranged in 3 dimensions (height, width, depth). I am having trouble with the depth of the CNN. I can't visualize what it is.
In the link they said: "The CONV layer's parameters consist of a set of learnable filters. Every filter is small spatially (along width and height), but extends through the full depth of the input volume."
For example, look at this picture. Sorry if the image is too crappy.
I can grasp the idea that we take a small area off the image, then compare it with the "filters". So will the filters be a collection of small images? They also said: "We will connect each neuron to only a local region of the input volume. The spatial extent of this connectivity is a hyperparameter called the receptive field of the neuron." So does the receptive field have the same dimensions as the filters? Also, what will the depth be here? And what do we signify by the depth of a CNN?
So, my question mainly is: if I take an image with dimensions [32*32*3] (let's say I have 50000 of these images, making the dataset [50000*32*32*3]), what shall I choose as its depth, and what would be meant by the depth? Also, what will be the dimensions of the filters?
Also, it would be very helpful if anyone could provide a link that gives some intuition on this.
EDIT:
So in one part of the tutorial (the real-world example part), it says: "The Krizhevsky et al. architecture that won the ImageNet challenge in 2012 accepted images of size [227x227x3]. On the first Convolutional Layer, it used neurons with receptive field size F=11, stride S=4 and no zero padding P=0. Since (227 - 11)/4 + 1 = 55, and since the Conv layer had a depth of K=96, the Conv layer output volume had size [55x55x96]."
Here we see the depth is 96. So is depth something that I choose arbitrarily, or something I compute? Also, in the example above (Krizhevsky et al.) they had a depth of 96. So what does a depth of 96 mean? Also, the tutorial stated: "Every filter is small spatially (along width and height), but extends through the full depth of the input volume."
So does that mean the depth will be like this? If so, can I assume depth = number of filters?
In deep neural networks the depth refers to how deep the network is, but in this context the depth is used for visual recognition and it translates to the 3rd dimension of an image.
In this case you have an image, and the size of this input is 32x32x3, which is (width, height, depth). The neural network should be able to learn based on these parameters, as the depth translates to the different channels of the training images.
UPDATE:
In each layer of your CNN it learns regularities about the training images. In the very first layers, the regularities are curves and edges; then, when you go deeper along the layers, you start learning higher levels of regularities such as colors, shapes, objects, etc. This is the basic idea, but there are lots of technical details. Before going any further, give this a shot: http://www.datarobot.com/blog/a-primer-on-deep-learning/
UPDATE 2:
Have a look at the first figure in the link you provided. It says 'In this example, the red input layer holds the image, so its width and height would be the dimensions of the image, and the depth would be 3 (Red, Green, Blue channels).' It means that a ConvNet transforms the input image by arranging its neurons in three dimensions.
As an answer to your question, depth corresponds to the different color channels of an image.
Moreover, about the filter depth, the tutorial states:
Every filter is small spatially (along width and height), but extends through the full depth of the input volume.
This basically means that a filter is a small spatial window that slides across the image while extending through its full depth, in order to learn the regularities in the image.
UPDATE 3:
For the real-world example, I just browsed the original paper and it says this: "The first convolutional layer filters the 224×224×3 input image with 96 kernels of size 11×11×3 with a stride of 4 pixels."
In the tutorial the depth refers to the channels, but in the real world you can design whatever dimension you like. After all, that is your design.
The tutorial aims to give you a glimpse of how ConvNets work in theory, but if I design a ConvNet nobody can stop me proposing one with a different depth.
Does this make any sense?
The depth of a CONV layer is the number of filters it uses.
The depth of a filter is equal to the depth of the image it uses as input.
For example: let's say you are using an image of 227*227*3.
Now suppose you are using a filter of size 11*11 (spatial size).
This 11*11 square will be slid along the whole image to produce a single 2-dimensional array as a response. But in order to do so, it must cover everything inside the 11*11 area across all channels. Therefore the depth of the filter will be the depth of the image = 3.
Now suppose we have 96 such filters, each producing a different response. This is the depth of the convolutional layer. It is simply the number of filters used.
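A quick way to sanity-check those numbers is a single Conv2d layer in PyTorch (this sketch just mirrors the 227*227*3 image and 96 filters from the example above):

import torch
import torch.nn as nn

conv1 = nn.Conv2d(in_channels=3, out_channels=96, kernel_size=11, stride=4, padding=0)

x = torch.randn(1, 3, 227, 227)   # one 227x227x3 image, channels first
print(conv1(x).shape)             # torch.Size([1, 96, 55, 55]) -> output depth 96 = number of filters
print(conv1.weight.shape)         # torch.Size([96, 3, 11, 11]) -> each filter is 11x11 spatially, depth 3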
I'm not sure why this is skimped over so heavily. I also had trouble understanding it at first, and very few outside of Andrej Karpathy (thanks d00d) have explained it. Although, in his writeup (http://cs231n.github.io/convolutional-networks/), he calculates the depth of the output volume using a different example than in the animation.
Start by reading the section titled 'Numpy examples'
Here, we go through it iteratively.
In this case we have an 11x11x4 input volume. (Why we start with a depth of 4 is kind of peculiar, as it would be easier to grasp with a depth of 3.)
Really pay attention to this line:
A depth column (or a fibre) at position (x,y) would be the activations X[x,y,:].
A depth slice, or equivalently an activation map at depth d, would be the activations X[:,:,d].
V[0,0,0] = np.sum(X[:5,:5,:] * W0) + b0
V is your output volume. The zeroth index of V[0,0,0] is your column - in this case V[0] = 0 is the first column in your output volume. V[1] = 0 is the first row in your output volume. V[2] = 0 is the depth; this is the first output slice.
Now, here's where people get confused (at least I did). The input depth has absolutely nothing to do with your output depth. The input depth only controls the filter depth - W in Andrej's example.
Aside: a lot of people wonder why 3 is the standard input depth. For plain old color input images, this will always be 3 (the RGB channels).
np.sum(X[:5,:5,:] * W0) + b0 (convolution 1)
Here, we are calculating an elementwise product between an input patch and a weight vector W0, which is 5x5x4. 5x5 is an arbitrary choice; 4 is the depth, since we need to match our input depth. The weight vector is your filter, kernel, receptive field or whatever obfuscated name people decide to call it down the road.
If you come at this from a non-Python background, that may be why there's more confusion, since the array slicing notation is non-intuitive. The calculation is a dot product of your first convolution patch (5x5x4) of your image with the weight vector. The output is a single scalar value which takes the position of your first filter output matrix. Imagine a 4x4 matrix representing the sum product of each of these convolution operations across the entire input. Now stack them for each filter. That gives you your output volume. In Andrej's writeup, he starts moving along the x axis. The y axis remains the same.
Here's an example of what V[:,:,0] would look like in terms of convolutions. Remember here, the third value of our index is the depth of your output layer
[result of convolution 1, result of convolution 2, ..., ...]
[..., ..., ..., ..., ...]
[..., ..., ..., ..., ...]
[..., ..., ..., result of convolution n]
The animation is best for understanding this, but Andrej decided to swap it with an example that doesn't match the calculation above.
This took me a while, partly because numpy doesn't index the way Andrej does in his example - at least it didn't when I played around with it. Also, there's an assumption that the sum-product operation is clear. That's the key to understanding how your output layer is created, what each value represents and what the depth is.
Hopefully that helps!
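Here is a runnable numpy version of the calculation walked through above (the 11x11x4 input and 5x5x4 filter follow the CS231n example; the stride of 2 and the random values are assumptions to make it concrete):

import numpy as np

X = np.random.randn(11, 11, 4)     # input volume: 11x11 spatial, depth 4
W0 = np.random.randn(5, 5, 4)      # one filter: 5x5 spatial, depth 4 (must match the input depth)
b0 = 0.1
stride = 2

out = (11 - 5) // stride + 1       # (I - F)/s + 1 = 4
V0 = np.zeros((out, out))          # one activation map, i.e. one depth slice of the output volume

for i in range(out):
    for j in range(out):
        patch = X[i*stride:i*stride+5, j*stride:j*stride+5, :]
        V0[i, j] = np.sum(patch * W0) + b0   # V0[0, 0] is exactly np.sum(X[:5,:5,:] * W0) + b0

# Stacking one such map per filter gives the output volume; its depth is the number of filters
print(V0.shape)   # (4, 4)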
The input volume, when we are doing an image classification problem, is N x N x 3. At the beginning it is not difficult to imagine what the depth will mean - just the number of channels: Red, Green, Blue. OK, so the meaning for the first layer is clear. But what about the next ones? Here is how I try to visualize the idea.
On each layer we apply a set of filters which convolve around the input. Let's imagine that we are currently at the first layer and we convolve around a volume V of size N x N x 3. As @Semih Yagcioglu mentioned at the very beginning, we are looking for some rough features: curves, edges, etc. Let's say we apply M filters of equal size (3x3) with stride 1. Then each of these filters looks for a different curve or edge while convolving around V. Of course, the filter has the same depth as the input; we want to use the whole information, not just a grayscale representation.
Now, these M filters will look for M different curves or edges, and each of these filters will produce a feature map consisting of scalars (the meaning of a scalar is the filter saying: the probability of having this curve here is X%). When we convolve with the same filter around the volume we obtain this map of scalars, telling us where exactly we saw the curve.
Then comes feature map stacking. Imagine stacking as the following thing. We have information about where each filter detected a certain curve. Nice, then when we stack them we obtain information about what curves / edges are available at each small part of our input volume. And this is the output of our first convolutional layer.
It is easy to grasp the idea behind the non-linearity with this in mind. When we apply the ReLU function on some feature map, we say: remove all negative probabilities for curves or edges at this location. And this certainly makes sense.
Then the input for the next layer will be a volume $V_1$ carrying info about different curves and edges at different spatial locations (remember: each depth slice carries info about one curve or edge).
This means that the next layer will be able to extract information about more sophisticated shapes by combining these curves and edges. To combine them, again, the filters should have the same depth as the input volume.
From time to time we apply pooling. Its purpose is exactly to shrink the volume, since when we use stride = 1 we usually look at a pixel (neuron) too many times for the same feature.
Hope this makes sense. Look at the amazing graphs provided by the famous CS231n course to check how exactly the probability for each feature at a certain location is computed.
In simple terms, it can be explained as follows.
Let's say you have 10 filters, where each filter is of size 5x5x3. What does this mean? The depth of this layer is 10, which is equal to the number of filters. The size of each filter can be defined as we want, e.g., 5x5x3 in this case, where 3 is the depth of the previous layer. To be precise, the depth of each filter in the next layer should be 10 (n x n x 10), where n can be defined as you want, like 5 or something else. Hope this makes everything clear.
The first thing you need to note is
the receptive field of a neuron is 3D
i.e., if the receptive field is 5x5, the neuron will be connected to 5x5x(input depth) points. So whatever your input depth, one layer of neurons will only produce one layer of output.
Now, the next thing to note is
depth of output layer = depth of conv. layer
i.e., the depth of the output volume is independent of the input depth; it only depends on the number of filters (the depth). This should be pretty obvious from the previous point.
Note that the number of filters (the depth of the CNN layer) is a hyperparameter. You can set it to whatever you want, independent of the image depth. Each filter has its own set of weights, enabling it to learn a different feature on the same local region covered by the filter.
The depth of the network is the number of layers in the network. In the Krizhevsky paper, the depth is 9 layers (modulo a fencepost issue with how layers are counted?).
If you are referring to the depth of the filter (I came to this question searching for that), then this diagram of LeNet is illustrative.
Source: http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf
How to create such a filter? Well, in Python, like this: https://github.com/alexcpn/cnn_in_python/blob/main/main.py#L19-L27
This will give you a list of numpy arrays, and the length of the list is the depth.
Example: as in the code above, but adding a depth of 3 for color (RGB), the network below results. The first convolutional layer uses 6 filters of shape (5,5,3).
Input (R,G,B) = [32x32x3] * (5x5x3)x6 = [28x28x6] * (5x5x6)x1 = [24x24x1] * (5x5x1)x16 = [20x20x16] *
FC layer 1 (20, 120, 16) * FC layer 2 (120, 1) * FC layer 3 (20, 10) * Softmax (10,) = (10,1) = Output
In PyTorch:
import numpy as np
import torch
import torch.nn as nn

np.set_printoptions(formatter={'float': lambda x: "{0:0.2f}".format(x)})

# Generate a random grayscale image
image_size = 32
image_depth = 3
image = np.random.rand(image_size, image_size)
# Stack it three times to mimic the RGB channels
image = np.stack([image, image, image], axis=image_depth - 1)  # axes 0 to 2
# Move the channel axis to the front (channels-first), as Conv2d expects
image = np.moveaxis(image, [2, 0], [0, 2])
print("Image Shape=", image.shape)

input_tensor = torch.from_numpy(image)
# 3 input channels (image depth), 6 filters of spatial size 5x5 -> output depth 6
m = nn.Conv2d(in_channels=3, out_channels=6, kernel_size=5, stride=1)
output = m(input_tensor.float())
print("Output Shape=", output.shape)
Image Shape= (3, 32, 32)
Output Shape= torch.Size([6, 28, 28])
