I do not understand what the use of dilated convolution is, or when we should use it. Is it for when we want a larger receptive field while saving memory? And does increasing the dilation rate increase the spacing between the kernel points?
Referring to Multi-Scale Context Aggregation by Dilated Convolutions: yes, you can save some memory while having a larger receptive field. You might want to use dilated convolutions if you want an exponential expansion of the receptive field without loss of resolution or coverage. This gives you a larger receptive field at the same computation and memory cost while preserving resolution. Pooling and strided convolutions can also "expand" the receptive field, but they reduce the data's resolution.
Generally, dilated convolutions have also been shown to perform better, for example in image segmentation with DeepLab and in speech with WaveNet.
Here is a neat visualization of what dilation does.
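If it helps, here is a minimal tf.keras sketch of my own (not from the paper) showing the effect: stacking 3x3 convolutions with dilation rates 1, 2 and 4 keeps the spatial size fixed while the receptive field grows.

import tensorflow as tf

# Sketch: a stack of 3x3 convolutions with dilation rates 1, 2, 4.
# With stride 1 and "same" padding the spatial resolution never shrinks,
# while the receptive field grows to 3x3 -> 7x7 -> 15x15.
x = tf.random.normal([1, 64, 64, 3])
for rate in (1, 2, 4):
    x = tf.keras.layers.Conv2D(16, 3, padding="same", dilation_rate=rate)(x)
    print(rate, x.shape)   # spatial size stays 64x64 at every step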
Related
Each layer in a CNN reduces the size of the input via convolution and max-pooling operations. Convolution is translation equivariant, while max-pooling is translation invariant. Correct me if this is wrong: each time max-pooling is applied, the precision of a feature's location is reduced. So the feature maps of the final conv layer in a very deep CNN will have a large receptive field (w.r.t. the original image), but the location of a feature (in the original image) is not discernible from looking at that feature map alone.
If this is true, how can bounding boxes be predicted so accurately by a deep CNN when we do localisation? I understand how classification works, but making accurate bounding box predictions is confusing me.
Perhaps a toy example will clarify my confusion:
Say we have a dataset of images with dimension 256x256x1, and we want to predict whether a cat is present, and if so, where it is, so our target is something like [sigmoid_cat_present, cat_location].
Our vanilla CNN (let's assume something like VGG) will take in the image and transform it into something like 16x16x256 in the last convolutional layer. Each pixel in this final 16x16 feature map is influenced by a much larger region of the original image. So if we determine a cat is present, how can [cat_location] be refined to a value more granular than this effective receptive field?
To add to your question: how about pixel-perfect accuracy of the segmentation boundary?
Your intuition regarding down-sampling via max-pooling is correct: normal CNNs have that limit. However, there have been some recent improvements to overcome it.
The breakthrough came in 2015-2016 in the form of U-Net and the atrous/dilated convolutions introduced in DeepLab.
Dilated convolutions (or atrous convolutions), previously described for wavelet analysis without signal decimation, expand the window size without increasing the number of weights by inserting zero-values into the convolution kernels. Dilated convolutions have been shown to decrease blurring in semantic segmentation maps, and are purported to work at least in part by extracting long-range information without the need for pooling.
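To make the "inserting zeros" point concrete, here is a tiny NumPy sketch of my own (not from the original answer): a 3x3 kernel dilated with rate 2 acts like a 5x5 kernel whose in-between entries are zero, so the window grows while the weight count stays at nine.

import numpy as np

# Sketch: build the effective kernel of a rate-2 dilated 3x3 convolution
# by spreading the 9 original weights over a 5x5 grid of zeros.
k = np.arange(1.0, 10.0).reshape(3, 3)          # the 9 learnable weights
rate = 2
size = rate * (k.shape[0] - 1) + 1              # effective window: 5
dilated = np.zeros((size, size))
dilated[::rate, ::rate] = k
print(dilated)                                  # 5x5 window, still only 9 non-zero weights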
U-Net architectures are another method that seeks to retain high-spatial-frequency information, by adding skip connections directly between early and late layers. In other words, down-sampling followed by up-sampling, with the skip connections carrying fine detail across.
In TensorFlow, atrous convolutions are implemented with the function:
tf.nn.atrous_conv2d
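A minimal usage sketch of my own (assuming TensorFlow 2.x eager execution):

import tensorflow as tf

# Sketch: apply a rate-2 atrous convolution to a random image batch.
image = tf.random.normal([1, 64, 64, 3])     # [batch, height, width, channels]
kernel = tf.random.normal([3, 3, 3, 16])     # [height, width, in_channels, out_channels]
out = tf.nn.atrous_conv2d(image, kernel, rate=2, padding="SAME")
print(out.shape)                             # (1, 64, 64, 16): resolution is preserved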
There are many more methods and this is an ongoing research area.
In CNNs, if padding is used so that the image doesn't shrink after several convolutional layers, then why do we use strided convolutions? I wonder because strided convolutions also reduce the size of the image.
Because we want to reduce the size of the image. There are a few reasons:
Reduce computational and memory requirements.
Aggregate local features into higher-level features.
Give subsequent convolutions a larger receptive field relative to the original scale.
Traditionally we have used pooling to reduce the size of the image, like max-pooling. Strided convolution is another way to do this (and it's becoming more popular).
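A small tf.keras sketch of my own showing the two downsampling options side by side:

import tensorflow as tf

# Sketch: max-pooling and a stride-2 convolution both halve the spatial size;
# the strided convolution additionally has learnable weights.
x = tf.random.normal([1, 32, 32, 16])
pooled = tf.keras.layers.MaxPool2D(pool_size=2)(x)
strided = tf.keras.layers.Conv2D(16, 3, strides=2, padding="same")(x)
print(pooled.shape, strided.shape)   # (1, 16, 16, 16) (1, 16, 16, 16)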
My understanding is that we use padding when we convolve because convolving with filters shrinks the output dimensions and loses information from the edges/corners of the input matrix. However, we also use a pooling layer after a number of conv layers in order to downsample our feature maps. Doesn't this seem somewhat contradictory? We use padding because we do NOT want to reduce the spatial dimensions, but we later use pooling to reduce the spatial dimensions. Could someone provide some intuition behind these two?
Without loss of generality, assume we are dealing with images as inputs. The reason behind padding is not only to keep the dimensions from shrinking; it also ensures that input pixels on the corners and edges of the input are not "disadvantaged" in affecting the output. Without padding, a pixel on the corner of an image overlaps with just one filter region, while a pixel in the middle of the image overlaps with many filter regions. Hence, the pixel in the middle affects more units in the next layer and therefore has a greater impact on the output.

Secondly, you actually do want to shrink the dimensions of your input (remember, deep learning is largely about compression, i.e. finding low-dimensional representations of the input that disentangle the factors of variation in your data). The shrinking induced by convolutions with no padding is not ideal: with a really deep net you would quickly end up with very low-dimensional representations that lose most of the relevant information in the data. Instead you want to shrink your dimensions in a smarter way, which is achieved by pooling. In particular, max pooling has been found to work well. This is really an empirical result, i.e. there isn't a lot of theory to explain why this is the case. You can imagine that by taking the max over nearby activations, you still retain the information about the presence of a particular feature in that region, while losing information about its exact location. This can be good or bad: good because it buys you translation invariance, and bad because the exact location may be relevant for your problem.
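To see both effects in a few lines, here is a sketch of mine (not from the original answer) using tf.keras: "valid" padding steadily eats away at the borders, while "same" padding keeps the size fixed and lets you downsample deliberately with pooling.

import tensorflow as tf

# Sketch: compare "valid" vs "same" padding over a few 3x3 convolutions,
# then downsample on purpose with max-pooling.
x_valid = x_same = tf.random.normal([1, 32, 32, 8])
for _ in range(5):
    x_valid = tf.keras.layers.Conv2D(8, 3, padding="valid")(x_valid)  # shrinks by 2 each time
    x_same = tf.keras.layers.Conv2D(8, 3, padding="same")(x_same)     # size unchanged
print(x_valid.shape, x_same.shape)                 # (1, 22, 22, 8) (1, 32, 32, 8)
print(tf.keras.layers.MaxPool2D(2)(x_same).shape)  # (1, 16, 16, 8): deliberate downsampling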
In a Convolutional Neural Network (CNN), a filter is selected for weight sharing. For example, in the following picture, a 3x3 window with stride 1 (the step between adjacent filter positions) is chosen.
So my question is: how do I choose the window size? If I use 4x4 with a stride of 2, how much difference will it make? Thanks a lot in advance!
There's no definite answer to this: filter size is one of the hyperparameters you generally need to tune. However, there are some useful observations that may help you. It's often preferable to choose smaller filters, but to use a greater number of them.
Example: four 5x5 filters have 100 parameters (ignoring bias), while ten 3x3 filters have 90 parameters. With the larger number of filters you can still capture a variety of features in the image, but with fewer parameters. More on this here.
Modern CNNs take this idea even further and use consecutive 3x1 and 1x3 convolutional layers. This reduces the number of parameters even more without hurting performance. See the evolution of the Inception network.
The choice of stride is also important, but it affects the tensor shape after the convolution, and hence the whole network. The general rule is to use stride=1 in ordinary convolutions and preserve the spatial size with padding, and to use stride=2 when you want to downsample the image.
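To put rough numbers on the parameter comparison above, here is a small tf.keras sketch of mine (64 input and 64 output channels, biases included):

import tensorflow as tf

# Sketch: parameter counts for the filter choices discussed above.
x = tf.zeros([1, 32, 32, 64])

one_5x5 = tf.keras.layers.Conv2D(64, 5, padding="same")
two_3x3 = tf.keras.Sequential([tf.keras.layers.Conv2D(64, 3, padding="same"),
                               tf.keras.layers.Conv2D(64, 3, padding="same")])
factorized = tf.keras.Sequential([tf.keras.layers.Conv2D(64, (3, 1), padding="same"),
                                  tf.keras.layers.Conv2D(64, (1, 3), padding="same")])

for name, block in [("one 5x5", one_5x5), ("two 3x3", two_3x3), ("3x1 + 1x3", factorized)]:
    block(x)                                   # build the weights
    print(name, block.count_params())          # 102464, 73856, 24704

The stacked 3x3 pair covers the same 5x5 receptive field as the single 5x5 layer with noticeably fewer parameters, and the 3x1/1x3 factorization reduces the count further still.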
The animation is from here. I am wondering why the dilated convolution is claimed to preserve resolution. Apparently the input in blue is 7x7 and the output in green is 3x3.
EDIT:
One way to work around the resolution loss is to pad the input by roughly half the size of the current receptive field, but
this essentially undermines the statement that dilated convolutions do not lose resolution, because it is the padding that preserves the resolution. To get the same output size as the input, a conventional convolution needs even less padding.
since the padding grows exponentially, even a relatively modest dilation factor leads to a heavily padded input image. Imagine a 1024x1024 input with 10x dilation: it becomes about 2048x2048 (please let me know if I am wrong here). That is 4x the original size, which means most of the convolutions are done on the padded area instead of on the real input. Personally this seems quite counterintuitive to me.
This is indeed a dilated convolution with an effective 5x5 filter. If you imagine the blue part of the animation as a 3x3 image that has been zero-padded, it preserves resolution.
With regard to your edit, the emphasis is really on this statement in the post you linked: dilated convolutions support exponential expansion of the receptive field without loss of resolution or coverage.
Padding is done to preserve resolution. That is correct.
What we really want here is to expand the size of the receptive field. In the post you linked, with three 3x3 dilated convolutions at increasing dilation rates (1, 2, 4), we already achieve a 15x15 receptive field in the feature maps.
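A tiny helper of my own to check that arithmetic (assuming stride 1 throughout, as in the dilated stack):

# Sketch: receptive-field growth of stacked dilated convolutions.
# Each layer is (kernel_size, dilation); stride is 1 everywhere.
def receptive_field(layers):
    rf = 1
    for k, d in layers:
        rf += d * (k - 1)        # each layer adds dilation * (kernel - 1)
    return rf

# Three 3x3 dilated convolutions with dilation 1, 2, 4:
print(receptive_field([(3, 1), (3, 2), (3, 4)]))   # 15 -> a 15x15 receptive field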
To achieve the equivalent with ordinary 3x3 convolutions, with no loss of coverage and no loss of resolution, we can use a stride of 3 (a stride of 4 would result in loss of coverage) and extremely heavy padding (to the extent that, as you said, the convolution runs mostly over padded zeroes). However, we would need four 3x3 convolutions with stride 3, instead of three, to achieve a 15x15 receptive field.
On top of that, the ordinary convolutions would spend even more of their computation on padded values that carry no real information, compared to the dilated-convolution case.