Why do dilated convolutions preserve resolution? - machine-learning

The animation is from here. I am wondering why the dilated convolution is claimed to preserve resolution. Apparently the input in blue is 7x7 and the output in green is 3x3.
EDIT:
One way to work around the resolution loss is to pad the input with roughly half the size of the current receptive field, but
this essentially undermines the statement that dilated convolutions do not lose resolution, because it is the padding that preserves the resolution. To get the same output size as the input, a conventional convolution needs even less padding.
Also, since the padding grows exponentially, even a not-that-large dilation factor leads to a heavily padded input image. Imagine a 1024x1024 input with 10x dilation: it would become roughly 2048x2048 (please let me know if I am wrong here). That is 4x the original size, which means most of the convolutions are done on the padded area instead of the real input. Personally, this seems quite counterintuitive to me.

This is indeed a dilated convolution: a 3x3 filter with dilation 2, so its effective footprint is 5x5. If you imagine the blue part of the animation as a 3x3 image that is zero-padded, it preserves resolution.
With regard to your edit, the emphasis is really on this statement in the post you linked: "dilated convolutions support exponential expansion of the receptive field without loss of resolution or coverage".
Padding is done to preserve resolution. That is correct.
What we really want here is to expand the size of the receptive field. In the post you linked, with three 3x3 dilated convolutions at increasing dilations (1, 2, 4), we already achieve a receptive field of 15x15 in the feature maps.
To achieve the equivalent with ordinary 3x3 convolutions, with no loss of coverage and no loss of resolution, we could use a stride of 3 (a stride of 4 would result in loss of coverage) and extremely heavy padding (to the extent that, as you said, the convolution runs mostly over padded zeroes). However, we would need four 3x3 convolutions with stride 3, instead of three, to reach a 15x15 receptive field.
On top of that, the ordinary convolutions would spend even more operations on padded positions that carry no information, compared to the dilated case.
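As a rough illustration of the numbers above, here is a minimal Python sketch (my own, not from the linked post) of the receptive-field and output-size arithmetic; the dilation schedule 1, 2, 4 is the one shown in the paper's figure.

```python
# Receptive field of a stack of stride-1 3x3 convolutions with dilations 1, 2, 4
# (as in Yu & Koltun, "Multi-Scale Context Aggregation by Dilated Convolutions").
def stacked_receptive_field(kernel=3, dilations=(1, 2, 4)):
    rf = 1
    for d in dilations:
        rf += d * (kernel - 1)   # each layer adds d*(k-1) pixels to the field
    return rf

print(stacked_receptive_field())                  # 15 -> a 15x15 receptive field

# Standard output-size formula for one convolution layer:
# out = floor((in + 2*pad - dilation*(kernel-1) - 1) / stride) + 1
def out_size(in_size, kernel=3, dilation=1, pad=0, stride=1):
    return (in_size + 2 * pad - dilation * (kernel - 1) - 1) // stride + 1

# With pad = dilation*(kernel-1)/2 the resolution is preserved,
# e.g. the 7x7 input from the animation stays 7x7:
print(out_size(7, kernel=3, dilation=2, pad=2))   # 7
```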

Related

When to use dilated convolutions?

I do not understand what the use of dilated convolution is and when we should use it. Is it for when we want a larger receptive field while saving memory? And does increasing the dilation size increase the spacing between the kernel points?
Referring to Multi-Scale Context Aggregation by Dilated Convolutions: yes, you can save some memory while having a larger receptive field. You might want to use dilated convolutions if you want an exponential expansion of the receptive field without loss of resolution or coverage. This lets us have a larger receptive field at the same computation and memory cost while preserving resolution. Pooling and strided convolutions can also "expand" the receptive field, but they reduce the data's resolution.
Generally, dilated convolutions have also been shown to perform better, for example in image segmentation with DeepLab and in speech with WaveNet.
Here is a neat visualization of what dilation does.
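To make the resolution point concrete, here is a small PyTorch sketch (the shapes and channel counts are made up for illustration): a dilated 3x3 convolution with padding equal to the dilation keeps the spatial size, whereas pooling halves it.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 1, 64, 64)

# 3x3 convolution with dilation 4; padding = dilation keeps the 64x64
# resolution while the kernel's effective footprint grows to 9x9.
dilated = nn.Conv2d(1, 1, kernel_size=3, dilation=4, padding=4)
print(dilated(x).shape)   # torch.Size([1, 1, 64, 64])

# Expanding the receptive field with pooling instead halves the resolution.
pooled = nn.MaxPool2d(2)(x)
print(pooled.shape)       # torch.Size([1, 1, 32, 32])
```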

How to choose the window size of CNN in deep learning?

In a Convolutional Neural Network (CNN), a filter is selected for weight sharing. For example, in the following pictures, a 3x3 window with stride (the distance between adjacent neurons) 1 is chosen.
So my question is: how do I choose the window size? If I use 4x4 with a stride of 2, how much difference would it make? Thanks a lot in advance!
There's no definite answer to this: filter size is one of the hyperparameters you generally need to tune. However, there are some useful observations that may help you. It's often preferred to choose smaller filters, but to have a greater number of them.
Example: four 5x5 filters have 100 parameters (ignoring biases), while ten 3x3 filters have 90 parameters. Through the larger number of filters you can still capture the variety of features in the image, but with fewer parameters. More on this here.
Modern CNNs go even further with this idea and choose consecutive 3x1 and 1x3 convolutional layers. This reduces the number of parameters even more, but doesn't affect the performance. See the evolution of the Inception network.
The choice of stride is also important, but it affects the tensor shape after the convolution, hence the whole network. The general rule is to use stride=1 in usual convolutions and preserve the spatial size with padding, and use stride=2 when you want to downsample the image.
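A short PyTorch sketch of both points (the channel counts are arbitrary): stacking small filters covers the same neighbourhood as a big one with fewer weights, and the stride/padding choice decides whether the spatial size is preserved or halved.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 32, 32)

# Two stacked 3x3 convolutions cover a 5x5 neighbourhood with 2*9 = 18
# weights per kernel instead of 5*5 = 25 (biases and channel counts ignored).
single_5x5 = nn.Conv2d(3, 16, kernel_size=5, padding=2)
stacked_3x3 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1),
                            nn.Conv2d(16, 16, 3, padding=1))

# stride=1 with padding preserves the spatial size; stride=2 downsamples.
same = nn.Conv2d(3, 16, 3, stride=1, padding=1)
down = nn.Conv2d(3, 16, 3, stride=2, padding=1)
print(same(x).shape)   # torch.Size([1, 16, 32, 32])
print(down(x).shape)   # torch.Size([1, 16, 16, 16])
```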

Backward pass on convolutional layer with 3-channel images

I know that deconvolution is basically convolution of the output with flipped filters, and I have implemented it for 2D data. But I am not able to generalize it to 3D data. For example, consider an input of dimension 3x5x5, a filter of dimension 3x3x3, and a stride of 1. So the output will be of dimension 1x3x3. What I don't understand is how to calculate the deconvolution for this output. The flipped filter will again be of dimension 3x3x3, while the output of the convolution is of dimension 1x3x3, which are incompatible for convolution. So how can we calculate the deconvolution?
Perhaps this post will help you out a bit. You are correct in saying that a filter of the same size cannot fit the deconvolution dimensions. To remedy that, the 1x3x3 gets padded throughout with zeros, mean values, nearest-neighbour values, etc. until it is the size you require. Depth is handled the same way. In your example, you want a 3x3x3 filter to 'deconvolve' the 1x3x3 into a 3x5x5. So we pad the 1x3x3 out to a 5x7x7 (with whichever method you prefer) and apply the filter. There are definite drawbacks to this process, stemming from the fact that you are trying to extrapolate more information from less.
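Here is one way the "pad, then convolve" recipe can be written down, as a NumPy/SciPy sketch using the 3x5x5 and 3x3x3 shapes from the question; a 'full' convolution is the same thing as zero-padding the 1x3x3 to 5x7x7 and correlating with the flipped filter.

```python
import numpy as np
from scipy.signal import convolve, correlate

x = np.random.randn(3, 5, 5)   # input volume from the question
w = np.random.randn(3, 3, 3)   # filter

# Forward pass: 'valid' correlation, stride 1 -> a 1x3x3 output.
out = correlate(x, w, mode='valid')
print(out.shape)               # (1, 3, 3)

# Backward ('deconvolution') step: a 'full' convolution of the 1x3x3
# gradient with the filter. Internally this is equivalent to zero-padding
# the 1x3x3 out to 5x7x7 and correlating with the flipped filter.
grad_out = np.random.randn(1, 3, 3)
grad_in = convolve(grad_out, w, mode='full')
print(grad_in.shape)           # (3, 5, 5) -- back to the input shape
```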

Gaussian blur and FFT

I'm trying to make an implementation of Gaussian blur for a school project.
I need to make both a CPU and a GPU implementation to compare performance.
I am not quite sure that I understand how Gaussian blur works, so one of my questions is whether I have understood it correctly.
Here's what I do now:
I use the equation from Wikipedia (http://en.wikipedia.org/wiki/Gaussian_blur) to calculate the filter.
For 2D, I take the RGB values of each pixel in the image and apply the filter by multiplying the RGB values of the pixel and its surrounding pixels with the associated filter positions. These products are then summed to form the new RGB values of the pixel.
For 1D, I apply the filter first horizontally and then vertically, which should give the same result if I understand things correctly.
Is this exactly the same result as when the 2D filter is applied?
Another question I have is about how the algorithm can be optimized.
I have read that the Fast Fourier Transform is applicable to Gaussian blur.
But I can't figure out how to relate it.
Can someone give me a hint in the right direction?
Thanks.
Yes, the 2D Gaussian kernel is separable so you can just apply it as two 1D kernels. Note that you can't apply these operations "in place" however - you need at least one temporary buffer to store the result of the first 1D pass.
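For example, here is a small NumPy/SciPy sketch of the separable version (sigma and the truncation radius are arbitrary choices); the two 1D passes, each written into its own buffer, match the single 2D convolution up to floating-point error.

```python
import numpy as np
from scipy.ndimage import convolve1d
from scipy.signal import convolve2d

sigma = 2.0
radius = int(3 * sigma)
t = np.arange(-radius, radius + 1)
g = np.exp(-t**2 / (2 * sigma**2))
g /= g.sum()                                        # normalised 1D Gaussian

img = np.random.rand(64, 64)

# Two 1D passes; note the temporary buffer holding the first result.
tmp = convolve1d(img, g, axis=1, mode='constant')   # horizontal pass
sep = convolve1d(tmp, g, axis=0, mode='constant')   # vertical pass

# One 2D pass with the outer-product kernel.
full = convolve2d(img, np.outer(g, g), mode='same', boundary='fill')

print(np.allclose(sep, full))                       # expect True
```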
FFT-based convolution is a useful optimisation when you have large kernels - this applies to any kind of filter, not just Gaussian. Just how big "large" is depends on your architecture, but you probably don't want to worry about using an FFT-based approach for anything smaller than, say, a 49x49 kernel. The general approach is:
FFT the image
FFT the kernel, padded to the size of the image
multiply the two in the frequency domain (equivalent to convolution in the spatial domain)
IFFT (inverse FFT) the result
Note that if you're applying the same filter to more than one image then you only need to FFT the padded kernel once. You still have at least two FFTs to perform per image though (one forward and one inverse), which is why this technique only becomes a computational win for large-ish kernels.
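As a rough sketch of those four steps in NumPy (note that padding only the kernel, as below, gives circular/wrap-around boundary handling; to avoid wrap-around you would also pad the image):

```python
import numpy as np

def fft_convolve(img, kernel):
    """Blur img with kernel via the FFT (circular boundary handling)."""
    kh, kw = kernel.shape
    # Pad the kernel to the image size and shift its centre to (0, 0)
    # so the result is not translated.
    kpad = np.zeros_like(img, dtype=float)
    kpad[:kh, :kw] = kernel
    kpad = np.roll(kpad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    # Multiplication in the frequency domain == convolution in space.
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kpad)))
```

If the same filter is applied to many images, `np.fft.fft2(kpad)` only needs to be computed once and can be reused.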

Gaussian blur and convolution kernels

I do not understand what a convolution kernel is and how I would apply a convolution matrix to pixels in an image (I am talking about doing a Gaussian Blur operation on an image).
Also could I get an explanation on how to create a kernel for a Gaussian Blur operation?
I am reading this article but I cannot seem to understand how things are done...
Thanks to anyone who takes time to explain this to me :),
ExtremeCoder
The basic idea is that the new pixels of the image are created by a weighted average of the pixels close to them (imagine drawing a circle around the pixel).
For each pixel in the image you create a little square around it. Let's say you take the 8 neighbours next to the pixel (including diagonals, even though they do not matter here), and you perform a weighted average to get the middle pixel.
In the Gaussian blur case this breaks down into two one-dimensional operations. For each pixel, take some number of pixels next to it in the row direction only. Multiply the pixel values by the weights computed from the Gaussian distribution (or, if you are doing this for a visual effect rather than for a scientific reason, the weights can be anything that looks good) and sum them up. Another way to look at it is that the pixels make a vector, the weights make a vector, and you are taking their dot product. Repeat this process in the column direction as a separate pass.
A convolution kernel is a matrix of values that specify how the neighborhood of a pixel contributes to that pixel's state in the final image. There's a fair description of the basics here. A Gaussian blur is a convolution function that uses a really ugly (you've seen the Wikipedia page) function to compute a convolution kernel to pass over the image. You'll find an example kernel for a Gaussian on that Wikipedia page.
The point of all the math in there is to produce a soft blur that resembles the scatter pattern produced by a mesh screen placed between the viewer and the image. You can think of the 'size' (the standard deviation) of the Gaussian as being related to the distance between the image and the screen.
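If it helps, here is a minimal NumPy sketch of building such a kernel directly from the Gaussian formula (the size and sigma are arbitrary example values); you then slide it over the image and take the weighted sum at each pixel.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Return a normalised size x size Gaussian convolution kernel."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()            # weights sum to 1, so brightness is kept

print(np.round(gaussian_kernel(3, 1.0), 3))
```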
Here's an awesome tool, if you don't want to calculate it all by yourself (like me):
http://www.embege.com/gauss/
EDIT
Since the link seems to be broken now, here's a link to archive.org:
http://web.archive.org/web/20150217075657/http://www.embege.com/gauss

Resources