Intuition behind Edge Detection Matrices in Convolutional Neural Networks - machine-learning

I am new to deep learning and attempting to understand how a CNN performs image classification.
I have gone through multiple YouTube videos, blogs and papers, and they all mention roughly the same thing:
apply filters to get feature maps
perform pooling
introduce non-linearity using ReLU
send to a fully connected network.
While this is all fine and dandy, I don't really understand how convolution works in essence. Take edge detection, for example:
a filter like [[-1, 1], [-1, 1]] detects a vertical edge.
How? Why? How do we know for sure that this will detect a vertical edge?
Similarly for the matrices used for blurring/sharpening: how do we actually know that they will do what they are meant to?
Do I simply take people's word for it?
Please help. I feel helpless since I am not able to understand convolution and how these matrices detect edges or shapes.

Filters detect spatial patterns such as edges in an image by responding to changes in the intensity values of the image.
A quick recap: in terms of an image, a high-frequency image is one where the intensity of the pixels changes by a large amount, while a low-frequency image is one where the intensity is almost uniform. An image has both high and low frequency components. The high-frequency components correspond to the edges of an object, because at the edges the rate of change of pixel intensity is high.
High pass filters are used to enhance the high-frequency parts of an image.
Let's take an example where a part of your image has pixel values [[10, 10, 0], [10, 10, 0], [10, 10, 0]], indicating that the pixel values decrease toward the right, i.e. the image changes from light on the left to dark on the right. The filter used here is [[1, 0, -1], [1, 0, -1], [1, 0, -1]].
Now, we take the element-wise product of these two matrices, which gives [[10, 0, 0], [10, 0, 0], [10, 0, 0]]. Finally, these values are summed up to give 30, which captures the variation in pixel values as we move from left to right. Similarly, we compute the subsequent output values.
Here you will notice that the rate of change of pixel values from left to right is large, so a vertical edge has been detected. Had you used the filter [[1, 1, 1], [0, 0, 0], [-1, -1, -1]], you would get an output consisting of 0s only, i.e. no horizontal edge present. In a similar way, [[-1, 1], [-1, 1]] detects a vertical edge.
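Here is a tiny NumPy sketch of that worked example (my own illustration, not from the original answer): one "step" of the convolution is just an element-wise product of the 3x3 patch with the filter, followed by a sum.
import numpy as np

patch = np.array([[10, 10, 0],
                  [10, 10, 0],
                  [10, 10, 0]])

vertical = np.array([[1, 0, -1],
                     [1, 0, -1],
                     [1, 0, -1]])

horizontal = np.array([[ 1,  1,  1],
                       [ 0,  0,  0],
                       [-1, -1, -1]])

# one convolution step = element-wise product, then sum
print(np.sum(patch * vertical))    # 30 -> strong response, a vertical edge
print(np.sum(patch * horizontal))  #  0 -> no horizontal edge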
You can check more here in a lecture by Andrew Ng.
Edit: Usually, a vertical edge detection filter has bright pixels on the left and dark pixels on the right (or vice-versa). The sum of the filter's values should be 0, otherwise the resulting image will become brighter or darker overall. Also, in convolutional neural networks the filters are not hand-designed: they are learned, like any other weights, through backpropagation during the training process.

Related

How to calculate partial derivatives of error function with respect to values in matrix

I am building a basic neural network that takes in a 2x2 image, with the goal of classifying the image as either a forward slash (1-class) or a back slash (0-class) shape. The input data is a flat numpy array: 1 represents a black pixel, 0 represents a white pixel.
0-class: [1, 0, 0, 1]
1-class: [0, 1, 1, 0]
If I start my filter as a random 4x1 matrix, how can I use gradient descent to arrive at either perfect matrix, [1,-1,-1,1] or [-1,1,1,-1], to classify the data points?
Side note: even when multiplied with the "perfect" answer matrix and then summed, the outputs would be -2 and 2. Would my data labels need to be -2 and 2? What if I want my classes labeled as 0 and 1?
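A minimal sketch of one way to set this up (my own illustration, not from the thread): treat the dot product as a logit and pass it through a sigmoid, so the labels can stay 0 and 1 regardless of the raw -2/2 scale.
import numpy as np

X = np.array([[1, 0, 0, 1],    # back slash    -> class 0
              [0, 1, 1, 0]])   # forward slash -> class 1
y = np.array([0.0, 1.0])

rng = np.random.default_rng(0)
w = rng.normal(size=4)         # random initial 4x1 "filter"
b = 0.0
lr = 0.5

for _ in range(2000):
    z = X @ w + b
    p = 1.0 / (1.0 + np.exp(-z))      # sigmoid squashes the sum into (0, 1)
    grad_z = (p - y) / len(y)         # gradient of binary cross-entropy w.r.t. z
    w -= lr * (X.T @ grad_z)          # gradient descent step on the filter
    b -= lr * grad_z.sum()

print(np.round(w, 2))  # weights push the class-1 pattern up and the class-0 pattern down
print(np.round(p, 2))  # predictions close to [0, 1]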

Derive Sobel's operators?

The two operators for detecting and smoothing horizontal and vertical edges are shown below:
[-1 0 1]
[-2 0 2]
[-1 0 1]
and
[-1 -2 -1]
[ 0 0 0]
[ 1 2 1]
But after much Googling, I still have no idea where these operators come from. I would appreciate it if someone can show me how they are derived.
The formulation was proposed by Irwin Sobel a long time ago. I think about 1974. There is a great page on the subject here.
The main advantage of this operator, which convolves the 3x3 neighbourhood of the pixel at which the gradient is to be estimated, is that it is really fast and can be done with shifts and adds in low-cost hardware.
They are not the greatest edge detectors in the world - Google Canny edge detectors for something better, but they are fast and suitable for a lot of simple applications.
So spatial filters, like the Sobel kernels, are applied by "sliding" the kernel over the image (this is called convolution). If we take this kernel:
[-1 0 1]
[-2 0 2]
[-1 0 1]
After applying the Sobel operator, each result pixel gets a:
high (positive) value if the pixels on the right side are bright and pixels on the left are dark
low (negative) value if the pixels on the right side are dark and pixels on the left are bright.
This is because in discrete 2D convolution, the result is the sum of each kernel value multiplied by the corresponding image pixel. Thus a vertical edge produces a result with a large negative or positive value, depending on the direction of the edge gradient. We can then take the absolute value and scale to the interval [0, 1] if we want to display the edges as white and don't care about the edge direction.
This works identically for the other kernel, except it finds horizontal edges.
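One common way to see where the kernel's numbers come from (my own sketch, not part of the answer above): the vertical-edge Sobel kernel is the outer product of a smoothing column [1, 2, 1] and a central-difference row [-1, 0, 1], and sliding it (without flipping, i.e. correlation) over a dark-to-bright step gives a large positive response, as described above.
import numpy as np

# outer product of smoothing and central difference
sobel_x = np.outer([1, 2, 1], [-1, 0, 1])
# [[-1  0  1]
#  [-2  0  2]
#  [-1  0  1]]

# a 3x3 patch straddling a dark-to-bright step (dark on the left, bright on the right)
patch = np.array([[0, 10, 10],
                  [0, 10, 10],
                  [0, 10, 10]])

# element-wise product and sum = the kernel's response at this position
print(np.sum(sobel_x * patch))  # 40 -> strong positive response, vertical edge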

What is "same quarter" meaning about histogram?

In "Adaptive document image binarization" paper (link: http://www.mediateam.oulu.fi/publications/pdf/24.p) I found SDM, TBM algorithm for Text/Image Segmentation,
But I can't understand what "same quarter" is in the followed this paragraph.
If the average is high and a global histogram peak is in the same quarter of the histogram and transient difference is transient, then use SDM.
If the average is medium and a global histogram peak is not in the same quarter of the histogram and transient difference is uniform, then use TBM.
I know that a quarter means 1/4, but I think "quarter" has a different meaning here, right?
After skimming the paper very quickly, I found two possible ways to interpret this.
From the current bin, choose a quarter of the histogram by looking 1/8th to the left and 1/8th to the right. i.e. if your histogram has 256 bins, and you are at bin 50, the quarter you are looking for is [18, 81]. So if the average is high and the peak lies in [18,81], use SDM.
Divide the entire histogram into quarters, and check which quarter your current bin lies in. i.e. if your histogram has 256 bins, divide it into [0, 63], [64, 127], [128, 191], [192, 255]. If your current bin is 50, you are in quarter 1, and so if the average is medium and the peak lies anywhere outside quarter 1, use TBM.
Based on intuition and mathematical sense, option 1 is more likely. But I would try both and see which implementation gives better results.
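In code, the two readings might look roughly like this (my own sketch; "bins" is the number of histogram bins, "peak" the bin index of the global peak, "current" the bin of interest):
def same_quarter_option1(current, peak, bins=256):
    # a quarter-wide window centred on the current bin: 1/8th to each side
    eighth = bins // 8
    return current - eighth <= peak < current + eighth

def same_quarter_option2(current, peak, bins=256):
    # split the histogram into four fixed quarters and compare quarter indexes
    quarter = bins // 4
    return current // quarter == peak // quarter

print(same_quarter_option1(50, 70))  # True  (70 lies within [18, 81])
print(same_quarter_option2(50, 70))  # False (bin 50 is in quarter 0, bin 70 in quarter 1)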

Where to center the kernel when using FFTW for image convolution?

I am trying to use FFTW for image convolution.
At first, just to test whether the system was working properly, I performed the FFT, then the inverse FFT, and got the exact same image back.
Then, as a small step forward, I used the identity kernel (i.e., kernel[0][0] = 1 whereas all the other components equal 0). I took the component-wise product between the image and kernel (both in the frequency domain), then did the inverse FFT. Theoretically I should be able to get the identical image back. But the result I got is not even close to the original image. I suspect this has something to do with where I center my kernel before I FFT it into the frequency domain (since I put the "1" at kernel[0][0], it basically means that I centered the positive part at the top left). Could anyone enlighten me about what goes wrong here?
For each dimension, the indexes of samples should be from -n/2 ... 0 ... n/2 -1, so if the dimension is odd, center around the middle. If the dimension is even, center so that before the new 0 you have one sample more than after the new 0.
E.g. -4, -3, -2, -1, 0, 1, 2, 3 for a width/height of 8 or -3, -2, -1, 0, 1, 2, 3 for a width/height of 7.
The FFT is relative to the middle; on its scale there are negative indexes.
In memory the points are 0...n-1, but the FFT treats them as -floor(n/2)...ceil(n/2)-1, where memory index 0 is -floor(n/2) and n-1 is ceil(n/2)-1.
The identity matrix is a matrix of zeros with 1 in the 0,0 location (the center - according to above numbering). (In the spatial domain.)
In the frequency domain the identity matrix should be a constant (all real values 1 or 1/(N*M) and all imaginary values 0).
If you do not receive this result, then the identity matrix might need padding differently (to the left and down instead of around all sides) - this may depend on the FFT implementation.
Center each dimension separately (this is an index centering, no change in actual memory).
You will probably need to pad the image (after centering) to a whole power of 2 in each dimension (2^n * 2^m where n doesn't have to equal m).
Pad relative to FFT's 0,0 location (to center, not corner) by copying existing pixels into a new larger image, using center-based-indexes in both source and destination images (e.g. (0,0) to (0,0), (0,1) to (0,1), (1,-2) to (1,-2))
Assuming your FFT uses regular floating point cells and not complex cells, the complex image has to be of size 2*ceil(n/2) * 2*ceil(m/2) even if you don't need a whole power of 2 (since it has half the samples, but the samples are complex).
If your image has more than one color channel, you will first have to reshape it so that the channels are the most significant in the sub-pixel ordering, instead of the least significant. You can reshape and pad in one go to save time and space.
Don't forget the FFTSHIFT after the IFFT. (To swap the quadrants.)
The result of the IFFT is 0...n-1. You have to take pixels floor(n/2)+1..n-1 and move them before 0...floor(n/2).
This is done by copying pixels to a new image, copying floor(n/2)+1 to memory-location 0, floor(n/2)+2 to memory-location 1, ..., n-1 to memory-location floor(n/2), then 0 to memory-location ceil(n/2), 1 to memory-location ceil(n/2)+1, ..., floor(n/2) to memory-location n-1.
When you multiply in the frequency domain, remember that the samples are complex (one cell real then one cell imaginary) so you have to use a complex multiplication.
The result might need dividing by N^2*M^2 where N is the size of n after padding (and likewise for M and m). - You can tell this by (a. looking at the frequency domain's values of the identity matrix, b. comparing result to input.)
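As a concrete illustration of the centering issue (my sketch in NumPy rather than FFTW, but the indexing problem is the same): embed the kernel in a zero image with its center at the image center, move that center to index (0, 0) with an ifftshift, and the identity kernel then really does return the original image.
import numpy as np

def fft_convolve(image, kernel):
    H, W = image.shape
    kh, kw = kernel.shape
    padded = np.zeros_like(image, dtype=float)
    top, left = H // 2 - kh // 2, W // 2 - kw // 2
    padded[top:top + kh, left:left + kw] = kernel   # kernel center sits at the image center
    padded = np.fft.ifftshift(padded)               # move the kernel center to index (0, 0)
    # NumPy's ifft2 already divides by N*M, so no extra normalization is needed here
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(padded)))

identity = np.zeros((3, 3)); identity[1, 1] = 1.0   # 1 at the kernel's center
img = np.random.rand(8, 8)
print(np.allclose(fft_convolve(img, identity), img))  # True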
I think that your understanding of the identity kernel may be off. An identity kernel should have the 1 at the center of the 2D kernel, not at the 0, 0 position.
For example, for a 3 x 3 kernel, you have yours set up as follows:
1, 0, 0
0, 0, 0
0, 0, 0
It should be
0, 0, 0
0, 1, 0
0, 0, 0
Check this out also
What is the "do-nothing" convolution kernel
also look here, at the bottom of page 3.
http://www.fmwconcepts.com/imagemagick/digital_image_filtering.pdf
I took the component-wise product between the image and kernel in frequency domain, then did the inverse fft. Theoretically I should be able to get the identical image back.
I don't think that doing a forward transform with a non-fft kernel, and then an inverse fft transform should lead to any expectation of getting the original image back, but perhaps I'm just misunderstanding what you were trying to say there...

How to get a clear image after low frequency suppression of an image?

I'm suppressing the low DC frequencies of several (unequal) blocks in an image in the Discrete Cosine Transform (DCT) domain. After that I do an inverse DCT to get back the image with only the high frequency portions remaining.
cvConvertScale( img , img_32 ); //8bit to 32bit conversion
cvMinMaxLoc( img_32, &Min, &Max );
cvScale( img_32 , img_32 , 1.0/Max ); //quantization for 32bit
cvDCT( img_32 , img_dct , CV_DXT_FORWARD ); //DCT
//display( img_dct, "DCT");
cvSet2D(img_dct, 0, 0, cvScalar(0)); //suppress constant background
//cvConvertScale( img_dct, img_dct, -1, 255 ); //invert colors
cvDCT( img_dct , img_out , CV_DXT_INVERSE ); //IDCT
//display(img_out, "IDCT");
The objective is to identify and isolate elements which are present in the high frequencies of previously detected regions in the image. However, in several cases the text is very thin and faint (low contrast). In these cases the IDCT yields images which are so dark that even the high frequency portions become too faint for further analysis to work.
What manipulations are there so that we can obtain a clearer picture from the IDCT after background suppression? cvEqualizeHist() gives too much noise.
EDIT:
Whole picture uploaded here as belisarius asked. The low frequency suppression is not being done on the entire image, but on small ROIs set to the smallest bounding rectangle around the text/low frequency portions.
Based on your example image, let's start with one possible strategy to isolate the text.
The code is in Mathematica.
(* Import your image*)
i1 = Import["http://i.stack.imgur.com/hYwx8.jpg"];
i = ImageData@i1;
(*Get the red channel*)
j = i[[All, All, 1]]
(*Perform the DCT*)
t = FourierDCT[j];
(*Define a high pass filter*)
truncate[data_, f_] :=
Module[{i, j},
{i, j} = Floor[Dimensions[data]/Sqrt[f]];
PadRight[Take[data, -i, -j], Dimensions[data], 0.]
];
(*Apply the HP filter, and do the reverse DCT*)
k = Image[FourierDCT[truncate[t, 4], 3]] // ImageAdjust
(*Apply a Gradient Filter and a Dilation*)
l = Dilation[GradientFilter[k, 1] // ImageAdjust, 5]
(*Apply a MinFilter and Binarize*)
m = Binarize[MinFilter[l, 10], .045]
(*Perform a Dilation and delete small components to get a mask*)
mask = DeleteSmallComponents@Dilation[m, 10]
(*Finally apply the mask*)
ImageMultiply[mask, Image@i]
To be continued ...
Edit
Answering questions in comments:
The GradientFilter description is under "more information" here: http://reference.wolfram.com/mathematica/ref/GradientFilter.html.
The MinFilter description is under "more information" here: http://reference.wolfram.com/mathematica/ref/MinFilter.html
You can improve the contrast by applying a simple positive power law transformation prior to applying the discrete cosine transform, or after the IDCT. That will move the shades of gray farther apart. Try this:
cvPow(img, img_hicontrast, 1.75); // Adjust the exponent to your needs
cvConvertScale(img_hicontrast, img_32);
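In Python/NumPy terms, that suggestion might look roughly like this (my own sketch, not the poster's code); min-max normalizing the IDCT output first guards against the result being too dark to see at all:
import numpy as np

def stretch(idct_out, gamma=1.75):
    f = idct_out.astype(np.float64)
    f = (f - f.min()) / (f.max() - f.min() + 1e-12)  # rescale the dark IDCT result to [0, 1]
    return f ** gamma                                 # power-law transform, exponent as above

# usage: enhanced = stretch(img_out)  where img_out is the IDCT result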
If a simple threshold (+ maybe some morphological opening) is not enough, I would suggest trying a diffusion filter: it smooths the noise in areas without edges, but preserves the edges very well. After that, the segmentation should become easier.
If the edges are becoming too faint after your frequency domain filtering, overpainting them with the result of a cvCanny() before filtering can help a lot, especially if you manage to find the right smoothing level, to get only the useful edges.
