How can we use a basic Discrete Wavelet Transform (DWT) in a CNN layer? - image-processing

How can we use a basic Discrete Wavelet Transform (DWT) in a CNN layer?
If possible, can anyone please give a code snippet, preferably in Python/TensorFlow?
I want to use this for an image-processing task with deep learning, implementing the DWT as a filter inside a CNN.
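One way to sketch this, assuming a single-level Haar DWT is enough: the four 2x2 Haar analysis filters can be loaded into a fixed (non-trainable) depthwise convolution with stride 2, so the transform drops into a Keras/TensorFlow model like any other layer. The class name and channel layout below are illustrative, not from any particular library:

```python
import numpy as np
import tensorflow as tf

class HaarDWT(tf.keras.layers.Layer):
    """Single-level 2D Haar DWT expressed as a fixed depthwise convolution.

    Maps (batch, H, W, C) to (batch, H//2, W//2, 4*C); for each input
    channel the four sub-bands LL, LH, HL, HH appear as consecutive
    output channels.
    """

    def build(self, input_shape):
        c = int(input_shape[-1])
        ll = [[0.5,  0.5], [ 0.5,  0.5]]   # average (approximation)
        lh = [[0.5,  0.5], [-0.5, -0.5]]   # horizontal detail
        hl = [[0.5, -0.5], [ 0.5, -0.5]]   # vertical detail
        hh = [[0.5, -0.5], [-0.5,  0.5]]   # diagonal detail
        bank = np.stack([ll, lh, hl, hh], axis=-1)          # (2, 2, 4)
        kernel = np.repeat(bank[:, :, None, :], c, axis=2)  # (2, 2, C, 4)
        self.kernel = tf.constant(kernel, dtype=tf.float32)

    def call(self, x):
        # stride 2 gives the standard decimated (downsampling) transform
        return tf.nn.depthwise_conv2d(
            x, self.kernel, strides=[1, 2, 2, 1], padding="VALID")
```

Because the kernel is a constant, gradients still flow through the layer to anything below it, so it can sit anywhere in a trainable network. For other wavelets (db2, etc.) you would substitute the corresponding filter taps, padded to their own support size.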

Related

Pros and Cons of using DNN and CNN in both image classification and object recognition

I agree that CNNs have been used as the basic structure of R-CNN, Fast R-CNN, and YOLO.
My question is: if we have a big enough dataset, is it still good to use a plain DNN? If not, what is the downside of using a DNN?
And for object recognition on small image datasets, would you prefer a DNN or a CNN?

Is there any alternative to convolutional neural networks to classify Images?

Deep learning is famous for classifying images into different categories. However, I am interested in any other machine learning model capable of classifying images. There are about 2000 images, in PNG format. Does anybody know a machine learning model, other than a deep learning model, that can be applied in Python to classify images?
You can take a look at SVMs (scikit-learn). I advise you to extract features from the images first, with SIFT or SURF for example.
EDIT: SIFT and SURF use the principle of convolution, but plenty of other feature descriptors exist.
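As a minimal sketch of the pipeline this answer describes (extract features first, then feed an SVM), here is a toy example that uses a hand-rolled gradient-orientation histogram as a stand-in for a SIFT/SURF descriptor, with scikit-learn's `SVC`. The stripe images and the histogram function are illustrative only, not a real descriptor:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def orientation_histogram(img, bins=8):
    """Crude gradient-orientation histogram; a stand-in for SIFT/SURF features."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)                      # gradient magnitude
    ang = np.arctan2(gy, gx) % np.pi            # orientation folded to [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)

# toy stand-in for a real PNG dataset: vertical- vs horizontal-striped images
rng = np.random.default_rng(0)
stripes = np.tile(np.tile(np.repeat([0.0, 1.0], 4), 4), (32, 1))  # vertical stripes
imgs = [stripes + 0.05 * rng.random((32, 32)) for _ in range(20)] + \
       [stripes.T + 0.05 * rng.random((32, 32)) for _ in range(20)]
X = np.array([orientation_histogram(im) for im in imgs])
y = np.array([0] * 20 + [1] * 20)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X[::2], y[::2])                 # train on every other image
accuracy = clf.score(X[1::2], y[1::2])  # evaluate on the rest
```

With ~2000 real images you would replace the toy data with your loaded PNGs and the histogram with a proper descriptor (e.g. OpenCV's SIFT, or HOG from scikit-image).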

Relation between CNN and gabor filter

I am learning to use Gabor filters to extract orientation- and scale-related features from images. On the other hand, a convolutional neural network can also extract features, including orientation and scale. Is there any evidence that the filters in a CNN perform a similar function to Gabor filters? What are the pros and cons of each?
In my personal experience, when the layers near the beginning of a traditional deep learning architecture (such as AlexNet) are visualized, they resemble Gabor filters a lot.
Take this visualization of the first two layers of a pretrained AlexNet (taken from Andrej Karpathy's cs231n.github.io): some of the learnt filters look exactly like Gabor filters. So yes, there is evidence that a CNN works (partly) the same way as Gabor filters.
One possible explanation is that the layers towards the beginning of a deep CNN extract low-level features (such as changes in texture), so they perform the same function as Gabor filters. Features such as those detecting changes in frequency are so fundamental that they are present irrespective of the dataset the model is trained on (part of the reason transfer learning is possible).
But with more data, you could make a deep CNN learn much higher-level features than Gabor filters, which might be more useful for the task you are extracting the features for (such as classification). I hope this provides some clarification.
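For reference, a Gabor filter is just a Gaussian envelope times an oriented sinusoidal carrier, so it is easy to build by hand. The NumPy sketch below (parameter values are illustrative) shows the orientation selectivity discussed above: the filter responds strongly to stripes aligned with its orientation and weakly to the orthogonal one:

```python
import numpy as np

def gabor_kernel(size=11, theta=0.0, sigma=3.0, lam=4.0, gamma=0.5, psi=0.0):
    """Real part of a Gabor filter: Gaussian envelope x oriented cosine carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates by theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / lam + psi)

# a patch of vertical stripes whose period matches the carrier wavelength
x = np.mgrid[-5:6, -5:6][1]
patch = np.cos(2 * np.pi * x / 4.0)

resp_0 = float((gabor_kernel(theta=0.0) * patch).sum())         # aligned filter
resp_90 = float((gabor_kernel(theta=np.pi / 2) * patch).sum())  # orthogonal filter
```

A bank of such kernels over several `theta` and `lam` values is the classic hand-designed analogue of what the first CNN layer learns.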

Feeding image features to tensorflow for training

Is it possible to feed image features, say SIFT features, to a convolutional neural network model in TensorFlow? I am trying a TensorFlow implementation of this project, in which a grayscale image is colourized. Would image features be a better choice than feeding the images as-is to the model?
PS: I am a novice to machine learning and am not familiar with creating neural network models.
You can feed a TensorFlow neural net almost anything.
If you have extra features for each pixel, then instead of one channel (intensity) you would use multiple channels.
If you have extra features that describe the whole image, you can make a separate input and merge the features at some upper layer.
As for which performs better, you should try both approaches.
The general intuition is that extra features help when you don't have many samples; their effect diminishes when you have many samples and the network can learn the features by itself.
One more point: if you are a novice, I strongly recommend using a higher-level framework like keras.io (a layer on top of TensorFlow) instead of raw TensorFlow.
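The "separate input merged at an upper layer" idea can be sketched with the Keras functional API. The layer sizes and the 128-D feature vector below are illustrative assumptions, not values from the project in question:

```python
import numpy as np
import tensorflow as tf

# image branch: ordinary convolutional feature extraction
img_in = tf.keras.Input(shape=(64, 64, 1), name="image")
x = tf.keras.layers.Conv2D(16, 3, activation="relu")(img_in)
x = tf.keras.layers.GlobalAveragePooling2D()(x)

# whole-image feature branch (e.g. a precomputed descriptor vector)
feat_in = tf.keras.Input(shape=(128,), name="features")

# merge the two branches at an upper layer, then classify
merged = tf.keras.layers.Concatenate()([x, feat_in])
out = tf.keras.layers.Dense(10, activation="softmax")(merged)
model = tf.keras.Model(inputs=[img_in, feat_in], outputs=out)
```

At training time you pass both arrays, e.g. `model.fit([images, features], labels)`.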

image classification using SVM technique in opencv

I need to train on a sample image set and classify the images, but I have only a little knowledge of the SVM technique needed for the coding. Please help me with the programming part.
The OpenCV documentation of the SVM provides a small example on how to use it: link
