Image enhancement before CNN helpful? - image-processing

I have a deep learning model (transfer learning based, in Keras) for a regression problem on medical images. Is there any benefit, or a sound rationale, for applying image enhancements such as strengthening the edges or histogram equalization before feeding the inputs to the CNN?
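For concreteness, the kind of enhancement being asked about might look like the following OpenCV sketch (the CLAHE parameters and unsharp-mask weights here are arbitrary assumptions, not recommendations):

```python
import cv2

def enhance(gray):
    """Illustrative preprocessing: adaptive histogram equalization plus mild edge sharpening."""
    # Contrast-limited adaptive histogram equalization on an 8-bit grayscale image
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    eq = clahe.apply(gray)
    # Simple unsharp masking to strengthen edges
    blurred = cv2.GaussianBlur(eq, (0, 0), sigmaX=3)
    return cv2.addWeighted(eq, 1.5, blurred, -0.5, 0)

# Usage: img = cv2.imread("scan.png", cv2.IMREAD_GRAYSCALE); out = enhance(img)
```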

Yes, it can help to train the model more accurately using techniques like the ones you mention.
When training a CNN, image augmentation is almost always applied in the pre-processing phase.
Operations commonly used for augmentation include:
color noise
transform
rotate
whitening
affine
crop
flip
etc...
You can refer to here
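As a minimal sketch (assuming a Keras setup, since the question mentions Keras), several of the augmentations listed above can be applied with `ImageDataGenerator`; the parameter values are arbitrary examples:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Illustrative augmentation pipeline: rotation, affine-style shifts/shear, zoom (crop-like) and flips.
datagen = ImageDataGenerator(
    rotation_range=15,        # random rotations
    width_shift_range=0.1,    # horizontal translation
    height_shift_range=0.1,   # vertical translation
    shear_range=0.1,          # affine shear
    zoom_range=0.1,           # random zoom (acts like a soft crop)
    horizontal_flip=True,     # random horizontal flip
    zca_whitening=False,      # set True for ZCA whitening (then call datagen.fit(x_train) first)
)

# Usage: model.fit(datagen.flow(x_train, y_train, batch_size=32), epochs=10)
```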

Related


How does decorrelating an image affect image classification?

Many websites use whitening (decorrelating the image) for image preprocessing.
What is the benefit?
Is it that, in machine learning, decorrelated data can be processed more easily?
Whitening can be seen as the process of removing the mean and the correlation between pixels, keeping only the informative part of the data. That way, the ML algorithm only sees informative, decorrelated features and can train faster and more efficiently.
Here are some useful links to understand why whitening can be useful:
Stanford - PCA whitening
Statistical whitening
Exploring ZCA and color image whitening
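A minimal PCA/ZCA whitening sketch in NumPy (assuming images are flattened into rows of a data matrix; `eps` is a small regularizer chosen arbitrarily):

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    """ZCA-whiten a data matrix X of shape (n_samples, n_features)."""
    X = X - X.mean(axis=0)                    # remove the mean
    cov = np.cov(X, rowvar=False)             # feature covariance
    U, S, _ = np.linalg.svd(cov)              # eigendecomposition of the covariance
    # ZCA transform: rotate, rescale each component to unit variance, rotate back
    W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T
    return X @ W

# Usage: X_white = zca_whiten(images.reshape(len(images), -1).astype(np.float64))
```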

How to make multi-scale images to train a CNN

I am working on a convolutional neural network using satellite images, and I want to tackle a multi-scale problem. Can you please suggest how I can build the multi-scale dataset? Since the input size of the CNN is fixed (e.g. 100x100), how can images of different scales be used to train the system for the multi-scale problem?
There is a similar question about YOLO9000: Multi-Scale Training?
Since the network contains only convolutional and pooling layers, the number of weight parameters stays the same when you feed in images of different scales, so a single CNN model can be trained on multi-scale images.
The method differs by task: in a classification task, we can add a global pooling layer after the last convolutional layer; in a detection task, the output is not fixed when we input multi-scale images.
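As a sketch (assuming a Keras/TensorFlow setup; the layer sizes are arbitrary), global average pooling lets one classification model accept inputs of varying spatial size:

```python
from tensorflow.keras import layers, models

# Fully convolutional classifier: spatial dimensions are left as None,
# and GlobalAveragePooling2D collapses whatever spatial size arrives.
inputs = layers.Input(shape=(None, None, 3))
x = layers.Conv2D(32, 3, activation="relu", padding="same")(inputs)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
x = layers.GlobalAveragePooling2D()(x)        # output shape (batch, 64) regardless of input size
outputs = layers.Dense(10, activation="softmax")(x)
model = models.Model(inputs, outputs)

# The same model can now be fed 100x100 crops, 200x200 crops, etc. (one size per batch).
```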

Poor performance on digit recognition with CNN trained on MNIST dataset

I trained a CNN (in TensorFlow) for digit recognition using the MNIST dataset.
Accuracy on test set was close to 98%.
I wanted to predict the digits using data which I created myself and the results were bad.
What did I do to the images I wrote myself?
I segmented out each digit, converted it to grayscale, resized it to 28x28 and fed it to the model.
How come I get such low accuracy on my own data set, whereas accuracy on the MNIST test set is so high?
Are there other modifications that I'm supposed to make to the images?
EDIT:
Here is the link to the images and some examples:
Excluding bugs and obvious errors, my guess would be that your problem is that you are capturing your handwritten digits in a way that is too different from your training set.
When capturing your data you should try to mimic as much as possible the process used to create the MNIST dataset:
From the official MNIST dataset website:
The original black and white (bilevel) images from NIST were size normalized to fit in a 20x20 pixel box while preserving their aspect ratio. The resulting images contain grey levels as a result of the anti-aliasing technique used by the normalization algorithm. The images were centered in a 28x28 image by computing the center of mass of the pixels, and translating the image so as to position this point at the center of the 28x28 field.
If your data is processed differently in the training and prediction phases, then your model will not be able to generalize from the training data to your own data.
So I have two suggestions for you:
Try to capture and process your digit images so that they look as similar as possible to the MNIST dataset (see the sketch after this list);
Add some of your own examples to your training data so your model trains on images similar to the ones you are classifying.
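A rough sketch of the MNIST-style preprocessing described in the quote above (assuming OpenCV and NumPy; the threshold value and the dark-digit-on-light-background polarity are assumptions about the input crops):

```python
import cv2
import numpy as np

def mnist_style(digit_gray):
    """Roughly mimic MNIST preprocessing: 20x20 box, centered by center of mass in a 28x28 field."""
    # Assume a grayscale crop with a dark digit on a light background; invert to white-on-black.
    img = 255 - digit_gray
    # Loose binarization to find the digit's bounding box (threshold value is an assumption).
    _, mask = cv2.threshold(img, 50, 255, cv2.THRESH_BINARY)
    ys, xs = np.nonzero(mask)
    img = img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    # Resize so the longer side is 20 pixels, preserving aspect ratio.
    h, w = img.shape
    scale = 20.0 / max(h, w)
    img = cv2.resize(img, (max(1, int(round(w * scale))), max(1, int(round(h * scale)))))
    # Paste into a 28x28 canvas, then translate so the center of mass sits at the center.
    canvas = np.zeros((28, 28), dtype=np.uint8)
    y0, x0 = (28 - img.shape[0]) // 2, (28 - img.shape[1]) // 2
    canvas[y0:y0 + img.shape[0], x0:x0 + img.shape[1]] = img
    cy, cx = np.array(np.nonzero(canvas)).mean(axis=1)
    M = np.float32([[1, 0, 14 - cx], [0, 1, 14 - cy]])
    return cv2.warpAffine(canvas, M, (28, 28))
```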
For those who still have a hard time with the poor performance of MNIST-trained CNN models on their own digits:
https://github.com/christiansoe/mnist_draw_test
Normalization was the key.

Accelerated SVM training for HOG algorithm

Let's say I have a perfect 3D model of the rigid object I am looking for.
I want to find this object in a scene image using the histogram of oriented gradients (HOG) algorithm.
One way to train my SVM would be to render this object on top of a bunch of random backgrounds, in order to generate the positive training examples.
But, is there a faster, more direct way to use the model to train the SVM? One that doesn't involve rendering it multiple times?
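For reference, the baseline approach described in the question (HOG features from rendered positives plus background negatives, fed to a linear SVM) might be sketched like this with scikit-image and scikit-learn; the random arrays below are stand-ins for real rendered and background patches:

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_features(patches):
    """Compute HOG descriptors for a stack of equally sized grayscale patches."""
    return np.array([
        hog(p, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
        for p in patches
    ])

rng = np.random.default_rng(0)
positive_crops = rng.random((20, 64, 64))   # stand-ins for renders of the 3D model over random backgrounds
negative_crops = rng.random((20, 64, 64))   # stand-ins for background-only patches

X = np.vstack([hog_features(positive_crops), hog_features(negative_crops)])
y = np.concatenate([np.ones(len(positive_crops)), np.zeros(len(negative_crops))])

clf = LinearSVC(C=1.0)
clf.fit(X, y)
# Detection would then slide a window over the scene, computing HOG and scoring with clf.decision_function.
```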

Extracting Shape Context Descriptors to train SVM

I am working on a project that classifies images based only on the shape (a binary image) obtained after background subtraction. I want to extract shape context descriptors from the two classes and train an SVM classifier.
How can I extract shape context descriptors? Please tell me if there is any implementation, or an implementation guide, for extracting shape context descriptors to train an SVM.
These links might help you find code for shape context: (1) and (2).
This tutorial is quite clear on how to use OpenCV's implementation of SVM for classification.
Take a look here http://answers.opencv.org/question/1668/shape-context-implementation-in-opencv/
Hope this helps,
Alex
Digit classification can be done based on shapes using HOG features. The link below contains MATLAB code for this:
http://in.mathworks.com/help/vision/examples/digit-classification-using-hog-features.html
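Note that OpenCV's implementation (referenced in the answers.opencv.org link above) exposes a shape-context distance between two contours rather than a raw descriptor vector; a minimal comparison sketch, assuming an OpenCV build that includes the shape module, might look like this:

```python
import cv2
import numpy as np

def largest_contour(binary):
    """Return the largest external contour of a binary (0/255) image."""
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    return max(contours, key=cv2.contourArea)

# Two toy binary shapes: a filled circle and a filled square.
a = np.zeros((200, 200), np.uint8); cv2.circle(a, (100, 100), 60, 255, -1)
b = np.zeros((200, 200), np.uint8); cv2.rectangle(b, (40, 40), (160, 160), 255, -1)

# Shape-context distance between the two contours.
extractor = cv2.createShapeContextDistanceExtractor()
dist = extractor.computeDistance(largest_contour(a), largest_contour(b))
print("shape context distance:", dist)
# For SVM-style classification one would either use a nearest-neighbour vote over such distances,
# or an explicit shape context implementation that returns descriptor histograms to feed the SVM.
```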
