I modified the MNIST example, and when I train it with my 3 image classes it reaches an accuracy of 91%. However, when I modify the C++ example with a deploy prototxt file and a labels file and test it on some images, it returns a prediction of the second class (1 circle) with a probability of 1.0, no matter what image I give it, even images that were used in the training set. I've tried a dozen images and it consistently predicts that one class.
To clarify: in the C++ example I modified, I did scale the image to be predicted, just as the images were scaled in the training stage:
img.convertTo(img, CV_32FC1);
img = img * 0.00390625; // 1/256, the same scale factor used during training
If that was the right thing to do, then it makes me wonder if I've done something wrong with the output layers that calculate probability in my deploy_arch.prototxt file.
I think you have forgotten to scale the input image at classification time, as is done in line 11 of the train_test.prototxt file. You should probably multiply by that factor somewhere in your C++ code, or alternatively use a Caffe layer to scale the input (look into the ELTWISE or POWER layers for this).
EDIT:
After a conversation in the comments, it turned out that the image mean was mistakenly being subtracted in classification.cpp, whereas it was not subtracted in the original training/testing pipeline.
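So at test time the preprocessing should mirror training exactly: apply the same 1/256 scale but do not subtract any mean. A rough Python/OpenCV sketch of that idea (the question's code is C++, and "digit.png" is just a placeholder path):

import cv2
import numpy as np

# Test-time preprocessing should match training: scale by 1/256 (0.00390625,
# the factor from train_test.prototxt) and do NOT subtract a mean image,
# since no mean was subtracted during training.
img = cv2.imread("digit.png", cv2.IMREAD_GRAYSCALE)
img = img.astype(np.float32) * 0.00390625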
Are your training classes balanced?
You may end up with a network that is stuck predicting one majority class (a quick check is sketched below).
To find the issue, I suggest logging the predictions made during training and comparing them with the predictions of the forward (classification) example on the same training images, including images from a different class.
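For the balance check, a quick numpy sketch (the label list below is only an illustrative stand-in; replace it with the actual training labels):

import numpy as np

# Count how many training samples belong to each class id; a very skewed
# count here would explain a net that always predicts the majority class.
train_labels = [0, 0, 0, 0, 1, 1, 2]           # hypothetical example
counts = np.bincount(np.asarray(train_labels))
print(dict(enumerate(counts)))                 # e.g. {0: 4, 1: 2, 2: 1}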
I am reading "Fit generator and data augmentation in keras", but there are still some things I am not quite sure about regarding image augmentation in Keras.
(1) In datagen.flow() we also set a batch_size. I know batch_size is needed if we do mini-batch training, so are these two batch_size values the same? I mean, if we specify batch_size in the flow() generator, are we assuming we will do mini-batch training with that same batch_size?
(2) Let me assume the size of the training set is 10,000. I guess the only difference between model.fit_generator() and model.fit() at each epoch is that, for the former, we are using 10,000 randomly transformed images rather than the original 10,000. But in the other epochs we are using another 10,000 images which are totally different from those used in the first epoch, because all the images are randomly generated. Is that right?
It is as if we are always using new images at each epoch, which is different from the ordinary case, where the same set of images is used in every epoch.
I am new to this area. Please help!
To the 1st question: the answer is yes.
To the 2nd question: yes, we are always using new images at each epoch, provided we use data augmentation in model.fit_generator().
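A minimal sketch of how the two batch_size values line up, using the older Keras fit_generator API and MNIST purely as stand-in data (the model itself is assumed to exist and is left out):

import numpy as np
from keras.datasets import mnist
from keras.preprocessing.image import ImageDataGenerator

(x_train, y_train), _ = mnist.load_data()
x_train = x_train[..., np.newaxis].astype("float32") / 255.0

datagen = ImageDataGenerator(rotation_range=10,
                             width_shift_range=0.1,
                             height_shift_range=0.1)

# The batch_size given to flow() IS the mini-batch size used for training:
# every call to the generator yields one freshly augmented batch of 32 images.
train_gen = datagen.flow(x_train, y_train, batch_size=32)

# With steps_per_epoch = len(x_train) // 32, one "epoch" still sees roughly
# as many images as the original set, but they are re-transformed every epoch.
# model.fit_generator(train_gen,
#                     steps_per_epoch=len(x_train) // 32,
#                     epochs=20)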
I have an image set consisting of 300 image pairs, i.e., a raw image and a mask image. A typical mask image is shown below. Each image has a size of 800*800. I am trying to train a fully convolutional neural network on this image set to perform semantic segmentation, and I am generating small patches (256*256) from the original images to construct the training set. Are there any recommended strategies for this patch sampling process? Random sampling is the trivial approach, but the area marked in yellow (the foreground class) usually takes up only about 25% of the whole image area across the set, so this tends to produce an imbalanced data set.
If you train a fully convolutional architecture on 800x800 inputs, you get 25x25 outputs after five 2x2 pooling layers (25 = 800/2^5). Try to build the 25x25 target maps directly and train on them. You can give higher weights in the loss function to the "positive" labels to balance them against the "negative" ones (a sketch follows below).
I definitely do not recommend patch sampling, because it is an expensive process and is not really fully convolutional.
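If it helps, here is a minimal Keras-style sketch of weighting the positive class in a pixel-wise loss; the 3.0 weight is only an illustrative value for roughly 25% foreground, and a sigmoid output layer is assumed:

import keras.backend as K

def weighted_bce(pos_weight=3.0):
    # Weight foreground ("positive") pixels more heavily than background ones.
    def loss(y_true, y_pred):
        bce = K.binary_crossentropy(y_true, y_pred)
        weights = y_true * pos_weight + (1.0 - y_true)
        return K.mean(weights * bce)
    return loss

# model.compile(optimizer="adam", loss=weighted_bce(3.0))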
I have a question regarding the preprocessing step "Image mean subtraction".
I use the UCSD Dataset for my training.
One popular preprocessing step is the mean subtraction. Now I wonder if I am doing it right.
What I am doing is the following:
I have 200 grayscale training images.
I put all images in a list and compute the mean with numpy:
np.mean(ImageList, axis=0)
This returns a mean image.
Now I subtract the mean image from all training images.
When I now visualize my preprocessed training images, they are mostly black and also contain negative values.
Is this correct? Or is my understanding of subtracting the mean image incorrect?
Here is one of my training images:
And this is the "mean image":
It seems like you are doing it right.
As for the negative values: they are to be expected. Your original images had intensity values in the range [0..1]; once you subtract the mean (which should be around ~0.5), you will have values roughly in the range [-0.5..0.5].
Please note that you should save the "mean image" you got for test time as well: once you wish to predict using the trained net you need to subtract the same mean image from the test image.
Update:
In your case (a static camera) the subtracted mean removes the "common" background. This setting seems to be in your favor, as it focuses the net on the temporal changes in the frame. This method will work well for you as long as you test on the same kind of data (i.e., frames from the same static camera).
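Putting the whole procedure together, a small numpy sketch that reuses the same mean image at test time (the arrays below are random stand-ins for your frames, with an arbitrary frame size):

import numpy as np

# Stand-in for the 200 grayscale training frames (replace with real data in [0, 1]).
ImageList = np.random.rand(200, 120, 160).astype(np.float32)

mean_image = np.mean(ImageList, axis=0)          # per-pixel mean frame
np.save("mean_image.npy", mean_image)            # keep it for test time

train_centered = ImageList - mean_image          # values roughly in [-0.5, 0.5]

# At test time, load and subtract the SAME mean image from each new frame.
mean_image = np.load("mean_image.npy")
test_frame = np.random.rand(120, 160).astype(np.float32)   # placeholder frame
test_centered = test_frame - mean_image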
I trained a CNN (on tensorflow) for digit recognition using MNIST dataset.
Accuracy on test set was close to 98%.
I wanted to predict the digits using data which I created myself and the results were bad.
What did I do to the images I wrote myself?
I segmented out each digit, converted it to grayscale, resized it to 28x28 and fed it to the model.
How come I get such low accuracy on my own data, whereas accuracy on the test set is so high?
Are there other modifications that I'm supposed to make to the images?
EDIT:
Here is the link to the images and some examples:
Excluding bugs and obvious errors, my guess would be that your problem is that you are capturing your handwritten digits in a way that is too different from the training set.
When capturing your data you should try to mimic as much as possible the process used to create the MNIST dataset:
From the official MNIST dataset website:
The original black and white (bilevel) images from NIST were size normalized to fit in a 20x20 pixel box while preserving their aspect ratio. The resulting images contain grey levels as a result of the anti-aliasing technique used by the normalization algorithm. The images were centered in a 28x28 image by computing the center of mass of the pixels, and translating the image so as to position this point at the center of the 28x28 field.
If your data goes through different processing in the training and test phases, then your model will not be able to generalize from the training data to the test data.
So I have two pieces of advice for you:
Try to capture and process your digit images so that they look as similar as possible to the MNIST images (a rough sketch of such preprocessing is given after this list);
Add some of your examples to your training data, to allow your model to train on images similar to the ones you are classifying.
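As a sketch of the first advice, here is how the MNIST recipe quoted above could be approximated in Python with OpenCV and numpy. This is only an approximation of the NIST normalization, not the original code, and it assumes the digit is already segmented and white-on-black:

import cv2
import numpy as np

def mnistify(digit):
    # digit: uint8 grayscale crop of ONE digit, bright strokes on a dark
    # background (invert first if your pen is dark on light paper).
    ys, xs = np.nonzero(digit)
    digit = digit[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

    # Fit into a 20x20 box while preserving the aspect ratio (anti-aliased).
    h, w = digit.shape
    s = 20.0 / max(h, w)
    digit = cv2.resize(digit,
                       (max(1, int(round(w * s))), max(1, int(round(h * s)))),
                       interpolation=cv2.INTER_AREA)

    # Paste into a 28x28 canvas, then shift so the centre of mass sits near (14, 14).
    canvas = np.zeros((28, 28), dtype=np.float32)
    h, w = digit.shape
    y0, x0 = (28 - h) // 2, (28 - w) // 2
    canvas[y0:y0 + h, x0:x0 + w] = digit

    total = canvas.sum()
    cy = (np.arange(28)[:, None] * canvas).sum() / total
    cx = (np.arange(28)[None, :] * canvas).sum() / total
    canvas = np.roll(canvas, (int(round(14 - cy)), int(round(14 - cx))), axis=(0, 1))
    return canvas / 255.0        # same [0, 1] range typically used for MNIST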
For those who still have a hard time with the poor quality of CNN-based models for MNIST:
https://github.com/christiansoe/mnist_draw_test
Normalization was the key.
I'm implementing, for the first time, software for object detection in static images. My first goal is to detect simple circles; then I'll move on to more complex objects. Unfortunately, it seems I have a problem when validating my classifier.
My choice was to use a HOG descriptor (via OpenCV) and an SVM as classifier (via SVMlight). The code compiles and works, but there is something that seems odd to me, probably concerning the SVM.
I have:
a training set composed of 5 images (48x48 px) of different circles and 5 images (48x48 px) of non-circles (I know these are too few to build a solid classifier, but up to now it's just to test that everything works)
a test set composed of 4 images (48x48 px, with circles as big as the ones used for training) and 1 much bigger image (765x600 px) with circles of multiple sizes and other geometric shapes.
What happens is that:
the circles in the test set are not detected when the images are 48x48, even though the test set contains some images that were used in the training phase.
in the bigger image (which contains circles of all sizes), the circles that are the same size as the ones in the training set, or bigger, are correctly identified.
I'm using the following parameters:
hog: winSize=48x48px, winStride=4x4px, cellSize=4px, blockSize=8px, blockStride=4x4px (a rough Python sketch of this descriptor is shown right after this list)
classifier: SVM regression with a linear kernel and C=0.01 (RBF results are worse than linear)
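For reference, a rough Python/OpenCV sketch of a descriptor with the parameters above (the question uses the C++ API; nbins=9 is just the OpenCV default, assumed here):

import cv2

hog = cv2.HOGDescriptor((48, 48),   # winSize
                        (8, 8),     # blockSize
                        (4, 4),     # blockStride
                        (4, 4),     # cellSize
                        9)          # nbins
print(hog.getDescriptorSize())      # length of one 48x48 window descriptor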
This is the API call that performs the detection, with the parameters I'm using:
vector<Rect> found;
double hitThreshold = 0.;        // SVM decision threshold (tolerance)
Size winStride(4, 4);            // same stride as used for training
Size padding(32, 32);            // padding added around the test image
double scale = 1.05;             // scale step of the image pyramid
int groupThreshold = 2;          // min. number of overlapping detections to keep
hog.detectMultiScale(testImg, found, hitThreshold, winStride, padding, scale, groupThreshold);
Is there any reason why the circles in the 48x48 px images are not detected, while the circles in the bigger image are? I would expect the 48x48 px images to be classified correctly in order to validate the classifier; I only added the bigger image after nothing was detected in the 48x48 px images.
What seems even stranger is that the 48x48 px test set contains some images that were also used in the training set, and I would expect those to be identified, yet they are not! (I know that the training set and the test set should be different, but I only did this because nothing was being detected.)
This is my first experience with HOG descriptors and SVMs, so it might not work because of a configuration error or a poor choice of images.
Any help is welcome!
Thanks in advance :)