OpenCV SVM Prediction

I am currently working on age estimation and extracting features using Gabor filters. I have decided to classify ages using an SVM. Training is successful and does not take long, since the feature vectors are only about 3000-dimensional, with roughly 1000 training samples.
The problem is that when I predict on a new image (I pass the feature vector of the test image to the SVM's predict function), the result returned is always 18. But when I predict the age of an image from the training set, the result is always correct. I do not know why this is happening. Any help would be appreciated.
Another observation: whether or not I change or even include the SVMParams, the prediction still outputs the same value.
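For reference, the typical train/predict flow with the OpenCV 3.x/4.x Python bindings is sketched below (the question above uses the older SVMParams interface; the feature extraction, parameter values and shapes here are placeholders). The thing to verify is that the test vector has exactly the same dimensionality, dtype (float32) and scaling as the training rows.

import cv2
import numpy as np

# Placeholder data: 1000 training samples, 3000-dimensional Gabor features.
train_features = np.random.rand(1000, 3000).astype(np.float32)
train_ages = np.random.randint(18, 60, (1000, 1)).astype(np.int32)

svm = cv2.ml.SVM_create()
svm.setType(cv2.ml.SVM_C_SVC)
svm.setKernel(cv2.ml.SVM_RBF)
svm.setC(1.0)          # placeholder; tune via cross-validation
svm.setGamma(0.01)     # placeholder; tune via cross-validation
svm.train(train_features, cv2.ml.ROW_SAMPLE, train_ages)

# The test vector must use the same feature layout and scaling as training,
# otherwise predictions tend to collapse onto a single label.
test_vector = np.random.rand(1, 3000).astype(np.float32)
_, prediction = svm.predict(test_vector)
print(prediction)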

Related

What's a "good" value for the loss function of a DL model like yolo?

I collected ~1500 labelled samples and trained with YOLOv3, getting a training loss of ~10 and a validation loss of ~16. Obviously we can use real test data to evaluate model performance, but I am wondering if there is a way to tell whether this training loss of 10 is a "good" one, or whether it indicates that I need more training data to see if I can push it down to 5 or even less.
Ultimately my question is, for a well-known model with a pre-defined loss function, is there a "good" standard value for the training loss?
thanks.
You need to keep training until the average loss gets down to about 0.0XXXXX. That is the minimal requirement for detecting objects with a matching anchor IoU.
Update: 28th Nov, 2018
While training an object detection model, the loss can vary quite a bit, especially with a large dataset. What you really need to compute is the mean Average Precision (mAP), which gives a proper accuracy criterion for the trained model:
./darknet detector map .data .cfg .weights
If your mAP is close to 1.0, i.e. 100%, the model is performing well.
Follow this link to learn more about mAP:
https://medium.com/@jonathan_hui/map-mean-average-precision-for-object-detection-45c121a31173
Your validation loss is a good indicator of whether the training loss can be pushed down further. I don't have a one-shot solution: you will have to tweak hyper-parameters, check the results on the validation set, and iterate. You can also get a good sense of how training progressed by looking at the loss curve. Was it still decreasing when you stopped training, or had it flattened out? That tells you how to adjust. Good luck.
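As a minimal illustration of the loss-curve check described above (the per-epoch values below are made-up placeholders, not real results):

import matplotlib.pyplot as plt

# Placeholder per-epoch losses; substitute the values logged during training.
train_loss = [42.0, 25.0, 16.0, 12.0, 10.5, 10.1, 10.0]
val_loss = [45.0, 30.0, 22.0, 18.0, 16.6, 16.2, 16.0]

plt.plot(train_loss, label="training loss")
plt.plot(val_loss, label="validation loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.show()
# If both curves are still sloping downward, longer training (or more data)
# may help; if validation loss has flattened while training loss keeps
# falling, the model is starting to overfit.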

Machine Learning - Huge positive-only text dataset

I have a dataset with thousands of sentences belonging to a subject. I would like to know what the best way would be to build a classifier that predicts a text as "True" or "False" depending on whether it talks about that subject or not.
I've been using solutions with Weka (basic classifiers) and Tensorflow (neural network approaches).
I use StringToWordVector to preprocess the data.
Since there are no negative samples, I am dealing with a single class. I've tried a one-class classifier (libSVM in Weka), but the number of false positives is so high that I cannot use it.
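(For illustration, a rough scikit-learn equivalent of that one-class setup, not the actual Weka/libSVM pipeline; the sentences and the nu value are placeholders:)

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import OneClassSVM

# Placeholder positive-only training sentences.
positive_sentences = ["sentence about the subject", "another on-topic sentence"]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(positive_sentences)

# nu roughly bounds the fraction of training points treated as outliers.
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1)
clf.fit(X)

# +1 = looks like the subject, -1 = outlier.
print(clf.predict(vectorizer.transform(["a new incoming sentence"])))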
I also tried adding negative samples, but when the text to predict does not fall into the negative space, the classifiers I've tried (NB, CNN, ...) tend to predict it as a false positive. I guess this is because of the sheer amount of positive samples.
I'm open to discarding ML as the tool for predicting the new incoming data if necessary.
Thanks for any help
I eventually added data for the negative class and built a multinomial Naive Bayes classifier, which is doing the job as expected.
(The amount of data added is around one million samples :) )
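For anyone looking for a concrete starting point, a minimal scikit-learn version of such a setup could look like the sketch below (illustrative only; not necessarily the toolchain used above, and the texts are placeholders):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Placeholder labelled data: 1 = about the subject, 0 = anything else.
texts = ["an on-topic sentence", "something completely unrelated"]
labels = [1, 0]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)
print(model.predict(["new incoming text"]))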
My answer is based on the assumption that adding at least 100 negative samples to the author's dataset of 1000 positive samples is acceptable, since I have not yet received an answer from the author to my question about this.
Since detecting a specific topic looks like a particular case of topic classification, I would recommend starting with a classification approach using two simple classes: one class for your topic and another for all other topics.
I succeeded with the same approach for a face recognition task. At first I built a model with one output neuron, giving a high output level when a face was detected and a low one when it was not.
Nevertheless, this approach gave me too low an accuracy, less than 80%.
But when I tried using two output neurons, one class for a face being present in the image and another for no face detected, it gave me more than 90% accuracy with an MLP, even without using a CNN.
The key point here is using a SoftMax function for the output layer. It gives a significant increase in accuracy. In my experience, it raised accuracy on the MNIST dataset from 92% up to 97% for the same MLP model.
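A minimal Keras sketch of the two-neuron SoftMax output described above (Keras is my own choice here; the input dimension and layer sizes are placeholders):

import tensorflow as tf

# Simple MLP: class 0 = "all other topics", class 1 = "target topic".
model = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu", input_shape=(3000,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(X_train, y_train, validation_split=0.2, epochs=10)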
About the dataset: most classification algorithms with a trainer, at least in my experience, work better with an equal number of samples for each class in the training set. In fact, if one class has less than 10% of the average sample count of the other classes, the model becomes almost useless for detecting that class. So if you have 1000 samples for your topic, I suggest creating 1000 negative samples covering as many different topics as possible.
Alternatively, if you don't want to create such a big set of negative samples, you can create a smaller one and use batch training with a batch size of 2x your negative sample count. To do so, split your positive samples into n chunks, each roughly the size of the negative set, and build each training batch from chunk[i] of the positive samples plus all of your negative samples (see the sketch below). Just be aware that lower accuracy will be the price of this trade-off.
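A sketch of that chunked batching scheme, assuming NumPy arrays and a Keras-style train_on_batch interface (the function name and the commented loop are illustrative):

import numpy as np

def chunked_batches(pos_X, pos_y, neg_X, neg_y):
    """Yield batches pairing one chunk of positives with all negatives,
    so each batch is roughly 2x the size of the negative set."""
    n_chunks = max(1, len(pos_X) // len(neg_X))
    for chunk_X, chunk_y in zip(np.array_split(pos_X, n_chunks),
                                np.array_split(pos_y, n_chunks)):
        batch_X = np.concatenate([chunk_X, neg_X])
        batch_y = np.concatenate([chunk_y, neg_y])
        shuffle = np.random.permutation(len(batch_X))
        yield batch_X[shuffle], batch_y[shuffle]

# Illustrative training loop (model and data names are placeholders):
# for epoch in range(n_epochs):
#     for batch_X, batch_y in chunked_batches(pos_X, pos_y, neg_X, neg_y):
#         model.train_on_batch(batch_X, batch_y)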
You could also consider building a more generic topic detector: figure out all possible topics that can appear in the texts your model should analyse, for example 10 topics, and create a training dataset with 1000 samples per topic. That can also give higher accuracy.
One more point about the dataset: best practice is to train your model on only part of the data, for example 80%, and use the remaining 20% for cross-validation. Validating on data the model has not seen before gives you a good estimate of its real-world accuracy, rather than its accuracy on the training set, and helps you avoid overfitting.
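A minimal sketch of that 80/20 split, assuming scikit-learn and placeholder arrays:

import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder feature matrix and labels.
X = np.random.rand(2000, 300)
y = np.random.randint(0, 2, 2000)

# Hold out 20% for validation; stratify keeps the class balance intact.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)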
About building the model: I like the "from simple to complex" approach, so I would suggest starting with a simple MLP with a SoftMax output and a dataset of 1000 positive and 1000 negative samples. After reaching 80%-90% accuracy you can consider using a CNN, and I would also suggest growing the training set, because deep learning algorithms tend to be more efficient with bigger datasets.
For text data you can use Spy EM.
The basic idea is to combine your positive set with a whole bunch of random unlabeled samples, while holding out some of your known positives. You initially treat all the random documents as the negative class and train a classifier on your positive samples and these pseudo-negative samples.
Now some of those random samples will actually be positive, and you can conservatively relabel any document that scores higher than the lowest-scoring held-out true positive.
Then you iterate this process until it stabilizes.
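A sketch of one such iteration, assuming vectorized documents in NumPy arrays and a logistic regression as the base classifier (the base classifier and the spy fraction are my own choices, not part of the original description):

import numpy as np
from sklearn.linear_model import LogisticRegression

def spy_em_step(pos_X, unlabeled_X, spy_frac=0.15, seed=0):
    """One Spy-EM style step: hide some positives ("spies") among the
    unlabeled docs, train positive-vs-unlabeled, then flag unlabeled docs
    that score at least as high as the weakest spy."""
    rng = np.random.default_rng(seed)
    n_pos, n_unl = len(pos_X), len(unlabeled_X)
    n_spies = max(1, int(n_pos * spy_frac))
    spy_mask = np.zeros(n_pos, dtype=bool)
    spy_mask[rng.choice(n_pos, n_spies, replace=False)] = True

    # Spies are appended last and labelled 0, like the unlabeled docs.
    X = np.concatenate([pos_X[~spy_mask], unlabeled_X, pos_X[spy_mask]])
    y = np.concatenate([np.ones(n_pos - n_spies), np.zeros(n_unl + n_spies)])

    clf = LogisticRegression(max_iter=1000).fit(X, y)
    scores = clf.predict_proba(X)[:, 1]

    threshold = scores[-n_spies:].min()                   # weakest held-out positive
    unl_scores = scores[n_pos - n_spies:n_pos - n_spies + n_unl]
    return unl_scores >= threshold                        # unlabeled docs to relabel as positive

# Iterate: move the flagged documents into the positive set and repeat
# until the relabelling stabilizes.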

Why does test accuracy remain constant and not increase in binary classification when the test and train datasets are from different sources?

I have a training dataset and a test dataset from two different sources; they come from two different experiments, but both consist of the same kind of biological images. I want to do binary classification using a deep CNN and I get the following training and test accuracy, where the blue line shows training accuracy and the red line shows test accuracy over almost 250 epochs. Why is the test accuracy almost constant and not rising? Is that because the test and training datasets come from different distributions?
Edited:
After adding a dropout layer, regularization terms and mean subtraction, I still get the following strange results, which suggest the model is overfitting from the very beginning!
There could be two reasons. First, you may be overfitting on the training data. This can be checked by using the validation score as a comparison metric against the test data. If so, you can use standard techniques to combat overfitting, like weight decay and dropout.
The second is that your data is too different to be learned like this, which is harder to solve. You should first look at the value spread of both image sets: are they both normalized? Matplotlib normalizes plotted images automatically. If that still does not help, you might want to look into augmentation to make your training data more similar to the test data. I cannot tell you exactly what to use without seeing both the training set and the test set.
Edit:
For normalization, the test set and the training set should have a similar value spread. If you do dataset normalization, you calculate the mean and std on the training set, but you must also apply those same values to the test set rather than recomputing them from the test set. This only makes sense if the value spread is similar for both sets; if it is not, you might want to do per-sample normalization first.
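A small NumPy sketch of both options, with placeholder image tensors:

import numpy as np

# Placeholder image tensors, shape (N, H, W, C), float32.
train_images = np.random.rand(100, 64, 64, 3).astype(np.float32)
test_images = np.random.rand(40, 64, 64, 3).astype(np.float32)

# Dataset normalization: statistics come from the TRAINING set only
# and are reused unchanged on the test set.
mean = train_images.mean(axis=(0, 1, 2))
std = train_images.std(axis=(0, 1, 2)) + 1e-7
train_norm = (train_images - mean) / std
test_norm = (test_images - mean) / std

# Per-sample normalization, if the two sets have very different value spreads.
def per_sample_norm(x):
    return (x - x.mean()) / (x.std() + 1e-7)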
Other augmentations that are commonly used for every dataset are oversampling, random channel shifts, random rotations, random translations and random zoom. These make the model invariant to those operations.
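As one possible way to wire those augmentations up, a Keras ImageDataGenerator sketch (Keras is an assumption here, and the parameter values are placeholders to tune for your data; oversampling is handled separately):

import tensorflow as tf

datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rotation_range=15,
    width_shift_range=0.1,
    height_shift_range=0.1,
    zoom_range=0.1,
    channel_shift_range=10.0,
    horizontal_flip=True,
)
# datagen.fit(train_images)  # only needed for featurewise statistics
# model.fit(datagen.flow(train_images, train_labels, batch_size=32), epochs=50)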

OpenCV4Android SVM is not giving the correct prediction

I am new to machine learning and OpenCV. I have taken a set of 10 images for each emotion (neutral and happy) from the Cohn-Kanade face database. Then I extracted the facial features from each image, put them in my trainingData matrix and assigned the label for the respective emotion (for example, 0 for neutral and 1 for happy).
I have used the RBF kernel with gamma = 0.1 and C = 1. Once trained, I pass the facial features extracted from live smartphone camera frames to the prediction. The prediction always returns 1.
If I increase the number of training samples for the neutral expression (for example, 15 neutral images and 10 happy images), the prediction always returns 0, and if there is an equal number of images for each expression in the training set, the prediction always returns 1.
Why is the SVM behaving this way? How can I check whether I am using the right values for gamma and C? Also, does the SVM depend on the resolution of the training and testing images?
I would ask you to post the SVM code so we can understand it. Secondly, I have used SVMs before, and you need to normalize the training data and the labels. You should also make sure you are using a supported classifier, as not all classifiers are supported. Follow this link for some tutorials: http://docs.opencv.org/3.0-beta/modules/ml/doc/support_vector_machines.html
As for your other questions: unfortunately you have to find the best combination of gamma and C yourself, which is one of the drawbacks of SVMs. https://www.quora.com/What-are-C-and-gamma-with-regards-to-a-support-vector-machine
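One common way to search that combination is a cross-validated grid search; the sketch below uses scikit-learn for illustration (OpenCV's SVM also offers a trainAuto method that searches a parameter grid). The feature dimensionality, sample counts and grid values are placeholders.

import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder facial-feature vectors and labels (0 = neutral, 1 = happy).
X = np.random.rand(40, 68).astype(np.float32)
y = np.array([0, 1] * 20)

param_grid = {"svc__C": [0.1, 1, 10, 100],
              "svc__gamma": [0.001, 0.01, 0.1, 1]}
search = GridSearchCV(make_pipeline(StandardScaler(), SVC(kernel="rbf")),
                      param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)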
Yes, the SVM does depend on the resolution, since your feature vectors change with the resolution, and hence so do the inputs and the labels.
P.S. This should ideally be a comment, but unfortunately I don't have enough points to do that.

Input matches no features in training set; how much more training data do I need?

I am new to text mining. I am working on a spam filter. I did text cleaning and removed stop words; n-grams are my features. So I built a frequency matrix and built a model using Naive Bayes. I have a very limited training set, so I am facing the following problem.
When a sentence comes to me for classification and none of its features match the existing features in the training data, my frequency vector has only zeros.
When I send this vector for classification, I obviously get a useless result.
What would be an ideal size of training data to expect better results?
Generally, the more data you have, the better, though you will hit diminishing returns at some point. It is often a good idea to check whether your training set size is the problem by plotting cross-validation performance while varying the size of the training set. scikit-learn has an example of this kind of "learning curve":
Scikit-learn Learning Curve Example
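A minimal version of that learning-curve check, with a placeholder term-frequency matrix standing in for the real n-gram features:

import matplotlib.pyplot as plt
import numpy as np
from sklearn.model_selection import learning_curve
from sklearn.naive_bayes import MultinomialNB

# Placeholder term-frequency matrix and labels.
X = np.random.randint(0, 5, size=(500, 2000))
y = np.random.randint(0, 2, size=500)

sizes, train_scores, val_scores = learning_curve(
    MultinomialNB(), X, y, train_sizes=np.linspace(0.1, 1.0, 5), cv=5)

plt.plot(sizes, train_scores.mean(axis=1), label="training score")
plt.plot(sizes, val_scores.mean(axis=1), label="cross-validation score")
plt.xlabel("training set size")
plt.ylabel("accuracy")
plt.legend()
plt.show()
# If the cross-validation curve is still climbing at the right edge,
# more data is likely to help; if it has flattened, look at model bias instead.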
You may consider bringing in outside sample posts to increase the size of your training set.
As you grow your training set, you may want to try reducing the bias of your classifier. This could be done by adding n-gram features, or by switching to a logistic regression or SVM model.
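A small sketch of that lower-bias setup (n-gram features feeding a logistic regression), with placeholder messages:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder labelled messages: 1 = spam, 0 = ham.
texts = ["limited time offer, win money", "see you at the meeting tomorrow"]
labels = [1, 0]

# Word unigrams + bigrams as features, fed into a logistic regression.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), stop_words="english"),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)
print(model.predict(["free money offer"]))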
When a sentence comes to me for classification and none of its features match the existing features in the training data, my frequency vector has only zeros.
You should normalize your input so that it forms a rough distribution around 0. A common method is this transformation:
input_signal = (feature - feature_mean) / feature_stddev
Then all zeroes would only happen if all features were exactly at the mean.
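A short NumPy sketch of that transformation applied feature-wise to a placeholder n-gram count matrix:

import numpy as np

# Placeholder n-gram count matrix: rows = messages, columns = features.
counts = np.random.randint(0, 4, size=(200, 500)).astype(float)

feature_mean = counts.mean(axis=0)
feature_std = counts.std(axis=0) + 1e-9   # avoid division by zero

standardized = (counts - feature_mean) / feature_std
# A message with no matching n-grams is now a vector of (-mean/std) values
# instead of all zeros, so it no longer collapses to a degenerate input.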
