Haar training with small samples - opencv

Can anyone help me find Haar training files that work with small sample counts, such as 5? I have downloaded a couple, but one gives me error messages and the second requires 1000 samples.
Thank you very much

Small sample counts are not how these tools are designed to work. Almost all the algorithms that let us do classification need a large number of training samples.

It depends on what you want to detect. If you want to detect a logo and you have a clean image of a logo, you can create many training samples out of it by adding noise, changing contrast and brightness, rotating, distorting, etc. OpenCV's Haar training module supports this, so it won't be hard.
This is called data augmentation. But if you want to detect faces, data augmentation alone won't be enough.
Creating a rule-based system by observing the few samples that you have works best for this situation, if what you want to detect is a natural object.
I can add additional links to this answer, pointing to sample code, if you can provide more details.
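As a rough sketch of what this answer describes, OpenCV's `opencv_createsamples` tool can synthesize many distorted positives from a single clean image. The file names below (`logo.png`, `negatives.txt`, `samples.vec`) are placeholders, and the counts and angle limits are illustrative, not recommendations:

```shell
# Generate 500 positives from one clean logo image, with random rotation
# limits (radians) and intensity inversion disabled; negatives.txt lists
# one background image path per line.
opencv_createsamples -img logo.png -bg negatives.txt -vec samples.vec \
    -num 500 -maxxangle 0.5 -maxyangle 0.5 -maxzangle 0.3 \
    -bgcolor 255 -bgthresh 8 -w 24 -h 24
```

The resulting `.vec` file is what `opencv_traincascade` expects as its positive set.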

Related

image augmentation algorithms for preparing deep learning training set

To prepare large training sets for deep-learning-based image classification models, we usually have to rely on image augmentation methods. I would like to know: what are the usual image augmentation algorithms, and are there any considerations when choosing among them?
The literature on data augmentation is very large and depends heavily on your kind of application.
The first things that come to mind are the rotations used in the Galaxy Zoo competition and Jasper Snoek's work on data augmentation.
But really, every paper has its own tricks for getting good scores on particular datasets, for example stretching the image to a specific size before cropping it, applied in a very specific order.
More practically, to train models on the likes of CIFAR or ImageNet, use random crops and random contrast and luminosity perturbations, in addition to the obvious flips and noise addition.
Look at the CIFAR-10 tutorial on the TF website; it is a good start. Plus TF now has random_crop_and_resize(), which is quite useful.
EDIT: The papers I am referencing here and there.
It depends on the problem you have to address, but most of the time you can do:
Rotate the images
Flip the image (X or Y symmetry)
Add noise
All of the above at the same time.
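The operations listed above can be sketched with plain NumPy; the function name `augment` and the noise scale are illustrative choices, not from a particular library:

```python
import numpy as np

def augment(img, rng):
    """Return simple variations of a grayscale image (H x W uint8 array)."""
    out = []
    out.append(np.fliplr(img))   # flip: X symmetry
    out.append(np.flipud(img))   # flip: Y symmetry
    out.append(np.rot90(img))    # 90-degree rotation
    # Additive Gaussian noise, clipped back to the valid pixel range.
    noisy = img.astype(np.float32) + rng.normal(0.0, 10.0, img.shape)
    out.append(np.clip(noisy, 0, 255).astype(img.dtype))
    return out

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (32, 32), dtype=np.uint8)
variants = augment(img, rng)
```

Each call produces four new training samples from one image; combining the operations (e.g. flipping the rotated image) multiplies the count further.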

Designing a classifier with minimal image data

I want to train a 3-class classifier with tissue images, but only have around 50 labelled images in total. I can't take patches from the images and train on them, so I am looking for another way to deal with this problem.
Can anyone suggest an approach to this? Thank you in advance.
The question is very broad but here are some recommendations:
It could make sense to generate variations of your input images. Things like modifying contrast, brightness or color, rotating the image, adding noise. But which of these operations, if any, make sense really depends on the type of classification problem.
Generally, the less data you have, the fewer parameters (weights etc.) your model should have. Otherwise it will overfit, meaning that your classifier will classify the training data correctly but nothing else.
You should check for overfitting. A simple method is to split your training data into a training set and a control (validation) set. Once you have found that classification is correct on the control set as well, you can do additional training that includes the control set.
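A minimal sketch of that train/control split, assuming the ~50 labelled images from the question (the function name and the 20% holdout fraction are illustrative):

```python
import numpy as np

def split_train_control(X, y, control_fraction=0.2, seed=0):
    """Shuffle the data, then hold out a fraction as a control set."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_control = int(len(X) * control_fraction)
    control, train = idx[:n_control], idx[n_control:]
    return X[train], y[train], X[control], y[control]

# 50 labelled 64x64 images, 3 classes, matching the question's setup.
X = np.zeros((50, 64, 64))
y = np.arange(50) % 3
X_tr, y_tr, X_ct, y_ct = split_train_control(X, y)
```

Accuracy on the control set, not the training set, is what indicates whether the model generalizes.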

How can i match gestures and compare them?

I am developing a gesture recognition project. My goal is for the webcam to capture my gestures and match them against the existing gestures in my database. I have been able to capture hand gestures and store them in my project folder. Now, how exactly do I compare them? I am clueless about this part. I have gone through many YouTube links, and most of them just show that it works; none of them explain which algorithm they used. I am completely stuck, and all I want is some ideas or any link that can help me understand this matching part. Thanks
There are many different approaches that you can follow here.
If your images are of good quality, you could detect feature points in your input image and then match them against a "prior/template" representation of a similar gesture; this would be a brute-force search. Here, you can use SIFT to detect keypoints and generate descriptors for each image, and then match them with BFMatcher or FLANN. All of the above are implemented in OpenCV; just read the documentation.
Docs here: detect/match
On the other hand, you could use a Bag-Of-Words approach. A good primer for that approach is here: BoW
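The core of the Bag-of-Words approach can be sketched without OpenCV: quantize each local descriptor against a codebook of "visual words" and represent the image as a normalized word histogram. Here the codebook is random for illustration; in practice it comes from running k-means over descriptors from the training set:

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Assign each descriptor to its nearest visual word; return a normalized histogram."""
    # Squared Euclidean distance from every descriptor to every codeword.
    d = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(np.float64)
    return hist / hist.sum()  # normalize so descriptor count doesn't matter

rng = np.random.default_rng(1)
codebook = rng.normal(size=(8, 128))      # 8 visual words, SIFT-sized (128-D)
descriptors = rng.normal(size=(40, 128))  # e.g. 40 SIFT descriptors from one image
h = bow_histogram(descriptors, codebook)
```

Each gesture image then becomes a fixed-length vector (`h`), which any standard classifier can consume.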
You can use a classification machine learning algorithm like logistic regression.
This algorithm minimizes a cost function to predict how similar an input picture is to each class (each gesture, in your case), then picks the most similar class. For pictures, you can use each pixel as a feature.
After feeding the algorithm enough training data, it can classify your picture as one of the gestures, and since you are working with webcam images, the running time shouldn't be a problem.
Here is a great video on logistic regression by Professor Andrew Ng of Stanford.
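A minimal sketch of pixels-as-features logistic regression, implemented from scratch with gradient descent (binary case for brevity; the toy "gesture" images are synthetic, with one class brighter than the other so the problem is separable):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, lr=0.1, steps=500):
    """Batch gradient descent on the logistic (cross-entropy) cost."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        w -= lr * (X.T @ (p - y)) / len(y)  # gradient of the cost w.r.t. w
        b -= lr * (p - y).mean()            # gradient w.r.t. the bias
    return w, b

# Toy data: 8x8 images flattened to 64 pixel features, 20 per class.
rng = np.random.default_rng(0)
X0 = rng.random((20, 64)) * 0.3         # class 0: dim images
X1 = rng.random((20, 64)) * 0.3 + 0.7   # class 1: bright images
X = np.vstack([X0, X1])
y = np.concatenate([np.zeros(20), np.ones(20)])

w, b = train_logreg(X, y)
pred = (sigmoid(X @ w + b) > 0.5).astype(float)
```

For more than two gestures, the same idea extends to one-vs-rest or softmax (multinomial) regression.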

OpenCV: cascade training for a single object

I'm trying to train a cascade to detect the character '1' on a blank piece of paper. So far I've been using subsets of the 3019 background images from Naotoshi Seo's tutorial. Just wondering if anyone knows a good way to train something against a known background? opencv_traincascade doesn't seem to like me using one image as the negative sample. I'm only using one positive sample before running opencv_createsamples. How should I set the rotations during opencv_createsamples?
Also, just to clarify, I'm using LBP training rather than haartraining.
Micka's comment is sensible in that a "known background" is never constant.
The best negative background images are realistic ones: texture contrasts, shading, lighting in general, whatever the real background will actually be.
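As a hypothetical invocation (paths, sizes, and counts are placeholders), the realistic negative set is passed to `opencv_traincascade` as a background file list, with `-featureType LBP` matching the question's setup:

```shell
# negatives.txt lists one realistic background image path per line;
# samples.vec comes from opencv_createsamples.
opencv_traincascade -data cascade_out -vec samples.vec -bg negatives.txt \
    -numPos 400 -numNeg 1000 -numStages 15 \
    -featureType LBP -w 24 -h 24
```

Note that `-numPos` must be somewhat below the total number of samples in the `.vec` file, since each stage consumes a few extra positives.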

Can't train OpenCV CascadeClassifier to detect company logo

I'm trying to train CascadeClassifier from OpenCV to detect a simple high-contrast company logo, but it doesn't work. What it detects looks like just random image patches. It doesn't even work on the original sample. I'm using opencv_createsamples to create a set of positives on a plain white background from a single original logo image.
At the same time I was able to successfully train a cascade for detecting stamps using many samples from real documents. This looks strange to me, because a stamp is much more complex than company logo.
What could I be doing wrong? Can LBP or Haar features be used to describe a simple object such as a logo?
It depends on the type of company logo and the accuracy level you need. LBP is very fast to train but less accurate than a Haar classifier; a Haar classifier can take a week to train but is very accurate. To get a good classifier you need a lot of data, and I don't know what data you have, or how much. That said, I see the question was asked a long time ago...