Where to get background/negative sample images for haar training? - image-processing

I need a collection of sample images to train a Haar-based classifier for plate detection.
I know variations of this question have been asked already ("Where to get background sample images for haar training?", "Where to get negative sample images for Haar training?"), but the source linked there on Google Code is dead:
http://tutorial-haartraining.googlecode.com/svn/trunk/data/negatives/

Maybe this one?
https://github.com/handaga/tutorial-haartraining/tree/master/data/negatives
It says
Automatically exported from code.google.com/p/tutorial-haartraining

I know this is quite old, but for those of you who are reading this and want to know how to get more negative and positive images, I suggest you check out ImageNet, and also this to learn how to use it.
Happy training.

Related

LBP (Local Binary Pattern) for mouth detection in a frontal face

Can someone direct me to an LBP cascade classifier for mouth detection?
I looked but didn't find anything; I found only Haar files. I want to know if someone has an LBP classifier. Haar classifiers are so slow that they cost me about 10 fps in my app. Thank you guys.
Hi @Sandeep, sorry, I changed my S.O. profile so I hadn't seen your question. Anyway, yes! I've been working with classifiers lately, and I can point you to a good place.
I worked with Haar cascade classifiers; the process is very simple, but you need a lot of training data!
Basically you'll need a set of positive samples (which contain the object that you want to detect) and a set of negative samples (which do NOT contain the object).
EXAMPLE:
Suppose you want to detect potholes using OpenCV and a Haar cascade classifier:
you'll need a set of images of streets that contain potholes (positive samples) and a set of images of streets that do NOT contain potholes (negative samples).
I'll leave you a link that helped me a lot: http://www.academia.edu/9149928/A_complete_guide_to_train_a_cascade_classifier_filter
This example uses a GitHub project; here's the link: https://github.com/sauhaardac/Haar-Training
Hope this helps, bye :D
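For concreteness, here is a minimal sketch of that workflow using OpenCV's command-line training tools (opencv_createsamples and opencv_traincascade, which ship with OpenCV 3.x but were dropped in 4.x). All paths, sample counts, and window sizes below are illustrative assumptions, not values from the guide above:

```python
import os
import subprocess

# Hypothetical layout: positives are annotated in info.txt
# ("path count x y w h" per line), negatives live in negative/.
NEG_DIR = "negative"

# Build bg.txt: one negative image path per line, the format
# opencv_traincascade expects for its background list.
with open("bg.txt", "w") as f:
    for name in sorted(os.listdir(NEG_DIR)):
        if name.lower().endswith((".jpg", ".png")):
            f.write(os.path.join(NEG_DIR, name) + "\n")

# Pack the annotated positives into a .vec file.
subprocess.run(["opencv_createsamples",
                "-info", "info.txt",
                "-vec", "samples.vec",
                "-num", "1000",           # number of positive samples
                "-w", "24", "-h", "24"],  # training window size
               check=True)

# Train the cascade; the resulting cascade.xml lands in cascade/.
os.makedirs("cascade", exist_ok=True)
subprocess.run(["opencv_traincascade",
                "-data", "cascade",
                "-vec", "samples.vec",
                "-bg", "bg.txt",
                "-numPos", "900", "-numNeg", "500",
                "-numStages", "15",
                "-w", "24", "-h", "24"],
               check=True)
```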

Automatic face verification with only 2 images

The problem statement:
Given two images, such as the two images of Brad Pitt below, figure out whether they contain the same person or not. The difficulty is that we have only one reference image for each person and want to figure out whether any other incoming image contains that same person.
Some research:
There are a few different methods of solving this task, these are
Using color histograms
Keypoint oriented methods
Using deep convolutional neural networks or other ML techniques
The histogram methods involve calculating color histograms, defining some metric between them, and then deciding upon a threshold. One metric I have tried is the Earth Mover's Distance. However, this method is lacking in accuracy.
The best approach therefore should be some mix of the 2nd and 3rd methods, plus some preprocessing.
For preprocessing, the obvious steps to perform are:
Run face detection such as Viola-Jones and extract the regions containing faces
Convert the said faces to grayscale
Run eye, mouth, and nose detection, perhaps using OpenCV's Haar cascades
Align the face images according to the found landmarks
All of this is done using opencv.
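As a rough illustration of those steps with OpenCV's stock Haar cascades (the input file name is a placeholder, and the alignment is just a rotation based on the two detected eye centers, not a full landmark-based warp):

```python
import math
import cv2

# Stock cascades bundled with opencv-python; cv2.data.haarcascades is their folder.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

img = cv2.imread("photo.jpg")                 # placeholder input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # grayscale conversion

# Viola-Jones face detection.
for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                                  minNeighbors=5):
    face = gray[y:y + h, x:x + w]
    # Eye detection inside the face region.
    eyes = eye_cascade.detectMultiScale(face, scaleFactor=1.1, minNeighbors=5)
    if len(eyes) >= 2:
        # Rough alignment: rotate so the eye centers lie on a horizontal line.
        (x1, y1, w1, h1), (x2, y2, w2, h2) = sorted(eyes[:2],
                                                    key=lambda e: e[0])
        angle = math.degrees(math.atan2((y2 + h2 / 2) - (y1 + h1 / 2),
                                        (x2 + w2 / 2) - (x1 + w1 / 2)))
        M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        face = cv2.warpAffine(face, M, (w, h))
    cv2.imwrite("face_aligned.jpg", face)
```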
Extracting features such as SIFT and MSER yields accuracy between 73% and 76%. After some additional research I came across this paper using Fisherfaces. The fact that OpenCV can now create Fisherface recognizers and train them is great; it works fantastically, achieving the accuracy promised by the paper on the Yale datasets.
The complication in my case is that I don't have a database with several images of the same person to train the recognizer on. All I have is a single image corresponding to a single person, and given another image I want to decide whether it shows the same person or not.
So what I am interested in knowing is:
Has anyone tried anything of the sort? What are some papers/methods/libraries that I should look into?
Do you have any suggestions on how to tackle the problem?
Since you have only one image, you can give this method using dlib a try. I have used 3-4 images per person and it gives good results.
Detect the face (sample_face).
Get the face descriptor (a 128-D vector) using dlib's compute_face_descriptor (check the link).
Take the new picture in which you want to recognise the face,
detect the face, and compute its descriptor (let's call it test_face).
Compute the Euclidean distance between the test_face descriptor and all the sample_face descriptors.
Assign test_face the class (person name) with the least Euclidean distance.
Give this a whirl; you can play with face alignment once you start getting good results.
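A minimal sketch of those steps with dlib, assuming the two standard pre-trained model files from dlib.net are on disk (the image file names are placeholders):

```python
import dlib
import numpy as np

# Pre-trained models downloadable from dlib.net (verify your own paths).
detector = dlib.get_frontal_face_detector()
shape_predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
face_rec = dlib.face_recognition_model_v1(
    "dlib_face_recognition_resnet_model_v1.dat")

def descriptor(path):
    """Detect the first face in an image and return its 128-D descriptor."""
    img = dlib.load_rgb_image(path)
    faces = detector(img, 1)  # upsample once to catch smaller faces
    if not faces:
        return None
    shape = shape_predictor(img, faces[0])
    return np.array(face_rec.compute_face_descriptor(img, shape))

sample_face = descriptor("reference.jpg")  # placeholder file names
test_face = descriptor("incoming.jpg")

# Smaller Euclidean distance = more similar; dlib's docs suggest ~0.6
# as a reasonable same-person threshold.
dist = np.linalg.norm(sample_face - test_face)
print("same person" if dist < 0.6 else "different person")
```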
This is one of the hot topics in the computer vision area. As you have written, there are many kinds of solutions available for handling it.
But I suggest looking at OpenFace, which has very high accuracy. There is an implementation of that project on GitHub.
Thanks
You need to understand that machine learning doesn't work that way; intensive training is carried out before a model can give good results.
With a single image of a person you just cannot predict that it's the same person, because you need to train your model on different images of the person under different light intensities, angles, and many other varying scenarios.
Still, I would suggest trying this link:
http://hanzratech.in/2015/02/03/face-recognition-using-opencv.html
You may at least find some match for the image.
So what I am interested in knowing is: Has anyone tried anything of the sort?
Yes. This is 2017 and facial recognition has been researched for decades.
What are some papers/methods/libraries that I should look into?
Anything Google throws at you when searching for "single image/sample face recognition"
Do you have any suggestions on how to tackle problem?
See above
Extracting features such as SIFT and MSER yields accuracy between 73% and 76%.
I doubt humans, whose facial recognition is unmatched, would perform much better with only one image as reference. I mean, I couldn't tell for sure whether that's Brad Pitt or just a look-alike, and I have seen him in hundreds of pictures and hours of movies...

Neural network image architecture

I've got a set of 16,000 images and one sample image; I need to find which of the 16,000 images matches it. I've already tried OpenCV's ORB + FLANN approach, but it is too slow. I hope a trained network will be faster. I don't know NN theory well; I've read some articles and websites and I've got a bunch of questions:
Should I use 16k output neurons to classify the input image?
How can I train my NN if I have only one training image per class?
What architecture should I use?
Maybe I should enlarge the training dataset by randomly distorting the input images?
Sorry in advance for my bad English :)
I'm not an expert, but I think this kind of problem is not a perfect fit for neural networks. Probably feature extraction, interest points, and descriptors, all available in OpenCV, are the best option. Anyway, let's try this. With the information given, I think you could try this:
SOM network - Create a Self-Organizing Map network with 16,000 classes for output. I've never seen an example with that many classes and just one sample per class, but it should work. Maybe you can use PCA to reduce the images' dimensionality. Keep training the network with your images (or PCA features). Start with, say, 1,000 epochs and keep raising this value until you get good results.
Here you can read a bit more about SOMs.
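The answer doesn't name a library, but as one possible sketch: the third-party minisom package together with scikit-learn's PCA could look like this (the data here is random stand-in data, and the grid size and iteration count are guesses to tune):

```python
import numpy as np
from sklearn.decomposition import PCA
from minisom import MiniSom  # pip install minisom

# Random stand-in for 16,000 flattened grayscale images (one per class).
images = np.random.rand(16000, 64 * 64)

# Reduce dimensionality first, as suggested above.
pca = PCA(n_components=100)
features = pca.fit_transform(images)

# A 130x130 grid gives ~16,900 nodes, roughly one per class.
som = MiniSom(130, 130, features.shape[1], sigma=1.0, learning_rate=0.5)
som.random_weights_init(features)
som.train_random(features, num_iteration=10000)

# Index every training image by its best-matching unit (BMU).
bmu_to_image = {som.winner(v): i for i, v in enumerate(features)}

# Query: project the sample image and look up the BMU it falls on.
query = pca.transform(images[42:43])[0]
print("matched image:", bmu_to_image.get(som.winner(query)))
```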

Haar training - where to obtain eyeglasses images?

I want to train a new haar-cascade for glasses as I'm not satisfied with the results I'm getting from the cascade that is included in OpenCV.
My main problem is that I'm not sure where to get eyeglasses images. I can manually search and download, but that's not practical for the amount of images I really need. I'm specifically looking for images of people wearing eyeglasses.
As this forum contains many experienced computer vision experts, I hope someone here can offer guidance on how to obtain images for training.
I'll also be happy to hear other approaches for detecting eyeglasses (on people).
Thanks in advance,
Gil
If you simply want images, it looks like @herhuyongtao pointed you to a good place. Then you can follow OpenCV's tutorial on training.
Another option is to see what others have trained:
There's a trained data set found here that might be of use; it states simply that it is "better". I'm assuming that means better than OpenCV's.
I didn't immediately see any other places for trained or labeled data.

Face recognition with a small number of samples

Can anyone advise me on a way to build an effective face classifier that can classify many different faces (~1000)?
I have only 1-5 examples of each face.
I know about the OpenCV face classifier, but it works badly for my task (many classes, few samples).
It works all right for classifying one face with a small number of samples, but I think 1,000 separate classifiers is not a good idea.
I read a few articles about face recognition, but the methods in those articles require a lot of samples per class to work.
PS: Sorry for my writing mistakes. English is not my native language.
Actually, for giving you a proper answer, I'd be happy to know some details of your task and your data. Face Recognition is a non-trivial problem and there is no general solution for all sorts of image acquisition.
First of all, you should define how many sources of variation (posing, emotions, illumination, occlusions or time-lapse) you have in your sample and testing sets. Then you should choose an appropriate algorithm and, very importantly, preprocessing steps according to the types.
If you don't have any significant variations, then for a small training set it is a good idea to consider one of the Discrete Orthogonal Moments as a feature extraction method. They have a very strong ability to extract features without redundancy, and some of them (Hahn, Racah moments) can also work in two modes - local and global feature extraction. The topic is relatively new and there are still few articles about it, although these moments are thought to become a very powerful tool in image recognition. They can be computed in near real time by using recurrence relations. For more information, have a look here and here.
If the pose of the individuals varies significantly, you may first try to perform pose correction with an Active Appearance Model.
If there are lots of occlusions (glasses, hats) then using one of the local feature extractors may help.
If there is a significant time lapse between the training and probe images, the local features of the faces can change with age, so it's a good option to try one of the algorithms that use graphs for face representation in order to preserve the face topology.
I believe none of the above are implemented in OpenCV, but for some of them you can find MATLAB implementations.
I'm not a native speaker either, so sorry for the grammar.
Coming to your problem: it is quite unique in its way. As you said, there are only a few images per class, so the model we train should either have an excellent architecture that can extract good features from a single image, or there should be a different approach that can achieve this task.
I have four things I can share as of now:
Do data pre-processing to create a bigger dataset, and then ideally train a neural network. Here, we can do pre-processing like:
- image rotation
- image shearing
- image scaling
- image blurring
- image stretching
- image translation
and create at least 200 images per class (a sketch of such augmentation follows below). Please check out the OpenCV documentation, which provides many more methods for increasing the size of your dataset. Once you do this, we can apply transfer learning, which is a better approach than training a neural network from scratch.
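As a rough sketch of such augmentation with plain OpenCV (file names are placeholders; the angles and offsets are arbitrary starting values):

```python
import cv2
import numpy as np

def augment(img):
    """Yield distorted variants of one image: rotation, shift, scale, blur."""
    h, w = img.shape[:2]
    # Rotation
    for angle in (-10, 10):
        M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        yield cv2.warpAffine(img, M, (w, h))
    # Translation
    for dx, dy in ((5, 0), (0, 5)):
        M = np.float32([[1, 0, dx], [0, 1, dy]])
        yield cv2.warpAffine(img, M, (w, h))
    # Scaling: zoom in slightly, then crop back to the original size
    zoomed = cv2.resize(img, None, fx=1.1, fy=1.1)
    yield zoomed[:h, :w]
    # Blurring
    yield cv2.GaussianBlur(img, (5, 5), 0)

img = cv2.imread("face_0.jpg")  # placeholder source image
for i, variant in enumerate(augment(img)):
    cv2.imwrite(f"face_0_aug{i}.jpg", variant)
```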
Transfer learning is a method where we train a network on our own custom classes starting from a network already pre-trained on thousands of classes. Since our data here is very limited, I would prefer transfer learning. I have written a blog on how you can approach this using transfer learning once you have the required amount of data; it is linked here. Face recognition is itself a classification task, where each human is a separate class. So follow the instructions given in the blog; maybe it will help you create your own powerful classifier.
Another suggestion would be, after creating a dataset, to encode the images properly. Such an encoding preserves the features in an image and can help you train better networks. VLAD, Fisher vectors, and Bag of Words are a few encoding techniques. You can find repositories online that have already implemented these on the ORL database. Once you encode the images and train the network on the encodings, you will obviously see better performance.
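For instance, a Bag of Visual Words encoding can be built with OpenCV's built-in BOW classes; this is just a sketch with assumed image paths and an arbitrary vocabulary size:

```python
import cv2
import numpy as np

paths = ["face_0.jpg", "face_1.jpg"]  # placeholder training image paths

sift = cv2.SIFT_create()
bow_trainer = cv2.BOWKMeansTrainer(64)  # vocabulary of 64 visual words

# Pool local SIFT descriptors from all images to cluster the vocabulary.
for p in paths:
    gray = cv2.imread(p, cv2.IMREAD_GRAYSCALE)
    _, desc = sift.detectAndCompute(gray, None)
    if desc is not None:
        bow_trainer.add(np.float32(desc))
vocabulary = bow_trainer.cluster()

# Encode each image as a fixed-length histogram over the visual words.
bow_extractor = cv2.BOWImgDescriptorExtractor(sift, cv2.BFMatcher(cv2.NORM_L2))
bow_extractor.setVocabulary(vocabulary)

for p in paths:
    gray = cv2.imread(p, cv2.IMREAD_GRAYSCALE)
    hist = bow_extractor.compute(gray, sift.detect(gray, None))
    print(p, hist.shape)  # a (1, 64) encoding, ready for a classifier
```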
Also check out the Siamese network here, which I feel is meant for exactly this purpose. It compares two images with similar characteristics on twin networks and thereby achieves better classification accuracy. The Git repository is here.
Another standard approach would be using an SVM or random forests, since the data is limited. If you still prefer neural networks, the above methods will serve the purpose. If you intend to go with encodings, then I would suggest random forests, as they are highly preferable for this kind of learning and flexible too.
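A minimal sketch of that route with scikit-learn, using random stand-in encodings in place of real ones:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

# Random stand-ins: one 128-D encoding per image, ~3 images for each
# of 1,000 people.
X = np.random.rand(3000, 128)
y = np.random.randint(0, 1000, size=3000)

svm = SVC(kernel="linear")
svm.fit(X, y)

forest = RandomForestClassifier(n_estimators=200)
forest.fit(X, y)

# Classify a new face encoding with both models.
query = np.random.rand(1, 128)
print("SVM:", svm.predict(query)[0], "RF:", forest.predict(query)[0])
```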
Hopefully this answer helps you proceed in the right direction.
You might want to take a look at OpenFace, a Python and Torch implementation of face recognition with deep neural networks: https://cmusatyalab.github.io/openface/
