Haar cascades in OpenCV

I have seen multiple Haar cascade XMLs in OpenCV for face detection, eye detection, ear detection, human body detection, etc., but I couldn't find proper documentation or an explanation for these XMLs.
For example, in an application where I need to detect side (profile) faces, which XML should I use, and what parameters should I pass to detectMultiScale?
In some cases, varying the parameters of detectMultiScale reduces the false detections, but all my tests were trial and error. I couldn't find any definitive articles explaining the use of each XML and its parameters.
Can someone point me to documentation on this, if any exists? Otherwise, an explanation would be much appreciated.

OpenCV ships a built-in profile face classifier XML (haarcascade_profileface.xml) under "..\data\haarcascades". If you want to create your own cascade classifier, you should follow this procedure. Here is another link regarding that.
To learn about the detectMultiScale method, check out the documentation. To understand how the classifier and its parameters work, check out the Viola-Jones (2001) paper or an explanation of it.
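For illustration, here is a minimal Python sketch of the profile-face case (it assumes the opencv-python package, which exposes the bundled cascade directory as cv2.data.haarcascades; the parameter values are common starting points, not official recommendations, so tune them on your own data):

```python
import cv2

# Load the bundled profile-face cascade (path via the opencv-python package)
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_profileface.xml")

img = cv2.imread("people.jpg")            # replace with your own image path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.equalizeHist(gray)             # helps the cascade under uneven lighting

faces = cascade.detectMultiScale(
    gray,
    scaleFactor=1.1,    # image pyramid step; smaller = slower but finer search
    minNeighbors=5,     # raise to suppress false positives, lower to catch more faces
    minSize=(40, 40))   # ignore detections smaller than this window

for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detected.jpg", img)
```

The bundled profile cascade is trained on one side only, so a common trick is to run it a second time on a horizontally flipped copy of the image to catch faces turned the other way.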

Here is a paper by Vadim Pisarevsky, one of the OpenCV developers, which may be helpful in understanding some of the parameters.
On the other hand, if using OpenCV is not a hard requirement, take a look at vision.CascadeObjectDetector in the Computer Vision System Toolbox for MATLAB, which provides the same functionality. It also saves you the trouble of figuring out which XML file to use for profile faces.

Related

How does OpenCV use an XML file to identify objects?

I'm building an app in Android Studio that uses OpenCV to identify an object. The detection works, but I don't understand how a simple XML file allows my program to identify my object.
All I know is that OpenCV supposedly uses a convolutional neural network for this, so the CNN has to be trained to adjust its internal parameters, but what exactly does the XML contain? How does this magic work?
I'm not completely sure if this is what you mean, but I'm going to make a guess.
If you use an XML inside OpenCV to do some sort of detection, you're most likely working with a Haar feature-based cascade classifier or something very similar. You can learn more about it on this page!
There is also a very good blog post on the steps OpenCV takes to make the detection happen: here
Hopefully this can dispel a bit of this magic and clear things up!
In short, the XML holds a kind of 'pattern' that was learned with machine learning from a lot of examples (for the Haar cascades, that pattern is a sequence of boosted stages of simple rectangle features, not a neural network). Once trained, OpenCV can search images for the pattern stored in the XML.
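If you want to see for yourself, the cascade XML is plain text you can open and inspect. Here is a rough sketch (assuming the opencv-python package and the newer cascade format shipped with recent OpenCV versions; the file name is just one of the bundled examples):

```python
import xml.etree.ElementTree as ET
import cv2

path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"

root = ET.parse(path).getroot()          # root element is <opencv_storage>
cascade = root.find("cascade")           # newer format: a <cascade> child
if cascade is not None:
    width = cascade.findtext("width")
    height = cascade.findtext("height")
    stages = cascade.find("stages")
    print(f"detection window: {width}x{height}, boosted stages: {len(stages)}")
else:
    print("older-format cascade; the structure differs but the idea is the same")

# OpenCV itself simply loads the file into a CascadeClassifier object:
clf = cv2.CascadeClassifier(path)
print("loaded OK:", not clf.empty())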

Face landmark extraction in OpenCV 3.0. Can anyone suggest any good open source libraries that will allow me to extract facial landmarks?

I am currently using OpenCV 3.0 with the hope that I will be able to create a program that does three things: first, finds faces within a live video feed; second, extracts the locations of facial landmarks using ASM or AAM; and finally, uses an SVM to classify the facial expression on the person's face in the video.
I have done a fair amount of research into this but can't find the most suitable open-source AAM or ASM library for this purpose. Also, if possible, I would like to be able to train the AAM or ASM to extract the specific face landmarks I require, for example, all the numbered points in the picture linked below:
www.imgur.com/XnbCZXf
If there are any alternatives to what I have suggested that would give the required functionality, feel free to suggest them.
Thanks in advance for any answers; all advice to help me along with this project is welcome.
In the comments, I see that you are opting to train your own face landmark detector using the dlib library. You had a few questions regarding what training set dlib used to generate their provided "shape_predictor_68_face_landmarks.dat" model.
Some pointers:
The author (Davis King) stated that he used the annotated images from the iBUG 300-W dataset. This dataset has a total of 11,167 images annotated with the 68-point convention. As a standard trick, he also mirrors each image to effectively double the training set size, i.e. 11,167 × 2 = 22,334 images. Here's a link to the dataset: http://ibug.doc.ic.ac.uk/resources/facial-point-annotations/
Note: the iBUG 300-W dataset includes two datasets that are not freely/publicly available: XM2VTS and FRGCv2. Unfortunately, these images make up a majority of iBUG 300-W (7,310 images, or 65.5%).
The original paper trained only on the HELEN, AFW, and LFPW datasets, so you ought to be able to generate a reasonably good model using only the publicly available images (HELEN, LFPW, AFW, iBUG), i.e. 3,857 images.
If you Google "one millisecond face alignment kazemi", the paper (and project page) will be the top hits.
You can read more about the details of the training procedure by reading the comments section of this dlib blog post. In particular, he briefly discusses the parameters he chose for training: http://blog.dlib.net/2014/08/real-time-face-pose-estimation.html
With the size of the training set in mind (thousands of images), I don't think you will get acceptable results with just a handful of images. Fortunately, there are many publicly available face datasets out there, including the dataset linked above :)
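If you do train your own predictor, the dlib Python API keeps it short. A sketch, assuming the annotations are already in dlib's imglab XML format (the file names are illustrative, and the parameter values follow dlib's own training example rather than the exact settings used for the shipped model):

```python
# Sketch of training a dlib shape predictor; not the exact recipe used for
# "shape_predictor_68_face_landmarks.dat". The XML file names are illustrative --
# dlib distributes the iBUG 300-W annotations in its imglab XML format.
import dlib

options = dlib.shape_predictor_training_options()
options.oversampling_amount = 300   # random jitter applied per training sample
options.nu = 0.05                   # regularization; smaller = less overfitting
options.tree_depth = 2              # depth of each regression tree
options.be_verbose = True

dlib.train_shape_predictor("labels_ibug_300W_train.xml",
                           "my_68_point_predictor.dat", options)

# Mean landmark error on a held-out set:
print(dlib.test_shape_predictor("labels_ibug_300W_test.xml",
                                "my_68_point_predictor.dat"))
```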
Hope that helps!
AAM and ASM are pretty old school, and their results are a little disappointing.
Most facial landmark trackers use a cascade of patches or deep learning. There is dlib, which performs pretty well (and is BSD-licensed), with this demo; there are some others on GitHub; and there are a number of APIs, such as this one, which is free to use.
You can also take a look at my project using C++/OpenCV/dlib, which has all the functionality you mentioned and is fully operational.
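As a usage illustration, here is a short sketch of running a pretrained landmark model on one frame (it assumes dlib's Python bindings and the shape_predictor_68_face_landmarks.dat model downloaded separately from dlib.net; the image path is a placeholder):

```python
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

frame = cv2.imread("face.jpg")                      # or a frame from cv2.VideoCapture
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

for rect in detector(gray, 1):                      # 1 = upsample once for small faces
    shape = predictor(gray, rect)
    points = [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
    for (x, y) in points:
        cv2.circle(frame, (x, y), 2, (0, 255, 0), -1)

cv2.imwrite("landmarks.jpg", frame)
```

The flattened (x, y) coordinates also make a reasonable feature vector to feed into an SVM for the expression-classification step the question mentions.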
Try Stasm 4.0.0. It gives approximately 77 points on the face.
I advise you to use the FaceTracker library. It is written in C++ using OpenCV 2.x. You won't be disappointed with it.

Algorithm for finding feature points of a face in OpenCV

I am doing a project on expression recognition in OpenCV and have successfully extracted the face region. I am having trouble building my own algorithm for extracting the feature points of a face; can someone help me with it?
Since this is my first attempt at answering a question, I will do my best. I cannot post more than two links, so I will at least try to give some hints.
Your question is quite broad. It depends on the type of application and the requirements: are you doing online detection, is it static, etc.? Based on that, you should choose a keypoint detection algorithm. I do not think it is a good idea to build your own algorithm, because OpenCV already has a lot of methods to choose from. In most cases all you have to do is some pre-processing, but that depends as well.
The most popular feature detection methods are SURF (OpenCV SURF), SIFT, ORB, FAST, etc. Keep in mind that SURF and SIFT are non-free. SURF and SIFT yield a lot of features and are quite accurate and somewhat scale- and rotation-invariant, but they are also a bit slow (especially for online tracking). FAST and ORB are fast, but they are more sensitive to noise and have their own disadvantages (see the descriptions in the OpenCV documentation). If I were you, I would try most of them and see which one does the best job (it is not difficult to test them).
Secondly, you have to choose descriptors. A very good introduction is here: Descriptors tutorial. There you will find all the basic info. It is important to note that you can mix various keypoint detection algorithms and feature description algorithms (but keep in mind that not all are compatible; the tutorial explains this).
I am out of links for this post, but the OpenCV documentation provides a lot of sample code for this problem, so go ahead and have a look.
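To make the pipeline concrete, here is a minimal detect → describe → match sketch using ORB (a sketch only; the image paths are placeholders, and ORB is chosen because it is free to use):

```python
import cv2

img1 = cv2.imread("face_region.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("other_face.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(img1, None)   # keypoints + binary descriptors
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force matching with Hamming distance (appropriate for ORB's binary descriptors)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)

vis = cv2.drawMatches(img1, kp1, img2, kp2, matches[:30], None, flags=2)
cv2.imwrite("matches.png", vis)
```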
Hope you succeed. Good luck.

How to use OpenCV for document recognition with OCR?

I'm a beginner in computer vision, but I know how to use some functions in OpenCV. I'm trying to use OpenCV for document recognition, and I would like help working out the steps for it.
I'm thinking of using the OpenCV example find_obj.cpp, but documents such as passports have variable fields: name, birthdate, pictures. So I need help defining the steps and, if possible, which functions to use at each step.
I'm not asking for complete code, but if anyone has an example link or can just type up a walkthrough, it would be of great help.
There are two very different steps involved here. One is detecting your object, and the other is analyzing it.
For object detection, you're just trying to figure out whether the object is in the frame and approximately where it's located. The OpenCV features framework is great for this. For some tutorials and comprehensive sample code, see the OpenCV features2d tutorials and especially the feature matching tutorial.
For analysis, you need to dig into optical character recognition (OCR). OpenCV does not include OCR libraries, but I recommend checking out tesseract-ocr, which is a great OCR library. If your documents have a fixed structure (a consistent layout of text fields), then tesseract-ocr is all you need. For more advanced analysis, check out OCRopus, which uses tesseract-ocr but adds layout analysis.
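Putting the two steps together, here is a rough Python sketch (assuming opencv-python, numpy, and pytesseract as a wrapper around tesseract-ocr; the template image and field coordinates are placeholders, since they depend entirely on your document layout):

```python
import cv2
import numpy as np
import pytesseract

template = cv2.imread("passport_template.png", cv2.IMREAD_GRAYSCALE)
scene = cv2.imread("photo_of_passport.jpg", cv2.IMREAD_GRAYSCALE)

# Step 1: locate the document by matching features against the template
orb = cv2.ORB_create(2000)
kp_t, des_t = orb.detectAndCompute(template, None)
kp_s, des_s = orb.detectAndCompute(scene, None)

matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_t, des_s)
matches = sorted(matches, key=lambda m: m.distance)[:100]

pts_template = np.float32([kp_t[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
pts_scene = np.float32([kp_s[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# H maps scene coordinates onto template coordinates, so warping the scene
# with it "flattens" the document into the template's fixed layout.
H, _ = cv2.findHomography(pts_scene, pts_template, cv2.RANSAC, 5.0)
h, w = template.shape
rectified = cv2.warpPerspective(scene, H, (w, h))

# Step 2: with a fixed layout, each field is just a crop at known coordinates
# (the numbers here are invented for illustration), then OCR the crop.
name_field = rectified[100:140, 200:600]
print(pytesseract.image_to_string(name_field))
```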

Face Recognition in OpenCV

I was trying to build a basic face recognition system (PCA-Eigenfaces) using OpenCV 2.2 (from Willow Garage). I understand from many of the previous posts on face recognition that there is no standard open-source library that provides complete face recognition for you.
Instead, I would like to know if someone has used (and integrated) the functions:
icvCalcCovarMatrixEx_8u32fR
icvCalcEigenObjects_8u32fR
icvEigenProjection_8u32fR
et al. in eigenobjects.cpp to form a face recognition system, because these functions, along with cvSvd, seem to provide much of the required functionality.
I am having a tough time trying to work out how to do so since I am new to OpenCV.
Update: OpenCV 2.4.2 now comes with the new cv::FaceRecognizer. See the detailed documentation at:
http://docs.opencv.org/trunk/tutorial_face_main.html
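For reference, the FaceRecognizer route looks roughly like this in the Python bindings (a sketch assuming the opencv-contrib package, which provides the cv2.face module; the file names and labels are placeholders):

```python
import cv2
import numpy as np

# Training images must be grayscale and all the same size (an Eigenfaces requirement).
train_paths = ["s1_1.pgm", "s1_2.pgm", "s2_1.pgm", "s2_2.pgm"]
labels = np.array([1, 1, 2, 2])   # one integer label per person
images = [cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in train_paths]

# PCA-based Eigenfaces model; num_components is the number of principal components kept
model = cv2.face.EigenFaceRecognizer_create(num_components=80)
model.train(images, labels)

probe = cv2.imread("unknown.pgm", cv2.IMREAD_GRAYSCALE)
label, confidence = model.predict(probe)
print("predicted person:", label, "distance:", confidence)
```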
I worked on a project with CV to recognize facial features. Most people don't understand the difference between biometrics and facial recognition: biometrics is mainly based on histogram density matching, while facial recognition implements this plus vector support based on feature recognition from density. Check out the following link; this is the library you want to use if you are pursuing CV and facial recognition: www.betaface.com. Oleksander is awesome, based out of Germany, and he answers questions, which is nice.
With OpenCV it's easy to get started with face detection. It comes with some pretrained classifiers for feature detection, including face detection.
You might already know this one: OpenCV Wiki, FaceDetection
The important functions in this example are cvLoad and cvHaarDetectObjects. The first one loads the classifier and the second one applies it to an image.
The standard classifiers work pretty well. Of course, you can train your own classifiers if the standard ones don't fit your purpose.
As you said, there are a lot of algorithms for face detection. Some of them might provide better results, but OpenCV is definitely a good start.
