Implementing Face Recognition using Local Descriptors (Unsupervised Learning) - opencv

I'm trying to implement a face recognition algorithm using Python. I want to be able to receive a directory of images and compute pair-wise distances between them, where short distances should hopefully correspond to images belonging to the same person. The ultimate goal is to cluster images and perform some basic face identification tasks (unsupervised learning).
Because of the unsupervised setting, my approach to the problem is to calculate a "face signature" (a vector in R^d for some int d) and then figure out a metric in which two faces belonging to the same person will indeed have a short distance between them.
I have a face detection algorithm which detects the face, crops the image and performs some basic pre-processing, so the images I'm feeding to the algorithm are gray and equalized (see below).
For the "face signature" part, I've tried two approaches which I read about in several publications:
Taking the histogram of the LBP (Local Binary Pattern) of the entire (processed) image
Calculating SIFT descriptors at 7 facial landmark points (right of mouth, left of mouth, etc.), which I identify per image using an external application. The signature is the concatenation of the square root of the descriptors (this results in a much higher dimension, but for now performance is not a problem).
For the comparison of two signatures, I'm using OpenCV's compareHist function (see here), trying out several different distance metrics (Chi Square, Euclidean, etc).
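For reference, the following is a minimal sketch of the LBP-histogram signature and the compareHist comparison described above. It assumes scikit-image for the LBP step (OpenCV's core module has no standalone LBP function), and the file names are hypothetical:

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern  # assumes scikit-image is installed

def face_signature(path, n_points=8, radius=1):
    """LBP histogram of a pre-processed (gray, equalized) face crop."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    lbp = local_binary_pattern(img, n_points, radius, method="uniform")
    n_bins = n_points + 2  # number of distinct "uniform" patterns
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins))
    hist = hist.astype(np.float32)
    return hist / (hist.sum() + 1e-7)  # normalize so crops of different sizes are comparable

# hypothetical paths to two processed face crops
sig_a = face_signature("face_a.png")
sig_b = face_signature("face_b.png")
dist = cv2.compareHist(sig_a, sig_b, cv2.HISTCMP_CHISQR)
print(dist)  # for chi-square, smaller should mean more similar
```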
I know that face recognition is a hard task, let alone without any training, so I'm not expecting great results. But all I'm getting so far seems completely random. For example, when calculating distances from the image on the far right against the rest of the images, I'm getting that she is most similar to 4 Bill Clintons (...!).
I have read in this great presentation that it's popular to carry out a "metric learning" procedure on a test set, which should significantly improve results. However it does say in the presentation and elsewhere that "regular" distance measures should also get OK results, so before I try this out I want to understand why what I'm doing gets me nothing.
In conclusion, my questions, which I'd love to get any sort of help on:
One improvement I thought of would be to perform LBP only on the actual face, and not on the corners and everything else that might introduce noise into the signature. How can I mask out the parts which are not the face before calculating LBP? I'm using OpenCV for this part too.
I'm fairly new to computer vision; how would I go about "debugging" my algorithm to figure out where things go wrong? Is this possible?
In the unsupervised setting, is there any other approach (which is not local descriptors + computing distances) that could work, for the task of clustering faces?
Is there anything else in the OpenCV module that maybe I haven't thought of that might be helpful? It seems like all the algorithms there require training and are not useful in my case - the algorithm needs to work on images which are completely new.
Thanks in advance.

What you are looking for is unsupervised feature extraction - take a bunch of unlabeled images and find the most important features describing these images.
The state-of-the-art methods for unsupervised feature extraction are all based on (convolutional) neural networks. Have a look at autoencoders (http://ufldl.stanford.edu/wiki/index.php/Autoencoders_and_Sparsity) or Restricted Boltzmann Machines (RBMs).
You could also take an existing face recognition network such as DeepFace (https://www.cs.toronto.edu/~ranzato/publications/taigman_cvpr14.pdf), keep only its feature layers and use the distance between these features to group similar faces together.
I'm afraid that OpenCV is not well suited for this task, you might want to check Caffe, Theano, TensorFlow or Keras.
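To illustrate that last suggestion, here is a minimal sketch of taking a feature layer from a pretrained Keras network and comparing two faces by distance. VGG16 with ImageNet weights is only a generic stand-in, since DeepFace's trained weights are not publicly available; a network trained specifically on faces would give much better embeddings, and the file names are hypothetical:

```python
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from tensorflow.keras.preprocessing import image

# Generic ImageNet features as a stand-in for a face-specific network such as DeepFace.
model = VGG16(weights="imagenet", include_top=False, pooling="avg")  # 512-D output

def embed(path):
    """Map a face crop to a feature vector taken from an intermediate layer."""
    img = image.load_img(path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    return model.predict(x)[0]

# hypothetical face crops
a, b = embed("face_a.jpg"), embed("face_b.jpg")
cosine_distance = 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
print(cosine_distance)  # smaller distance -> more likely the same person
```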

Related

Detecting object in an image

Currently I am struggling to implement an algorithm to locate an object in an image. Suppose I have 100 training images that all have a cat in them, and each comes with the correct coordinates of the cat. My first idea is to create a fixed-size square and traverse the image; each collection of pixels contained in the square can then be used as a point for a Support Vector Machine algorithm.
The problem is that I am not sure how to do that, since usually each point represents one class (has the object or does not have the object) and has a simple set of d features, while in this case it has a d×3 matrix as its features (each feature has an RGB value).
Some simple help would be welcome, thanks!
If I have understood your question well, applying machine learning to image processing and computer vision is a bit different from other kinds of problems. The main difference is that you somehow have to overcome the issues of locality and scale. Do all kitties always appear at a specific coordinate (x, y)? Of course not! They can be anywhere in the scene. So how is it possible to give a specific point to an SVM for an object? It would not generalize at all. This is the reason almost all basic operations in computer vision have something to do with a convolution operation, to extract features independently of their location. A pixel alone carries zero useful information; you need to analyse groups of pixels. There are two approaches you can take:
classic methods:
Use OpenCV to perform noise removal, edge detection and feature extraction with methods like SIFT, and feed those features, not the raw unprocessed pixels, to a model like an SVM (a minimal sketch follows after this answer). Feature extraction means going from d raw features to k more meaningful representations of the inputs, where usually, if not always, k < d.
Deep Learning:
Convolutional Neural Networks (CNNs) have shed light on many computer vision tasks that were far beyond reach until recently. More importantly, with frameworks like Keras and TensorFlow, most problems in computer vision are, to be honest, just programming tasks and don't require as much knowledge as before, because CNNs extract the features themselves and you no longer need to do the feature engineering, which used to require a well-educated and knowledgeable person on the task.
So, choose whatever method you see fit for kitty detection =^.^= .
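To make the classic option concrete, here is a minimal sketch of that pipeline, swapping in HOG (via OpenCV's HOGDescriptor) for the feature-extraction step and a linear SVM from scikit-learn; the training patches are assumed to be pre-cropped and the file names are hypothetical:

```python
import cv2
import numpy as np
from sklearn.svm import LinearSVC  # assumes scikit-learn is installed

hog = cv2.HOGDescriptor()  # default 64x128 window, 3780-D descriptor

def hog_features(path):
    """Compute a HOG descriptor for one training patch."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, (64, 128))     # match the default HOG window size (width, height)
    return hog.compute(img).ravel()

# hypothetical lists of cropped training patches
cat_patches = ["cat_000.png", "cat_001.png"]
background_patches = ["bg_000.png", "bg_001.png"]

X = np.array([hog_features(p) for p in cat_patches + background_patches])
y = np.array([1] * len(cat_patches) + [0] * len(background_patches))

clf = LinearSVC().fit(X, y)

# At detection time, slide a 64x128 window over the image (ideally at several
# scales) and classify each window's HOG descriptor with clf.predict.
```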

Automatic face verification with only 2 images

The problem statement:
Given two images, such as the two images of Brad Pitt below, figure out whether they contain the same person or not. The difficulty is that we have only one reference image for each person and want to figure out whether any other incoming image contains the same person or not.
Some research:
There are a few different methods of solving this task; these are:
Using color histograms
Keypoint oriented methods
Using deep convolutional neural networks or other ML techniques
The histogram methods involve calculating histograms based on color, defining some sort of metric between them, and then deciding upon a threshold. One that I have tried is the Earth Mover's Distance. However, this method lacks accuracy.
The best approach therefore should be some sort of mix between the second and third methods, plus some preprocessing.
For preprocessing obvious steps to perform are:
Run a face detector such as Viola-Jones and separate the regions containing faces
Convert the said faces to grayscale
Run eye, mouth and nose detectors, perhaps using OpenCV's Haar cascades
Align the face images according to the found landmarks
All of this is done using OpenCV.
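A minimal sketch of that preprocessing with OpenCV's Haar cascades (if OpenCV was installed via pip, cv2.data.haarcascades points at the bundled XML files; the input path is hypothetical and the eye-based rotation is only one simple way to align):

```python
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

img = cv2.imread("brad_pitt.jpg")                 # hypothetical input
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# 1) Detect and crop the face region (assumes at least one face is found)
x, y, w, h = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)[0]
face = gray[y:y + h, x:x + w]

# 2) Detect the eyes inside the face crop
eyes = eye_cascade.detectMultiScale(face)
if len(eyes) >= 2:
    # centers of the first two detected eyes, sorted left-to-right
    (ex1, ey1, ew1, eh1), (ex2, ey2, ew2, eh2) = sorted(eyes[:2], key=lambda e: e[0])
    left = (ex1 + ew1 / 2.0, ey1 + eh1 / 2.0)
    right = (ex2 + ew2 / 2.0, ey2 + eh2 / 2.0)

    # 3) Rotate so the line between the eyes is horizontal
    angle = np.degrees(np.arctan2(right[1] - left[1], right[0] - left[0]))
    center = (face.shape[1] / 2.0, face.shape[0] / 2.0)
    M = cv2.getRotationMatrix2D(center, angle, 1.0)
    face = cv2.warpAffine(face, M, (face.shape[1], face.shape[0]))

aligned = cv2.equalizeHist(cv2.resize(face, (128, 128)))
```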
Extracting features such as SIFT and MSER generates an accuracy of between 73% and 76%. After some additional research I've come across this paper using Fisherfaces. And the fact that OpenCV now has the ability to create Fisherface recognizers and train them is great and works fantastically, achieving the accuracy promised by the paper on the Yale datasets.
The complication of the problem is that in my case I don't have a database with several images of the same person to train the recognizer on. All I have is a single image corresponding to a single person, and given another image I want to understand whether this is the same person or not.
So what I am interested in knowing is:
Has anyone tried anything of the sort? What are some papers/methods/libraries that I should look into?
Do you have any suggestions on how to tackle the problem?
Since you have only one image, you can give this method using dlib a try. I have used 3-4 images per person and it gives good results.
Detect face (sample_face)
Get face descriptor (128 D vector) using dlib compute_face_descriptor (Check link)
Get the new picture in which you want to recognise the face
Detect the face and compute its descriptor (let's call it test_face).
Compute the Euclidean distance between the test_face descriptor and all sample_face descriptors.
Assign test_face the class (person name) with the smallest Euclidean distance.
Give this a whirl, you can play with face aligning if you start getting good results.
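A minimal sketch of those steps with dlib's Python API; the two model files are the standard downloads from dlib.net, but their paths and the image names here are assumptions:

```python
import dlib
import numpy as np

# Standard dlib models (downloaded separately from dlib.net); paths are assumptions.
detector = dlib.get_frontal_face_detector()
shape_predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
face_encoder = dlib.face_recognition_model_v1("dlib_face_recognition_resnet_model_v1.dat")

def descriptor(path):
    """Detect the face in an image and return its 128-D descriptor."""
    img = dlib.load_rgb_image(path)
    det = detector(img, 1)[0]                      # assume one face per image
    shape = shape_predictor(img, det)
    return np.array(face_encoder.compute_face_descriptor(img, shape))

# hypothetical gallery: one reference image per person
gallery = {"alice": descriptor("alice.jpg"), "bob": descriptor("bob.jpg")}
test = descriptor("unknown.jpg")

# nearest neighbour by Euclidean distance; ~0.6 is the commonly cited dlib threshold
name, dist = min(((n, np.linalg.norm(test - d)) for n, d in gallery.items()), key=lambda t: t[1])
print(name if dist < 0.6 else "unknown", dist)
```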
This is one of the hot topics in the computer vision area. There are many kinds of solutions available to handle what you have described.
But I suggest looking at OpenFace, which has very high accuracy. There is an implementation of that project on GitHub.
Thanks
You need to understand that machine learning doesn't work that way; there is intensive training carried out before your model can give good results.
With a single image of a person you just cannot predict that it's the same person, because you need to train your model over different images of the person under different light intensities, angles and many other varying scenarios.
Still, I would suggest trying this link:
http://hanzratech.in/2015/02/03/face-recognition-using-opencv.html
You may at least find some match for the image.
So what I am interested in knowing is: Has anyone tried anything of the sort?
Yes. This is 2017 and facial recognition has been researched for decades.
What are some papers/methods/libraries that I should look into?
Anything Google throws at you when searching "single image/sample face recognition".
Do you have any suggestions on how to tackle the problem?
See above
Extracting features such as SIFT and MSER generates an accuracy of between 73% and 76%.
I doubt humans, whose facial recognition is unmatched, perform much better with only one image as reference. I mean, I couldn't tell for sure if that's Brad Pitt or just a look-alike, and I have seen him in hundreds of pictures and hours of movies...

Compare Images By Color Using a Genetic Algorithm

I am in a serious problem right now: I need to compare images of flowers (carnations) using a genetic algorithm, and the program must determine which variety the flower belongs to (so far I am using 15 different varieties). The thing is, I am having difficulties constructing the chromosome. Right now I am only analysing the HSV of each image; I take every channel and calculate its mean (n = 255), and after that I calculate the correlation between the HS, HV and SV channel pairs. I expected that the means would be enough to locate any new flower next to the cluster of flowers of the variety it belongs to (by the way, I have a database of all the flowers used for training purposes) by calculating the distance between the flower's mean and the centroids of each cluster, probably using the correlations for adjustment, but that distance is usually smaller to a different variety than the one it should be. Is there a way to classify these flowers using ONLY colours (I've read of applications that use texture, but that's way out of my league), especially using a genetic algorithm (I know neural networks are more appropriate for this kind of analysis, but that's what the teacher asked)? Thank you very much. By the way, I am working in OpenCV, don't know if it's relevant. PS: Excuse my English if I made any mistakes; it's not my native language.
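For what it's worth, here is a minimal sketch of the colour feature described above (per-channel HSV means plus the HS, HV and SV correlations) and the centroid-distance classification, with hypothetical file names; the genetic-algorithm part, e.g. evolving per-feature weights, is left out:

```python
import cv2
import numpy as np

def color_features(path):
    """Per-channel HSV means plus the HS, HV and SV channel correlations."""
    img = cv2.imread(path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    h, s, v = [c.ravel().astype(np.float64) for c in cv2.split(hsv)]
    means = [h.mean(), s.mean(), v.mean()]
    corrs = [np.corrcoef(h, s)[0, 1], np.corrcoef(h, v)[0, 1], np.corrcoef(s, v)[0, 1]]
    return np.array(means + corrs)

# hypothetical database: variety name -> list of training image paths
database = {"variety_a": ["a1.jpg", "a2.jpg"], "variety_b": ["b1.jpg", "b2.jpg"]}
centroids = {name: np.mean([color_features(p) for p in paths], axis=0)
             for name, paths in database.items()}

new_flower = color_features("unknown.jpg")
best = min(centroids, key=lambda name: np.linalg.norm(new_flower - centroids[name]))
print(best)
```

One thing the sketch makes visible is that the channel means (roughly 0-255) and the correlations (-1 to 1) live on very different scales, so an unweighted Euclidean distance will be dominated by the means unless the features are normalized or weighted.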

Image classification/recognition open source library

I have a set of reference images (200) and a set of photos of those images (tens of thousands). I have to classify each photo in a semi-automated way. Which algorithm and open source library would you advise me to use for this task? The best thing for me would be to have a similarity measure between the photo and the reference images, so that I would show to a human operator the images ordered from the most similar to the least one, to make her work easier.
To give a little more context, the reference images are branded packages, and the photos are of the same packages, but with all kinds of noises: reflections from the flash, low light, imperfect perspective, etc. The photos are already (manually) segmented: only the package is visible.
Back in my days with image recognition (like 15 years ago) I would have probably tried to train a neural network with the reference images, but I wonder if now there are better ways to do this.
I recommend that you use Python, and use the NumPy/SciPy libraries for your numerical work. Some helpful libraries for handling images are the Mahotas library and the scikits.image library.
In addition, you will want to use scikits.learn, which includes a Python wrapper for libsvm, a very standard SVM implementation.
The hard part is choosing your descriptor. The descriptor will be the feature you compute from each image, intended to compute a similarity distance with the set of reference images. A good set of things to try would be Histogram of Oriented Gradients, SIFT features, and color histograms, and play around with various ways of binning the different parts of the image and concatenating such descriptors together.
Next, set aside some of your data for training. For these data, you have to manually label them according to the true reference image they belong to. You can feed these labels into built-in functions in scikits.learn and it can train a multiclass SVM to recognize your images.
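A minimal sketch of that training step under stated assumptions: the library is used under its current name scikit-learn, a plain colour histogram stands in for the descriptor, and the labelled file list is hypothetical:

```python
import cv2
import numpy as np
from sklearn.svm import SVC  # scikits.learn is now called scikit-learn

def color_histogram(path, bins=8):
    """Simple descriptor: a normalized 3-D colour histogram, flattened."""
    img = cv2.imread(path)
    hist = cv2.calcHist([img], [0, 1, 2], None, [bins] * 3, [0, 256] * 3)
    return cv2.normalize(hist, hist).ravel()

# hypothetical manually labelled training data: (photo path, index of its reference image)
training = [("photo_001.jpg", 0), ("photo_002.jpg", 0), ("photo_003.jpg", 1)]

X = np.array([color_histogram(p) for p, _ in training])
y = np.array([label for _, label in training])

clf = SVC(kernel="rbf", probability=True).fit(X, y)

# Rank the reference classes for a new photo so the operator sees the most likely first.
probs = clf.predict_proba(np.array([color_histogram("new_photo.jpg")]))[0]
ranking = np.argsort(probs)[::-1]
print(ranking[:5])
```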
After that, you may want to look at MPI4Py, an implementation of MPI in Python, to take advantage of multiprocessors when doing the large descriptor computation and classification of the tens of thousands of remaining images.
The task you describe is very difficult and solving it with high accuracy could easily lead to a research-level publication in the field of computer vision. I hope I've given you some starting points: searching any of the above concepts on Google will hit on useful research papers and more details about how to use the various libraries.
The best thing for me would be to have a similarity measure between the photo and the reference images, so that I would show to a human operator the images ordered from the most similar to the least one, to make her work easier.
One way people do this is with the so-called "Earth mover's distance". Briefly, one imagines each pixel in an image as a stack of rocks with height corresponding to the pixel value and defines the distance between two images as the minimal amount of work needed to transfer one arrangement of rocks into the other.
Algorithms for this are a current research topic. Here's some MATLAB code for one: http://www.cs.huji.ac.il/~ofirpele/FastEMD/code/ . It looks like they have a Java version as well. Here's a link to the original paper and C code: http://ai.stanford.edu/~rubner/emd/default.htm
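OpenCV also ships an EMD implementation (cv2.EMD); here is a minimal sketch that compares two grayscale histograms treated as 1-D signatures, with hypothetical image paths:

```python
import cv2
import numpy as np

def signature(path, bins=32):
    """Grayscale histogram as an EMD signature: rows of (weight, bin position)."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    hist = cv2.calcHist([img], [0], None, [bins], [0, 256]).ravel()
    hist = hist / hist.sum()                        # normalize the weights
    positions = np.arange(bins, dtype=np.float32)
    return np.column_stack([hist, positions]).astype(np.float32)

# hypothetical reference image and photo
sig_ref = signature("reference_01.png")
sig_photo = signature("photo_0001.jpg")

distance, _, _ = cv2.EMD(sig_ref, sig_photo, cv2.DIST_L2)
print(distance)  # smaller -> more similar
```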
Try RapidMiner (one of the most widely used data-mining platforms, http://rapid-i.com) with IMMI (the Image Mining Extension, http://www.burgsys.com/mumi-image-mining-community.php), AGPL licence.
It currently implements several similarity measurement methods (not only trivial pixel-by-pixel comparison). The similarity measures can be input for a learning algorithm (e.g. neural network, KNN, SVM, ...) and it can be trained in order to give better performance. Some information about the methods is given in this paper:
http://splab.cz/wp-content/uploads/2012/07/artery_detection.pdf
Nowadays, deep-learning-based frameworks like Torch, TensorFlow, Theano and Keras are the best open-source tools/libraries for object classification/recognition tasks.

machine learning - svm feature fusion techique

For my final thesis I am trying to build a 3D face recognition system by combining color and depth information. The first step I did was to realign the data head to a given model head using the iterative closest point algorithm. For the detection step I was thinking about using libsvm, but I don't understand how to combine the depth and the color information into one feature vector. They are dependent pieces of information (each point consists of color (RGB), depth information and also scan quality). What do you suggest I do? Something like weighting?
Edit:
Last night I read an article about SURF/SIFT features and I would like to use them! Could it work? The concept would be the following: extract these features from the color image and from the depth image (range image), and use each feature as a single feature vector for the SVM?
Concatenation is indeed a possibility. However, as you are working on 3d face recognition you should have some strategy as to how you go about it. Rotation and translation of faces will be hard to recognize using a "straightforward" approach.
You should decide whether you attempt to perform a detection of the face as a whole, or of sub-features. You could attempt to detect rotation by finding some core features (eyes, nose, etc).
Also, remember that SVMs are inherently binary (i.e. they separate between two classes). Depending on your exact application you will very likely have to employ some multi-class strategy (one-against-all or one-against-one).
I would recommend doing some literature research to see how others have attacked the problem (a google search will be a good start).
It sounds simple, but you can simply concatenate the two vectors into one. Many researchers do this.
What you have arrived at is an important open problem. Yes, there are some ways to handle it, as mentioned here by Eamorr. For example, you can concatenate and then do PCA (or some nonlinear dimensionality reduction method), as sketched below. But it is kind of hard to defend the practicality of doing so, considering that PCA takes O(n^3) time in the number of features. This alone might be unreasonable for data in vision that may have thousands of features.
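A minimal sketch of that concatenate-then-reduce idea with scikit-learn; the descriptor dimensions, sample count and number of components are assumptions and the data is synthetic:

```python
import numpy as np
from sklearn.decomposition import PCA

# hypothetical per-sample descriptors: 128-D color features and 128-D depth features
rng = np.random.default_rng(0)
color_feats = rng.random((200, 128))
depth_feats = rng.random((200, 128))

combined = np.hstack([color_feats, depth_feats])   # simple concatenation -> 256-D

pca = PCA(n_components=32)                          # reduce to a more manageable dimension
reduced = pca.fit_transform(combined)
print(reduced.shape, pca.explained_variance_ratio_.sum())
```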
As mentioned by others, the easiest approach is to simply combine the two sets of features into one.
An SVM is characterized by the normal to the maximum-margin hyperplane, whose components specify the weights/importance of the features, such that higher absolute values have a larger impact on the decision function. Thus the SVM assigns a weight to each feature all on its own.
In order for this to work, obviously you would have to normalize all the attributes to have the same scale (say, transform all features to be in the range [-1, 1] or [0, 1]).
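A minimal sketch of that approach with scikit-learn: concatenate the two descriptor sets, rescale every attribute to [0, 1], and let a linear SVM's weight vector decide how much each feature matters. All data below is synthetic and the shapes are assumptions:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import LinearSVC

# hypothetical descriptors for 200 training faces with binary labels
rng = np.random.default_rng(0)
color_feats = rng.random((200, 128))
depth_feats = rng.random((200, 64))
labels = rng.integers(0, 2, size=200)

X = np.hstack([color_feats, depth_feats])   # simple concatenation of both modalities

model = make_pipeline(MinMaxScaler(feature_range=(0, 1)),  # put all attributes on the same scale
                      LinearSVC())
model.fit(X, labels)

# The learned weight vector: larger absolute values = more influential features
weights = model.named_steps["linearsvc"].coef_.ravel()
print(weights[:5])
```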
