Face Recognition using Kinect - opencv

I went through the Kinect SDK and Toolkit provided by Microsoft. I tested the Face Tracking sample and it worked successfully. But how do I recognize the faces? I know the basics of OpenCV (VS2010). Are there any Kinect libraries for face recognition? If not, what are the possible solutions? Are there any tutorials available for face recognition using the Kinect?

I've been working on this myself. At first I just used the Kinect as a webcam and passed the data into a recognizer modeled after this code (which uses Emgu CV to do PCA):
http://www.codeproject.com/Articles/239849/Multiple-face-detection-and-recognition-in-real-ti
While that worked OK, I thought I could do better since the Kinect has such awesome face tracking. I ended up using the Kinect to find the face boundaries, crop the face region, and pass it into that library for recognition. I've cleaned up the code and put it out on GitHub; hopefully it'll help someone else:
https://github.com/mrosack/Sacknet.KinectFacialRecognition
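In case it helps, here is a rough sketch of the crop-then-recognize idea in Python with OpenCV (not the C# from the library above), assuming you already have a face rectangle from the Kinect face tracker; the coordinates and file name below are placeholders:

```python
import cv2

# Hypothetical face rectangle returned by the Kinect face tracker
# (x, y, width, height in color-frame coordinates): placeholder values.
x, y, w, h = 220, 140, 180, 180

frame = cv2.imread("color_frame.png")           # one saved Kinect color frame (placeholder path)
face = frame[y:y + h, x:x + w]                  # crop to the tracked face boundary
face = cv2.cvtColor(face, cv2.COLOR_BGR2GRAY)   # recognizers usually expect grayscale
face = cv2.resize(face, (100, 100))             # normalize size before PCA/Eigenfaces
face = cv2.equalizeHist(face)                   # reduce lighting variation
# 'face' is now ready to be passed to a PCA/Eigenface recognizer for training or prediction.
```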

I've found a project which could be a good source for you - http://code.google.com/p/i-recognize-you/ - but unfortunately (for you) its homepage is not in English. The most important parts:
- the project (with source code) is at http://code.google.com/p/i-recognize-you/downloads/list
- in the bibliography the author mentioned this site: http://www.shervinemami.info/faceRecognition.html. This seems to be a good starting point for you.

There is no built-in functionality for the Kinect that provides face recognition. I'm not aware of any tutorials out there that do it, but I'm sure someone has tried. It is on my short list; hopefully time will allow soon.
I would try saving the face tracking information and doing a comparison with that for recognition. You would have a "setup" function that asks the user to stare at the Kinect and saves the points the face tracker returns. When you wish to recognize a face, the user looks at the screen and you compare the face tracker points against a database of faces. This is roughly how the Xbox does it.
The big trick is confidence levels. Numbers will not come back exactly as they did previously, so you will need to include buffers of values for each feature -- the code would then come back with "I'm 93% sure this is Bob".
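To make that idea concrete, here is a rough Python/numpy sketch of the comparison, assuming you have already saved each person's face-tracker points during the setup step; the file names and tolerance are made up and would need tuning:

```python
import numpy as np

# Hypothetical database: person name -> array of face-tracker points
# captured during the "setup" step (placeholder files).
known_faces = {
    "Bob":   np.load("bob_points.npy"),
    "Alice": np.load("alice_points.npy"),
}
TOLERANCE = 5.0  # per-point buffer, in the face tracker's coordinate units (needs tuning)

def recognize(current_points):
    """Compare live face-tracker points against each stored face and return
    (best_name, confidence), where confidence is the fraction of points that
    fall inside the tolerance buffer (e.g. 0.93 -> "93% sure this is Bob")."""
    best_name, best_conf = None, 0.0
    for name, stored in known_faces.items():
        distances = np.linalg.norm(current_points - stored, axis=1)
        confidence = float(np.mean(distances < TOLERANCE))
        if confidence > best_conf:
            best_name, best_conf = name, confidence
    return best_name, best_conf
```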

Related

To begin traffic light recognition

My partner and I decided to implement a traffic light recognition program as a student project.
But we are absolute beginners in computer vision and have no idea how to start. (All we know is how to use OpenCV.)
Should we first learn image recognition, or just start with object tracking?
Our goal is to recognize traffic lights in a video, not just in a single image.
In my opinion, you should take a serious computer vision course before going deeper.
A video is just a sequence of images, so you can use OpenCV to read each frame and process it.
For your current project, simple object detection using HOG features should be more than enough.
There's a tutorial at http://www.hackevolve.com/create-your-own-object-detector/ . It's easy to understand and the source code is available, so you can move quickly.
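Not the tutorial's code, but a minimal sketch of the HOG-plus-linear-SVM idea in Python/OpenCV; the patch size, file names, and tiny training set are placeholders:

```python
import cv2
import numpy as np

# HOG over fixed-size patches: winSize, blockSize, blockStride, cellSize, nbins.
hog = cv2.HOGDescriptor((32, 64), (16, 16), (8, 8), (8, 8), 9)

def features(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, (32, 64))          # match the HOG window size
    return hog.compute(img).flatten()

pos = [features(p) for p in ["light_01.png", "light_02.png"]]   # traffic-light crops
neg = [features(p) for p in ["bg_01.png", "bg_02.png"]]         # background crops
X = np.array(pos + neg, dtype=np.float32)
y = np.array([1] * len(pos) + [-1] * len(neg), dtype=np.int32)

svm = cv2.ml.SVM_create()
svm.setType(cv2.ml.SVM_C_SVC)
svm.setKernel(cv2.ml.SVM_LINEAR)
svm.train(X, cv2.ml.ROW_SAMPLE, y)

# Score a new patch: label 1 means "looks like a traffic light".
label = svm.predict(np.array([features("candidate.png")], dtype=np.float32))[1]
print(label)
```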
Good luck.

3D Object tracking detection using Kinect

I am working on identifying an object with a Kinect sensor in order to get the object's x, y, z coordinates.
I have been trying to find related information but haven't been able to find much. I have seen the videos as well, but nobody shares the details or any sample code.
This is what I want to achieve: https://www.youtube.com/watch?v=nw3yix3XomY
A few people have probably asked the same question, but as I am new to the Kinect and these libraries, I need a little more guidance.
I read somewhere that object detection is not possible using Kinect v1 alone; we need to use third-party libraries like OpenCV or the Point Cloud Library (PCL).
Can somebody explain how exactly, even with third-party libraries, I can identify an object via a Kinect sensor?
It would be really helpful.
Thank you.
As the author of the video you linked stated in the comments, following this PCL tutorial will help you. As you have already found out, this may not be possible using the standalone SDK; relying on PCL will keep you from reinventing the wheel.
The idea there is to:
1) Downsample the cloud to have less data to deal with in the next steps (this also reduces noise a bit).
2) Identify keypoints/features (i.e., points, areas, or textures that remain somewhat invariant under certain transformations).
3) Compute the keypoint descriptors, mathematical representations of these features.
4) For each scene keypoint descriptor, find the nearest neighbor in the model keypoint descriptor cloud and add it to the correspondence vector.
5) Perform clustering on the correspondences (correspondence grouping) and detect the model in the scene.
The software in the tutorial requires the user to manually feed in the model and scene files; it does not work on a live feed the way the video you linked does.
The process should be pretty similar though. I'm not sure how cpu-intensive the detection is, so it might require additional performance tweaking.
Once you have frame-by-frame detection in place, you could start thinking about actually tracking an object across the frames. But that's another topic.
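For reference, here is a rough sketch of steps 1-4 in Python using Open3D instead of PCL (the PCL tutorial does the same pipeline in C++); the file names, voxel size, and matching threshold are placeholders, and the o3d.pipelines module path assumes a recent Open3D release:

```python
import numpy as np
import open3d as o3d                      # stand-in for PCL here; the tutorial follows the same steps
from scipy.spatial import cKDTree

def preprocess(pcd, voxel=0.01):
    """Steps 1-3: downsample, estimate normals, compute FPFH descriptors."""
    down = pcd.voxel_down_sample(voxel)
    down.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
    return down, np.asarray(fpfh.data).T           # descriptors as an (N, 33) array

model = o3d.io.read_point_cloud("model.pcd")        # placeholder file names
scene = o3d.io.read_point_cloud("scene.pcd")
model_down, model_desc = preprocess(model)
scene_down, scene_desc = preprocess(scene)

# Step 4: for every scene descriptor, find its nearest neighbor among the model descriptors.
dist, idx = cKDTree(model_desc).query(scene_desc)
correspondences = [(s, m) for s, (d, m) in enumerate(zip(dist, idx)) if d < 50.0]  # crude threshold
print(len(correspondences), "putative correspondences")
# Step 5 (correspondence grouping / pose verification) is what the PCL tutorial's
# Hough or geometric-consistency clustering does; it is omitted here.
```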

iPhoto face recognition algorithm

I'm working on a project in which we need to be able to recognize faces using OpenCV. I train my database on photos and then give the program test photos of people who are in it. Recognition works well (80-90%). But if I give the program a photo of a person who was not used to train the database, it still finds a match in the database with a terribly low distance. At the same time, Apple iPhoto works well with all photos. Does anyone know what algorithm they use to recognize faces, or has anyone had this problem? Help please.
P.S. Tested algorithms: LBPHFaceRecognizer, FisherFaceRecognizer, EigenFaceRecognizer.
You mention iPhoto so I'm going to assume you're using OS X or iOS. If so, you may want to try Apple's built-in face detection.
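If you want to stay with OpenCV instead, the usual fix for the "unknown person still matches" problem is to threshold the distance the recognizer returns. A minimal sketch with LBPH (requires the opencv-contrib package for cv2.face; file names, labels, and the threshold value are placeholders that must be tuned on people who are NOT in the training set):

```python
import cv2
import numpy as np

# Placeholder training data: (image file, person label).
train_files = [("bob_1.png", 0), ("bob_2.png", 0), ("alice_1.png", 1)]
faces = [cv2.imread(f, cv2.IMREAD_GRAYSCALE) for f, _ in train_files]
labels = np.array([lbl for _, lbl in train_files])

recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.train(faces, labels)

test = cv2.imread("unknown.png", cv2.IMREAD_GRAYSCALE)
label, distance = recognizer.predict(test)   # lower distance = better match
DISTANCE_LIMIT = 80.0                        # placeholder; tune on a validation set with unknowns
if distance > DISTANCE_LIMIT:
    print("Not in the database")
else:
    print("Matched label", label, "at distance", distance)
```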

Emgu Cv Motion Detection

I'm working on hand detection using Emgu CV. I have successfully detected skin-colored objects in a live video feed. Within those skin-colored regions I want to track only the moving hand. Please tell me how to achieve this without degrading performance. Code or a step-by-step procedure would be helpful.
Is there a good reference ebook on Emgu CV, or any other material with code snippets?
You actually need to perform a number of steps to do it.
1) You need to find the hand. The best way to do that is with Haar cascades;
you can find more about it here: http://www.360doc.com/content/11/1220/16/5087210_173660914.shtml
2) Then use absolute subtraction between the current frame and the previous frame to find the moving parts of the video.
For absolute subtraction you can check this link out.
After detecting the hand you can use the mean shift tracking algorithm to track it. There is a good implementation in Accord.NET along with an example you can use to learn how to use it. As for the Viola-Jones algorithm, there is a HandCascade.xml file from Nikolas Markou, but when I tried it the performance wasn't good at all, and there are similar complaints from other people using that Haar cascade.
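Here is a rough sketch of step 2 plus the mean shift part, written with the OpenCV Python bindings (the Emgu CV calls map almost one-to-one); the initial tracking window is a placeholder that you would seed from your skin/Haar detection:

```python
import cv2

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
track_window = (200, 150, 120, 120)          # x, y, w, h: seed this from your skin/Haar detection
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Step 2: absolute difference between consecutive frames highlights motion.
    motion = cv2.absdiff(gray, prev_gray)
    _, motion = cv2.threshold(motion, 25, 255, cv2.THRESH_BINARY)
    prev_gray = gray

    # Mean shift moves the window toward the densest motion region.
    _, track_window = cv2.meanShift(motion, track_window, term)
    x, y, w, h = track_window
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(30) == 27:                # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```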
As for a reference book on Emgu CV, to the best of my knowledge there is only one so far:
Emgu CV Essentials
http://www.amazon.com/Emgu-CV-Essentials-Shin-Shi-ebook/dp/B00GOMTTHI

Face Recognition in OpenCV

I was trying to build a basic face recognition system (PCA/Eigenfaces) using OpenCV 2.2 (from Willow Garage). I understand from many of the previous posts on face recognition that there is no standard open-source library that provides complete face recognition for you.
Instead, I would like to know if someone has used (and integrated) these functions:
icvCalcCovarMatrixEx_8u32fR
icvCalcEigenObjects_8u32fR
icvEigenProjection_8u32fR
etc. in eigenobjects.cpp to form a face recognition system, because these functions, along with cvSvd, seem to provide much of the required functionality.
I am having a tough time figuring out how to do so since I am new to OpenCV.
Update: OpenCV 2.4.2 now comes with the new cv::FaceRecognizer class. Please see the detailed documentation at:
http://docs.opencv.org/trunk/tutorial_face_main.html
I worked on a computer vision project to recognize facial features. Most people don't understand the difference between biometrics and facial recognition: biometrics is mainly based on histogram density matching, while facial recognition implements this plus vector-based feature recognition on top of the density information. If you are pursuing CV and facial recognition, this is the library you want to use: www.betaface.com . Oleksander is awesome and based out of Germany, and he answers questions, which is nice.
With OpenCV it's easy to get started with face detection. It comes with predefined classifiers for feature detection, including face detection.
You might already know this one: OpenCV Wiki, FaceDetection
The important functions in this example are cvLoad and cvHaarDetectObjects. The first one loads the classifier and the second one applies it to an image.
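For newer OpenCV versions, the equivalent of cvLoad plus cvHaarDetectObjects is the CascadeClassifier API. A minimal sketch in Python, assuming the opencv-python package (which ships the cascade files under cv2.data) and a placeholder image path:

```python
import cv2

# Load the bundled frontal-face Haar cascade and apply it to an image.
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("group_photo.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(30, 30))

for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
cv2.imwrite("faces_marked.jpg", img)
```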
The standard classifiers work pretty well. Of course you can train your own classifiers, if the standard ones don't fit your purpose.
As you said there are a lot of algorithms for face detection. Some of them might provide better results, but OpenCV is definitively a good start.
