Facial expression detection - OpenCV

I am currently working on a project where I have to extract the facial expression of a user (only one user at a time) like sad or happy.
There are a lot of programs/APIs for face detection, but I did not find any that do automatic expression recognition.
The best possibility I found so far:
- Using Luxand FaceSDK, which gives me access to 66 different points within the face, so I would still have to map them to expressions manually.
I used OpenCV for face detection earlier, and it worked great, so if anyone has tips on how to do this with OpenCV, that would be great!
Any programming language is welcome (Java preferred).
Some user on an OpenCV board suggested looking into AAM (active appearance models) and ASM (active shape models), but all I found were papers.

You are looking for machine learning solutions. FaceSDK looks like a good feature extractor. I don't think that there will be an available library to solve your specific problem. Your best bet is to:
choose a machine learning technique with a Java implementation (e.g. an SVM for classification, possibly with PCA for dimensionality reduction)
take a series of photos and label them yourself with the target expression (happy or sad)
train your model and test it
This involves some knowledge about machine learning.
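As a rough sketch of those three steps (shown in Python with scikit-learn for brevity; the same idea carries over to Java SVM libraries), assuming you export the 66 FaceSDK landmark coordinates per photo into a CSV that you label yourself:

```python
# Sketch: classify expressions from facial landmarks with an SVM.
# Assumes a hypothetical landmarks.csv where each row is
# x1,y1,...,x66,y66,label  (label: 0 = sad, 1 = happy), exported from FaceSDK.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

data = np.loadtxt("landmarks.csv", delimiter=",")
X, y = data[:, :-1], data[:, -1]

# Normalising the coordinates (e.g. relative to the face bounding box) matters
# more than the choice of classifier; StandardScaler is a simple stand-in here.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```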

Related

Suspicious facial expression and foreign object recognition using machine learning and image processing

I am stuck with a project on detecting suspicious facial expressions along with detection of foreign objects (e.g. guns, metal rods or anything). I do not know much about ML or image processing, and I need to complete the project as soon as possible. It would be helpful if anyone could point me in the right direction.
How can I manage a dataset?
Which type of code should I follow?
How do I present the final system?
I know it is a lot to ask but any amount of help is appreciated.
I have tried to train a model using transfer learning, following this tutorial on YouTube:
https://www.youtube.com/watch?v=avv9GQ3b6Qg
The tutorial uses MobileNet as the model and a known dataset with 7 classes (Angry, Disgust, Fear, Happy, Neutral, Sad, Surprised). I was able to train the model successfully and have detected faces classified into these 7 emotions.
How do I further develop it to achieve what I want?
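For reference, here is a minimal sketch of the kind of MobileNet transfer-learning setup such tutorials use (Keras/Python; the folder layout, image size and epoch count are illustrative assumptions, not taken from the video):

```python
# Sketch: MobileNet backbone frozen, new 7-class emotion head trained on top.
# Assumes face crops sorted into train/<Angry|Disgust|...|Surprised>/ folders.
import tensorflow as tf

IMG_SIZE = (224, 224)  # MobileNet's default input size
train_ds = tf.keras.utils.image_dataset_from_directory(
    "train", image_size=IMG_SIZE, batch_size=32)

base = tf.keras.applications.MobileNet(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pretrained backbone

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNet expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(7, activation="softmax"),  # 7 emotion classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```

Freezing the backbone keeps training cheap on a small dataset; unfreezing the top few layers later for fine-tuning is the usual next step.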

OpenCV vs Mahout for Computer Vision based Machine Learning?

For some time, I have been using OpenCV. It has satisfied all my needs for feature extraction, matching, clustering (k-means so far) and classification (SVM). Recently, I came across Apache Mahout, but most of the machine learning algorithms are already available in OpenCV as well. Are there any advantages to using Mahout over OpenCV if the work relates to videos and images?
This question might be put on hold since it is opinion based. I still want to add a basic comparison.
OpenCV is capable of pretty much anything in vision and ML that has been researched or invented. The vision literature is built on it, and it develops along with the literature. Even newer ML algorithms, like TLD, which originated in MATLAB (http://www.tldvision.com/), can be implemented using OpenCV (http://gnebehay.github.io/OpenTLD/) with some effort.
Mahout is capable too, and it is specific to ML. It includes not only the well-known ML algorithms but also more specialized ones. Say you come across a paper called "Processing Apples with K-means Orientation Filtering". You can find OpenCV implementations of such papers all around the web; the actual algorithm might even be open source and developed using OpenCV. With OpenCV it might take 500 lines of code, whereas with Mahout the paper might already be implemented as a single method, making everything easier.
An example of this is canopy clustering (http://en.wikipedia.org/wiki/Canopy_clustering_algorithm), which is harder to implement using OpenCV right now.
Since you are going to work with image data sets, you will also need to learn about HIPI (the Hadoop Image Processing Interface).
To sum up, here is a simple pro-con table:
know-how (learning curve): OpenCV is easier, since you already know it; Mahout + HIPI will take more time.
examples: the literature and the vision community mostly use OpenCV, and open-source algorithms are usually written against its C++ API.
ml algorithms: Mahout is only about ML, whereas OpenCV is more generic; still, OpenCV includes the basic ML algorithms (see the k-means sketch below).
development: Mahout is easier to work with in terms of coding and probably time complexity (I am not sure about the latter, but I reckon it is).
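To illustrate the last two points: plain k-means, the kind of basic ML OpenCV does ship with, is a one-call affair. A minimal sketch (Python bindings, placeholder image path) clustering the pixel colours of an image:

```python
# Sketch: OpenCV's built-in k-means, clustering pixel colours of an image.
# "image.jpg" is a placeholder path.
import cv2
import numpy as np

img = cv2.imread("image.jpg")
samples = img.reshape(-1, 3).astype(np.float32)  # one row per pixel (B, G, R)

criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
K = 4
_, labels, centers = cv2.kmeans(samples, K, None, criteria, 5, cv2.KMEANS_RANDOM_CENTERS)

# Replace every pixel with its cluster centre to visualise the segmentation.
quantized = centers[labels.flatten()].astype(np.uint8).reshape(img.shape)
cv2.imwrite("quantized.jpg", quantized)
```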

Active Shape Models vs Active Appearance Models

I am implementing ASM/AAM using OpenCV for segmentation of face images (to be further used in face recognition). I am pretty much done with the canonical implementation of ASM (as per T. Cootes' papers), and the result I get is not ideal: it does not always converge, and when it does, some boundaries are not captured, which I believe is a problem in the modelling of the local structure, i.e. the gradient profile matching.
Now I am a bit unsure what to do next. ASM is a simpler and computationally less intensive algorithm than AAM. Should I continue improving ASM (say, by using 2D profiles rather than 1D profiles, or a different profile structure for different types of landmarks), or get my hands straight on AAM?
Edit: Also, which papers would you recommend that improve on the original work by T. Cootes? I know there are many of them, but maybe there are techniques that are considered canonical today?
You can find clarifications and an implemented AAM with 2D profiles in the book "Mastering OpenCV with Practical Computer Vision Projects" (Packt Publishing, 2012). A lot of the projects described in this book are open source and can be downloaded on GitHub. They are more advanced than T. Cootes' implementation.
I can say that AAM (for an existing implementation you can also look at VOSM) has good convergence (better than ASM) only if you train it on the same person (very good results, for example, on the FRANCK (Talking Face Video) sequence); in other cases ASM works better.
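To make the gradient-profile matching mentioned above concrete, here is a rough sketch (not Cootes' code; the profile length, normalisation and trained statistics are illustrative assumptions) of sampling a 1D derivative profile along a landmark normal and scoring a candidate position with a Mahalanobis distance:

```python
# Rough sketch of ASM local-structure matching with 1D gradient profiles.
# mean_profile and inv_cov are assumed to come from your training stage.
import numpy as np

def sample_profile(gray, point, normal, k=6):
    """Sample 2k+1 grey values along the unit normal at a landmark,
    then return the normalised derivative (gradient) profile."""
    offsets = np.arange(-k, k + 1)
    xs = np.clip((point[0] + offsets * normal[0]).round().astype(int), 0, gray.shape[1] - 1)
    ys = np.clip((point[1] + offsets * normal[1]).round().astype(int), 0, gray.shape[0] - 1)
    values = gray[ys, xs].astype(np.float64)
    grad = np.diff(values)
    norm = np.sum(np.abs(grad))
    return grad / norm if norm > 0 else grad

def mahalanobis_cost(profile, mean_profile, inv_cov):
    """Lower is better: distance of a sampled profile from the trained model."""
    d = profile - mean_profile
    return float(d @ inv_cov @ d)
```

During search, you would evaluate this cost at several candidate positions along the normal and move each landmark to the cheapest one before applying the shape-model constraints.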

Creating Semantic Maps using OpenCV + OpenSLAM?

I'm currently working on a project that aims to recognize objects in an indoor household type of environment and to roughly represent the location of these objects on a map. It is hoped that this can all be done using a single Kinect camera.
So far I have managed to implement a basic object recognition system using OpenCV's SURF library. I have followed and used techniques similar to those described in "OpenCV 2 Computer Vision Application Programming Cookbook".
I'm now slowly shifting focus to the mapping portion of this project. I have looked into RGBDSLAM as a method to create 3D maps and represent any objects found. However, I can't seem to find a way to do this. I have already asked a question about this at http://answers.ros.org/question/37595/create-semantic-maps-using-rgbdslam-octomaps/ with no luck so far.
I have also briefly researched GMapping and MonoSLAM but I'm finding it difficult to assess whether these are suitable since I've only just started learning about SLAM.
So, any advice on these SLAM techniques would be much appreciated!
I'm also open to alternatives to what I've talked about. If you know of any other methods of creating semantic maps of environment then please feel free to share!
Cheers.
I have used A. J. Davison's MonoSLAM method and it is only suitable for small environments like a desktop or a small room (using a fish-eye lens). Try PTAMM (by Dr. Robert Castle) instead; it is much more robust and the source code is free for academic use.
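For reference, a minimal sketch of the SURF keypoint detection and matching step the question describes (Python bindings; SURF is non-free, so it requires a contrib build that exposes cv2.xfeatures2d, and the image paths are placeholders):

```python
# Sketch: SURF keypoints + descriptor matching between an object template
# and a scene image, with Lowe's ratio test to keep distinctive matches.
import cv2

obj = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)
scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
kp1, des1 = surf.detectAndCompute(obj, None)
kp2, des2 = surf.detectAndCompute(scene, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]
print(len(good), "good matches")
```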

Face Recognition in OpenCV

I was trying to build a basic face recognition system (PCA/Eigenfaces) using OpenCV 2.2 (from Willow Garage). I understand from many of the previous posts on face recognition that there is no standard open-source library that provides complete face recognition for you.
Instead, I would like to know if someone has used the functions(and integrated them):
icvCalcCovarMatrixEx_8u32fR
icvCalcEigenObjects_8u32fR
icvEigenProjection_8u32fR
et al. in eigenobjects.cpp to form a face recognition system, since these functions, together with cvSvd, seem to provide much of the required functionality?
I am having a tough time figuring out how to do so, since I am new to OpenCV.
Update: OpenCV 2.4.2 now comes with the new cv::FaceRecognizer class. Please see the very detailed documentation at:
http://docs.opencv.org/trunk/tutorial_face_main.html
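A minimal sketch of that FaceRecognizer API through the Python bindings (in OpenCV 3+ the class moved into the contrib "face" module; the image paths and labels below are placeholders):

```python
# Sketch: Eigenfaces via the FaceRecognizer API (contrib "face" module).
# All training images must be equally sized grayscale face crops;
# labels are integer person IDs.
import cv2
import numpy as np

paths = ["person0_a.png", "person0_b.png", "person1_a.png"]  # placeholders
images = [cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in paths]
labels = np.array([0, 0, 1])

model = cv2.face.EigenFaceRecognizer_create()
model.train(images, labels)

probe = cv2.imread("unknown.png", cv2.IMREAD_GRAYSCALE)  # placeholder
label, confidence = model.predict(probe)
print("predicted label:", label, "distance:", confidence)
```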
I worked on a project using CV to recognize facial features. Most people don't understand the difference between biometrics and facial recognition: biometrics is mainly based on histogram density matching, while facial recognition builds on that and adds vector-based matching of features derived from the density. If you are pursuing CV and facial recognition, the library you want to use is www.betaface.com. Oleksander is awesome and based out of Germany, and he answers questions, which is nice.
With OpenCV it's easy to get started with face detection. It comes with some predefined sets for feature detection, including face detection.
You might already know this one: OpenCV Wiki, FaceDetection
The important functions in this example are cvLoad and cvHaarDetectObjects. The first one loads the classifier and the second one applies it to an image.
The standard classifiers work pretty well. Of course you can train your own classifiers, if the standard ones don't fit your purpose.
As you said, there are a lot of algorithms for face detection. Some of them might provide better results, but OpenCV is definitely a good start.
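A minimal sketch of that load-and-apply step with the current API (cv2.CascadeClassifier plays the role of the old cvLoad/cvHaarDetectObjects pair; the image path is a placeholder):

```python
# Sketch: face detection with one of OpenCV's bundled Haar cascades.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
img = cv2.imread("photo.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("faces.jpg", img)
```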
