I have been trying the OpenCV iOS sample to achieve facial emotion recognition.
I got the sample iOS project 'openCViOSFaceTrackingTutorial' from the link below.
https://github.com/egeorgiou/openCViOSFaceTrackingTutorial/tree/master/openCViOSFaceTrackingTutorial
This sample project uses face detection and works fine; it uses the 'haarcascade_frontalface_alt2.xml' trained model.
I want to use the same project, but with Haar cascades for other facial emotions such as Sad and Surprise. I have been searching for how to train a Haar cascade for emotions like Sad and Surprise, but couldn't find any clue.
Could someone advise me how to train a Haar cascade for emotions like Sad and Surprise to use in this sample OpenCV iOS project? Or is there a ready-made Haar cascade for such emotions that I could use with this iOS sample?
Thanks
You can read a tutorial on how to generate a Haar cascade file, but generating a Haar cascade for emotions is not an easy task.
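For reference, the usual training workflow uses OpenCV's `opencv_createsamples` and `opencv_traincascade` command-line tools (shipped with OpenCV up to 3.4; they were dropped from 4.x). The sketch below is illustrative only: all file names, counts, and sizes are placeholders, not values from the original post.

```shell
# Illustrative Haar cascade training workflow; paths and numbers are placeholders.
# Requires OpenCV <= 3.4 (opencv_traincascade was removed in OpenCV 4.x).

# 1. Collect positives (cropped "surprise" faces, listed with bounding boxes
#    in positives.txt) and negatives (background images, listed in negatives.txt).

# 2. Pack the positives into a .vec file:
opencv_createsamples -info positives.txt -vec surprise.vec -w 24 -h 24 -num 1000

# 3. Train the cascade (this can take hours to days):
opencv_traincascade -data surprise_cascade/ -vec surprise.vec -bg negatives.txt \
    -numPos 900 -numNeg 2000 -numStages 20 -w 24 -h 24 -featureType HAAR
```

Even with a good dataset, a single cascade rarely separates subtle expressions well, which is why the region-based approach below is usually more practical.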
I would suggest extracting the mouth and eyes from the face using Haar cascades and processing those rectangles to detect emotions. You can get Haar cascades for the mouth and eyes from here. The mouth and eyes are complicated features, so detection will not work well if you search the whole image; first find the frontal face, then detect the mouth and eyes within the face rectangle only.
There are open-source libraries available on GitHub for emotion detection. Although they are not for iOS, you can implement a similar algorithm on iOS.
Related
I'm working on an iOS application that involves object detection (in this case, the eye) using OpenCV. How can I create a Haar cascade that I can use in my iOS application? I've already seen this question, but it doesn't answer mine.
The source of the opencv_contrib face module, ~\face\data\cascades, comes with pre-trained detection cascades for the following features:
left and right ears (haarcascade_mcs_leftear.xml, haarcascade_mcs_rightear.xml)
left and right eyes (haarcascade_mcs_lefteye.xml, haarcascade_mcs_righteye.xml)
eye pair (haarcascade_mcs_eyepair_big.xml, haarcascade_mcs_eyepair_small.xml)
mouth (haarcascade_mcs_mouth.xml)
nose (haarcascade_mcs_nose.xml)
upper body (haarcascade_mcs_upperbody.xml)
You can use them in your code to build your iOS application. There are samples and tutorials in ~\face.
I am trying to extract the mouth (lips) from images. What I did is this: I first extract faces from the images, then try to detect the mouth using haarcascade_mcs_mouth.xml. However, it keeps detecting other parts of the face instead of the mouth. Is there any other way to detect the mouth in face images?
You can use some facial landmark detector such as Flandmark detector:
http://cmp.felk.cvut.cz/~uricamic/flandmark/
Or STASM:
http://www.milbo.users.sonic.net/stasm/
To correctly detect mouth regions, you can try more advanced methods like the following one:
X. Zhu, D. Ramanan. "Face Detection, Pose Estimation and Landmark Localization in the Wild." Computer Vision and Pattern Recognition (CVPR), Providence, Rhode Island, June 2012. (Project page), (PDF available online), (Source code)
Several real examples can be seen from here.
I'm building a system for emotion detection on an Android mobile phone. I'm using OpenCV's cascades (LBP or Haar) to find the face, eye, and mouth areas, etc. What I have observed so far is that accuracy isn't stable. There are situations where I can't find an eye, or I get "additional faces" in the background due to a very slight change of light. What I wanted to ask is:
1) Is a Haar cascade more accurate than LBP?
2) Is there any good method for increasing detection accuracy, such as searching for the face/eyes in a binarized image, applying an edge-detection filter, adjusting saturation, or anything else?
You can try the Microsoft API for face emotion detection. I am using it in my project and the results are good; try this link:
https://www.microsoft.com/cognitive-services/en-us/emotion-api
Sometimes Haar or LBP cascades will not give a good enough result for a face detection system. If you want better accuracy, I think you can try STASM,
which is based on OpenCV and uses Haar cascades to detect the face and landmarks. You can also try YOLO face detection.
If you want to build your own face detection system based just on Haar or LBP and still get good results, you may need to use the LBP cascade to find face candidates quickly and then train a CNN model to verify them; that can make your system detect faces in real time. As far as I know, SeetaFace uses this approach for real-time face detection.
I'm using cv::FaceRecognizer(EigenFaceRecognizer) to recognize my face.
I input 10 images of my face (cropped to just the face with no background, 70x70 pixels, PGM format) to train the recognizer.
Then I try to predict exactly the same photos I used in training, with the face CascadeClassifier and the recognizer. But none of the photos is recognized as me!
Is there anything wrong?
Yes, you are probably doing something wrong, or maybe your input photos are too similar.
You should start with one of the tutorials on using FaceRecognizer, such as in the OpenCV official tutorials or Chapter 8 of the book "Mastering OpenCV". And then to improve your recognition accuracy, follow the recommendations at "http://www.answers.opencv.org/question/15362/opencv-and-face-recognition/" and "http://www.answers.opencv.org/question/5651/face-recognition-with-opencv-24x-accuracy/".
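To see why a training image fed back in should always match itself, here is a minimal numpy sketch of the Eigenfaces idea behind cv::EigenFaceRecognizer (PCA on flattened faces, then nearest neighbour in the subspace; this is an illustration of the principle, not OpenCV's exact implementation). If your real pipeline fails on its own training images, the preprocessing at predict time (crop, size, grayscale conversion) almost certainly differs from what was used at training time.

```python
# Sketch of Eigenfaces: PCA on flattened training images, then
# nearest-neighbour classification in the reduced subspace.
import numpy as np

def train_eigenfaces(images, labels, num_components=5):
    X = np.stack([im.ravel().astype(np.float64) for im in images])
    mean = X.mean(axis=0)
    # SVD of the centred data: rows of vt are the principal axes (eigenfaces)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = vt[:num_components]
    projections = (X - mean) @ basis.T
    return {"mean": mean, "basis": basis,
            "projections": projections, "labels": list(labels)}

def predict(model, image):
    # Project the query with the SAME pipeline used at training time
    q = (image.ravel().astype(np.float64) - model["mean"]) @ model["basis"].T
    dists = np.linalg.norm(model["projections"] - q, axis=1)
    return model["labels"][int(np.argmin(dists))]
```

A training image projects to exactly its stored coordinates (distance zero), so it must be classified as itself; any mismatch in your app points at inconsistent preprocessing, not at the recognizer.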
And for further questions about OpenCV, you should post them on "answers.opencv.org" instead of StackOverflow, since that site has official support!
For my new iPhone app I like to use OpenCV for detecting face elements with the purpose of morphing faces. Does anybody know what elements need to be detected for this, and if it's even possible with OpenCV? Are there perhaps better alternatives?
You could use dlib to detect facial landmarks; it provides 68 points:
Here is a very useful project: https://github.com/zweigraf/face-landmarking-ios
OpenCV also has eye, ear, mouth, nose, and eye-pair detector cascades.