Eye blink detection when slightly moving/rotating photo - opencv

I implemented an eye blink detection solution using the research from this article:
http://www.iu.hio.no/~frodes/unitech10/011-Krolak/
I use a Haar eye classifier to identify the two eye regions, then use template matching on both eyes to detect a change in blink state. I also require that the face and eye regions remain fairly still. It works pretty well, except that I occasionally get false positives on photos if I move them slightly (particularly rotate/scale). Does anyone have any suggestions for eliminating such cases? I don't want to make the stillness requirement too strict, because that makes the live case unusable.
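For reference, a rough Python sketch of the kind of pipeline I mean (not my exact code; the cascade file name and the 0.7 threshold are just placeholders):

    import cv2

    # Rough sketch only: detect eye regions with the stock Haar eye cascade, then
    # compare each region against a template captured while the eye was open.
    eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

    def find_eyes(gray_frame):
        # Bounding boxes (x, y, w, h) of candidate eye regions.
        return eye_cascade.detectMultiScale(gray_frame, scaleFactor=1.1, minNeighbors=5)

    def eye_closed(gray_frame, eye_box, open_eye_template, threshold=0.7):
        # Match a stored "open eye" template against the current eye region;
        # a low normalized correlation suggests the eye has closed (a blink).
        # Assumes the template is no larger than the eye region.
        x, y, w, h = eye_box
        region = gray_frame[y:y + h, x:x + w]
        result = cv2.matchTemplate(region, open_eye_template, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, _ = cv2.minMaxLoc(result)
        return max_val < threshold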

I have implemented, with some success, two cascades: one that detects open eyes and one that detects closed eyes.
You can use the face detector and restrict the search to the eye region, then apply the "open eye" cascade. The good thing about that approach is that you can add slightly different poses and angles for the eyes to the training set. It worked really well for me.
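A rough Python sketch of that pipeline ("open_eye.xml" stands in for your own trained cascade; the face cascade ships with OpenCV):

    import cv2

    # Sketch of the two-cascade idea: find the face first, then run the "open eye"
    # cascade only inside the upper half of the face rectangle.
    face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    open_eye_cascade = cv2.CascadeClassifier("open_eye.xml")  # your custom-trained cascade

    def eyes_open(gray_frame):
        for (x, y, w, h) in face_cascade.detectMultiScale(gray_frame, 1.1, 5):
            # Eyes sit roughly in the upper half of the face box, so restricting
            # the search there cuts false positives and speeds things up.
            roi = gray_frame[y:y + h // 2, x:x + w]
            if len(open_eye_cascade.detectMultiScale(roi, 1.1, 5)) > 0:
                return True
        return False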

Related

Different ways of detecting smile

I would like to know more about different ways of detecting a smile in an image. As far as I know, there are many libraries that allow you to detect faces and smiles. The ones that I've tried are:
FaceSDK from Luxand
OpenCV
OpenIMAJ
Instead of just using them, I'm curious about how they work. I know that OpenCV and OpenIMAJ are based on Haar classifiers. I don't really follow how FaceSDK does face and smile detection, though.
I can imagine two different ways of doing smile detection:
Perform full emotion detection. You can find the eyes, nose, mouth and other features of the face and then compute the emotion from that information. If you get a "happy" emotion, you can assume that there is a smile (or something like that, e.g. just finding the mouth and checking the curve of the lower lip?).
Similar to Haar cascades, search the image and try to find an object similar to the one you are looking for (using many negative and positive samples). This one seems to be faster, but less trustworthy if not used with some "helpers" (a rough sketch of this route appears below).
Is there any other way? Do you guys have some articles on one of those ways?
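For reference, option 2 can be tried directly with the cascades that ship with OpenCV; a minimal sketch (parameters are ballpark values), searching for smiles only in the lower half of each detected face:

    import cv2

    # Minimal sketch of the cascade route: both cascade files ship with OpenCV.
    face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    smile_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_smile.xml")

    def detect_smiles(gray_image):
        smiles = []
        for (x, y, w, h) in face_cascade.detectMultiScale(gray_image, 1.1, 5):
            # The mouth sits in the lower half of the face, so search only there.
            lower_face = gray_image[y + h // 2:y + h, x:x + w]
            for (sx, sy, sw, sh) in smile_cascade.detectMultiScale(lower_face, 1.7, 20):
                # Map the smile box back to full-image coordinates.
                smiles.append((x + sx, y + h // 2 + sy, sw, sh))
        return smiles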

Image Processing - Determine if someone looking into camera

With image processing libraries like OpenCV you can determine whether there are faces in an image, or even check whether those faces are smiling.
Would it be possible to somehow determine whether a person is looking directly into the camera? Since it is hard even for the human eye to tell whether someone is looking into the camera or at a nearby point, I think this will be very tricky.
Does anyone agree?
Thanks
You can try using an eye detection program. I remember doing this a few years ago, and it wasn't very robust: when we tilted our heads slightly away from the camera, or closed our eyes, the eyes could not be detected.
In case that is not clear, what I really mean is that the face must point straight at the camera with the eyes open before the eyes can be detected. You could try doing something similar with a few tweaks here and there.
Off the top of my head: split the image into sections and use a different eye classifier for each ROI. For example, for the upper half of the image you could train a classifier for how eyes look when they are looking downwards, and for the lower half of the image a classifier for how eyes look when they are looking upwards. For the whole image, apply the normal eye detection in case the user moves their head around while looking at the camera.
But of course, this would rely on extremely strong classifiers and very clear images or video, depending on where the eye is looking, and it would make detection extremely slow even if the method is successful.
There may be other ideas you can explore too. It's slightly tricky, but not totally impossible. If OpenCV can't satisfy your needs, there are many other libraries (OpenGL? etc.) available. I wish you the best of luck!
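As a crude starting point, here is a sketch of the simpler version of that idea: require a frontal face detection with two open eyes inside it before deciding the person is roughly facing the camera. This is only a proxy, not real gaze estimation; both cascades ship with OpenCV.

    import cv2

    # Crude heuristic: the frontal-face cascade only fires on roughly frontal
    # faces, and the eye cascade only fires on open eyes, so requiring both is a
    # rough proxy for "facing the camera" (it says nothing about gaze direction).
    face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

    def roughly_facing_camera(gray_frame):
        for (x, y, w, h) in face_cascade.detectMultiScale(gray_frame, 1.1, 5):
            # Look for two eyes in the upper half of the face rectangle.
            roi = gray_frame[y:y + h // 2, x:x + w]
            if len(eye_cascade.detectMultiScale(roi, 1.1, 5)) >= 2:
                return True
        return False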

Closed eye detection opencv C++

I need to detect closed eyes only, and each eye separately. That means I need to tell whether the left eye is open or closed, and the same for the right eye.
I tried a few ways. One of them is to detect eyes with haarcascade_eye and haarcascade_eye_tree_eyeglasses separately and then compare the results: if both detect an eye, the eye is open; if one detects it and the other can't, the eye is closed. This trick was taken from this link:
http://tech.groups.yahoo.com/group/OpenCV/messages/87666?threaded=1&m=e&var=1&tidx=1
But it doesn't work as expected; the eye cascade detectors don't behave as described in the link. The closest results I got were with the cascades I mentioned above. Sometimes it gives the correct result, sometimes it doesn't, and I don't know why. Besides, this method can't tell which eye is open and which eye is closed.
Can someone help me solve this? At the very least I need a way to tell, accurately, that one of the eyes is closed, regardless of which one. Please help.
If you want to avoid training your own Haar cascade to detect a single eye, you can attempt simpler techniques such as pupil detection. If you fail to detect a black circle, the eye is closed. If you have a smallish region of interest, this probably works very well. Another option would be color histograms of the eye region, which may look pretty different for the open and closed state.
If you cannot predict with reasonable accuracy where the eyes can be found in the image, these approaches are doomed and your best shot is training your own cascade I think.
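A minimal sketch of the pupil idea, assuming you already have a grayscale eye ROI (the HoughCircles parameters are rough guesses you would need to tune):

    import cv2

    def eye_is_open(eye_roi_gray):
        # Pupil check: look for a dark circular blob in the eye region.
        # No circle found -> treat the eye as closed. All parameters are rough
        # guesses and depend heavily on ROI size and image quality.
        blurred = cv2.medianBlur(eye_roi_gray, 5)
        circles = cv2.HoughCircles(
            blurred,
            cv2.HOUGH_GRADIENT,
            dp=1,
            minDist=eye_roi_gray.shape[0],          # expect at most one pupil per ROI
            param1=50, param2=20,
            minRadius=3, maxRadius=eye_roi_gray.shape[0] // 3,
        )
        return circles is not None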

fingertip detection and tracking

I am working on a project detecting and tracking fingers. Though I find there are quite a lot of resources on this task, I haven't found an effective one yet :(.
So far I have thought of the following methods to detect hands:
1. Haar training. But firstly, we don't have a trained cascade (xml) like the one for face detection. Secondly, if we do the training ourselves, we don't have enough samples (I am still a college student).
2. Skin color detection in HSV space. I have tried this one, but the result has a lot of noise, so it doesn't let me continue to the fingertip detection.
3. Use HandVu. But I have heard that this lib is hard to set up and use on Windows...
So, in a word, can anyone give me any suggestions on how to detect hands effectively? (After that I may consider detecting fingertips.)
Thanks!!
Here is a pretty in-depth paper on finger segmentation using Zernike moments. Here is a good paper on using Zernike moments for image recognition as a basis for the first paper.
Can you explain more about your experimental setup? Are you trying to track fingers against a cluttered background, or a plain cardboard sheet?
Haar-like features perform very well for face detection (the Viola-Jones paper being a classic example); however, I would not recommend them for your task. Although they can be computed quickly using the integral image, they work well only within a cascaded AdaBoost classification framework.
For skin colour detection, it depends on your setup. As a first step you could try doing background subtraction: simply learn the distribution (histogram) of pixels for the foreground (i.e. the hand) and the background, and use these to do image segmentation.
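For instance, a minimal sketch of that segmentation step, using a hue-saturation histogram of a known hand patch and back-projection (the patch selection and the thresholds are assumptions you would tune for your setup):

    import cv2

    def skin_mask(frame_bgr, hand_patch_bgr):
        # Learn a hue-saturation histogram from a known hand patch, then
        # back-project it onto the frame to get a skin-likelihood map.
        patch_hsv = cv2.cvtColor(hand_patch_bgr, cv2.COLOR_BGR2HSV)
        frame_hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([patch_hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
        cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
        likelihood = cv2.calcBackProject([frame_hsv], [0, 1], hist, [0, 180, 0, 256], 1)
        # Smooth and threshold the likelihood map to get a binary hand mask.
        likelihood = cv2.GaussianBlur(likelihood, (11, 11), 0)
        _, mask = cv2.threshold(likelihood, 50, 255, cv2.THRESH_BINARY)
        return mask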
I don't know what HandVu is.
Zernike moments are also very good shape descriptors that are rotation invariant and can be made to be both scale and translation invariant.
I hope this helps!

how to recognize the object I detect in video frames is a people or a car

I have a problem detecting objects in images or video frames.
My task is to detect people (or other things) entering the field of view of a web camera, and then have my system raise an alarm.
The next step is to recognize what kind of object it is. In this phase I know I can use the Hough transform to detect lines, circles, even rectangles. But when a person comes into the camera's view, a person's outline is more complex than a line, circle, or rectangle. How can I recognize that the object is a person and not a car?
I need help with that.
Thanks in advance
I suggest you look at the paper "Histograms of Oriented Gradients for Human Detection" by Dalal and Triggs. They used histograms of oriented gradients to detect humans in images.
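OpenCV ships a pre-trained pedestrian detector based on that paper, so you can try it in a few lines (the parameters below are just starting values):

    import cv2

    # HOG + linear SVM pedestrian detector (Dalal & Triggs), using the
    # pre-trained people model bundled with OpenCV.
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    def detect_people(frame_bgr):
        # Returns bounding boxes (x, y, w, h) for detected people.
        boxes, weights = hog.detectMultiScale(frame_bgr, winStride=(8, 8), padding=(8, 8), scale=1.05)
        return boxes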
I think one method is to use Bayesian analysis on your image and see how that matches with a database of known images. I believe some people run a wavelet transform to emphasize more noticeable features.