What is the range of values of levelWeights in OpenCV detectMultiScale3?

I would like to know what range of values is returned in the levelWeights variable by OpenCV's detectMultiScale3, so that I can understand the confidence level of a detection. What are the minimum and maximum values? What is a good level to use as a cutoff for detection?
import cv2

faceCascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")  # assumed setup, not in the original post
image_array = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)  # assumed input
faces, rejectLevels, levelWeights = faceCascade.detectMultiScale3(
    image_array, scaleFactor=1.05, minNeighbors=1, outputRejectLevels=True)

In case anyone has found an answer: I am interested too.

So I did a few tests with a few subjects using the Haar cascade eye detector. The states I tested were:
Eyes wide open (abnormally raising the brows)
Eyes fully open (the normal way an average person holds them)
Eyes half open (squinting, but making sure the eyeballs still show a bit)
Eyes closed (fully closed)
The sweet spot for me is anything above 3. When the eyes are fully closed, levelWeights is usually empty, but sometimes I get values well below 1. Anything below 3 means the eyes are either half closed or fully closed.
So I guess you might have to do your own experiments to find your own sweet spot; a sketch of applying such a cutoff is below.
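For anyone who wants to reproduce this, here is a minimal sketch of filtering detections by weight; the cascade file, image name, and the cutoff of 3.0 (taken from the experiments above) are assumptions, not fixed values.

import cv2
import numpy as np

# Assumptions: the stock Haar eye cascade shipped with the opencv-python
# package and a grayscale test image on disk.
eyeCascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
gray = cv2.imread("subject.jpg", cv2.IMREAD_GRAYSCALE)

eyes, rejectLevels, levelWeights = eyeCascade.detectMultiScale3(
    gray, scaleFactor=1.05, minNeighbors=1, outputRejectLevels=True)

# Keep only detections whose weight clears the empirical cutoff of 3.
CUTOFF = 3.0
weights = np.asarray(levelWeights).ravel()
confident = [box for box, w in zip(eyes, weights) if w > CUTOFF]
print(confident)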

Related

Eye blink detection when slightly moving/rotating photo

I implemented an eye blink detection solution using the research from this article:
http://www.iu.hio.no/~frodes/unitech10/011-Krolak/
I use a Haar eye classifier to identify the two eye regions, then use template matching on both eyes to detect a blink state change. I also require that the face and eye regions remain fairly still. It works pretty well, except that I occasionally get false positives on photos if I move them slightly (particularly rotate/scale). Does anyone have suggestions for eliminating such cases? I don't want to make the stillness requirement too strict, because that makes the live case unusable.
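For context, a rough sketch of the template-matching step described above might look like this; the file names and the 0.8 similarity threshold are illustrative assumptions, not the values actually used by the asker.

import cv2

# Assumption: the template was cropped from an earlier frame where the eye
# was detected open; eye_roi is the eye region of the current frame.
open_eye_template = cv2.imread("open_eye.png", cv2.IMREAD_GRAYSCALE)
eye_roi = cv2.imread("current_eye_roi.png", cv2.IMREAD_GRAYSCALE)

result = cv2.matchTemplate(eye_roi, open_eye_template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, _ = cv2.minMaxLoc(result)

# A drop in similarity to the open-eye template suggests a blink.
if max_val < 0.8:  # illustrative threshold to tune
    print("possible blink")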
I have implemented, with some success, two cascades: one that detects open eyes and one that detects closed eyes.
You can use the face detector and restrict the search to the eye region, then apply the "open eye" cascade. The good thing about that approach is that you can add slightly different poses and angles for the eyes to the training set. It worked really well for me.
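A minimal sketch of that face-then-eyes pipeline; the stock OpenCV haarcascade_eye.xml stands in for the custom "open eye" cascade described above, and the file name and detector parameters are assumptions.

import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
# Stand-in for the custom "open eye" cascade described above.
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

gray = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)
for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
    # Restrict the eye search to the upper half of the detected face.
    roi = gray[y:y + h // 2, x:x + w]
    eyes = eye_cascade.detectMultiScale(roi)
    print("eyes open" if len(eyes) > 0 else "eyes possibly closed")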

Image Processing - Determine if someone is looking into the camera

With image processing libraries like OpenCV you can determine whether there are faces in an image, or even check whether those faces are smiling.
Would it be possible to somehow determine whether a person is looking directly into the camera? Since it is hard even for the human eye to tell whether someone is looking into the camera or at a nearby point, I think this will be very tricky.
Does anyone agree?
Thanks
You can try using an eye detection program. I remember writing one a few years ago, and it wasn't that strong: when we tilted our heads slightly away from the camera, or closed our eyes, the eyes could not be detected. In case that is not clear, what I mean is that the face had to be facing straight at the camera, with the eyes open, before the eyes could be detected. You could try doing something similar with a few tweaks here and there.
Off the top of my head: split the image into different sections, with a different eye classifier for each ROI. For example, for the upper half of the image you could train a classifier for how eyes look when they look downwards; for the lower half, a classifier for how eyes look when they look upwards; and for the whole image, apply the normal eye detector in case the user moves their head while looking at the camera. A rough sketch of this idea follows below.
Of course, this would depend on extremely strong classifiers and very clear images or video, and it would make detection extremely slow even if the method worked.
There may be other ideas you can explore too. It is slightly tricky, but not totally impossible. If OpenCV can't satisfy your needs, there are many other libraries available. I wish you the best of luck!
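A very rough sketch of that region-split idea, assuming you had already trained the region-specific cascades; the two XML file names here are hypothetical placeholders and do not ship with OpenCV.

import cv2

gray = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)
h = gray.shape[0]

# Hypothetical, self-trained cascades; only the last entry is a real OpenCV asset.
regions = [
    (gray[: h // 2], "eyes_looking_down.xml"),  # upper half of the image
    (gray[h // 2 :], "eyes_looking_up.xml"),    # lower half of the image
    (gray, cv2.data.haarcascades + "haarcascade_eye.xml"),  # fallback detector
]
for roi, cascade_file in regions:
    detector = cv2.CascadeClassifier(cascade_file)
    if len(detector.detectMultiScale(roi)) > 0:
        print("eyes detected with", cascade_file)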

Histogram comparison

Is it possible to compare two intensity histograms (derived from gray-scale images) and obtain a likeness factor? In other words, I'm trying to detect the presence or absence of a soccer ball in an image. I've tried feature detection algorithms (such as SIFT/SURF), but they are not reliable enough for my application. I need something very reliable and robust.
Many thanks for your thoughts, everyone.
This answer (Comparing two histograms) might help you. In general, intensity comparisons are quite sensitive to illumination; e.g. white during the daytime looks different from white at night.
You should be able to derive something from compareHist() in OpenCV (http://docs.opencv.org/doc/tutorials/imgproc/histograms/histogram_comparison/histogram_comparison.html) to suit your needs, if histogram comparison fits your purpose at all.
If not, this paper tracks the ball from multiple cameras, and you might get some more ideas from it even if you are not using multiple cameras:
http://www.researchgate.net/publication/222417618_Tracking_the_soccer_ball_using_multiple_fixed_cameras/file/32bfe512f7e5c13133.pdf
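A minimal compareHist() sketch, assuming two grayscale images on disk; the file names, the 256-bin layout, and the choice of the correlation metric are illustrative, not prescribed by the answer.

import cv2

img1 = cv2.imread("frame_with_ball.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_without_ball.png", cv2.IMREAD_GRAYSCALE)

# 256-bin intensity histograms, normalized so image size does not matter.
hist1 = cv2.calcHist([img1], [0], None, [256], [0, 256])
hist2 = cv2.calcHist([img2], [0], None, [256], [0, 256])
cv2.normalize(hist1, hist1)
cv2.normalize(hist2, hist2)

# Correlation metric: 1.0 means identical histograms, lower means less alike.
likeness = cv2.compareHist(hist1, hist2, cv2.HISTCMP_CORREL)
print(likeness)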
As kkuilla mentioned, there is a ready-made method for comparing histograms, compareHist() in OpenCV.
But I am not certain it's really applicable to your program. I think you may want to use the Hough Transform to detect circles instead.
More details can be found in this paper:
https://files.nyu.edu/jb4457/public/files/research/bristol/hough-report.pdf
Look for the part about coins for the circle detection. I recall reading somewhere about doing ball detection with the Hough Transform too; I can't find it now, but the approach should carry over to your soccer ball. A minimal sketch follows below.
This method should work. Hope this helps. Good luck(:
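A minimal Hough-circle sketch along those lines; the blur size, radius bounds, and accumulator parameters are guesses that would need tuning to your footage.

import cv2
import numpy as np

gray = cv2.imread("pitch.png", cv2.IMREAD_GRAYSCALE)
gray = cv2.medianBlur(gray, 5)  # smoothing generally helps circle detection

circles = cv2.HoughCircles(
    gray, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
    param1=100, param2=30, minRadius=5, maxRadius=60)  # tune to your ball size

if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        print("candidate ball at", (x, y), "radius", r)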

Closed eye detection opencv C++

I need to detect closed eyes only, and also each eye separately. That means I need to tell whether the left eye is open or closed, and the same for the right eye.
I tried a few ways. One of them is to detect eyes with haarcascade_eye and haarcascade_eye_tree_eyeglasses separately and then compare the results: if both detect an eye, the eye is open; if one detects it and the other can't, the eye is closed. This trick was taken from this link:
http://tech.groups.yahoo.com/group/OpenCV/messages/87666?threaded=1&m=e&var=1&tidx=1
But it doesn't work as expected; the eye cascade detectors don't behave as described in the link. The closest results I get are with the Haar cascades mentioned above: sometimes they give the correct result, sometimes they don't, and I don't know why. Besides, this method can't tell which eye is open and which is closed.
Can someone help me solve this? At the very least I need a way to tell, accurately, that one of the eyes is closed, regardless of which one. Please help.
If you want to avoid training your own Haar cascade to detect a single eye, you can try simpler techniques such as pupil detection: if you fail to detect a dark circle, the eye is closed. If you have a smallish region of interest, this probably works very well. Another option would be color histograms of the eye region, which may look quite different in the open and closed states.
If you cannot predict with reasonable accuracy where the eyes will be in the image, these approaches are doomed, and your best shot is training your own cascade, I think.
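A crude sketch of the pupil idea in Python (the question is C++, but the same calls exist in the C++ API); the intensity threshold of 40 and the dark-pixel ratio cutoff are assumptions to tune.

import cv2
import numpy as np

# Assumption: a tight grayscale crop around one eye is already available.
eye_roi = cv2.imread("left_eye_roi.png", cv2.IMREAD_GRAYSCALE)

# An open eye shows a dark pupil; count very dark pixels in the ROI.
_, dark = cv2.threshold(eye_roi, 40, 255, cv2.THRESH_BINARY_INV)
dark_ratio = np.count_nonzero(dark) / dark.size

# Illustrative cutoff: too few dark pixels suggests no pupil, i.e. a closed eye.
print("closed" if dark_ratio < 0.02 else "open")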

OpenCV: Detect blinking lights in a video feed

I have a video feed. This video feed contains several lights blinking at different rates. All lights are the same color (they are all infrared LEDs). How can I detect the position and frequency of these blinking lights?
Disclaimer: I am extremely new to OpenCV. I do have a copy of Learning OpenCV, but I am finding it a bit overwhelming. If anyone could explain a solution in OpenCV terminology, it would be greatly appreciated. I am not expecting code to be written for me.
Threshold each image in the sequence with a threshold that makes the LEDs visible. If you can choose a threshold that keeps only the LEDs and removes the background, then you are more or less finished, since all you need to do now is keep track of each position that has seen an LED and count how often it occurs.
If there is "background noise" in the thresholded image, an intermediate step would be to use erosion to remove small mistakes, and then perhaps dilation to close holes in the blobs you are actually interested in.
If the scene is static, you could also build a simple background model by taking the per-pixel median of a few frames, subtracting the resulting median image from each frame, and thresholding that. Anything that has changed (your LEDs) will stand out; a sketch of this pipeline follows below.
If the scene is moving, I see no other (easy) solution than making sure the LEDs are bright enough for the threshold approach above to work.
As for OpenCV: if you know what you want to do, it is not very hard to find a function that does it. The hard part is coming up with a method to solve the problem, not the actual coding.
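A sketch of that pipeline over a short frame sequence; the file names, threshold value, and kernel size are assumptions to tune to your footage.

import cv2
import numpy as np

# Assumption: a list of grayscale frames read from the video feed.
frames = [cv2.imread(f"frame_{i:03d}.png", cv2.IMREAD_GRAYSCALE) for i in range(30)]

# Static-scene background model: per-pixel median over the sequence.
background = np.median(np.stack(frames), axis=0).astype(np.uint8)

kernel = np.ones((3, 3), np.uint8)
for frame in frames:
    diff = cv2.absdiff(frame, background)        # changed pixels (the LEDs) stand out
    _, mask = cv2.threshold(diff, 50, 255, cv2.THRESH_BINARY)
    mask = cv2.erode(mask, kernel)               # remove small mistakes
    mask = cv2.dilate(mask, kernel)              # close holes in the LED blobs
    # 'mask' now marks the pixel positions where an LED is lit in this frame.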
If the LEDs are stationary, the problem is far simpler than when they are moving. Assuming they are stationary, one way to find the frequency is to keep a vector or array per pixel location in which you store that pixel's values over some time window, preferably after the preprocessing kigurai described. You can then compute the 1D Fourier transform of each value vector and take the fundamental frequency as the first significant component after the DC peak. If the DC peak is too low, it means there is no LED there. A sketch follows below.
Hope this problem is still relevant, and that my solution makes sense.
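A sketch of that per-pixel Fourier idea with NumPy; the placeholder data, the frame rate, and the way the DC component is dropped are illustrative assumptions.

import numpy as np

# Assumption: a (T, H, W) stack of thresholded frames from the previous
# answer; random placeholder data stands in for real masks here.
masks = np.random.randint(0, 2, (120, 48, 64)).astype(np.float32)
fps = 30.0  # assumed capture frame rate

spectrum = np.abs(np.fft.rfft(masks, axis=0))  # 1D FFT along time, per pixel
spectrum[0] = 0                                # drop the DC component
freqs = np.fft.rfftfreq(masks.shape[0], d=1.0 / fps)

# Dominant frequency bin per pixel; pixels with no LED have a flat spectrum.
peak_bin = spectrum.argmax(axis=0)
blink_hz = freqs[peak_bin]
print(blink_hz.max())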
