I have run this OpenCV Haar-like eye detection from this link with C++ in Visual Studio 2010:
http://docs.opencv.org/doc/tutorials/objdetect/cascade_classifier/cascade_classifier.html
My camera isn't running smoothly. When I delete the for-loop from this code and run only the camera, it runs smoothly.
The question is: if I want to modify this code to detect the eyes and face, how can I modify it so that it runs smoothly?
Please show an example of how to modify this code.
Best thanks, and sorry for the bad language.
Chairat (Thailand)
Generally it's not a trivial question, but the basic idea (which I used for my BSc thesis) is quite simple. It's not the entire solution I used, but it should be enough for now; if not, let me know and I will write more about it.
For first frame:
Find the face (I used the haarcascade_frontalface_default.xml cascade, but you may try different ones) and remember its position.
Within the face rectangle, find the eyes (use the Haar cascade for an eye pair, haarcascade_mcs_eyepair_big.xml, rather than for a single eye; it's a much faster and simpler solution) and remember their position.
For other frames:
Expand the rectangle in which you most recently found the face (by about 20-50%).
Find the face in the expanded rectangle.
Find the eyes within the face. If you haven't found the face in the previous step, you may try to search for the eyes in an expanded rectangle around the previous eye position. (A sketch of this loop is shown after the notes below.)
A few important things:
While searching, use the CV_HAAR_FIND_BIGGEST_OBJECT flag.
Convert the frame to grayscale before searching. During the search OpenCV uses only grayscale images, so it is faster to convert the whole image once yourself than to let it convert the whole image for the first search (face) and then convert the rectangle containing the face again for the second search (eyes).
Some people say that equalizing the histogram before searching may improve results. I'm not sure about that, but if you want to you may try it: use the equalizeHist function. Note that it works only on grayscale images.
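Putting the pieces together, a minimal sketch of that loop could look like the following (OpenCV 2.x C++ API; the cascade file paths, the expansion factor, and the detector parameters are assumptions you would tune):

```cpp
#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <vector>

int main()
{
    // Cascade file paths are assumptions; point them at your OpenCV data folder.
    cv::CascadeClassifier faceCascade("haarcascade_frontalface_default.xml");
    cv::CascadeClassifier eyePairCascade("haarcascade_mcs_eyepair_big.xml");

    cv::VideoCapture cap(0);
    cv::Mat frame, gray;
    cv::Rect lastFace;      // face position from the previous frame
    bool haveFace = false;

    while (cap.read(frame))
    {
        cv::cvtColor(frame, gray, CV_BGR2GRAY);   // convert once, up front
        cv::equalizeHist(gray, gray);             // optional, grayscale only

        // Search the whole frame, or only the expanded previous face rectangle.
        cv::Rect searchArea(0, 0, gray.cols, gray.rows);
        if (haveFace)
        {
            cv::Rect r = lastFace;
            r.x -= r.width / 4;  r.y -= r.height / 4;     // expand by ~50%
            r.width += r.width / 2;  r.height += r.height / 2;
            searchArea = r & cv::Rect(0, 0, gray.cols, gray.rows);
        }

        std::vector<cv::Rect> faces;
        faceCascade.detectMultiScale(gray(searchArea), faces, 1.1, 3,
                                     CV_HAAR_FIND_BIGGEST_OBJECT);
        haveFace = !faces.empty();
        if (haveFace)
        {
            lastFace = faces[0] + searchArea.tl();        // back to frame coordinates

            std::vector<cv::Rect> eyes;
            eyePairCascade.detectMultiScale(gray(lastFace), eyes, 1.1, 3,
                                            CV_HAAR_FIND_BIGGEST_OBJECT);
            if (!eyes.empty())
                cv::rectangle(frame, eyes[0] + lastFace.tl(), cv::Scalar(0, 255, 0));
        }

        cv::imshow("eyes", frame);
        if (cv::waitKey(10) == 27) break;
    }
    return 0;
}
```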
What would be the procedure to correct the following distorted images? It looks like the images are bulging out from the center. They are of the same QR code, so a combination of such images could be used to arrive at a single correct, straight image.
Please advise.
The distortion you are experiencing is called "barrel distortion"; technically it is a combination of radial and tangential distortion.
The solution to your problem is the OpenCV camera calibration module. Just google it and you will find the documentation in the OpenCV wiki; moreover, OpenCV already ships with source code examples of how to calibrate a camera.
Basically, you need to print an image of a chessboard, take a few pictures of it, run the calibration module (a built-in method) and get the camera matrix and distortion coefficients as output. You then apply them to each video frame (the function is cvUndistort2() in the C API, cv::undistort() in C++) and it will straighten the curved lines in the image.
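A rough sketch of that workflow with the OpenCV C++ API might look like this (the image file names, the number of views, and the 9x6 board size are assumptions; the point is the findChessboardCorners / calibrateCamera / undistort sequence):

```cpp
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <vector>
#include <cstdio>

int main()
{
    cv::Size boardSize(9, 6);   // inner corners of the printed chessboard
    std::vector<std::vector<cv::Point2f> > imagePoints;
    std::vector<std::vector<cv::Point3f> > objectPoints;
    cv::Size imageSize;

    // One ideal chessboard grid (square size of 1 unit), reused for every view.
    std::vector<cv::Point3f> grid;
    for (int y = 0; y < boardSize.height; ++y)
        for (int x = 0; x < boardSize.width; ++x)
            grid.push_back(cv::Point3f((float)x, (float)y, 0.f));

    for (int i = 1; i <= 10; ++i)   // assume photos named board1.jpg .. board10.jpg
    {
        char name[32];
        sprintf(name, "board%d.jpg", i);
        cv::Mat img = cv::imread(name, 0);
        if (img.empty()) continue;
        imageSize = img.size();

        std::vector<cv::Point2f> corners;
        if (cv::findChessboardCorners(img, boardSize, corners))
        {
            imagePoints.push_back(corners);
            objectPoints.push_back(grid);
        }
    }

    cv::Mat cameraMatrix, distCoeffs;
    std::vector<cv::Mat> rvecs, tvecs;
    cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                        cameraMatrix, distCoeffs, rvecs, tvecs);

    // Per frame: remove the barrel distortion.
    cv::Mat frame = cv::imread("distorted_frame.jpg"), straight;
    cv::undistort(frame, straight, cameraMatrix, distCoeffs);
    cv::imwrite("undistorted_frame.jpg", straight);
    return 0;
}
```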
Note: It will not work if you change the zoom or focal length of the camera.
If the camera details are not available and you cannot control them, then your problem is much harder. There is a way to estimate the distortion, but I don't know if OpenCV has built-in modules for it. I'm afraid you will need to write a lot of code.
Basically, you need to detect as many long lines as possible. Then, from those (vertical and horizontal) lines, you build a grid of intersection points. Finally you fit the grid of those points with the OpenCV calibration module.
If you have enough intersection points (say 20 or more) you will be able to calculate the distortion matrix and undistort the image.
You will not be able to fully calibrate the camera. In other words, you will not be able to run a one-time process that calculates the expected distortion once and for all. Instead, in each and every video frame, you will calculate the distortion matrix directly, invert it, and undistort the image.
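As a starting point for the "detect long lines" step only, a sketch using Canny plus the probabilistic Hough transform could look like this (the file name, thresholds, and minimum line length are guesses):

```cpp
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <vector>

int main()
{
    cv::Mat frame = cv::imread("frame.jpg", 0);   // file name is a placeholder
    cv::Mat edges;
    cv::Canny(frame, edges, 50, 150);

    std::vector<cv::Vec4i> lines;
    // A minLineLength of 100 px keeps only fairly long segments; tune for your images.
    cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 80, 100, 10);

    // From here you would classify the segments as roughly horizontal/vertical,
    // intersect them to build the grid points, and hand those to the calibration code.
    return 0;
}
```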
If you are not familiar with image processing techniques, or cannot find a reliable open source project that directly solves your problem, then I'm afraid you will not be able to remove the distortion. Sorry.
First of all I'm a total newbie in image processing, so please don't be too harsh on me.
That being said, I'm developing an application to analyse changes in blood flow in the extremities using thermal images obtained by a camera. The user is able to define a region of interest by placing a shape (circle, rectangle, etc.) on the current image. The user should then be able to see how the average temperature changes from frame to frame inside the specified ROI.
The problem is that some of the images are not steady, due to (small) movement by the test subject. My question is: how can I determine the movement between frames, so that I can relocate the ROI accordingly?
I'm using the Emgu OpenCV .Net wrapper for image processing.
What I've tried so far is calculating the center of gravity using GetMoments() on the biggest contour found and calculating the direction vector between this and the previous center of gravity. The ROI is then translated using this vector but the results are not that promising yet.
Is this the right way to do it or am I totally barking up the wrong tree?
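For reference, the centroid-shift idea described above maps onto roughly the following OpenCV C++ calls (Emgu's GetMoments() wraps cv::moments; the threshold value and the helper names here are made up for illustration):

```cpp
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

// Centroid of the largest contour in a grayscale frame.
cv::Point2f largestContourCentroid(const cv::Mat& gray)
{
    cv::Mat bin;
    cv::threshold(gray, bin, 100, 255, CV_THRESH_BINARY);   // 100 is a guess

    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(bin, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

    double bestArea = 0; int best = -1;
    for (size_t i = 0; i < contours.size(); ++i)
    {
        double a = cv::contourArea(contours[i]);
        if (a > bestArea) { bestArea = a; best = (int)i; }
    }
    if (best < 0) return cv::Point2f(-1, -1);

    cv::Moments m = cv::moments(contours[best]);
    return cv::Point2f((float)(m.m10 / m.m00), (float)(m.m01 / m.m00));
}

// Shift the ROI by the movement of the centroid between two frames.
cv::Rect trackRoi(cv::Rect roi, const cv::Mat& prevGray, const cv::Mat& currGray)
{
    cv::Point2f prev = largestContourCentroid(prevGray);
    cv::Point2f curr = largestContourCentroid(currGray);
    roi.x += cvRound(curr.x - prev.x);
    roi.y += cvRound(curr.y - prev.y);
    return roi;
}
```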
------Edit------
Here are two sample images showing slight movement downwards and to the right:
http://postimg.org/image/wznf2r27n/
Comparison between the contours:
http://postimg.org/image/4ldez2di1/
As you can see the shape of the contour is pretty much the same, although there are some small differences near the toes.
It seems I was finally able to find a solution to my problem using optical flow based on the Lucas-Kanade method.
Just in case anyone else is wondering how to implement it in Emgu/C#, here's the link to an Emgu examples project where they use the Lucas-Kanade and Farneback algorithms:
http://sourceforge.net/projects/emguexample/files/Image/BuildBackgroundImage.zip/download
You may need to adapt a few things, e.g. the parameters for the corner detection (the frame.GoodFeaturesToTrack(..) method), but it's definitely something to start with.
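The underlying OpenCV calls (which Emgu wraps) look roughly like the sketch below; the corner-detection parameters (maxCorners, quality level, minimum distance) are guesses you would tune:

```cpp
#include <opencv2/video/tracking.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

// Average displacement of tracked points between two grayscale frames.
cv::Point2f averageShift(const cv::Mat& prevGray, const cv::Mat& currGray)
{
    std::vector<cv::Point2f> prevPts, currPts;
    cv::goodFeaturesToTrack(prevGray, prevPts, 200, 0.01, 5);
    if (prevPts.empty()) return cv::Point2f(0, 0);

    std::vector<uchar> status;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(prevGray, currGray, prevPts, currPts, status, err);

    // Average the displacement of the successfully tracked points.
    cv::Point2f shift(0, 0);
    int n = 0;
    for (size_t i = 0; i < prevPts.size(); ++i)
        if (status[i]) { shift += currPts[i] - prevPts[i]; ++n; }
    return n ? cv::Point2f(shift.x / n, shift.y / n) : cv::Point2f(0, 0);
}
// The returned shift can then be added to the ROI's x and y position.
```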
Thanks for all the ideas!
With image processing libraries like OpenCV you can determine whether there are faces in an image, or even check whether those faces are smiling.
Would it be possible to somehow determine whether a person is looking directly into the camera? Since it is hard even for the human eye to tell whether someone is looking into the camera or at a point close to it, I think this will be very tricky.
Does anyone agree?
Thanks
You can try using an eye detection program. I remember doing this a few years back, and it wasn't that robust: when we tilted our heads slightly away from the camera, or closed our eyes, the eyes couldn't be detected.
In case that is not clear, what I really meant is that the face must be facing straight at the camera with the eyes open before the eyes can be detected. You could try doing something similar with a few tweaks here and there.
Off the top of my head: split the image into different sections and use a different eye classifier for each ROI. For example, for the upper half of the image you could train a specific classifier on how eyes look when they are looking downwards, and for the lower half train classifiers on how eyes look when they are looking upwards. For the whole image, apply normal eye detection in case the user moves their head around while looking at the camera.
But of course, this would depend on extremely strong classifiers and very clear images or video, because of where the eye is looking, and it would make detection extremely slow even if the method is successful.
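A very rough sketch of that split-image idea might look like the following. Note that the two gaze-specific cascade files are purely hypothetical; you would have to train them yourself, since only a standard eye cascade (e.g. haarcascade_eye.xml) ships with OpenCV:

```cpp
#include <opencv2/objdetect/objdetect.hpp>
#include <vector>

bool looksAtCamera(const cv::Mat& frameGray,
                   cv::CascadeClassifier& eyesDown,   // trained on downward gaze (hypothetical)
                   cv::CascadeClassifier& eyesUp,     // trained on upward gaze (hypothetical)
                   cv::CascadeClassifier& eyesNormal) // e.g. haarcascade_eye.xml
{
    cv::Rect top(0, 0, frameGray.cols, frameGray.rows / 2);
    cv::Rect bottom(0, frameGray.rows / 2, frameGray.cols, frameGray.rows / 2);

    std::vector<cv::Rect> found;
    eyesDown.detectMultiScale(frameGray(top), found);
    if (!found.empty()) return false;                 // eyes in upper half, looking down

    eyesUp.detectMultiScale(frameGray(bottom), found);
    if (!found.empty()) return false;                 // eyes in lower half, looking up

    eyesNormal.detectMultiScale(frameGray, found);    // whole image, normal gaze
    return !found.empty();
}
```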
There may be other ideas available for you to explore. It's slightly tricky, but not totally impossible. If OpenCV can't satisfy your needs, there's OpenGL and many other libraries available. I wish you the best of luck!
I have OpenCV installed and working on my iPhone (big thanks to this community). I'm doing template matching with it, and it does find the object in the captured image. However, the exact location seems to be hard to tell.
Please take a look at the following video (18 seconds):
http://www.youtube.com/watch?v=PQnXNZMqpsU
As you can see in the video, it does find the template in the image. But when I move the camera a bit further away, the found template is positioned somewhere inside that square, so it's hard to tell the exact location of the found object.
The square that you see is basically the found x,y location of the template plus the width and height of the actual template image.
So basically my question is: is there a way to find the exact location of the found template image? Because currently it can be at any location inside that square, with no real way to tell the exact position.
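For context, the square in the video is produced by something like the following (file names are placeholders; the key point is that the best-match location is combined with the template's own size, even when the object appears smaller in the scene):

```cpp
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>

int main()
{
    cv::Mat scene = cv::imread("captured.png", 0);
    cv::Mat templ = cv::imread("template.png", 0);

    cv::Mat result;
    cv::matchTemplate(scene, templ, result, CV_TM_CCOEFF_NORMED);

    double minVal, maxVal;
    cv::Point minLoc, maxLoc;
    cv::minMaxLoc(result, &minVal, &maxVal, &minLoc, &maxLoc);

    // maxLoc is the top-left corner of the best match at the template's own scale;
    // if the scene is zoomed out, the true object is smaller and lies somewhere
    // inside this rectangle.
    cv::Rect found(maxLoc, templ.size());
    cv::rectangle(scene, found, cv::Scalar(255));
    cv::imwrite("match.png", scene);
    return 0;
}
```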
It seems that you're not too pleased with your template matching algorithm :)
In short, there are ways to improve it, but I would recommend that you try something else. If your images are always as simple as in the video, you can use thresholding, contour finding, blob detection, etc. They are simple and fast.
For a more demanding environment, you may try feature matching. Look at SIFT, SURF, ORB, or other ways to describe your objects with features. ORB in particular was specifically designed to be fast enough for the limited power of mobile phones.
Try this sample in the OpenCV samples/cpp/ folder:
matching_to_many_images.cpp
And check this detailed answer on how to use feature detectors:
Detecting if an object from one image is in another image with OpenCV
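As a starting point, a bare-bones ORB matching sketch (in the spirit of matching_to_many_images.cpp, OpenCV 2.4 API) might look like this; the file names are placeholders and no geometric verification is done here:

```cpp
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <vector>

int main()
{
    cv::Mat object = cv::imread("template.png", 0);
    cv::Mat scene  = cv::imread("captured.png", 0);

    cv::ORB orb;
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat desc1, desc2;
    orb(object, cv::Mat(), kp1, desc1);   // detect + describe in one call
    orb(scene,  cv::Mat(), kp2, desc2);

    // ORB descriptors are binary, so match them with the Hamming distance.
    cv::BFMatcher matcher(cv::NORM_HAMMING);
    std::vector<cv::DMatch> matches;
    matcher.match(desc1, desc2, matches);

    cv::Mat vis;
    cv::drawMatches(object, kp1, scene, kp2, matches, vis);
    cv::imwrite("matches.png", vis);

    // For a precise object location you would feed the matched points
    // into cv::findHomography and transform the template's corners.
    return 0;
}
```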
Template matching (cvMatchTemplate()) is not invariant to scale and rotation. When you move the phone back, the object appears smaller, and the template "match" is just the place with the best match score, even though it is not really a true match.
If you want scale and/or rotation invariance you will have to try non-template-matching methods, such as those based on 2D feature descriptors.
Check out the OpenCV samples for examples of how to do this.
So I'm using OpenCV to do square recognition on this image. I compiled the squares.c sample and ran it on an image that I took; here are the results:
http://www.learntobe.org/urs/index1.php
The image on the left is the original, and the one on the right is the result of running the square detection.
The results aren't bad, but I really need this to detect ALL of the squares, and I'm really new to OpenCV and image processing. Does anyone know how I can edit the squares.c file to make the detection more inclusive, so that all of the squares are highlighted?
Thanks a lot ahead of time.
All the whitish colors are tough to detect; nothing separates them from the page itself. Try doing some kind of edge detection (check cvCanny or cvSobel).
You should also "pre-process" the image, that is, increase the contrast, make the colors more saturated, etc.
Also check this article: http://www.aishack.in/2010/01/an-introduction-to-contours/ It explains how the squares.c sample works; then you'll understand a bit about how to improve the detection in your case.
Hope this helps!