Find corner points of eyes and mouth

I am able to detect the eyes, nose, and mouth in a given face using MATLAB. Now I want four more points, i.e. the corners of the eyes and the nose. How do I get these points?
This is the image for the corner point of the nose.
The red point shows what I'm looking for (it's just for illustration; there is no point in the original image).

An Active Appearance Model (AAM) could be useful in your case.
An AAM matches a statistical model of object shape and appearance to a new image, and it is widely used for extracting face features and for head pose estimation.
I believe this could be a helpful starting point.

You can try the corner detectors included in the Computer Vision System Toolbox, such as detectHarrisFeatures, detectMinEigenFeatures, or detectFASTFeatures. However, they may give you more points than you want, so you will have to do some parameter tweaking.
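Those detectors have close OpenCV counterparts. A minimal Python/OpenCV sketch of the same idea, restricting a Shi-Tomasi (minimum-eigenvalue) corner detector to an eye region you already detected so only a handful of candidate corners come back (the eye-box coordinates and parameter values are placeholders to tune):

```python
import cv2

gray = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical input image
ex, ey, ew, eh = 210, 150, 60, 40                      # eye box from your own eye detector

eye = gray[ey:ey + eh, ex:ex + ew]
# Shi-Tomasi corners; tune maxCorners / qualityLevel / minDistance to limit the output
corners = cv2.goodFeaturesToTrack(eye, maxCorners=10, qualityLevel=0.01, minDistance=5)
if corners is not None:
    for cx, cy in corners.reshape(-1, 2):
        print("corner at", (ex + int(cx), ey + int(cy)))  # back to full-image coordinates
```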

Related

Why is the shape-indexed feature so effective in face alignment?

I have been implementing some face alignment algorithms recently. I have read the following papers:
Supervised descent method and its applications to face alignment
Face alignment by explicit shape regression
Face alignment at 3000 fps via regressing local binary features
All of these papers mention an important keyword: shape-indexed feature (or pose-indexed feature). This feature plays a key role in the face alignment process, but I did not get the key point of it. Why is it so important?
A shape-indexed feature is a feature whose index gives some clue about the hierarchical structure of the shape it came from. In face alignment, facial landmarks are extremely important, since they are what make it possible to align faces successfully. But taking only the facial landmarks into account throws away some of the structure inherent to a face. You know that the pupil is inside the iris, which is inside the eye. So a shape-indexed feature does more than tell you that you are looking at a facial landmark: it tells you that you are looking at a facial landmark inside another landmark inside another landmark. Because there are only a few features that are nested three deep like that, you can be more confident about aligning them correctly.
Here is a much older paper that explains some of this with simpler language (especially in the introduction): http://www.cs.ubc.ca/~lowe/papers/cvpr97.pdf
If you want to get shape-indexed features, you should first apply a similarity transform to the face landmarks in each image. The aim is to transform the original landmarks to a canonical configuration, which could be the mean landmark shape over all images, so that the landmarks of every image land at the same positions.
Then you can extract local features at the relocated landmarks; these are the shape-indexed features, because the landmarks of every image now form a fixed shape.
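A minimal numpy sketch of that similarity (Procrustes-style) alignment step, assuming `landmarks` and `mean_shape` are N x 2 arrays of corresponding points; shape-indexed features would then be local patches or pixel-difference features sampled at the returned, relocated landmarks:

```python
import numpy as np

def similarity_align(landmarks, mean_shape):
    """Map `landmarks` (N x 2) onto `mean_shape` (N x 2) with a similarity
    transform (rotation, uniform scale, translation) - Procrustes alignment."""
    mu_x, mu_y = landmarks.mean(axis=0), mean_shape.mean(axis=0)
    X, Y = landmarks - mu_x, mean_shape - mu_y      # center both shapes
    U, S, Vt = np.linalg.svd(X.T @ Y)               # SVD of the cross-covariance matrix
    R = U @ Vt                                      # optimal rotation (may include a flip)
    s = S.sum() / (X ** 2).sum()                    # optimal uniform scale
    return s * X @ R + mu_y                         # landmarks in the mean-shape frame
```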
I searched for hours to get the answer above (from a graduation thesis, which I translated), and I'm not sure whether it's right, but in my opinion it makes sense.

Best Facial Landmark that can be easily extracted from NIR Image?

I'm playing with eye gaze estimation using an IR camera. So far I have detected the two pupil center points as follows:
Detect the face using the Haar face cascade and set the ROI to the face.
Detect the eyes using the Haar eye cascade and label them as the left and right eye respectively.
Detect the pupil center by thresholding the eye region.
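A minimal Python/OpenCV sketch of those three steps, for reference (the cascade file names come from the opencv-python distribution, and the threshold value is an arbitrary guess, not the exact code used):

```python
import cv2

gray = cv2.imread("face_nir.png", cv2.IMREAD_GRAYSCALE)   # hypothetical NIR frame

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

for (fx, fy, fw, fh) in face_cascade.detectMultiScale(gray, 1.1, 5):
    face = gray[fy:fy + fh, fx:fx + fw]                     # step 1: face ROI
    for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(face, 1.1, 5):
        eye = face[ey:ey + eh, ex:ex + ew]                  # step 2: eye ROI
        # Step 3: the pupil is the darkest blob - threshold, take the largest contour's centroid
        _, bw = cv2.threshold(eye, 40, 255, cv2.THRESH_BINARY_INV)
        contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            continue
        m = cv2.moments(max(contours, key=cv2.contourArea))
        if m["m00"] > 0:
            px = fx + ex + m["m10"] / m["m00"]
            py = fy + ey + m["m01"] / m["m00"]
            print("pupil center:", (int(px), int(py)))      # full-image coordinates
```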
So far I've tried to find the gaze direction using the Haar eye boundary region, but this Haar eye rect does not always include the eye corner points, so the results were poor.
Then I tried to detect the eye corner points using GFTT, Harris corners, and FAST, but since I'm using an NIR camera the eye corner points are not clearly visible, so I can't get the exact corner positions. So I'm stuck here!
What else is the best feature that can be tracked easily from the face? I heard about flandmark, but I think that will also not work on IR-captured images.
Is there any feature that can be extracted easily from face images? I've attached my sample output image here.
I would suggest flandmark, even if your intuition is the opposite. I've used it in my master's thesis (which was about head pose estimation, a related topic). And if the question is whether it will work with the example image you've provided, I think it might detect the features properly, even on a grayscale image; flandmark probably converts the image to grayscale before applying the detector (as the Haar detector does). Moreover, it works surprisingly well with low-resolution images, which is an advantage too (especially when you say the eye corners are not clearly visible). flandmark can detect both eye corners, the mouth corners, and the nose tip (actually I would not rely on the last one: from my experience, detecting the nose tip in a single image is quite noisy; however, it works fine on an image sequence with some filtering, e.g. averaging or a Kalman filter). If you decide to use this technique and it works, please let us know!
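For the filtering mentioned at the end, a minimal sketch of the averaging option over a landmark sequence (the window size is arbitrary):

```python
from collections import deque
import numpy as np

class LandmarkSmoother:
    """Running mean of the last `window` observations of a noisy 2D landmark
    (e.g. the nose tip) across a video sequence."""
    def __init__(self, window=5):
        self.history = deque(maxlen=window)

    def update(self, point):
        self.history.append(np.asarray(point, dtype=float))
        return np.mean(self.history, axis=0)   # smoothed (x, y)

# smoother = LandmarkSmoother(window=5)
# per frame: smoothed_tip = smoother.update(detected_nose_tip)
```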

Simple way of extracting hair pixels from an image of a human face

Disclaimer: I am not looking for a robust method that works under all conditions and/or requires complex analysis of the image (e.g., http://vis-www.cs.umass.edu/lfw/part_labels/). Therefore, please do not link me to one of the many papers on hair segmentation that exist. I am looking for something fast and simple (not perfectly robust).
That being said, my goal is to extract the area containing the hair in an image of a human face. If I can get at least some tiny set of pixels that I am sure are hair pixels, then I can use one of several algorithms to find the rest (e.g., a "Photoshop magic wand" type algorithm).
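A minimal sketch of that "magic wand" step using OpenCV's floodFill, assuming you already have a seed pixel known to be hair (the seed coordinates and colour tolerances below are placeholders):

```python
import cv2
import numpy as np

img = cv2.imread("face.jpg")                       # hypothetical input image
seed = (120, 40)                                   # (x, y) of a pixel assumed to be hair

# floodFill's mask must be 2 pixels larger than the image in each dimension
mask = np.zeros((img.shape[0] + 2, img.shape[1] + 2), np.uint8)

# Grow the region from the seed; loDiff/upDiff set the per-channel colour tolerance
flags = cv2.FLOODFILL_MASK_ONLY | (255 << 8)       # write 255 into the mask, leave img untouched
cv2.floodFill(img, mask, seed, (0, 0, 255), (15, 15, 15), (15, 15, 15), flags)

hair_mask = mask[1:-1, 1:-1]                       # 255 wherever the "wand" reached
```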
An example (left side is the original face, right side is the gradient magnitude):
http://imgur.com/YX85MKB
Here's the information I have access to for any image of a human face: the locations of all important features (e.g., nose, eyes, mouth and chin). One dumb/simple way of finding hair pixels could be to perform edge detection and work up from the nose until I find two "horizontal edges", which we assume are the lower and upper boundaries of the hair, then take a sample from in between.
Any ideas on other simple methods to try?
Instead of using image-processing techniques (edge detection), you could use simple maths. You say you know where the nose, eyes, mouth and chin are. From the distances between these features you can certainly determine how far above the eyes to look for hair. I'm not sure which distance ratio to use, but the hair is certainly not 10x farther away than the distance between the eyes and the chin.
Obviously, this technique is not bald-proof.
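A minimal sketch of that idea, assuming you have the eye and chin coordinates (the 0.6 ratio below is a guess to tune, per the caveat above):

```python
import numpy as np

def hair_seed(left_eye, right_eye, chin, ratio=0.6):
    """Guess a pixel in the hair by walking up from the eye line.

    The eye-midpoint-to-chin distance sets the scale of the face; moving
    `ratio` times that distance above the eye midpoint (away from the chin)
    should land around the hairline or in the hair. `ratio` is a tunable guess."""
    left_eye, right_eye, chin = (np.asarray(p, dtype=float) for p in (left_eye, right_eye, chin))
    eye_mid = (left_eye + right_eye) / 2.0
    up = eye_mid - chin                   # vector pointing from the chin towards the forehead
    return eye_mid + ratio * up           # candidate hair pixel (x, y)

# Example: hair_seed((180, 220), (260, 220), (220, 360)) -> array([220., 136.])
```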

Recognition of face details as a set of points, not just rectangles

I'm doing research in the field of emotion recognition. For this purpose I need to detect and classify particular face details like the eyes, nose, mouth, etc. The standard OpenCV function for this is detectMultiScale(), but its disadvantage is that it returns a list of rectangles (video), while I'm mostly interested in particular key points: the corners of the mouth, its upper and lower points, edges, etc. (video).
So, how do they do it? OpenCV is ideal, but other solutions are ok too.
To find such precise points, you can use Active Appearance Models (AAMs). Your second video seems to have been done with an AAM. Check out the Wikipedia link above, where you can find a lot of AAM tools and APIs.
On the other hand, if you can detect the mouth using a Haar cascade, you can apply colour filtering: the lips obviously differ in colour from the surrounding region, so you can get a precise model of the lips and find their edges.
Check out this paper: Lip Contour Extraction
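A minimal Python/OpenCV sketch of the colour-filtering idea on a mouth ROI from the Haar cascade (the HSV thresholds are rough guesses for reddish lips and will need tuning per camera and lighting):

```python
import cv2
import numpy as np

mouth = cv2.imread("mouth_roi.png")                     # hypothetical mouth ROI (BGR)
hsv = cv2.cvtColor(mouth, cv2.COLOR_BGR2HSV)

# Lips are redder than the surrounding skin; red wraps around hue 0 in OpenCV's 0-179 range
lower_reds = cv2.inRange(hsv, (0, 60, 60), (10, 255, 255))
upper_reds = cv2.inRange(hsv, (170, 60, 60), (179, 255, 255))
lips = cv2.bitwise_or(lower_reds, upper_reds)

# Close small gaps, then take the largest blob's contour as the lip outline
lips = cv2.morphologyEx(lips, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
contours, _ = cv2.findContours(lips, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    lip_contour = max(contours, key=cv2.contourArea)    # points along the lip edge
```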

How do I detect squares/rectangles or other shapes with EMGU CV?

I want to make an app that detects a square/rectangle in my webcam feed using EMGU CV (an OpenCV wrapper). The square/rectangle will have a solid color.
If possible, I would also like to obtain the width and height of the square/rectangle.
In this video you can see what I would like to do.
http://www.youtube.com/watch?v=ytvO2dijZ7A&NR=1
I'm working with C#
If you already know the color of the desired object, then you can segment the image based on that color (which may be why the rectangle disappears when the guy moves it towards and away from the camera: differences in lighting). Once you have the object segmented out of the image, you can do region calculations on it (in MATLAB, think regionprops).
Once you have the blob, you can attempt model fitting to get a good approximation of the object being represented.
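EMGU is a thin wrapper over OpenCV, so here is the colour-segmentation plus shape-fitting idea as a Python/OpenCV sketch (the HSV range below is an arbitrary stand-in for the rectangle's known solid colour; translating the calls to Image<Bgr, byte>/CvInvoke is mechanical):

```python
import cv2
import numpy as np

frame = cv2.imread("frame.png")                        # hypothetical webcam frame
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Segment by the rectangle's known solid colour (placeholder range: "blue-ish")
mask = cv2.inRange(hsv, (100, 100, 50), (130, 255, 255))

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    if cv2.contourArea(c) < 500:                       # skip small noise blobs
        continue
    approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
    if len(approx) == 4:                               # four corners -> rectangle candidate
        (cx, cy), (w, h), angle = cv2.minAreaRect(approx)
        print("width:", w, "height:", h)               # size in pixels (rotated bounding box)
```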
In the video linked, what is probably being done is SURF feature detection. Take a look at the SURFFeature example that ships with EMGU. Rather than drawing lines, in this case the four corner points are detected and a shape is drawn on top of them. Similar examples that will help you are ShapeDetection and TrafficSignRecognition, both in the EMGU.CV.Examples folder. ShapeDetection will teach you how to classify the square, and the StopSignDetector.cs class shows another example of how to apply a SURF feature detection algorithm.
It will require a little reconfiguration but if you get stuck feel free to ask another question.
Cheers
Chris
