I'm working on face detection using Core Image. I'm facing a problem with placing points on the face boundary: I have to place points along the face boundary and make those points movable.
Please share your ideas with me.
Thanks in advance.
I'll suggest four approaches. I haven't implemented them in iOS, only in Java. For face boundary detection you can use the following methods:
Skin color thresholding followed by chin curve estimation and a shrinking-and-growing algorithm.
Canny edge detection followed by dividing the face into four regions and taking the longest face edge map.
Adaptive active contour model (Snake algorithm).
Hair region detection followed by chin curve estimation, removing every region except the hair and chin. (Doesn't work for bald subjects.)
If there are shadows on the face, use the Snake algorithm. If the faces in the images are clear, go for skin color thresholding (sketched below). Canny edge detection can give you very slow performance.
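For the skin color thresholding route, a minimal OpenCV/Python sketch might look like this (I did the original in Java; the YCrCb skin range below is a common rule of thumb, not tuned thresholds, and 'face.jpg' is a placeholder):

```python
import cv2
import numpy as np

# Threshold skin in YCrCb (the Cr/Cb range is a textbook heuristic).
img = cv2.imread('face.jpg')
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

# "Shrinking and growing": a morphological open cleans the mask,
# then the largest skin contour gives the face boundary.
kernel = np.ones((5, 5), np.uint8)
skin = cv2.morphologyEx(skin, cv2.MORPH_OPEN, kernel)
contours, _ = cv2.findContours(skin, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
boundary = max(contours, key=cv2.contourArea) if contours else None
```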
For further clarification, read this: FACE REGION DETECTION
I've been using Vision to identify Facial Landmarks, using VNDetectFaceLandmarksRequest.
It seems that whenever a face is detected, the resulting VNFaceObservation will always contain all the possible landmarks, and have positions for all of them. It also seems that positions returned for the occluded landmarks are 'guessed' by the framework.
I have tested this using a photo where the subject's face is turned to the left, and the left eye thus isn't visible. Vision returns a left eye landmark, along with a position.
The same goes for the mouth and nose of a subject wearing an N95 face mask, or the eyes of someone wearing opaque sunglasses.
While this can be a useful feature for other use cases, is there a way, using Vision or CIDetector, to figure out which face landmarks are actually visible in a photo?
I also tried using CIDetector, but it appears to be able to detect mouths and smiles through N95 masks, so it doesn't seem to be a reliable alternative.
After confirmation from Apple, it appears it simply cannot be done.
If Vision detects a face, it will guess some occluded landmarks' positions, and there is no way to differentiate actually detected landmarks from guesses.
For those facing the same issue, a partial workaround can be to compare the landmarks' point positions against the points of the median line and the nose crest.
While this can help determine if a facial landmark is occluded by the face itself, it won't help with facial landmarks occluded by opaque sunglasses or face masks.
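As a rough illustration of that side-of-the-median-line check (shown in Python for brevity; the function names and the tolerance are mine, and I'm assuming you've exported each landmark's normalized points as (x, y) pairs):

```python
import numpy as np

def side_of_line(p, a, b):
    # Signed cross product: > 0 if p lies left of the line a -> b,
    # < 0 if it lies right, ~0 if it sits on the line.
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def likely_self_occluded(landmark_pts, median_pts, expected_sign):
    # Take the median line's endpoints, then check which side the
    # landmark's centroid falls on. A landmark that should sit left of
    # the median line (expected_sign = +1) but shows up on the right,
    # or almost on the line, has probably been rotated out of view.
    a, b = median_pts[0], median_pts[-1]
    centroid = np.mean(np.asarray(landmark_pts), axis=0)
    s = side_of_line(centroid, a, b)
    return np.sign(s) != expected_sign or abs(s) < 1e-3
```

The sign convention for expected_sign depends on your coordinate system and on which side of the face each landmark belongs to.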
I'm trying to write an application that makes the eyes or nose bigger in a selfie, using OpenCV and dlib. Can you help me find a solution? Thanks.
Detect the eye and nose positions and then resize them. After that, blur the edges of the corrected area.
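A minimal OpenCV/Python sketch of that idea (enlarge_region is a hypothetical helper; center and radius would come from the dlib landmarks, and the scale and feathering amounts are arbitrary):

```python
import cv2
import numpy as np

def enlarge_region(img, center, radius, scale=1.4):
    # Crop a square patch around the feature (assumes it isn't at the
    # image border), upscale it, then paste it back with a feathered
    # circular mask so the seam stays soft.
    x, y, r = int(center[0]), int(center[1]), int(radius)
    patch = img[y - r:y + r, x - r:x + r]
    big = cv2.resize(patch, None, fx=scale, fy=scale,
                     interpolation=cv2.INTER_CUBIC)
    # Re-crop the enlarged patch back to the original size, centered.
    off = (big.shape[0] - 2 * r) // 2
    big = big[off:off + 2 * r, off:off + 2 * r]
    # Feathered mask: a white disc blurred so blending falls off gently.
    mask = np.zeros((2 * r, 2 * r), np.float32)
    cv2.circle(mask, (r, r), int(r * 0.8), 1.0, -1)
    mask = cv2.GaussianBlur(mask, (0, 0), r * 0.15)[..., None]
    img[y - r:y + r, x - r:x + r] = \
        (big * mask + patch * (1 - mask)).astype(img.dtype)
    return img
```

cv2.seamlessClone (Poisson blending) is another option for hiding the seam.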
You can tweak the algorithm on learnopencv.com (search there for Delaunay Triangulation). It creates a triangular mesh over the face landmarks; the mesh can then be distorted to create whatever distortion effects you want.
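The core of that approach is a per-triangle affine warp. A condensed sketch of the general technique (not the tutorial's exact code; the triangle vertex lists would come from cv2.Subdiv2D over the dlib landmarks):

```python
import cv2
import numpy as np

def warp_triangle(img_src, img_dst, t_src, t_dst):
    # Warp one source triangle onto its (moved) destination triangle
    # and blend it into img_dst through a triangular mask.
    r1 = cv2.boundingRect(np.float32([t_src]))
    r2 = cv2.boundingRect(np.float32([t_dst]))
    t1 = np.float32([(p[0] - r1[0], p[1] - r1[1]) for p in t_src])
    t2 = np.float32([(p[0] - r2[0], p[1] - r2[1]) for p in t_dst])
    patch = img_src[r1[1]:r1[1] + r1[3], r1[0]:r1[0] + r1[2]]
    M = cv2.getAffineTransform(t1, t2)
    warped = cv2.warpAffine(patch, M, (r2[2], r2[3]),
                            borderMode=cv2.BORDER_REFLECT_101)
    mask = np.zeros((r2[3], r2[2], 3), np.float32)
    cv2.fillConvexPoly(mask, np.int32(t2), (1.0, 1.0, 1.0))
    roi = img_dst[r2[1]:r2[1] + r2[3], r2[0]:r2[0] + r2[2]]
    img_dst[r2[1]:r2[1] + r2[3], r2[0]:r2[0] + r2[2]] = \
        (roi * (1 - mask) + warped * mask).astype(img_dst.dtype)
```

Scaling the eye landmarks outward from the eye center before triangulating gives the "bigger eye" effect.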
I have a problem with pupil center detection. I trained a CNN to give me the pupil center location, but it is not always at the center.
What processing can I apply so that an ellipse-fitting algorithm detects the center correctly?
The process is this: I crop the face from the picture with dlib, run the CNN prediction, and from those results I want to find the center.
Here are two examples of the CNN prediction. Any guidance will be appreciated.
Project radial rays from the center you found and compute the intensity gradient along each ray. The maximal gradients will be your points on the edge of the iris. Then fit an ellipse to them.
From those pictures, it appears that the variable occlusion of the iris is what is throwing off your center find. What may help is being more specific about just the edge between the iris and the eye white (and not the eyelid). To do this I would (though there may be better ways) drop a point inside the iris blob and project a fan of radially spaced vectors outward, looking for the first dark-to-light transition above a minimum contrast. For each ray, measure the contrast of the edge. The contrast should be almost exactly the same for all iris-to-eye-white transitions and will vary along the eyelid. Perform whatever type of data clustering you prefer to isolate the chunk of iris-to-eye-white edges only, and then feed only those edge points into the ellipse center fit.
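A sketch of that combined idea in OpenCV/Python (the function name, ray count, and contrast thresholds are all placeholder choices to tune):

```python
import cv2
import numpy as np

def iris_edge_points(gray, seed, n_rays=64, max_r=60, min_contrast=25):
    # Cast radially spaced rays outward from a seed inside the iris and
    # keep, per ray, the first strong dark-to-light step (iris -> sclera),
    # remembering its contrast so eyelid edges can be filtered later.
    pts, contrasts = [], []
    for theta in np.linspace(0, 2 * np.pi, n_rays, endpoint=False):
        dx, dy = np.cos(theta), np.sin(theta)
        prev = None
        for r in range(2, max_r):
            x = int(round(seed[0] + r * dx))
            y = int(round(seed[1] + r * dy))
            if not (0 <= x < gray.shape[1] and 0 <= y < gray.shape[0]):
                break
            v = int(gray[y, x])
            if prev is not None and v - prev >= min_contrast:
                pts.append((x, y))
                contrasts.append(v - prev)
                break
            prev = v
    return np.float32(pts), np.float32(contrasts)

# Keep only edges whose contrast clusters around the median (a crude
# stand-in for proper clustering), then fit the ellipse:
# pts, c = iris_edge_points(gray, seed)
# keep = np.abs(c - np.median(c)) < 1.5 * np.std(c)
# center = cv2.fitEllipse(pts[keep])[0]   # needs >= 5 points
```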
I'm playing with eye gaze estimation using an IR camera. So far I have detected the two pupil center points as follows:
Detect the face using the Haar face cascade and set the ROI to the face.
Detect the eyes using the Haar eye cascade and label them as the left and right eye respectively.
Detect the pupil center by thresholding the eye region.
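Roughly, in code, the pipeline looks like this (the cascade files ship with OpenCV; the threshold value 40 is a placeholder to tune for NIR):

```python
import cv2

face_cc = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
eye_cc = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_eye.xml')

gray = cv2.imread('frame.png', cv2.IMREAD_GRAYSCALE)
for (fx, fy, fw, fh) in face_cc.detectMultiScale(gray, 1.3, 5):
    face_roi = gray[fy:fy + fh, fx:fx + fw]
    for (ex, ey, ew, eh) in eye_cc.detectMultiScale(face_roi):
        eye = face_roi[ey:ey + eh, ex:ex + ew]
        # The pupil is the darkest blob: inverse-threshold, then take
        # the centroid of the largest contour as the pupil center.
        _, th = cv2.threshold(eye, 40, 255, cv2.THRESH_BINARY_INV)
        cnts, _ = cv2.findContours(th, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
        if cnts:
            m = cv2.moments(max(cnts, key=cv2.contourArea))
            if m['m00'] > 0:
                cx, cy = m['m10'] / m['m00'], m['m01'] / m['m00']
```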
So far I've tried to find the gaze direction using the Haar eye boundary region, but this Haar eye rect does not reliably include the eye corner points, so the results were poor.
Then I tried to detect the eye corner points using GFTT, Harris corners, and FAST, but since I'm using an NIR camera the eye corners are not clearly visible, so I can't get the exact corner positions. I'm stuck here!
What other feature can be tracked easily from the face? I've heard about flandmark, but I think that won't work on IR-captured images either.
Is there any feature that can be extracted easily from the face images? Here I've attached my sample output image.
I would suggest flandmark, even if your intuition is the opposite - I used it in my master's thesis (which was about head pose estimation, a related topic). And if the question is whether it will work with the example image you've provided, I think it might detect features properly, even on a grayscale image; I believe flandmark converts the image to grayscale before applying the detector anyway (as the Haar detector does). Moreover, it surprisingly works well with low-resolution images, which is an advantage too (especially since you're saying the eye corners are not clearly visible).
flandmark can detect both eye corners, the mouth corners, and the nose tip (actually, I would not rely on the last one: in my experience, detecting the nose tip in a single image is quite noisy, though it works fine on an image sequence with some filtering, e.g. averaging or a Kalman filter). If you decide to use this technique and it works, please let us know!
I want to check whether the forehead in a given facial image is visible or covered by hair. For this, I need to get the boundary of the hair falling on the forehead. I tried using the Sobel operator and dilation to get the boundary, but what I get is only the boundary around the whole face, not the boundary of the hair falling on the forehead. I am using Otsu's algorithm to threshold the image. The background in my image is white and the hair color is black.
Can you suggest how I can get the boundary of the hair on the forehead? I know GrabCut works, but it takes too long to extract the hair portion.
Thank You!
Use the face Haar cascade to detect the face.
Use the eye cascade to detect one or both eyes.
Expand the face region from the top.
Estimate the forehead using the eye points and the face position.
This is the simplest way to detect the forehead; a rough sketch follows.
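A quick OpenCV/Python sketch of those steps (the "expand from top" fraction and the file name are placeholders):

```python
import cv2

face_cc = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
eye_cc = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_eye.xml')

gray = cv2.imread('face.jpg', cv2.IMREAD_GRAYSCALE)
faces = face_cc.detectMultiScale(gray, 1.3, 5)
if len(faces):
    x, y, w, h = faces[0]
    top = max(0, y - h // 8)  # expand the face box upward a little
    eyes = eye_cc.detectMultiScale(gray[y:y + h, x:x + w])
    if len(eyes):
        # Forehead = band between the raised top and the eye line.
        eye_top = y + min(ey for (ex, ey, ew, eh) in eyes)
        forehead = gray[top:eye_top, x:x + w]
```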
Since you already have the forehead region, I have a couple of alternative suggestions.
Use a Canny edge detector. If the skin colour is different from the hair, it should work.
If the above is not enough, use local binary patterns on the forehead region. This, along with the Canny edge image, should do it for you; see the sketch below.
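A sketch combining both suggestions (the forehead crop, the Canny thresholds, and the LBP cutoff are assumptions to tune; LBP here comes from scikit-image):

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

forehead = cv2.imread('forehead.png', cv2.IMREAD_GRAYSCALE)

# Edges where hair meets skin.
edges = cv2.Canny(forehead, 50, 150)

# Texture: with method='uniform' and P = 8, label P + 1 marks the
# 'non-uniform' patterns; dense non-uniform texture suggests hair.
P, R = 8, 1
lbp = local_binary_pattern(forehead, P, R, method='uniform')
hair_texture = np.uint8(lbp == P + 1) * 255

hair_mask = cv2.bitwise_or(edges, hair_texture)
```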