I want to check whether the forehead in a given facial image is visible or covered by hair. For this, I need to get the boundary of the hair falling on the forehead. I tried using the Sobel operator and dilation to get the boundary, but what I get is only the boundary around the whole face, not the boundary of the hair falling on the forehead. I am using Otsu's algorithm to threshold the image. The background in my image is white and the hair color is black.
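For reference, a minimal version of the attempt described above (Otsu threshold, Sobel, dilation) might look roughly like this:

```python
import cv2

# Grayscale input: white background, black hair
img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)

# Otsu's threshold separates the dark hair from the white background
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Sobel gradient magnitude, then dilation to thicken the edges;
# this traces the outer silhouette, not the hair/forehead boundary
gx = cv2.Sobel(mask, cv2.CV_64F, 1, 0)
gy = cv2.Sobel(mask, cv2.CV_64F, 0, 1)
edges = cv2.convertScaleAbs(cv2.magnitude(gx, gy))
edges = cv2.dilate(edges, cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)))
```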
Can you suggest how I can get the boundary of the hair on the forehead? I know GrabCut works, but it takes too long to extract the hair portion.
Thank You!
- Use the face Haar cascade to detect the face.
- Use the eye cascade to detect one or both eyes.
- Expand the face region from the top.
- Estimate the forehead using the eye points and the face position.

This is the simplest way to detect the forehead. A rough sketch follows.
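A minimal sketch of the steps above, assuming the cascade XML files that ship with the opencv-python package (the top margin is a guess):

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

img = cv2.imread("face.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
    eyes = eye_cascade.detectMultiScale(gray[y:y+h, x:x+w])
    if len(eyes):
        # Forehead: from slightly above the face box down to the topmost eye
        eye_top = min(ey for (ex, ey, ew, eh) in eyes)
        forehead = img[max(y - h // 10, 0):y + eye_top, x:x+w]
```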
Since you already have the forehead region, I have a couple of alternative suggestions.
Use the Canny edge detector. If the skin colour differs from the hair colour, it should work.
If that is not enough, use a local binary pattern on the forehead region. This, along with the Canny edge image, should do it for you.
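A sketch of both ideas, assuming the forehead crop from the previous answer (the Canny thresholds and LBP parameters are starting points, not tuned values):

```python
import cv2
from skimage.feature import local_binary_pattern

forehead = cv2.imread("forehead.png", cv2.IMREAD_GRAYSCALE)

# Canny: the hair/skin transition should appear if the contrast is decent
edges = cv2.Canny(forehead, 50, 150)

# LBP texture map: hair is much more textured than skin, so the LBP codes
# differ sharply between the two regions
lbp = local_binary_pattern(forehead, P=8, R=1, method="uniform")
```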
Related
I am struggling to get good results detecting objects with a series of images I have. The issue is that they are similar objects with a bit of real-world noise (in this case, bunches of bananas). Having them overlap is probably not helping.
My approach has been this tried and tested classic (sketched in code after the list):
- Convert to grayscale
- Blur the image slightly
- Apply Canny edge detection
- Dilate the image
- Find the contours
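The pipeline above in OpenCV, roughly (the kernel sizes and Canny thresholds are guesses to tune; findContours assumes OpenCV 4.x):

```python
import cv2

img = cv2.imread("bananas.jpg")

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)          # 1. grayscale
blur = cv2.GaussianBlur(gray, (5, 5), 0)              # 2. slight blur
edges = cv2.Canny(blur, 50, 150)                      # 3. Canny edges
dilated = cv2.dilate(edges, None, iterations=2)       # 4. dilation
contours, _ = cv2.findContours(dilated, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)  # 5. contours
```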
Right now I have been trying Canny edge detection, but the edges aren't quite right, so when I find the contours I get all sorts of funny shapes (and only sometimes the bananas).
I have also tried Sobel edge detection, with even poorer results.
example pic before contour detection:
example pic after contour detection:
Can somebody please give me some suggestions on how I can refine this?
I am currently working on a method to extract colors from a Macbeth color chart. So far I have had moderate success by thresholding and then extracting square contours. Sadly though, colors that are too close to each other either blend together or do not get detected.
The code in its current form:
https://pastebin.com/mNi0TcDE
The image before any processing
After thresholding, you can see that there are areas where the lines are incomplete due to too-small differences in color. I have tried to use dilation to mitigate these issues, and it does work to a degree, but not enough to detect all squares.
Image after thresholding
This results in the following contours being detected
Detected contours
I have tried using:
- Hough lines: sadly, no lines were detected here.
- Centroids of contours, but I was unable to find a way to use the centroids to draw lines and detect the centers of the missing contours.
- Corner detection: corners were found, but I was unsuccessful in finding a real way to put them to use.
Can anyone point me in the right direction?
Thanks in advance,
Emil
Hmm, if your goal is color calibration, you really do not need to detect the squares in their entirety. A 10x10 pixel sample near the center of each physical square gives you 100 color samples, which is plenty for any reasonable calibration procedure.
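For example, if you have even a rough guess at each square's center (the coordinates would come from your own layout estimate), sampling is a one-liner per patch:

```python
import numpy as np

def patch_mean(img, cx, cy, half=5):
    """Mean BGR color of a 10x10 patch centered on (cx, cy)."""
    patch = img[cy - half:cy + half, cx - half:cx + half]
    return patch.reshape(-1, 3).mean(axis=0)
```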
There are many ways to approach this problem. If you can guarantee that the chart will cover the image, you could even just do k-means clustering, since you know in advance the exact number of clusters you seek.
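A minimal k-means sketch with OpenCV, assuming (as noted above) that the chart covers nearly the whole frame so the 24 patches dominate the pixel population:

```python
import cv2
import numpy as np

img = cv2.imread("chart.jpg")
pixels = img.reshape(-1, 3).astype(np.float32)

K = 24  # a Macbeth chart has 24 patches
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 50, 1.0)
_, labels, centers = cv2.kmeans(pixels, K, None, criteria, 5,
                                cv2.KMEANS_PP_CENTERS)
# `centers` now holds one representative BGR color per patch
```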
If you insist on using geometry, I'd do template matching in scale+angle space. It is reasonable to assume that the chart will be mostly facing the camera and only slightly rotated, so you only need to estimate the scale and a small rotation about the axis orthogonal to the chart.
I'm playing with eye gaze estimation using an IR camera. So far I have detected the two pupil center points as follows (a rough sketch follows the list):
- Detect the face using the Haar face cascade and set the ROI to the face.
- Detect the eyes using the Haar eye cascade and label them as the left and right eye respectively.
- Find the pupil center by thresholding the eye region.
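For reference, a rough OpenCV version of the three steps above (the pupil threshold of 40 is a placeholder; IR imagery usually needs its own value):

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

gray = cv2.imread("ir_frame.png", cv2.IMREAD_GRAYSCALE)
for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.2, 5):
    face = gray[y:y+h, x:x+w]
    for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(face):
        eye = face[ey:ey+eh, ex:ex+ew]
        # The pupil is the darkest blob: invert-threshold, largest contour
        _, mask = cv2.threshold(eye, 40, 255, cv2.THRESH_BINARY_INV)
        cnts, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
        if cnts:
            m = cv2.moments(max(cnts, key=cv2.contourArea))
            if m["m00"]:
                cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
```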
So far I've tried to find the gaze direction using the Haar eye boundary region, but this Haar eye rect does not always include the eye corner points, so the results were poor.
Then I tried to detect the eye corner points using GFTT, Harris corners, and FAST, but since I'm using an NIR camera the eye corners are not clearly visible, so I can't get the exact corner positions. I'm stuck here!
What else is the best feature that can be tracked easily on the face? I have heard about flandmark, but I think it will also not work on IR-captured images.
Is there any feature that can be extracted easily from face images? I've attached my sample output image.
I would suggest flandmark, even if your intuition says otherwise; I used it in my master's thesis (which was about head pose estimation, a related topic). As to whether it will work with the example image you've provided: I think it might detect the features properly, even on a grayscale image. flandmark probably converts the image to grayscale internally before applying the detector (as the Haar detector does). Moreover, it surprisingly works well on low-resolution images, which is another advantage (especially since you say the eye corners are not clearly visible).

flandmark can detect both eye corners, the mouth corners, and the nose tip. I would not rely on the nose tip, though: in my experience, detecting it in a single image is quite noisy, but it works fine on an image sequence with some filtering, e.g. averaging or a Kalman filter. If you decide to use this technique and it works, please let us know!
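On the filtering point: a constant-velocity Kalman filter over a noisy landmark such as the nose tip is only a few lines with OpenCV (`detections` here is a hypothetical list of per-frame (x, y) measurements):

```python
import cv2
import numpy as np

kf = cv2.KalmanFilter(4, 2)  # state: x, y, vx, vy; measurement: x, y
kf.transitionMatrix = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                                [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

for (x, y) in detections:                 # hypothetical raw landmark track
    kf.predict()
    smoothed = kf.correct(np.array([[x], [y]], np.float32))
```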
I have an image of a hand that was detected using this link. It's hand detection using the HSV color space.
Now I face a problem: I need to get an enclosing area, or bounding lines close enough to determine the hand area, then fill the enclosed area and subtract it from the original to remove the hand.
So far I have tried blurring the image to reduce noise, dilating the image, closing holes, etc., which seems like overkill. I have tried contours, and that seems to be the best approach so far. I was trying to get the (largest) convex hull, and I ended up with the following after testing different thresholds.
The inaccuracy can be seen at the thumb, where the hull straightens out; it should be curved. I am trying to figure out the location of the hand so as to identify the region it covers; I am going to subtract that region to remove the hand from the original image. That is what I want to achieve.
Is there a better approach to this?
Any ideas or suggestions are greatly appreciated.
The original and detected images are as follows:
Instead of the convex hull, consider using the alpha hull, which can better follow the contours of a shape by allowing concavities.
This site has a nice summary of alpha shapes: "Everything You Always Wanted to Know About Alpha Shapes But Were Afraid to Ask" by François Bélair.
http://cgm.cs.mcgill.ca/~godfried/teaching/projects97/belair/alpha.html
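There is no alpha shape built into OpenCV, but one can be derived from a Delaunay triangulation: keep only triangles whose circumradius is below 1/alpha, and the boundary is the set of edges used by exactly one kept triangle. A sketch (the `alpha` value needs tuning per image scale):

```python
import numpy as np
from scipy.spatial import Delaunay

def alpha_shape_edges(points, alpha):
    """Boundary edges of the alpha shape of a 2-D point set (e.g. the
    hand contour points). Keeps triangles with circumradius < 1/alpha."""
    tri = Delaunay(points)
    edge_count = {}
    for ia, ib, ic in tri.simplices:
        a, b, c = points[ia], points[ib], points[ic]
        la, lb, lc = (np.linalg.norm(b - c), np.linalg.norm(a - c),
                      np.linalg.norm(a - b))
        s = (la + lb + lc) / 2.0                         # Heron's formula
        area = max(s * (s - la) * (s - lb) * (s - lc), 1e-12) ** 0.5
        if (la * lb * lc) / (4.0 * area) < 1.0 / alpha:  # circumradius test
            for e in ((ia, ib), (ib, ic), (ic, ia)):
                key = tuple(sorted(e))
                edge_count[key] = edge_count.get(key, 0) + 1
    return [e for e, n in edge_count.items() if n == 1]
```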
As David mentioned in his post, consider thresholding using HSV (or HSI) color space rather than on RGB or grayscale. If you can allow for longer processing time, you can use an algorithm such as Mean Shift to segment trickier images like yours. OpenCV has an implementation of Mean Shift, and the book Learning OpenCV provides a concise description of the algorithm.
Image Segmentation using Mean Shift explained
In any case, a standard global binarization threshold doesn't appear to be helping much. Consider using a dynamic threshold; at least a local/dynamic threshold is implemented in OpenCV, from what I recall.
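Both ideas in OpenCV, roughly (the mean shift windows and the adaptive threshold block size are guesses to tune):

```python
import cv2

img = cv2.imread("hand.jpg")

# Mean shift filtering: sp = spatial window, sr = color window
smoothed = cv2.pyrMeanShiftFiltering(img, sp=15, sr=40)

# Local (adaptive) threshold instead of one global binarization value
gray = cv2.cvtColor(smoothed, cv2.COLOR_BGR2GRAY)
mask = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                             cv2.THRESH_BINARY, 31, 5)
```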
Assuming you want to identify the hand area rather than the area the convex hull gives, and the background of the application is at least a consistent color, I would apply an HSV threshold to identify the background instead of the hand, if possible, or maybe an adaptive threshold if the light distribution is not consistent. I believe this is what many applications do.
If the background can't be fixed, segmentation is not an easy problem to solve, as you have to take care of shadows and palm lines.
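If the background is fixed, thresholding it away could look like this (the HSV bounds below assume a bright, unsaturated background and must be tuned to the actual setup):

```python
import cv2
import numpy as np

img = cv2.imread("hand.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

lower = np.array([0, 0, 180])      # placeholder background range
upper = np.array([180, 40, 255])
background = cv2.inRange(hsv, lower, upper)
hand_mask = cv2.bitwise_not(background)  # everything that is not background
```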
I am working on face detection using Core Image. I am facing a problem with placing points on the face boundaries. I actually have to place points on the face boundaries and make those points movable.
Please share your ideas with me..
Thanks in Advance...
I will suggest four ways. I have not done them in iOS; I implemented them in Java. For face boundary detection you can use the following methods:
- Skin color thresholding followed by a chin curve estimate and a shrinking-and-growing algorithm.
- Canny edge detection followed by dividing the face into four regions and taking the longest face edge map.
- Adaptive active contour model (snake algorithm).
- Hair region detection followed by a chin curve estimate and removing every region except the hair and chin. (Doesn't work for bald heads.)

If there are shadows on the face, use the snake algorithm. If the faces in the images are clear, go for skin color thresholding (a rough sketch follows). Canny edge detection can give you very slow performance.
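A minimal sketch of the skin color thresholding option, using the commonly cited Cr/Cb skin range in YCrCb (treat these bounds as starting points, not universal values):

```python
import cv2
import numpy as np

img = cv2.imread("face.jpg")
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)

lower = np.array([0, 133, 77])     # Y, Cr, Cb lower bounds
upper = np.array([255, 173, 127])  # Y, Cr, Cb upper bounds
skin = cv2.inRange(ycrcb, lower, upper)
skin = cv2.morphologyEx(skin, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
```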
For further clarification, read this: FACE REGION DETECTION