I am trying to segment the cheeks in a face image. I've tried image segmentation but that segments the whole face and removes the background. I have also tried contouring but I can't seem to locate the contours in the image. Can anyone help me?
I am trying to determine the size of the bubbles in this picture:
Normally I achieve this readily through thresholding, marker-based watershed segmentation, and regionprops using OpenCV. However, with these pictures the lighting is much worse, so my usual methodology is not working.
Normally, I would apply a local threshold, fill the open holes in the bubbles (caused by the reflection of the light) to create the bubble "area", and then segment using watershed.
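In code, that usual pipeline looks roughly like this (a minimal OpenCV/C++ sketch of my normal approach; the block size, kernel size, and distance-transform fraction are illustrative values that I tune per image):

    #include <opencv2/opencv.hpp>

    int main() {
        cv::Mat gray = cv::imread("bubbles.png", cv::IMREAD_GRAYSCALE);

        // Local (adaptive) threshold to cope with uneven lighting
        cv::Mat bin;
        cv::adaptiveThreshold(gray, bin, 255, cv::ADAPTIVE_THRESH_GAUSSIAN_C,
                              cv::THRESH_BINARY_INV, 51, 2);

        // Close the bright reflections inside the bubbles to get solid "areas"
        cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(7, 7));
        cv::morphologyEx(bin, bin, cv::MORPH_CLOSE, kernel);

        // Markers: sure foreground from the distance transform, sure background by dilation
        cv::Mat dist;
        cv::distanceTransform(bin, dist, cv::DIST_L2, 5);
        double maxDist;
        cv::minMaxLoc(dist, nullptr, &maxDist);
        cv::Mat sureFg;
        cv::threshold(dist, sureFg, 0.5 * maxDist, 255, cv::THRESH_BINARY);
        sureFg.convertTo(sureFg, CV_8U);
        cv::Mat sureBg;
        cv::dilate(bin, sureBg, kernel, cv::Point(-1, -1), 3);
        cv::Mat unknown = sureBg - sureFg;

        // Label the sure-foreground blobs and run the watershed
        cv::Mat markers;
        cv::connectedComponents(sureFg, markers);
        markers = markers + 1;              // shift labels so the background becomes 1
        markers.setTo(cv::Scalar(0), unknown);  // unknown region stays 0 for watershed to flood
        cv::Mat color;
        cv::cvtColor(gray, color, cv::COLOR_GRAY2BGR);
        cv::watershed(color, markers);      // each bubble ends up with its own label

        return 0;
    }

Each labelled region then gives me an area (the regionprops step), from which I build the size distribution.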
Now, however, no matter what I try, I cannot get a good threshold. Because the bubbles do not form closed circles, I cannot get a bubble "area" to segment, as shown here:
I have tried Canny edge detection, Otsu thresholding, and different threshold levels, but nothing is working for me.
Does anyone have any advice on how I would go about segmenting this image?
Any help is greatly appreciated.
I am trying to determine the bubble size distribution of this image using image segmentation.
Scenario
I am writing a program to detect hair from skin. So far, I have done the following (a rough sketch of these steps is included after the code below):
Loaded source image and applied grabcut to remove background
Applied skin detection to obtain skin
Performed an AND operation to obtain the hair, along with other regions
Ran contour detection to obtain the contour with the maximum area
Imgproc.drawContours(mask, contours, maxAreaIndex, new Scalar(255, 255, 255), 1);
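Roughly, steps 1-4 look like this (sketched here in the OpenCV C++ API, although I actually work with the Java bindings, where the calls are essentially the same; the GrabCut rectangle and the YCrCb skin range are illustrative values):

    #include <opencv2/opencv.hpp>
    #include <vector>

    int main() {
        cv::Mat src = cv::imread("person.jpg");

        // 1) GrabCut with a rectangle around the person to remove the background
        cv::Mat gcMask(src.size(), CV_8UC1, cv::Scalar(cv::GC_BGD));
        cv::Mat bgdModel, fgdModel;
        cv::Rect rect(10, 10, src.cols - 20, src.rows - 20);  // illustrative rectangle
        cv::grabCut(src, gcMask, rect, bgdModel, fgdModel, 5, cv::GC_INIT_WITH_RECT);
        // GC_FGD (1) and GC_PR_FGD (3) both have the lowest bit set -> foreground mask
        cv::Mat foreground = (gcMask & 1) * 255;

        // 2) Skin detection in YCrCb (a common heuristic range, to be tuned)
        cv::Mat ycrcb, skin;
        cv::cvtColor(src, ycrcb, cv::COLOR_BGR2YCrCb);
        cv::inRange(ycrcb, cv::Scalar(0, 133, 77), cv::Scalar(255, 173, 127), skin);

        // 3) Foreground AND NOT-skin = hair plus some other non-skin regions
        cv::Mat notSkin, hairish;
        cv::bitwise_not(skin, notSkin);
        cv::bitwise_and(foreground, notSkin, hairish);

        // 4) Contour with the maximum area, assumed to be the hair
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(hairish, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
        if (contours.empty()) return 0;
        int maxAreaIndex = 0;
        double maxArea = 0;
        for (int i = 0; i < (int)contours.size(); i++) {
            double a = cv::contourArea(contours[i]);
            if (a > maxArea) { maxArea = a; maxAreaIndex = i; }
        }
        cv::Mat mask = cv::Mat::zeros(src.size(), CV_8UC1);
        cv::drawContours(mask, contours, maxAreaIndex, cv::Scalar(255), 1);
        return 0;
    }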
Problem
When I try to fill the mask obtained from findContours using the code
Imgproc.drawContours(mask, contours, maxAreaIndex, new Scalar(255, 255, 255), Core.FILLED);
it fills the contour like this:
Now, I cannot use this as a mask, since it would produce a result like this:
which is not what I want.
Can anyone suggest how I can achieve a filled contour for this problem?
You could apply a dilation-erosion (closing) filter before extracting the contours, to make sure you get a full, closed contour. Essentially, what the closing does is merge contour areas that are near each other but would normally remain separate: the dilation expands them until they intersect, and the erosion then shrinks the merged region back down so that it stays correctly scaled. Check here for more information on this topic.
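For example, in the C++ API (the Java bindings expose the same call as Imgproc.morphologyEx), with a kernel size you would need to tune for your image resolution:

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Close gaps in the binary hair mask before extracting contours
    std::vector<std::vector<cv::Point>> contoursAfterClosing(const cv::Mat& mask) {
        // 15x15 elliptical kernel is an illustrative size; larger kernels close bigger gaps
        cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(15, 15));
        cv::Mat closed;
        cv::morphologyEx(mask, closed, cv::MORPH_CLOSE, kernel);

        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(closed, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
        return contours;
    }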
HOWEVER, this method assumes that you have detected mostly the hair in your image. What you have done is different: you have detected the NOT-SKIN regions in your picture and assumed that the hair was the largest contour. Because of this, when you close your contours, you may run into cases where you detect both the shirt and the hair, especially on images where the person's shirt is cut higher. I would add an additional step to your pipeline that tries to segment out the person's shirt before applying this filtering.
Good Luck!
I am using the OpenCV library. Histogram equalization or normalization does not give a good output, and the sharpness of the bone also goes down.
I need an output that keeps the bone sharp without the black area at the top. Please help.
Also, if my question is not clear, please give me feedback so that I can make it clearer. Thank you for your support and suggestions.
Picture link is here
This is caused by the different illumination in different parts of the image; it is common in X-ray images.
I think you need an adaptive threshold for better results.
Try dividing the image into two parts, the centre and the borders. Apply the equalization in the centre and do the same in the border region.
You can also do it in blocks (tiles); this preserves the local information in each small tile (see the sketch below).
Look at this:
Equalization
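The block/tile idea is available directly in OpenCV as CLAHE (contrast-limited adaptive histogram equalization). A minimal sketch, where the clip limit and the tile grid size are values you would tune for your X-ray:

    #include <opencv2/opencv.hpp>

    int main() {
        cv::Mat xray = cv::imread("xray.png", cv::IMREAD_GRAYSCALE);

        // Equalize in 8x8 tiles; the clip limit keeps the noise from being over-amplified
        cv::Ptr<cv::CLAHE> clahe = cv::createCLAHE(2.0, cv::Size(8, 8));
        cv::Mat result;
        clahe->apply(xray, result);

        cv::imwrite("xray_clahe.png", result);
        return 0;
    }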
I want to check whether the forehead is visible in a given facial image or is covered by hair. For this, I need to get the boundary of the hair falling on the forehead. I tried using the Sobel operator and dilation to get the boundary, but what I get is only the boundary around the whole face, not the boundary of the hair falling on the forehead. I am using Otsu's algorithm to threshold the image. The background in my image is white and the hair colour is black.
Can you suggest how I can get the boundary of the hair on the forehead? I know GrabCut works, but it takes too long to extract the hair portion.
Thank You!
Use the face Haar cascade to detect the face.
Use the eye cascade to detect one or both eyes.
Expand the face region upward from the top.
Estimate the forehead using the eye positions and the face position.
This is the simplest way to detect the forehead (a rough sketch is shown below).
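A rough sketch of these steps (the cascade file names are the standard ones shipped with OpenCV, and the fractions used to size the forehead box are illustrative):

    #include <opencv2/opencv.hpp>
    #include <algorithm>
    #include <vector>

    int main() {
        cv::Mat img = cv::imread("face.jpg");
        cv::Mat gray;
        cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);

        cv::CascadeClassifier faceCascade("haarcascade_frontalface_default.xml");
        cv::CascadeClassifier eyeCascade("haarcascade_eye.xml");

        std::vector<cv::Rect> faces;
        faceCascade.detectMultiScale(gray, faces);

        for (const cv::Rect& face : faces) {
            std::vector<cv::Rect> eyes;
            eyeCascade.detectMultiScale(gray(face), eyes);
            if (eyes.empty()) continue;

            // Top of the highest detected eye, in full-image coordinates
            int eyeTop = face.y + eyes[0].y;
            for (const cv::Rect& e : eyes) eyeTop = std::min(eyeTop, face.y + e.y);

            // Forehead: from slightly above the face box down to the eye line
            int top = std::max(0, face.y - face.height / 8);  // expand upwards a bit
            if (eyeTop <= top) continue;
            cv::Rect forehead(face.x, top, face.width, eyeTop - top);
            cv::rectangle(img, forehead, cv::Scalar(0, 255, 0), 2);
        }

        cv::imwrite("forehead.jpg", img);
        return 0;
    }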
Since you already have the forehead region, I have a couple of alternative suggestions.
Use the Canny edge detector. If the skin colour is different from the hair, it should work.
If the above is not enough, use local binary patterns (LBP) on the forehead region. This, along with the Canny edge image, should do it for you (the Canny part is sketched below).
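A sketch of the Canny part, assuming you pass in the grayscale forehead region from the previous step; the thresholds and the edge-pixel ratio are illustrative values to tune:

    #include <opencv2/opencv.hpp>

    // Returns true if the forehead region contains enough edges to suggest hair falling on it
    bool foreheadLikelyCovered(const cv::Mat& foreheadGray) {
        cv::Mat edges;
        cv::Canny(foreheadGray, edges, 50, 150);  // Canny thresholds to tune

        double edgeRatio = cv::countNonZero(edges) / static_cast<double>(edges.total());
        return edgeRatio > 0.05;  // illustrative cutoff: bare skin should give very few edges
    }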
I need to process some images in a real-time situation. I am receiving the images from a camera using OpenCV. The language I use is C++. An example of the images is attached. After applying some threshold filters I have an image like this. Of course there may be some pixel noise here and there, but not that much.
I need to detect the center and the rotation of the squares, and the center of the white circles. I'm totally clueless about how to do it, as it needs to be really fast. The number of squares can be predefined. Any help would be great, thanks in advance.
Is the following straightforward approach too slow? (A rough code sketch follows the list.)
Binarize the image so that the originally green background is black and the rest (black squares and white dots) is white.
Use cv::findContours.
Get the centers.
Binarize the image so that everything except the white dots is black.
Use cv::findContours.
Get the centers.
Assign every dot contour to the square contour that contains it.
Calculate each square's rotation from the angle of the line between its center and the center of its dot.
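A rough sketch of the whole approach (the HSV range used to remove the green background and the brightness threshold for the white dots are illustrative values to tune):

    #include <opencv2/opencv.hpp>
    #include <cmath>
    #include <vector>

    int main() {
        cv::Mat bgr = cv::imread("frame.png");

        // 1) Everything that is not the green background becomes white
        cv::Mat hsv, notBackground;
        cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
        cv::inRange(hsv, cv::Scalar(35, 40, 40), cv::Scalar(85, 255, 255), notBackground);
        cv::bitwise_not(notBackground, notBackground);

        // 2) Square contours (each square and its dot form one solid blob)
        std::vector<std::vector<cv::Point>> squares;
        cv::findContours(notBackground, squares, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

        // 3) White dots: keep only the bright pixels
        cv::Mat gray, dots;
        cv::cvtColor(bgr, gray, cv::COLOR_BGR2GRAY);
        cv::threshold(gray, dots, 200, 255, cv::THRESH_BINARY);
        std::vector<std::vector<cv::Point>> dotContours;
        cv::findContours(dots, dotContours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

        // 4) Assign each dot to the square whose contour contains it, then compute the rotation
        for (const auto& sq : squares) {
            cv::Moments ms = cv::moments(sq);
            if (ms.m00 == 0) continue;
            cv::Point2f sqCenter(ms.m10 / ms.m00, ms.m01 / ms.m00);

            for (const auto& dot : dotContours) {
                cv::Moments md = cv::moments(dot);
                if (md.m00 == 0) continue;
                cv::Point2f dotCenter(md.m10 / md.m00, md.m01 / md.m00);

                if (cv::pointPolygonTest(sq, dotCenter, false) > 0) {
                    double angle = std::atan2(dotCenter.y - sqCenter.y,
                                              dotCenter.x - sqCenter.x);
                    // 'angle' encodes the square's rotation, up to the dot's known offset
                }
            }
        }
        return 0;
    }

findContours plus image moments on a binary image is cheap, so for a handful of squares this should be fast enough for a real-time loop.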