Scenario
I am writing a program to separate hair from skin. So far, I have done this:
Loaded source image and applied grabcut to remove background
Applied skin detection to obtain skin
Performed ANDing to obtain Hair, along with other regions
Ran contour detection to obtain contour with maximum size
Imgproc.drawContours(mask, contours, maxAreaIndex, new Scalar(255,255,255),1);
Problem
When I try to fill the mask obtained by running findContours, using the code
Imgproc.drawContours(mask, contours, maxAreaIndex, new Scalar(255,255,255),Core.FILLED);
it fills the contour like this:
Now, I cannot use this as a mask, since it will produce a result like this:
which is not what I want.
Can anyone suggest how I can achieve a filled contour for this problem?
You could apply some dilation-erosion (closing) filtering before extracting the contours, to be sure that you get a full contour. Essentially, closing merges nearby areas that would normally be separate: dilation expands them until they intersect, and erosion then shrinks the merged region back down so that it remains correctly scaled. Check here for more information on this topic.
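A minimal sketch of that closing step, written in Python with the cv2 bindings for brevity (the equivalent calls exist in the Java API); the mask filename and kernel size are assumptions to tune for your images:

```python
import cv2
import numpy as np

mask = cv2.imread("hair_mask.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input mask

# Closing: dilation followed by erosion, to seal small gaps in the contour.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))  # size is a guess; tune it
closed = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

# Note: OpenCV 3.x returns (image, contours, hierarchy) here instead of two values.
contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
max_idx = max(range(len(contours)), key=lambda i: cv2.contourArea(contours[i]))

filled = np.zeros_like(closed)
cv2.drawContours(filled, contours, max_idx, 255, cv2.FILLED)
```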
HOWEVER, this method assumes that you have detected mostly the hair in your image. What you have done is different; you have detected the NOT SKIN in your picture and assumed that the hair was the largest contour. Because of this, when you close your contours, especially on images where the person's shirt is higher cut, you may run into issues where you detect both the shirt and the hair. I would add a step to your pipeline that attempts to segment out the shirt before applying this filtering.
Good Luck!
Related
I am working on accurately segmenting objects from an image.
I have found contour lines by using a simple rectangular prism in HSV space as a color filter (followed by some morphological operations on the resulting mask to clear up noise). I found this approach better than applying Canny edge detection to the whole image, as that just picked up a lot of other edges I don't care about.
Is there a way to go about refining the contour line I have extracted such that it clips to the strongest local edge kind of like Adobe Photoshop's smart cropping utility?
Here's an image of what I mean
You can see a boundary between the sky blue and the gray. The dark blue is a drawn-on contour. I'd like to somehow clip this to the nearby edge. It also looks like there are other lines in the gray region, so I think the algorithm should do some sort of more global optimisation to ensure that the "clipping" action doesn't jump randomly between my boundary of interest and the nearby lines.
Here are some ideas to try:
Morphological snakes: https://scikit-image.org/docs/dev/auto_examples/segmentation/plot_morphsnakes.html
Active contours: https://scikit-image.org/docs/dev/auto_examples/edges/plot_active_contours.html (a minimal sketch follows below)
Whatever livewire is doing under the hood: https://github.com/PyIFT/livewire-gui
Based on this comment, the last one is the most useful.
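Livewire is interactive and hard to show in a few lines, but here is a minimal sketch of the active-contour idea with scikit-image, assuming your extracted contour is available as an (N, 2) array of points; the image name and the alpha/beta/gamma weights are assumptions to tune:

```python
import numpy as np
from skimage import io, color, filters
from skimage.segmentation import active_contour

img = color.rgb2gray(io.imread("scene.png"))  # hypothetical input image

# Your extracted contour as (row, col) points (recent scikit-image uses this order);
# active_contour pulls it toward strong nearby edges while keeping it smooth.
init = np.asarray(contour_points, dtype=float)  # hypothetical (N, 2) array

snake = active_contour(
    filters.gaussian(img, sigma=3),  # smoothing widens the edge's basin of attraction
    init,
    alpha=0.015,  # elasticity: resists stretching
    beta=10,      # rigidity: resists bending, discouraging jumps to other nearby lines
    gamma=0.001)  # step size
```

The high beta is one way to get the "global-ish" behavior you describe: a stiff snake cannot zig-zag between your boundary of interest and the spurious lines next to it.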
I am currently working on a method to extract colors from a Macbeth color chart. So far I have had moderate success by using thresholding and then extracting square contours. Sadly though, colors that are too close to each other either mix together or do not get detected.
The code in its current form:
https://pastebin.com/mNi0TcDE
The image before any processing
After thresholding, you can see that there are areas where lines are incomplete due to differences in color that are too small. I have tried to use dilation to mitigate these issues, and it works to a degree, but not enough to detect all squares.
Image after thresholding
This results in the following contours being detected
Detected contours
I have tried using:
Hough lines; sadly, no lines were detected here.
Centroids of contours, but I was unable to find a way to use the centroids to draw lines and detect the centers of the missing contours.
Corner detection; corners were found, but I was unsuccessful in finding a real way to put them to use.
Can anyone point me in the right direction?
Thanks in advance,
Emil
Hmm, if your goal is color calibration, you really do not need to detect the squares in their entirety. A 10x10 pixel sample near the center of each physical square gives you 100 color samples, which is plenty for any reasonable calibration procedure.
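A minimal sketch of that sampling, assuming you already know (or can click on) the center of each square; the helper name is hypothetical:

```python
import numpy as np

def patch_mean(img, cx, cy, half=5):
    # Mean BGR color of the 10x10 patch centered on (cx, cy).
    patch = img[cy - half:cy + half, cx - half:cx + half]
    return patch.reshape(-1, 3).mean(axis=0)
```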
There are many ways to approach this problem. If you can guarantee that the chart will cover the image, you could even just do k-means clustering, since you know in advance the exact number of clusters you seek.
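A minimal k-means sketch with OpenCV, assuming a classic 24-patch Macbeth chart fills the frame; the filename is a placeholder:

```python
import cv2
import numpy as np

img = cv2.imread("chart.jpg")  # hypothetical input
pixels = img.reshape(-1, 3).astype(np.float32)

criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
# 24 clusters = the 24 patches of a classic Macbeth chart
_, labels, centers = cv2.kmeans(pixels, 24, None, criteria, 5, cv2.KMEANS_PP_CENTERS)
# 'centers' now holds one representative BGR color per patch
```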
If you insist on using geometry, I'd do template matching in scale+angle space. It is reasonable to assume that the chart will be mostly front-facing and only slightly rotated, so you only need to estimate the scale and a small rotation about the axis orthogonal to the chart.
I ran findContours on the following image:
And got the following contour image (I'm showing only "parent" contours according to the hierarchy):
As you can see, there are many different contours around each object (each one in a different color). Now, I want to unify the contours around the person to obtain one enclosing contour, so I can segment her out from the image.
I'm not sure that it can be done, but I thought I should ask here.
Is there any method to intelligently unify the contours in the image so I could segment different objects out?
Thanks,
Gil.
First, do you want to achieve this result only on this image, or on any other image where different people appear in different poses and different clothes?
If you want to segment only this image, then you can achieve it with some color thresholding or some morphology operations. But to make it work for any image with different persons, you may well need to pursue a PhD in computer vision.
But if your task is segmentation only, then I would suggest a semi-automatic segmentation technique like GrabCut or graph cut. These are very popular segmentation algorithms which are readily available in OpenCV and MATLAB. They work very well on all kinds of images. Here is the result of the GrabCut algorithm on your image.
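A minimal GrabCut sketch with OpenCV's Python bindings; the filename and rectangle are placeholders you would set for your image:

```python
import cv2
import numpy as np

img = cv2.imread("person.jpg")       # hypothetical input
rect = (50, 20, 300, 450)            # hypothetical box drawn around the person

mask = np.zeros(img.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)
cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Keep definite and probable foreground pixels.
fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
segmented = cv2.bitwise_and(img, img, mask=fg)
```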
There is a lot of work on contour-based segmentation in the literature.
The ultrametric contour map produces a hierarchy of contours, which are segmentations of the objects in an input image.
Pub: Contour Detection and Hierarchical Image Segmentation, by Pablo Arbelaez, Michael Maire, Charless Fowlkes, Jitendra Malik.
I have an image of a hand that was detected using this link. It is hand detection using the HSV color space.
Now I face a problem: I need to get the enclosing area / draw bounding lines good enough to determine the hand area, then fill the enclosing area and subtract it from the original to remove the hand.
So far I have tried blurring the image to reduce noise, dilating the image, closing holes, etc., which seems to be overkill. I have tried contours, and that seems to be the best approach so far. I was trying to get the (largest) convex hull, and I ended up with the following after testing with different thresholds.
The inaccuracies can be seen at the thumb, where the hull straightens; it should be curved. I am trying to figure out the location of the hand so I can identify the region it covers, then subtract it to remove the hand from the original image. That is what I want to achieve.
Is there a better approach to this?
Any ideas or suggestions are greatly appreciated.
The original and detected images are as follows:
Instead of the convex hull, consider using the alpha hull, which can better follow the contours of a shape by allowing concavities.
This site has a nice summary of alpha shapes: "Everything You Always Wanted to Know About Alpha Shapes But Were Afraid to Ask" by François Bélair.
http://cgm.cs.mcgill.ca/~godfried/teaching/projects97/belair/alpha.html
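A minimal sketch using the third-party alphashape package (an assumption; it is not part of OpenCV), starting from the points of a contour found with cv2.findContours; the alpha value must be tuned to the image scale:

```python
# pip install alphashape shapely   (third-party packages, not OpenCV)
import alphashape

points = [tuple(p) for p in contour.reshape(-1, 2)]  # contour from cv2.findContours
# alpha=0 reproduces the convex hull; larger values follow concavities more tightly.
hull = alphashape.alphashape(points, 0.05)  # returns a shapely polygon
```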
As David mentioned in his post, consider thresholding using HSV (or HSI) color space rather than on RGB or grayscale. If you can allow for longer processing time, you can use an algorithm such as Mean Shift to segment trickier images like yours. OpenCV has an implementation of Mean Shift, and the book Learning OpenCV provides a concise description of the algorithm.
Image Segmentation using Mean Shift explained
In any case, a standard binarization threshold doesn't appear to be helping much. Consider using a dynamic threshold; at least a local/dynamic threshold is implemented for contours in OpenCV, from what I recall.
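A minimal sketch of both suggestions with OpenCV's Python bindings, using pyrMeanShiftFiltering for the mean-shift smoothing and adaptiveThreshold as the dynamic threshold; the radii and block size are assumptions to tune:

```python
import cv2

img = cv2.imread("hand.jpg")  # hypothetical input
smoothed = cv2.pyrMeanShiftFiltering(img, sp=15, sr=30)  # spatial / color window radii

gray = cv2.cvtColor(smoothed, cv2.COLOR_BGR2GRAY)
# Threshold computed per 31x31 neighborhood instead of one global value.
binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY, 31, 5)
```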
Assuming you want to identify the hand area instead of the area the convex hull gives, and the background is at least a consistent color, I would apply an HSV threshold to identify the background instead of the hand, if possible. Or maybe an adaptive threshold if the light distribution is not consistent. I believe this is what many applications do.
If the background can't be fixed, the segmentation is not an easy problem to solve, as you have to take care of shadows and palm lines.
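A minimal sketch of that background-first HSV threshold, assuming a roughly uniform bluish background; the hue range is a placeholder to calibrate against your actual setup:

```python
import cv2
import numpy as np

img = cv2.imread("hand.jpg")  # hypothetical input
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Hypothetical background range; measure your real background color to set it.
bg = cv2.inRange(hsv, np.array([90, 40, 40]), np.array([130, 255, 255]))
hand_mask = cv2.bitwise_not(bg)  # everything that is not background
```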
I uploaded an example image for better understanding: http://www.imagebanana.com/view/kaja46ko/test.jpg
In the image you can see some scanlines and a marker (the white rectangle with the circle in it). I want OpenCV to go along a specified area (outlined through the scanlines in the example) that should be around 5x5 pixels. If that area contains a gradient from black to white, I want OpenCV to save the position of that area so that I can work with it later.
The final goal is to differentiate between the marker and the other rectangles separated by black and white lines.
Is something like that possible? I googled a lot, but I only found edge detectors, and that's not what I want; I really need the detection of the black-to-white gradient only.
Thanks in advance.
It would be a good idea to filter out some of the areas by calculating their histogram.
You can use cvCalcHist for the task; then you can establish a threshold to determine whether the black/white pixel percentage corresponds to that of a gradient. This will not solve the task, but it will help you reduce complexity.
Then you can erode the image to merge all the white areas. After applying a threshold, it is possible to find connected components (using cvFindContours) that will separate the image into black zones and white zones. You can then detect gradients by finding 5x5 areas that contain both a piece of a white zone and a piece of a black zone simultaneously.
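A minimal sketch of the histogram check, using the modern cv2.calcHist in place of the old cvCalcHist; the cutoffs for "black" and "white" are assumptions:

```python
import cv2

def blackwhite_fraction(gray_patch, black_below=50, white_above=205):
    # Fraction of pixels in a grayscale patch that are near-black or near-white.
    hist = cv2.calcHist([gray_patch], [0], None, [256], [0, 256]).ravel()
    bw = hist[:black_below].sum() + hist[white_above:].sum()
    return bw / hist.sum()
```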
Hope it helps.
Thanks for your answer dnul, but it didn't really help me work this out. I thought about using a histogram to approach the problem, but it's not quite what I want.
I solved this problem by creating a 40x40 matrix which holds 5x5 matrices containing the raw pixel data in all 3 channels. I iterated through each 40px area and, inside it, iterated through the border of each 5px area. I checked each pixel and saved the ones darker than a certain threshold to a store.
After the iteration I had a rough idea of how many black pixels there are, so I checked each one of them for neighbors with white pixels in all 3 channels. I then marked each of those pixels and saved them to another store.
I then used the RANSAC algorithm to construct lines out of these points. It constructs about 5-20 lines per marker edge. I then looked at the lines which meet each other and saved the positions of those that meet at a right angle.
The 4 points I get from that are the edges of the marker.
If you want to reproduce this, you would have to filter the image beforehand and apply a threshold to make it easier to distinguish between black and white pixels.
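A minimal sketch of the block scan described above, in Python with OpenCV; the filename, block size, and Otsu thresholding are assumptions (the original used a 40x40 grid of 5x5 blocks and a fixed threshold):

```python
import cv2

gray = cv2.imread("marker.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input
_, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

step = 5
transition_points = []
for y in range(0, bw.shape[0] - step, step):
    for x in range(0, bw.shape[1] - step, step):
        block = bw[y:y + step, x:x + step]
        # A block containing both black and white pixels straddles a transition;
        # these are candidate points to feed into RANSAC line fitting.
        if block.min() == 0 and block.max() == 255:
            transition_points.append((x, y))
```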
A sample picture, save after finding the points and before constructing the lines:
http://www.imagebanana.com/view/i6gfe6qi/9.jpg
What you are describing is edge detection. This is exactly how, say, the Canny edge detector works: it looks for dark pixels near light pixels and, based on a threshold that you pass in (there is also adaptive Canny, which figures out the threshold for you), sets them to all black or all white (i.e., "marks" them).
See here:
http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/canny_detector/canny_detector.html
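A minimal Canny sketch in Python; the two values are the thresholds you pass in, and these particular numbers are just placeholders to tune:

```python
import cv2

gray = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input
edges = cv2.Canny(gray, threshold1=50, threshold2=150)  # low/high hysteresis thresholds
```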