I have an image of a hand that was detected using the approach in this link. It's hand detection using the HSV color space.
Now I face a problem: I need to find an enclosing area/draw bounding lines accurate enough to determine the hand area, then fill that enclosed area and subtract it from the original image to remove the hand.
So far I have tried blurring the image to reduce noise, dilating the image, closing holes, etc., which seems to be overkill. I have tried contours, and that seems to be the best approach so far. I was trying to get the (largest) convex hull, and I ended up with the following after testing with different thresholds.
The inaccuracies can be seen at the thumb, where the hull straightens; it should be curved. I am trying to figure out the location of the hand so I can identify the region it covers, then subtract that region to remove the hand from the original image. That is what I want to achieve.
Is there a better approach to this?
Any ideas or suggestions are greatly appreciated.
The original and detected images are as follows:
Instead of the convex hull, consider using the alpha hull, which can better follow the contours of a shape by allowing concavities.
This site has a nice summary of alpha shapes: "Everything You Always Wanted to Know About Alpha Shapes But Were Afraid to Ask" by François Bélair.
http://cgm.cs.mcgill.ca/~godfried/teaching/projects97/belair/alpha.html
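If you want to experiment, here is a minimal sketch of the alpha-shape idea in Python, assuming you already have the contour points from OpenCV; scipy's Delaunay triangulation does the heavy lifting, and `alpha` is a tuning parameter you would have to pick for your image:

```python
import numpy as np
from scipy.spatial import Delaunay

def alpha_shape_edges(points, alpha):
    """Return the boundary edges of the alpha shape of a 2-D point set.

    Keeps each Delaunay triangle whose circumradius is below 1/alpha,
    then collects edges that belong to exactly one kept triangle.
    points: (N, 2) float array, e.g. largest_contour.reshape(-1, 2).
    """
    tri = Delaunay(points)
    edge_count = {}
    for ia, ib, ic in tri.simplices:
        pa, pb, pc = points[ia], points[ib], points[ic]
        # Circumradius from side lengths: R = abc / (4 * area)
        a = np.linalg.norm(pa - pb)
        b = np.linalg.norm(pb - pc)
        c = np.linalg.norm(pc - pa)
        s = (a + b + c) / 2.0
        area = max(np.sqrt(max(s * (s - a) * (s - b) * (s - c), 0.0)), 1e-12)
        if a * b * c / (4.0 * area) < 1.0 / alpha:
            for e in ((ia, ib), (ib, ic), (ic, ia)):
                e = tuple(sorted(e))
                edge_count[e] = edge_count.get(e, 0) + 1
    # Boundary edges appear in exactly one kept triangle
    return [e for e, n in edge_count.items() if n == 1]
```

The convex hull is the limiting case as alpha goes to 0; larger alpha discards big triangles and so allows deeper concavities, like the gap next to the thumb.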
As David mentioned in his post, consider thresholding using HSV (or HSI) color space rather than on RGB or grayscale. If you can allow for longer processing time, you can use an algorithm such as Mean Shift to segment trickier images like yours. OpenCV has an implementation of Mean Shift, and the book Learning OpenCV provides a concise description of the algorithm.
Image Segmentation using Mean Shift explained
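For reference, here's roughly how you'd invoke OpenCV's mean-shift filtering from Python; the filename and the spatial/color radii are placeholders you'd tune:

```python
import cv2

# Hypothetical input; sp (spatial radius) and sr (color radius) need tuning.
img = cv2.imread("hand.png")
shifted = cv2.pyrMeanShiftFiltering(img, sp=21, sr=40)
cv2.imwrite("hand_meanshift.png", shifted)
```

The output is a posterized image in which similarly colored regions are flattened, which tends to make a subsequent threshold or contour step much more stable.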
In any case, a standard global binarization threshold doesn't appear to be helping much. Consider using a dynamic (local) threshold; from what I recall, OpenCV has an adaptive threshold implementation, along the lines of the sketch below.
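Something like this, assuming a grayscale input; the block size and offset constant are per-image tuning knobs:

```python
import cv2

gray = cv2.imread("hand.png", cv2.IMREAD_GRAYSCALE)  # hypothetical filename
# Local threshold: each pixel is compared against a Gaussian-weighted
# neighborhood mean minus C, so uneven lighting matters much less.
binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY, 31, 5)
```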
Assuming you want to identify the hand area rather than the area the convex hull gives, and that the application's background is at least a consistent color, I would apply an HSV threshold to identify the background instead of the hand, if possible (a minimal sketch is below). Or maybe an adaptive threshold if the light distribution is not consistent. I believe this is what many applications do.
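A minimal sketch, assuming a roughly uniform bluish background; the filename and the HSV bounds are made up and would have to be measured from your actual background:

```python
import cv2
import numpy as np

img = cv2.imread("frame.png")                 # hypothetical filename
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# Assumed HSV range for the background; sample your real background instead.
lower = np.array([90, 40, 40])
upper = np.array([130, 255, 255])
background = cv2.inRange(hsv, lower, upper)
hand = cv2.bitwise_not(background)            # whatever is not background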
If the background can't be fixed, the segmentation is not an easy problem to solve, since you have to take care of shadows and palm lines.
I am working on a project and I ran into a situation. I want to detect a rectangular object (a black keyboard) in an IR image. The background is pretty clean, so it's not really a hard problem; I used a simple threshold and minAreaRect in OpenCV to solve it.
Easy case of the problem
But I also want the program to track this object when I use my hand to move it (yes, in real time), and my hand will cover a small part of the object, as in this case: Tricky case of the problem
My initial thought is to learn the object's size in the easy case and, for the hard case, try to match my "learned rectangle" so that it covers as many white pixels as possible.
Does anyone have a better solution, maybe a feature-based approach? I don't know whether features can improve the situation, because the object is mostly black in these IR images.
Thank you in advance.
How about using morphological operations like dilation and erosion (OpenCV has implementations of these) on the thresholded image? Once you have that, you could try corner detection/contour detection or line detectors (in the OpenCV contrib module) to understand the shape of the object.
Your "tricky" case is still fairly simple, can be solved with dilate/erode (as mentioned by Shawn Mathew) and then the same minAreaRect. Here, on the right is your thresholded image after erosion and dilation with a 5x5 kernel, minAreaRect finds a rotated rectangle for it, drawn over the original thresholded image on the left:
Are you interested in more complicated cases, for example where your hand covers one of the short edges of the keyboard entirely?
I applied a few denoising techniques to MRI images but could not work out which techniques would make the cartilage object clearer on my data. First I applied contrast-limited adaptive histogram equalization (CLAHE) with this function:
J = adapthisteq(I)
But I got a white image. Here are the original image and the manual segmentation of the two thin objects (cartilage):
And then I read a paper in which some preprocessing was applied to microscopy images: an anisotropic diffusion filter (ADF), then the K-SVD algorithm, and then Batch Orthogonal Matching Pursuit (OMP). I applied the first two and the output is as follows:
It seems my object is still not clear; it should be brighter than the other objects. I do not know what kinds of algorithms are applicable to make the cartilage objects clearer. I really appreciate any help.
Extended:
This is the object:
Edited (now knowing exactly what you are looking for)
The differences between your cartilage and the surrounding tissue are very slight, and for that reason I do not think you can afford to do much filtering. What I mean is that the two things I can just about catch with my eye are that the edge of the cartilage is very sharp (the grey-to-black drop-off), and that there seems to be a texture regularity in the cartilage that is smoother than the rest of the image. To be honest, these features are incredibly hard to pick out even by eye, and a common rule of thumb is that if you can't do it with your eye, vision processing is going to be rough.
I still think you want to do histogram stretching to increase your contrast.
1: In order to do a clean global contrast stretch you will need to remove the bone/skin edge/whatever that bright white line on the left is from the image. To do this, I would suggest looking at the intensity histogram and setting a cut-off after the first peak (make sure to limit this to some value well above what cartilage could be, in case there is no white signal). After determining that value, clip all pixels above that intensity from the image.
2: There appear to be low-frequency gradients in this image (the background seems to vary in intensity). Global histogram manipulation (normalization) doesn't handle this well; CLAHE can handle it if set up well. But a far simpler option worth trying is just hitting the image with a high-pass filter, as this will help remove some of those low-frequency background shifts. (After this step you should see no bulk intensity variation across the image.)
3: I think you should try various implementations of histogram stretching; your goal is to make the cartilage look more unique in the image compared to all other tissue. (A rough sketch of steps 1-3 follows after this list.)
This is by far the hardest step, as you need to actually take a stab at what makes that tissue different from the rest. I am at work, but when I get off I will try to brainstorm some concepts for this final segmentation step. In the meantime, try to identify anything unique about the cartilage tissue at this point. My top ideas are a cylindrical-style color gradient, surface roughness, edge sharpness, location, and size/shape.
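Here is the promised rough sketch of steps 1-3 in Python with OpenCV; the filename, the cut-off, the blur sigma and the percentiles are all assumptions to be tuned against your histograms:

```python
import cv2
import numpy as np

img = cv2.imread("mri_slice.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Step 1: clip the bright bone/skin signal above an assumed cut-off,
# chosen from the intensity histogram after its first peak.
cutoff = 200.0
img = np.minimum(img, cutoff)

# Step 2: crude high-pass filter by subtracting a heavily blurred copy,
# flattening the low-frequency background gradients.
background = cv2.GaussianBlur(img, (0, 0), sigmaX=25)
highpass = img - background

# Step 3: linear histogram stretch of the central range to [0, 255].
lo, hi = np.percentile(highpass, (1, 99))
stretched = np.clip((highpass - lo) / (hi - lo + 1e-6) * 255.0,
                    0, 255).astype(np.uint8)
```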
Is there a robust way to detect the water line, like the edge of a river in this image, in OpenCV?
(source: pequannockriver.org)
This task is challenging because a combination of techniques must be used. Furthermore, for each technique, the numerical parameters may only work correctly within a very narrow range. This means either that a human expert must tune them by trial and error for each image, or that the technique must be executed many times with many different parameter sets so that the correct result can be selected.
The following outline is highly-specific to this sample image. It might not work with any other images.
One bit of advice: As usual, any multi-step image analysis should always begin with the most reliable step, and then proceed down to the less reliable steps. Whenever possible, the less reliable step should make use of the result of more-reliable steps to augment its own accuracy.
Detection of sky
Convert image to HSV colorspace, and find the cyan located at the upper-half of the image.
Keep this HSV image, because it could be handy for the next few steps as well.
Detection of shrubs
Run Canny edge detection on the grayscale version of image, with suitably chosen sigma and thresholds. This will pick up the branches on the shrubs, which would look like a bunch of noise. Meanwhile, the water surface would be relatively smooth.
Grayscale is used in this technique in order to reduce the influence of reflections on the water surface (the green and yellow reflections from the shrubs). There might be other colorspaces (or preprocessing techniques) more capable of removing that reflection.
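A sketch of the shrub-vs-water idea; the filename, sigma, Canny thresholds and window size are assumptions:

```python
import cv2

gray = cv2.imread("river.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical filename
blurred = cv2.GaussianBlur(gray, (5, 5), 1.4)
edges = cv2.Canny(blurred, 50, 150)
# Shrubs give a dense tangle of edges while water stays comparatively smooth,
# so a local edge-density map separates the two regions.
density = cv2.blur(edges.astype("float32") / 255.0, (31, 31))
shrubs = density > 0.15                # assumed density threshold
```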
Detection of water ripples from a lower elevation angle viewpoint
Firstly, mark off any image parts that are already classified as shrubs or sky. Since shrub detection would be more reliable than water detection, shrub detection's result should be used to inform the less-reliable water detection.
Observation
Because of the low elevation angle of the viewpoint, the water ripples appear horizontally elongated; in fact, every image feature appears stretched horizontally. This is called anisotropy. We can make use of this tendency to detect them.
Note: I am not experienced in anisotropy detection. Perhaps you can get better ideas from other people.
Idea 1:
Use maximally-stable extremal regions (MSER) as a blob detector.
The Wikipedia introduction appears intimidating, but MSER is really related to connected-component algorithms; a naive implementation can be done similarly to Dijkstra's algorithm.
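OpenCV exposes MSER directly, so a first experiment could look like this (the filename and the elongation ratio are assumptions):

```python
import cv2

gray = cv2.imread("river.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical filename
mser = cv2.MSER_create()
regions, _ = mser.detectRegions(gray)
# Horizontally stretched regions are candidate water ripples.
elongated = []
for pts in regions:
    x, y, w, h = cv2.boundingRect(pts)
    if w > 3 * h:                      # aspect-ratio threshold is an assumption
        elongated.append(pts)
```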
Idea 2:
Notice that the image features are horizontally stretched, a simpler approach is to just sum up the absolute values of horizontal gradients and compare that to the sum of absolute values of vertical gradients.
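In code, that comparison might look like this; the window size and ratio threshold are assumptions:

```python
import cv2
import numpy as np

gray = cv2.imread("river.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float32)
gx = np.abs(cv2.Sobel(gray, cv2.CV_32F, 1, 0))   # gradients across columns
gy = np.abs(cv2.Sobel(gray, cv2.CV_32F, 0, 1))   # gradients across rows
# Horizontally stretched features change mostly in the vertical direction,
# so in rippled water the local vertical gradient sum dominates.
win = (31, 31)
anisotropy = cv2.blur(gy, win) / (cv2.blur(gx, win) + 1e-6)
water_like = anisotropy > 2.0                    # assumed ratio threshold
```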
I just wish to get some ideas on how I can solve this problem.
For a clearer picture, here are examples of some of the images we are looking at:
I have tried looking into thresholding (e.g. Otsu), blob detection, etc. However, I am still unable to segment out the books and count them properly. Hardcovers are easy, of course, as the cover clearly separates the books, but with softcovers I have not been able to count the number of books successfully.
Does anybody have any suggestions on what I can do? Any help will be greatly appreciated. Thanks.
I ran a Sobel edge detector and used the Hough transform to detect lines on the last image, and it seemed to work okay for me. You can then link the edges in the output of the Sobel edge detector and count the number of horizontal lines, or do the same on the lines detected by the Hough transform. A rough sketch is below.
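Something like this (Canny is used here to get a clean edge map; a thresholded Sobel magnitude would work similarly, and the filename and all parameters are assumptions):

```python
import cv2
import numpy as np

gray = cv2.imread("books.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical filename
edges = cv2.Canny(gray, 50, 150)
# Probabilistic Hough transform, then keep near-horizontal lines only.
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                        minLineLength=gray.shape[1] // 3, maxLineGap=10)
horizontal = []
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        if abs(y2 - y1) < 5:           # nearly flat: a candidate book boundary
            horizontal.append((x1, y1, x2, y2))
```

Counting books then becomes a matter of merging detected lines that sit within a few pixels of each other vertically.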
You can further narrow down the area of interest by converting the image into a binary image. The outputs of all of these operators can be seen in the following figure (I couldn't upload an image, so I had to host it here): http://www.pictureshoster.com/files/v34h8hvvv1no4x13ng6c.jpg
Refer to http://www.mathworks.com/help/images/analyzing-images.html#f11-12512 for some more useful examples on how to do edge, line and corner detection.
Hope this helps.
I think that #audiohead's recommendation is good, but you should be careful when applying the Hough transform to images that have the library's stamp, as it might be confused with another book (you can see that the letters form break-lines that will be detected by Sobel).
Consider first applying an edge-preserving smoothing algorithm such as a bilateral filter. When tuned correctly (the setting of its kernels), it can avoid this kind of problem.
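In OpenCV that is a one-liner; the filename is hypothetical and the diameter and sigma values are just common starting points:

```python
import cv2

img = cv2.imread("books.jpg")          # hypothetical filename
# Smooths texture (letters, stamps) while keeping the strong cover edges.
smoothed = cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=75)
```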
A Different Solution That Might Work (But can be slow)
Here is a different approach, based on a pixel-marking strategy.
a) Based on some very dark threshold, mark all black pixels as visited.
b) While there are unvisited pixels: pick the next unvisited pixel and apply a region-growing algorithm (http://en.wikipedia.org/wiki/Region_growing), marking its pixels with a unique number. At this stage you will need to analyse the geometric shape this region forms. A good criterion for detecting a book is that the region forms some kind of rectangle where width >> height. This will detect a book and mark all its pixels with the unique number.
Once there are no more unvisited pixels, the number of unique labels is the number of books, and for each pixel in your image you will now know to which book it belongs. A sketch of this idea is below.
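Here is that sketch, substituting OpenCV's connected-components pass for explicit per-pixel region growing; the darkness threshold and the shape criteria are assumptions:

```python
import cv2
import numpy as np

gray = cv2.imread("books.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical filename
# Step a): pixels at or below the dark threshold become the "visited" set.
bright = (gray > 40).astype(np.uint8)
# Step b): label each remaining connected region with a unique number.
num_labels, labels, stats, _ = cv2.connectedComponentsWithStats(bright)
books = 0
for i in range(1, num_labels):         # label 0 is the dark background set
    x, y, w, h, area = stats[i]
    if w > 3 * h and area > 500:       # wide flat rectangle: likely one book
        books += 1
```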
Do you have to keep the books this way? If you can turn the books so that their back sides face the camera, then I think you can get more information from the different colors used by different books. The lines from the Hough transform or edge detection will be more prominent that way.
There exist more sophisticated methods which are much better at contour detection and segmentation; you can have a look at them here (although they are quite slow): http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/resources.html
Once you get the ultrametric contour map, you can perform some computation on it to count the number of books.
I would try a completely different approach: with paperbacks, the covers are medium-dark lines whilst the rest (assuming white pages) is fairly white and "bloomed", so I'd try to thicken up the dark edges to make them easy to detect. That would give you edges akin to those of hardbacks, which you say you've already handled.
I'd try something like an erosion to thicken up the edges. This would be a nice, fast operation.
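For a white-background image that could be as simple as (the filename, kernel size and iteration count are assumptions):

```python
import cv2
import numpy as np

gray = cv2.imread("books.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical filename
kernel = np.ones((3, 3), np.uint8)
# Erosion shrinks the bright pages, so the dark cover lines grow thicker.
thickened = cv2.erode(gray, kernel, iterations=2)
```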
First of all, I've only been learning image processing, neural networks, etc. on my own for a couple of weeks, so I'm really new and really far from a pro. And sorry for my bad English.
There's an image (a photo of my drawing). I want to get the coordinates of the objects/shapes (the black dots) and the number next to each one; the number indicates the sequence number of the dot.
How do I get that? How do I detect the dots: shape recognition for the dots and handwriting recognition for the numbers, then segmentation to get the positions? Or should I use template matching? But every dot has a slightly different shape because it is hand-drawn. Should I use a neural network? In a NN, the input usually contains every pixel in order to recognize a character, right? Can I feed a picture of a character or a drawn dot into it to recognize my whole picture?
I'm very new, so I really need your advice; correct me if I'm wrong! Please tell me what I should learn, what I should do, and what I should use.
Thank you very much. :'D
This is a difficult problem which can't be solved with a quick fix.
Here is how I would approach it:
Get a better picture. Your image is very noisy and was taken in low light with a high ISO. Use a better camera and better lighting conditions so you can get the background as white as possible and the dots as black as possible. Try to maximize the contrast.
Threshold the image so that all the background is white and the dots and numbers are black. Maybe you could apply some erosion and/or dilation to help connect the dark edges together.
Detect the rectangle somehow and set your work area to be inside it (crop the rest of the image away so that you are left with the area inside the rectangle). You could do this by detecting the contours in the image; the contour with the largest area is then the rectangle, because it's the largest object in the image. Of course, this is not the only way. See this: OpenCV find contours
Once you are left with only the dots, circles and numbers, you need to find a way to detect them and discriminate between them. You could again find all contours (or maybe you've found them all in the previous step). You need to figure out a way to tell whether a given contour is a circle, a filled circle (dot) or a number; this is a problem in its own right. Maybe you could count the white/black pixels in the contour's bounding box: dots have more black pixels than circles and numbers. You also need to do something about numbers that touch the dots (like the number 5 in your image).
Once you know what is a dot, circle or number you could use an OCR library (Tesseract or any other OCR lib) to try and recognize the numbers. You could also use a neural network library (maybe trained with the MNIST dataset) to recognize the digits. A good one would be a convolutional neural network similar to LeNet-5.
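To make this concrete, here is a rough sketch of the thresholding, cropping and contour-classification steps in Python/OpenCV (the filename, thresholds and the fill-ratio cut-off are assumptions, and note that the rectangle's own border will also show up among the contours):

```python
import cv2

img = cv2.imread("drawing.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical filename
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Crop to the largest contour, assumed to be the rectangle.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
work = binary[y:y + h, x:x + w]

# Classify inner contours by how much of their bounding box they fill:
# filled dots score high, outlined circles and digits score low.
inner, _ = cv2.findContours(work, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
labels = []
for c in inner:
    cx, cy, cw, ch = cv2.boundingRect(c)
    fill = cv2.contourArea(c) / float(max(cw * ch, 1))
    labels.append("dot" if fill > 0.6 else "circle-or-digit")
```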
As you can see, this is a problem that requires many different steps to solve, and many different components are involved. The steps I suggested might not be the best, but with some work I think it can be solved.