I'm trying to extract the henna tattoo pattern from the image and have tried applying a median filter (despeckle) and detecting the tattoo with Difference of Gaussians, but the detail I get is always way below what seems possible by looking at the image. I could also try tackling this problem with MathMap if I knew a good algorithm.
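For reference, a Difference-of-Gaussians band-pass is straightforward to sketch in plain NumPy (the sigmas below are illustrative, not tuned for the henna image; a library implementation such as OpenCV's `cv2.GaussianBlur` would be much faster):

```python
import numpy as np

def gaussian_kernel(sigma):
    """1-D Gaussian kernel, normalized to sum to 1."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur: 1-D convolution along each axis."""
    k = gaussian_kernel(sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)
    return out

def difference_of_gaussians(img, sigma_fine=1.0, sigma_coarse=3.0):
    """Band-pass: subtract a coarse blur from a fine one,
    keeping detail between the two scales."""
    return blur(img, sigma_fine) - blur(img, sigma_coarse)
```

Sweeping the two sigmas against the stroke width of the pattern is usually more productive than tuning a single threshold afterwards.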
Related
I was wondering how I would go about comparing two contours, and moving one to show the best-matched position.
So what I wish to achieve is for the purple contour to overlay where it matches with the pink one best.
I have used shape matching; it does its job, but it does not show me where exactly it's finding the best match.
What I am trying to do is position an object (or part of an object) in an image so it is aligned with another picture containing the same object; from there I will be able to compare the images further for more differences.
I have tried looking at the Hausdorff distance but couldn't get it working; something like what it does here, where the best fit is detected and shown, is ideally what I want.
I have looked around quite a bit, but I can't seem to find a working example of this method being implemented.
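Since the Hausdorff distance keeps coming up, here is a minimal self-contained sketch in NumPy (the contour point arrays and the centroid-shift alignment step are illustrative; OpenCV's shape module also offers `createHausdorffDistanceExtractor` as a library route):

```python
import numpy as np

def directed_hausdorff(a, b):
    """Max, over points in a, of the distance to the nearest point in b.
    a: (N, 2) array, b: (M, 2) array of contour points."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # (N, M) pairwise
    return d.min(axis=1).max()

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets."""
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))

def best_shift(a, b):
    """Crude alignment: translation that moves a's centroid onto b's.
    A real search would minimize hausdorff(a + shift, b) over shifts."""
    return b.mean(axis=0) - a.mean(axis=0)
```

Overlaying contour `a` translated by `best_shift(a, b)` on top of `b`, and reporting `hausdorff` at that position, gives exactly the "show where it matches best" behaviour asked for above, at least for translation-only alignment.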
I'm trying to do OCR on some forms that, however, have a texture as follows:
This texture causes the OCR programs to ignore the text, tagging it as an image region.
I considered using morphology. A closing operation with a star ends up as follows:
This result is still not good enough for the OCR.
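For reference, the closing operation mentioned above can be sketched in plain NumPy (a 3x3 box stands in for the star-shaped structuring element; note that whether you close the text or the background depends on your polarity convention, and here foreground is 1):

```python
import numpy as np

def dilate(img, se):
    """Binary dilation: each pixel takes the max over the structuring element."""
    ph, pw = se.shape[0] // 2, se.shape[1] // 2
    p = np.pad(img, ((ph, ph), (pw, pw)))  # pad with 0 (background)
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = p[i:i + se.shape[0], j:j + se.shape[1]][se.astype(bool)].max()
    return out

def erode(img, se):
    """Binary erosion: each pixel takes the min over the structuring element."""
    ph, pw = se.shape[0] // 2, se.shape[1] // 2
    p = np.pad(img, ((ph, ph), (pw, pw)), constant_values=1)  # pad with foreground
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = p[i:i + se.shape[0], j:j + se.shape[1]][se.astype(bool)].min()
    return out

def closing(img, se):
    """Closing = dilation followed by erosion; fills gaps smaller than se."""
    return erode(dilate(img, se), se)
```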
When I manually erase the 'pepper' and apply adaptive thresholding, an image like the following gives good results in the OCR:
Do you have any other ideas for the problem?
Thanks.
For the given image, a 5x5 median filter does a little better than the closing. From there, binarization with an adaptive threshold can remove more of the background.
Anyway, the resulting quality will depend a lot on the images, and perfect results can't be achieved.
Maybe have a look at this: https://code.google.com/p/ocropus/source/browse/DIRS?repo=ocroold (see ocr-doc-clean).
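The 5x5 median plus adaptive-threshold pipeline described above can be sketched in plain NumPy as follows (the block size and offset are illustrative; `cv2.medianBlur` and `cv2.adaptiveThreshold` are the much faster library route):

```python
import numpy as np

def median_filter(img, size=5):
    """Brute-force 2-D median filter with edge replication."""
    r = size // 2
    p = np.pad(img, r, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(p[i:i + size, j:j + size])
    return out

def adaptive_threshold(img, block=15, offset=5):
    """Mark a pixel as foreground if it is darker than its
    local mean minus a fixed offset."""
    r = block // 2
    p = np.pad(img.astype(float), r, mode="edge")
    out = np.zeros(img.shape, dtype=np.uint8)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            local_mean = p[i:i + block, j:j + block].mean()
            out[i, j] = 255 if img[i, j] < local_mean - offset else 0
    return out
```

The adaptive step is what makes this robust to the slow illumination changes that defeat a single global threshold.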
Considering that you know the font size, you could also use connected-component filtering, perhaps in combination with a morphological operation. To retain the commas, be careful not to discard a small connected component that sits near one whose size is similar to the characters you are trying to read.
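Size-based connected-component filtering can be sketched as follows in NumPy (the size threshold is illustrative; in practice it would be derived from the known font size, and kept below the smallest glyph, such as a comma, that must survive):

```python
import numpy as np
from collections import deque

def label_components(binary):
    """4-connected component labelling via breadth-first search."""
    labels = np.zeros(binary.shape, dtype=int)
    current = 0
    H, W = binary.shape
    for si in range(H):
        for sj in range(W):
            if binary[si, sj] and labels[si, sj] == 0:
                current += 1
                labels[si, sj] = current
                q = deque([(si, sj)])
                while q:
                    i, j = q.popleft()
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < H and 0 <= nj < W \
                                and binary[ni, nj] and labels[ni, nj] == 0:
                            labels[ni, nj] = current
                            q.append((ni, nj))
    return labels, current

def filter_by_size(binary, min_size):
    """Keep only components with at least min_size pixels."""
    labels, _ = label_components(binary)
    sizes = np.bincount(labels.ravel())
    keep = sizes >= min_size
    keep[0] = False  # label 0 is background
    return keep[labels]
```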
The background pattern is very regular and directional, so filtering in the Fourier domain should do a pretty good job here. Try, for example, a Butterworth filter.
A concrete example of such filtering using GIMP can be found here.
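A sketch of the idea in NumPy, assuming a simple low-pass Butterworth (for a strongly directional pattern, a notch filter that zeroes out the pattern's specific frequency peaks would work even better; the cutoff and order below are illustrative):

```python
import numpy as np

def butterworth_lowpass(shape, cutoff, order=2):
    """Butterworth low-pass transfer function H = 1 / (1 + (D/D0)^(2n)),
    with D in cycles per pixel."""
    rows, cols = shape
    u = np.fft.fftfreq(rows)[:, None]
    v = np.fft.fftfreq(cols)[None, :]
    d = np.sqrt(u**2 + v**2)
    return 1.0 / (1.0 + (d / cutoff)**(2 * order))

def fourier_filter(img, cutoff=0.1, order=2):
    """Filter the image by multiplying its spectrum with the transfer function."""
    F = np.fft.fft2(img)
    H = butterworth_lowpass(img.shape, cutoff, order)
    return np.real(np.fft.ifft2(F * H))
```

The DC component (gain 1 at D = 0) preserves overall brightness, while the high-frequency texture is attenuated smoothly rather than with the ringing a hard cutoff would cause.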
[The issue is solved using another filter, see the update at the bottom.]
I am working with C++ and ITK, and I have a greyscale image, representing the bone map of a lumbar spine. I am trying to detect its border using a Canny filter: for this purpose, I am using the Canny filter implementation from the official ITK's guide (I am using that code as is, so I do not paste it here). As you can see, the input image is really simple, and it shouldn't be difficult to extract its contour that way.
I am expecting an output image that is totally black, except for the bone's border, which should be a closed curve. However, ITK produces the result that you can see in the image on the right, consisting of multiple, "concentric" borders.
How do you explain this result? Do you have a formal explanation of what's going on with that algorithm? I have tried modifying the filter's parameters (variance, upperThreshold and lowerThreshold), but without luck. Some combinations of parameters give a totally black image, and none of them gives a result better than this one.
UPDATE: Just as a note, I solved it using a "Gradient Magnitude Filter". With that filter, I obtain a well-defined shape for the contour. So the issue is no longer critical for me, but I am keeping the post open, just for the sake of curiosity. If you have any ideas about the Canny filter's behaviour, I can test them.
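The gradient-magnitude route that worked here can be sketched outside ITK as well; this is a plain central-difference version in NumPy (ITK's GradientMagnitudeImageFilter behaves similarly, and the recursive-Gaussian variant additionally smooths first):

```python
import numpy as np

def gradient_magnitude(img):
    """Central-difference gradient magnitude: sqrt(gx^2 + gy^2).
    On a binary bone map this peaks exactly on the object border."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)
```

Thresholding this magnitude gives a single, well-defined contour, which is consistent with the update above: unlike Canny, there is no hysteresis or non-maximum suppression stage to produce the multiple concentric responses.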
I am trying to use OpenCV to match images like these:
img2 http://img849.imageshack.us/img849/8177/clearz.jpg
And I need to find the best intersection of them.
I tried using SURFDetector and matching with BruteforceMatcher, but it finds descriptors that do not match.
Please tell me the correct way to solve this problem.
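If the two images differ mainly by a translation, one simple, feature-free baseline for finding the best intersection is phase correlation. A sketch in NumPy, assuming integer shifts and same-size images (feature matching such as SURF remains the way to go once rotation or scale changes enter):

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the translation s such that rolling a by s best matches b.
    Works via the normalized cross-power spectrum."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = np.conj(Fa) * Fb
    cross /= np.maximum(np.abs(cross), 1e-12)  # keep phase only
    corr = np.real(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the image back to negative values
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx
```

The sharpness of the correlation peak is also a usable confidence score: a noisy left image (as in this question) flattens the peak, which is one more reason to denoise before matching.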
Did you have a look at this code example? Here you can see how to find an object using SURF descriptors.
Go to main(), and check the code step by step. You can try it with your images and it should work. Other approaches use SIFT and FAST detectors.
Good luck. If you don't get results, keep trying; at the beginning it is hard.
You might want to apply a median filter first, to remove the noise. This will probably lead to better results for the matching, because the left image is pretty noisy.
It will also smooth the image a bit, which is good, because it leaves out the details, and you are looking for larger structures.
You will have to try out different sizes of the filter for the best result.
I want to ask for your help in choosing or finding a good algorithm for the following problem:
I want to recognize a template in the image; the template is text in a non-standard font, so OCR will probably not handle it. What I want is to recognize it using a template matching algorithm. Please refer to the image:
As you can see, there is a background; in this image I drew it myself, and the background is simple. Usually it is not so simple: it has illumination variations and is usually tinted a single color. So I want to match this template, but I want the algorithm to be invariant to the background color.
I've tried OpenCV's cvMatchTemplate; it works well when the template is present in the image. But if I rotate the object under the camera, or remove it so that there is no template at all, the algorithm finds many false-positive matches.
So I want to find an algorithm that also is rotation-invariant.
Can you suggest any?
Look at Hu moments: they are rotation- and size-invariant. OpenCV also has a matchShapes method which does most of the work for you.
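For reference, here is how the first two Hu invariants fall out of the normalized central moments, in a NumPy sketch (in OpenCV, `cv2.HuMoments(cv2.moments(img))` gives all seven, and `cv2.matchShapes` compares them directly):

```python
import numpy as np

def hu_moments(img):
    """First two Hu invariants from normalized central moments.
    Central moments give translation invariance, the eta normalization
    gives scale invariance, and these combinations add rotation invariance."""
    y, x = np.indices(img.shape)
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00

    def mu(p, q):  # central moment
        return ((x - xc)**p * (y - yc)**q * img).sum()

    def eta(p, q):  # normalized central moment
        return mu(p, q) / m00**(1 + (p + q) / 2.0)

    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2))**2 + 4 * eta(1, 1)**2
    return np.array([h1, h2])
```

Comparing these invariants between the template and each candidate region gives a match score that survives the rotation under the camera, and a score threshold rejects the no-template case that produced the false positives above.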