OpenCV matching images

I am trying to use OpenCV to match images like these:
http://img849.imageshack.us/img849/8177/clearz.jpg
I need to find the best intersection between them.
I tried detecting keypoints with SurfFeatureDetector and matching them with BruteForceMatcher, but the descriptors it finds are not equal.
Please tell me the correct way to solve this problem.

Did you have a look at this code example? There you can see how to find an object using SURF descriptors.
Go to main() and check the code step by step. You can try it with your images, and it should work. Other approaches use SIFT and FAST detectors.
Good luck. If you don't get results, keep trying; at the beginning it is hard.
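In case it helps, here is a minimal sketch of that pipeline in Python, assuming OpenCV built with the contrib modules (SURF is patented and lives in xfeatures2d; ORB is a free substitute). The filenames are placeholders. Note that descriptors from two images are almost never exactly equal; instead you keep nearest-neighbour matches that pass Lowe's ratio test.

    import cv2

    img1 = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)    # placeholder filenames
    img2 = cv2.imread("object.jpg", cv2.IMREAD_GRAYSCALE)

    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp1, des1 = surf.detectAndCompute(img1, None)
    kp2, des2 = surf.detectAndCompute(img2, None)

    # Keep a match only if it is clearly better than the second-best candidate.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < 0.75 * n.distance]

    out = cv2.drawMatches(img1, kp1, img2, kp2, good, None)
    cv2.imwrite("matches.jpg", out)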

You might want to apply a median filter first, to remove the noise. This will probably lead to better results for the matching, because the left image is pretty noisy.
It will also smooth the image a bit, which is good because it suppresses fine detail, and you are looking for larger structures.
You will have to try out different sizes of the filter for the best result.
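For example, a few sizes can be compared quickly (a minimal sketch; the filename is a placeholder, and the kernel size must be odd):

    import cv2

    img = cv2.imread("noisy.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder filename
    for ksize in (3, 5, 7):
        cv2.imwrite("median_%d.jpg" % ksize, cv2.medianBlur(img, ksize))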

Related

How do I remove big-grain noise from an image?

This can be generalized to: How do I remove regions that look similar to another region from an image?
The big image is grayscale. There is a lot of sand in it, and I need to detect features.
The sand particles are multiple pixels in size. I know where the sand in the pictures is.
It looks something like this:
I have this kind of sand (not yet in grayscale):
What I want to achieve is that all the sand becomes a single value in the range 0.0 to 1.0, or one with very little variation;
That way I will be able to detect the features with ease.
So basically: Take everything that looks similar to some region in the image and remove that noisy aspect from the image.
I thought maybe one could do something like:
noise + noise = noise; it looks just as noisy as before.
noise + features = noise; it looks noisier than before.
(That might actually be the solution, though I still want to ask you people.)
What kind of algorithms are suitable and what do you suggest?
EDIT: This is an actual image.
I can suggest a few approaches to try.
(Blurring the source image with a mean or Gaussian filter before further transforms may make sense, but it must not affect the features too much.)
Filter out regions whose mean value and deviation are close to those of the noise (estimate these values from the known sand regions). The filter size shouldn't be very big in this case: at least 2x smaller than the features you are searching for.
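A rough sketch of that local-statistics idea, assuming you can sample a patch of pure sand to estimate its mean and deviation; the filename, patch location, and thresholds are illustrative, not tuned:

    import cv2
    import numpy as np

    img = cv2.imread("sand.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
    patch = img[0:50, 0:50]                      # assumed known sand region
    sand_mean, sand_std = patch.mean(), patch.std()

    k = 7                                        # window clearly smaller than the features
    mean = cv2.boxFilter(img, -1, (k, k))
    sqmean = cv2.boxFilter(img * img, -1, (k, k))
    std = np.sqrt(np.maximum(sqmean - mean * mean, 0))

    # Flatten pixels whose local statistics resemble the sand to a single value.
    is_sand = (np.abs(mean - sand_mean) < 10) & (np.abs(std - sand_std) < 5)
    out = img.copy()
    out[is_sand] = sand_mean
    cv2.imwrite("flattened.png", out.astype(np.uint8))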
A more sophisticated way is template matching: a pixel-by-pixel comparison of a template region (sand) with image regions. If the result is lower (or higher, depending on the method used) than some threshold, the template is matched. But I think in your case it may work worse than the basic filters mentioned above.
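For reference, the OpenCV call looks roughly like this (a sketch only; as noted, correlation with a random texture may discriminate poorly, and the filename, patch, and threshold are assumptions):

    import cv2
    import numpy as np

    img = cv2.imread("sand.png", cv2.IMREAD_GRAYSCALE)
    tmpl = img[0:32, 0:32]                       # assumed pure-sand patch

    res = cv2.matchTemplate(img, tmpl, cv2.TM_CCOEFF_NORMED)
    mask = (res > 0.5).astype(np.uint8) * 255    # threshold to taste
    cv2.imwrite("sand_like_locations.png", mask)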
You can also try the Sobel operator or some other image derivative to find edges in the image (your features seem to have them, while the sand doesn't).
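A minimal gradient-magnitude sketch (the filename is a placeholder):

    import cv2
    import numpy as np

    img = cv2.imread("sand.png", cv2.IMREAD_GRAYSCALE)
    gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
    mag = cv2.magnitude(gx, gy)                  # edge strength per pixel
    cv2.imwrite("edges.png", np.clip(mag, 0, 255).astype(np.uint8))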
P.S. I will try to add a couple of pictures with the described methods applied to your example image a little later.
For those that happen to stumble upon this.
In the end I settled on Trainable WEKA Segmentation. I used Fiji (ImageJ) to try it out, and it worked a lot better than all the others. The noise was always different, so template matching unfortunately didn't work well enough.
Another one that looked promising was Statistical Region Merging, which I found in Fiji under Plugins > Segmentation.
But WEKA gave the best results.
I do hope though, that I will find something faster eventually.

Recognition of repeated pattern in an image

Consider an image that is a composite of a repeated pattern of varying size and unknown topography (as shown below).
How do we find the repeated pattern (along with its location)?
An easy way to do this is to compute the autocorrelation of the image. At least the blocks with the same size can be identified this way.
A more elaborate way is explained in this post. Of course, you will first need to subdivide your big image into small images.
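A minimal FFT-based autocorrelation sketch for the first suggestion (the filename is a placeholder); strong peaks away from the centre give the repetition period, i.e. the block size:

    import cv2
    import numpy as np

    img = cv2.imread("composite.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
    img -= img.mean()                   # remove the DC term so peaks stand out

    F = np.fft.fft2(img)
    acorr = np.real(np.fft.ifft2(F * np.conj(F)))
    acorr = np.fft.fftshift(acorr)      # zero lag in the centre

    # Inspect the strongest off-centre peaks for the horizontal/vertical period.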
I'd have a look at the SIFT and RANSAC algorithms; they might not be exactly what you need, but they'll lead you in the right direction (see the sketch after the links below). What makes this hard is that you don't know ahead of time which features you're looking for, so you will need some overseeing algorithm to help you make guesses.
Open source implementation
https://robwhess.github.io/opensift/
Wikipedia with some good links at the bottom as well as descriptions of similar algorithms
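As referenced above, a minimal SIFT-plus-RANSAC sketch (SIFT ships with OpenCV 4.4+; the filenames are placeholders). RANSAC weeds out the inevitable false matches while fitting a homography:

    import cv2
    import numpy as np

    img1 = cv2.imread("patch.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("composite.png", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # patch -> image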

How to filter a texture from an image for OCR

I'm trying to do OCR to some forms that, however, have some texture as follows:
This texture causes OCR programs to ignore the region, tagging it as an image region.
I considered using morphology. A closing operation with a star-shaped structuring element ends up as follows:
This result is still not good enough for the OCR.
When I manually erase the 'pepper' and apply adaptive thresholding, the resulting image gives good results with the OCR:
Do you have any other ideas for the problem?
Thanks.
For the given image, a 5x5 median filter does a little better than the closing. From there, binarization with an adaptive threshold can remove more of the background.
Anyway, the resulting quality will depend a lot on the images and perfect results can't be achieved.
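A minimal sketch of that combination, assuming a grayscale scan (the block size and offset of the adaptive threshold need tuning per image, and the filename is a placeholder):

    import cv2

    img = cv2.imread("form.png", cv2.IMREAD_GRAYSCALE)
    den = cv2.medianBlur(img, 5)                         # 5x5 median filter
    bw = cv2.adaptiveThreshold(den, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY, 31, 15)
    cv2.imwrite("cleaned.png", bw)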
Maybe have a look at this: https://code.google.com/p/ocropus/source/browse/DIRS?repo=ocroold (see ocr-doc-clean).
Since you know the font size, you could also use connected-component filtering, perhaps in combination with a morphological operation. To retain the commas, just be careful when a smaller connected component is near one whose size is similar to the characters you are trying to read.
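A sketch of the size filter, assuming a binarised image with dark text on a white background; the area bounds are illustrative and should come from the known font size:

    import cv2
    import numpy as np

    bw = cv2.imread("binarized.png", cv2.IMREAD_GRAYSCALE)
    inv = 255 - bw                                # components must be white
    n, labels, stats, _ = cv2.connectedComponentsWithStats(inv, connectivity=8)

    out = np.zeros_like(inv)
    for i in range(1, n):                         # label 0 is the background
        if 20 <= stats[i, cv2.CC_STAT_AREA] <= 500:
            out[labels == i] = 255                # keep character-sized blobs
    cv2.imwrite("filtered.png", 255 - out)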
The background pattern is very regular and directional, so filtering in the Fourier domain should do a pretty good job here. Try, for example, a Butterworth filter.
A concrete example of such filtering using GIMP can be found here.
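A rough frequency-domain sketch with a Butterworth low-pass (the cutoff and order are assumptions; for a directional texture, a notch filter centred on the texture's spectral peaks would be more targeted):

    import cv2
    import numpy as np

    img = cv2.imread("form.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
    rows, cols = img.shape
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)  # distance from spectrum centre

    D0, n = 40.0, 2                                 # cutoff frequency and order
    H = 1.0 / (1.0 + (D / D0) ** (2 * n))           # Butterworth low-pass

    F = np.fft.fftshift(np.fft.fft2(img))
    out = np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))
    cv2.imwrite("lowpass.png", np.clip(out, 0, 255).astype(np.uint8))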

Algorithm for capturing machine readable zones

What method is suitable for capturing (detecting) the MRZ from a photo of a document? I'm thinking of a cascade classifier (e.g. Viola-Jones), but it seems a bit odd to use one for this problem.
If you know that you will be looking for text in a passport, why not try to find the passport model points first? Match a template of a passport to it using ASM/AAM (Active Shape Model, Active Appearance Model) techniques. Once you have the passport position information, you can cut out the regions you are interested in. This will take some time to implement, though.
Consider this approach as a great starting point:
A black top-hat followed by a horizontal derivative highlights long rows of characters.
Morphological closing operation(s) merge the nearby characters and character rows together into a single large blob.
Optional erosion operation(s) remove the small blobs.
Otsu thresholding followed by contour detection, filtering away the contours that are apparently too small, too round, or located in the wrong place, will get you a small number of possible locations for the MRZ.
Finally, compute bounding boxes for the locations you found and see whether you can OCR them successfully.
It may not be the most efficient way to solve the problem, but it is surprisingly robust.
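A sketch of that pipeline, assuming a grayscale photo; the filename, kernel sizes, and contour filter are illustrative and need tuning:

    import cv2
    import numpy as np

    img = cv2.imread("passport.jpg", cv2.IMREAD_GRAYSCALE)

    # 1. Black top-hat emphasises dark characters on a light background.
    rect = cv2.getStructuringElement(cv2.MORPH_RECT, (13, 5))
    blackhat = cv2.morphologyEx(img, cv2.MORPH_BLACKHAT, rect)

    # 2. Horizontal derivative highlights long rows of characters.
    grad = np.absolute(cv2.Sobel(blackhat, cv2.CV_32F, 1, 0))
    grad = (255 * (grad - grad.min()) / (grad.max() - grad.min())).astype(np.uint8)

    # 3. Closing merges characters into blobs; Otsu binarises; erosion removes specks.
    closed = cv2.morphologyEx(grad, cv2.MORPH_CLOSE, rect)
    _, thresh = cv2.threshold(closed, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    thresh = cv2.erode(thresh, None, iterations=2)

    # 4. Keep only wide, flat contours as MRZ candidates.
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w / float(h) > 5 and w > 0.5 * img.shape[1]:
            print("MRZ candidate:", x, y, w, h)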
A better approach would be the use of projection profile methods. A projection profile method is based on the following idea:
Create an array A with an entry for every row in your b/w input document. Now set A[i] to the number of black pixels in the i-th row of your original image.
(You can also create a vertical projection profile by considering columns in the original image instead of rows.)
Now the array A is the projected row/column histogram of your document and the problem of detecting MRZs can be approached by examining the valleys in the A histogram.
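A minimal horizontal projection profile, assuming a binarised image with black text (0) on white (255); the band threshold is an assumption:

    import cv2
    import numpy as np

    bw = cv2.imread("binarized.png", cv2.IMREAD_GRAYSCALE)
    A = np.sum(bw == 0, axis=1)            # A[i] = black pixels in row i

    # Rows belonging to a text band have a high count; find long runs of them.
    rows = np.where(A > 0.3 * bw.shape[1])[0]
    if rows.size:
        print("text band from row", rows.min(), "to row", rows.max())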
This problem, however, is not completely solved, so there are many variations and improvements. Here's some additional documentation:
Projection profiles in Google Scholar: http://scholar.google.com/scholar?q=projection+profile+method
Tesseract-ocr, a great open source OCR library: https://code.google.com/p/tesseract-ocr/
Viola & Jones' Haar-like features generate many (many (many)) features to try to describe an object and are a bit more robust to scale and the like. Their approach was a unique approach to a difficult problem.
Here, however, you have plenty of constraints on the problem, and anything like that seems a bit of an overkill. Rather than 'optimizing early', I'd say evaluate the standard OCR tools off the shelf and see where they get you. I believe you'll be pleasantly surprised.
PS:
You'll want to preprocess the image to isolate the characters on a white background. This can be done quite easily and will help the OCR algorithms significantly.
You might want to consider using stroke width transform.
You can follow these tips to implement it.

Algorithms for: printer checker

I want to make a program that checks printed paper for errors.
PDF File: please refer to the second page, top right picture
As you can see, that system could identify the errors made by the printer.
I want to know how this was achieved. What existing documents are there about this?
Or any ideas you have?
Thank you
This can be very easy or very difficult.
If your images are black and white and your scan is quite precise, you can try a simple subtraction between the images (scanned and pattern).
If your scan reads the image with a possible deformation or translation, you will first need an image registration algorithm.
If your scan presents background noise, you will have some trouble with the subtraction, and then it becomes very difficult.
Maybe some image samples would help us suggest a more specific algorithm.
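For the easy end of that spectrum, a minimal subtract-after-registration sketch (assuming the scan is only slightly translated/rotated, so Euclidean ECC alignment suffices; the filenames and the defect threshold are assumptions):

    import cv2
    import numpy as np

    ref = cv2.imread("pattern.png", cv2.IMREAD_GRAYSCALE)
    scan = cv2.imread("scan.png", cv2.IMREAD_GRAYSCALE)

    # Estimate the rigid motion between scan and reference, then undo it.
    warp = np.eye(2, 3, dtype=np.float32)
    _, warp = cv2.findTransformECC(ref, scan, warp, cv2.MOTION_EUCLIDEAN)
    aligned = cv2.warpAffine(scan, warp, (ref.shape[1], ref.shape[0]),
                             flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)

    diff = cv2.absdiff(ref, aligned)             # bright pixels = print defects
    _, defects = cv2.threshold(diff, 50, 255, cv2.THRESH_BINARY)
    cv2.imwrite("defects.png", defects)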
I think you need to somehow compare the two images in a way that is robust to deformation. As mentioned before, subtracting the two images can be a first step. Another, more sophisticated way is to use the distance transform (or chamfering-based methods for template matching) to compare how similar the two images are in the presence of some deformation. Even more sophisticated solutions can use methods like shape contexts.
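A minimal chamfer-style comparison (the filenames and Canny thresholds are assumptions): the distance transform of the reference edge map scores how far each scan edge pixel is from the nearest reference edge, which tolerates small deformations:

    import cv2
    import numpy as np

    ref = cv2.imread("pattern.png", cv2.IMREAD_GRAYSCALE)
    scan = cv2.imread("scan.png", cv2.IMREAD_GRAYSCALE)

    ref_edges = cv2.Canny(ref, 50, 150)
    scan_edges = cv2.Canny(scan, 50, 150)

    # Distance from every pixel to the nearest reference edge.
    dist = cv2.distanceTransform(255 - ref_edges, cv2.DIST_L2, 3)

    ys, xs = np.nonzero(scan_edges)
    print("chamfer distance:", dist[ys, xs].mean())  # low = edges line up well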
