Finding subpattern position in an image/pattern - image-processing

Let's say I have an image or two-dimensional pattern similar to a QR code, and call it a template. Now I have a set of subimages that I want to match with my template and, what's important, find their precise location in the template. I think a similar problem is being solved in 'smart papers' http://en.wikipedia.org/wiki/Anoto and in the Kinect's grid of infrared dots.
Does anyone have any clues as to how something similar could be implemented (even just keywords to look up)?
I had a few ideas:
OpenCV's template matching method - poor results when the subimage is rotated, scaled, or skewed
SURF feature detection and matching - it's pretty good, but results get worse when the subimage is a really small chunk of the template. Besides, I think a specifically designed pattern would improve location finding more than an arbitrary image. Also, I think SURF is overkill and I need something efficient that can handle real-time mobile camera streams.
creating an image consisting of many QR codes that only store coordinates as data - the drawback is that the QR codes would have to be pretty small to allow fine-grained positioning, but then they're difficult to recognize. Pros - they use only black and have lots of white space (ink conservation)
2-dimensional colorful gradient image (similar to a color model map) - I think this will be sensitive to lighting

QR codes are square. Using feature detection to find the grid, you can unproject it. Then OpenCV's template matching will work fine.
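A rough sketch of that pipeline, with hypothetical image paths and placeholder grid corners standing in for whatever the feature/corner detector returns:

```python
import cv2
import numpy as np

# Rectify the camera view of the grid, then slide it over the full reference
# pattern with cv2.matchTemplate. File names and the grid corner coordinates
# (normally found by feature detection) are placeholders.
frame = cv2.imread("camera_frame.jpg", cv2.IMREAD_GRAYSCALE)
reference = cv2.imread("full_pattern.png", cv2.IMREAD_GRAYSCALE)

grid_corners = np.float32([[120, 80], [530, 95], [545, 500], [110, 490]])
patch_size = 200  # size of the rectified patch in reference-pattern pixels
dst = np.float32([[0, 0], [patch_size, 0],
                  [patch_size, patch_size], [0, patch_size]])

H = cv2.getPerspectiveTransform(grid_corners, dst)
patch = cv2.warpPerspective(frame, H, (patch_size, patch_size))

# Normalized cross-correlation; the peak gives the patch's position
# inside the reference pattern.
result = cv2.matchTemplate(reference, patch, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)
print("Subpattern located at", max_loc, "score", max_val)
```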

Related

Easy to detect shapes/patterns to put on corners of a form

I am trying to create a form which will be filled in and photographed later on. An issue that I am facing is that of alignment. I came across some deep learning solutions which detect the corners of a form, but these are often inaccurate in my use case, where the sheet of paper is folded-reopened or crumpled. I also don't have a lot of flexibility/hard-coding options in the deep learning process.
Are there any patterns which OpenCV can detect with ~100% accuracy no matter the orientation of the pattern? I will be putting different patterns on the 4 corners of the sheet. I am thinking of using the built-in template matching function or other pattern recognition algorithms. There are some common patterns, like a big '+' sign or a star, that I am trying to avoid. I also tried putting barcodes on the corners because they are also detected fairly easily (I'm not concerned with the contents of the barcode, only their relative positioning), but depending on the quality of the image the barcode isn't always detected.
ArUco markers sound like the best option for you; they can easily be detected with OpenCV.
ArUco example and documentation: https://docs.opencv.org/4.x/d5/dae/tutorial_aruco_detection.html
Python example: https://pyimagesearch.com/2020/12/21/detecting-aruco-markers-with-opencv-and-python/
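A minimal detection sketch, assuming OpenCV >= 4.7 (the aruco API changed between 4.x releases) and a hypothetical photo of the form; with one marker per sheet corner you can then build a homography to deskew the page:

```python
import cv2

# Detect ArUco markers in a photographed form (path is a placeholder).
img = cv2.imread("form_photo.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())
corners, ids, _rejected = detector.detectMarkers(gray)

if ids is not None:
    print("Detected marker IDs:", ids.ravel())
    # corners[i] holds the four image-space corners of marker ids[i]; with one
    # marker per sheet corner you can build a homography
    # (cv2.getPerspectiveTransform) to deskew the photographed form.
```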

How can I transform an image to match a circular model in OpenCV

I'm trying to make a program that can take an image of a dartboard and read the score. So far I can get the position of each dart by comparing it to a model image, as you can see here:
However, this only works if the input image is practically the same. In this other case the board is in a slightly different perspective, so I was thinking maybe I can transform the image to match the model image and then do the process you can see above.
So my question is: how can I transform this last image to match the shape and perspective of the model dartboard with OpenCV?
The dartboard is basically planar. Thus, you can model the desired transformation with a homography. You can then perform a simple feature extraction and matching, like here, or, if speed is not as important, use an intensity-based parametric alignment algorithm (more accurate); a rough sketch of the feature-based route follows below.
However, as already mentioned in the comments, it will not be as simple afterwards. The dart flights will (depending on the distortion) most likely cover an area of your board that does not coincide with the actual score. Actually, even with a frontal view it is difficult to say.
I assume you will have to find the points at which the darts stick into your board. Furthermore, I think this will be easier with a view from a certain angle. Maybe you can fit line segments just in the area where you detected a difference beforehand.
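A rough sketch of the feature-based route mentioned above, using ORB (a free alternative to SIFT/SURF) and hypothetical file names; the RANSAC-estimated homography warps the photo into the model's perspective:

```python
import cv2
import numpy as np

photo = cv2.imread("board_photo.jpg", cv2.IMREAD_GRAYSCALE)
model = cv2.imread("board_model.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(photo, None)
kp2, des2 = orb.detectAndCompute(model, None)

# Brute-force Hamming matcher with cross-check as a simple match filter.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# RANSAC-estimated homography maps the photo into the model's perspective.
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
aligned = cv2.warpPerspective(photo, H, (model.shape[1], model.shape[0]))
```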
I don't think comparing an image with a model that was captured using a different subject at a different angle is a good idea. There would be lots of small differences even after perfectly matching them geometrically - shadows, lighting, color differences, etc.
I would just capture an image every time a game begins (the reference) and extract the features (straight lines seem good enough); then, after the game, capture another image, subtract the reference, and do blob analysis to find the darts.
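A minimal sketch of that reference-subtraction idea, with hypothetical file names and a guessed area threshold for the blob filter:

```python
import cv2

ref = cv2.imread("board_reference.jpg", cv2.IMREAD_GRAYSCALE)
cur = cv2.imread("board_after_throw.jpg", cv2.IMREAD_GRAYSCALE)

# Difference against the reference, threshold, and clean up small speckles.
diff = cv2.absdiff(cur, ref)
_, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                        cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))

# Blob analysis: keep connected components large enough to be a dart.
n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
darts = [centroids[i] for i in range(1, n)
         if stats[i, cv2.CC_STAT_AREA] > 100]
print("Candidate dart locations:", darts)
```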

Sparse Image matching in iOS

I am building an iOS app that, as a key feature, incorporates image matching. The problem is that the images I need to recognize are small orienteering 10x10 plaques with simple large text on them. They can be quite reflective and will be outside (so the light conditions will be variable). Sample image
There will be up to 15 of these types of image in the pool and really all I need to detect is the text, in order to log where the user has been.
The problem I am facing is that the image matching software I have tried, aurasma and (slightly more successfully) arlabs, can't distinguish between them, as they are primarily built to work with detailed images.
I need to accurately detect which plaque is being scanned and have considered using GPS to refine the selection, but the only reliable way I have found is to get the user to manually enter the text. One of the key attractions we have based the product around is being able to detect these images that are already in place, without having to set up any additional material.
Can anyone suggest a piece of software that would work (and is iOS friendly), or a method of detection that would be effective and interactive/pleasing for the user?
Sample environment:
http://www.orienteeringcoach.com/wp-content/uploads/2012/08/startfinishscp.jpeg
The environment can change substantially: the plaques are basically anywhere one could be positioned - fences, walls, and posts, in either wooded or open areas, but overwhelmingly outdoors.
I'm not an iOS programmer, but I will try to answer from an algorithmic point of view. Essentially, you have a detection problem ("Where is the plaque?") and a classification problem ("Which one is it?"). Asking the user to keep the plaque in a pre-defined region is certainly a good idea. This solves the detection problem, which is often harder to solve with limited resources than the classification problem.
For classification, I see two alternatives:
The classic "Computer Vision" route would be feature extraction and classification. Local Binary Patterns and HOG are feature extractors known to be fast enough for mobile (the former more than the latter), and they are not too complicated to implement. Classifiers, however, are non-trivial, and you would probably have to search for an appropriate iOs library.
Alternatively, you could try to binarize the image, i.e. classify pixels as "plaque" / white or "text" / black. Then you can use an error-tolerant similarity measure for comparing your binarized image with a binarized reference image of the plaque. The chamfer distance measure is a good candidate. It essentially boils down to comparing the distance transforms of your two binarized images. This is more tolerant to misalignment than comparing the binary images directly. The distance transforms of the reference images can be pre-computed and stored on the device.
Personally, I would try the second approach. A (non-mobile) prototype of the second approach is relatively easy to code and evaluate with a good image processing library (OpenCV, Matlab + Image Processing Toolbox, Python, etc).
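A minimal desktop prototype of the chamfer-style comparison in Python/OpenCV, assuming both images are already binarized with text as black (0) on a white (255) plaque background; names are illustrative:

```python
import cv2
import numpy as np

def chamfer_score(query_bin, ref_bin):
    # cv2.distanceTransform gives each non-zero pixel its distance to the
    # nearest zero pixel, i.e. to the nearest "text" pixel of the reference.
    ref_dist = cv2.distanceTransform(ref_bin, cv2.DIST_L2, 3)
    # Average that distance over the query's text pixels: lower = better match.
    ys, xs = np.nonzero(query_bin == 0)
    if len(xs) == 0:
        return float("inf")
    return float(ref_dist[ys, xs].mean())

# Usage idea: the reference distance transforms can be precomputed once per
# plaque and stored on the device; classify a captured, binarized image as
# the reference with the lowest score.
```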
I managed to find a solution that is working quite well. I'm not fully optimized yet, but I think it's just a matter of tweaking the filters, as I'll explain later on.
Initially I tried to set up OpenCV, but it was very time consuming with a steep learning curve; it did, however, give me an idea. The key to my problem is really detecting the characters within the image and ignoring the background, which is basically just noise. OCR was designed exactly for this purpose.
I found the free library Tesseract (https://github.com/ldiqual/tesseract-ios-lib) easy to use and with plenty of customizability. At first the results were very random, but applying sharpening, a monochromatic filter, and a color invert worked well to clean up the text. Next I marked out a target area on the UI and used that to cut out the rectangle of the image to process. Processing is slow on large images, and this cut the time dramatically. The OCR filter allowed me to restrict the allowable characters, and as the plaques follow a standard configuration this narrowed down the possibilities and improved accuracy.
So far it's been successful with the grey-background plaques, but I haven't found the correct filter for the red and white editions. My goal will be to add color detection and remove the need to feed in the data type.
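For reference, a rough desktop prototype of that pipeline (crop to the UI-marked area, sharpen, monochrome threshold, invert, OCR with a character whitelist), using pytesseract as a stand-in for the iOS wrapper; the paths, crop rectangle, and whitelist are illustrative only:

```python
import cv2
import numpy as np
import pytesseract  # desktop stand-in for the tesseract-ios wrapper

img = cv2.imread("plaque_photo.jpg")
x, y, w, h = 100, 200, 300, 150          # UI-marked target area (made up)
roi = img[y:y + h, x:x + w]

# Sharpen, threshold to monochrome, and invert so the text stands out.
gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
sharpened = cv2.filter2D(gray, -1, np.array([[0, -1, 0],
                                             [-1, 5, -1],
                                             [0, -1, 0]], dtype=np.float32))
_, mono = cv2.threshold(sharpened, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
inverted = cv2.bitwise_not(mono)

# Restrict OCR to the characters the plaques actually use.
text = pytesseract.image_to_string(
    inverted,
    config="--psm 7 -c tessedit_char_whitelist=0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ")
print(text.strip())
```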

Algorithm for capturing machine readable zones

What method is suitable for capturing (detecting) the MRZ in a photo of a document? I'm thinking about a cascade classifier (e.g. Viola-Jones), but it seems a bit weird to use it for this problem.
If you know that you will be looking for text in a passport, why not try to find the passport model points on it first? Match a template of a passport to it using ASM/AAM (Active Shape Model, Active Appearance Model) techniques. Once you have the passport position information, you can cut out the regions that you are interested in. This will take some time to implement, though.
Consider this approach as a great starting point (a rough OpenCV sketch of these steps follows after the list):
A black top-hat followed by a horizontal derivative highlights long rows of characters.
Morphological closing operation(s) merge the nearby characters and character rows together into a single large blob.
Optional erosion operation(s) remove the small blobs.
Otsu thresholding followed by contour detection and filtering away the contours which are apparently too small, too round, or located in the wrong place will get you a small number of possible locations for the MRZ.
Finally, compute bounding boxes for the locations you found and see whether you can OCR them successfully.
It may not be the most efficient way to solve the problem, but it is surprisingly robust.
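A rough OpenCV sketch of those steps; the kernel sizes and contour-filter thresholds are illustrative guesses:

```python
import cv2
import numpy as np

gray = cv2.imread("passport.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical path

rect_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (13, 5))
sq_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (21, 21))

# Black top-hat highlights dark characters against the lighter background.
blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, rect_kernel)

# Horizontal derivative emphasises rows of characters.
grad = np.absolute(cv2.Sobel(blackhat, cv2.CV_32F, 1, 0, ksize=-1))
grad = cv2.normalize(grad, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")

# Closing merges characters into line-shaped blobs; Otsu thresholds them;
# erosion removes small blobs.
grad = cv2.morphologyEx(grad, cv2.MORPH_CLOSE, rect_kernel)
thresh = cv2.threshold(grad, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]
thresh = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, sq_kernel)
thresh = cv2.erode(thresh, None, iterations=4)

# Keep wide, flat contours as MRZ candidates and compute their bounding boxes.
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    if w / float(h) > 5 and w > 0.5 * gray.shape[1]:
        mrz = gray[y:y + h, x:x + w]  # hand this crop to the OCR step
```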
A better approach would be to use projection profile methods. A projection profile method is based on the following idea:
Create an array A with an entry for every row in your b/w input document. Now set A[i] to the number of black pixels in the i-th row of your original image.
(You can also create a vertical projection profile by considering columns in the original image instead of rows.)
Now the array A is the projected row/column histogram of your document and the problem of detecting MRZs can be approached by examining the valleys in the A histogram.
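A minimal sketch of the horizontal projection profile, assuming an already-binarized document image (black text = 0 on a white = 255 background) at a hypothetical path:

```python
import cv2
import numpy as np

binary = cv2.imread("document_bw.png", cv2.IMREAD_GRAYSCALE)

# A[i] = number of black (text) pixels in row i of the document.
A = np.count_nonzero(binary == 0, axis=1)

# A crude way to pick out dense text bands such as the MRZ: rows whose
# black-pixel count exceeds a fraction of the page width (threshold is a guess).
threshold = 0.3 * binary.shape[1]
text_rows = np.where(A > threshold)[0]
```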
This problem, however, is not completely solved, so there are many variations and improvements. Here's some additional documentation:
Projection profiles in Google Scholar: http://scholar.google.com/scholar?q=projection+profile+method
Tesseract-ocr, a great open source OCR library: https://code.google.com/p/tesseract-ocr/
Viola & Jones' Haar-like features generate many (many (many)) features to try to describe an object and are a bit more robust to scale and the like. Their approach was a unique approach to a difficult problem.
Here, however, you have plenty of constraint on the problem and anything like that seems a bit overkill. Rather than 'optimizing early', I'd say evaluate the standard OCR tools off the shelf and see where they get you. I believe you'll be pleasantly surprised.
PS:
You'll want to preprocess the image to isolate the characters on a white background. This can be done quite easily and will help the OCR algorithms significantly.
You might want to consider using stroke width transform.
You can follow these tips to implement it.

Map image model to data

I'm working on software that checks whether some laser-cut parts were cut correctly, using AutoCAD data as a reference. I have parsed the DXF files, converted them to a BMP (and to an XML file that gives me all the information), and now I want to compare this to the real, acquired data.
I have applied enough preprocessing to get a reasonably thresholded, binary picture. This is, however, distorted (unfortunately, telecentric lenses are expensive, and the user places the object into a device, causing some translation, some scaling, and a tiny amount of rotation, around 1-2 degrees).
I have considered the Hough transform, but memory is an issue. I have played around with bounding box transformation, but the unknown shape makes this hard. I've read about TILT (no symmetry) and registration algorithms, but I'd like to get another opinion.
I'm looking for some papers, some ideas, some pointers on how to go on.
Thanks.
The first step is to undistort the image (see camera calibration - ignore the 3D part).
Then think about the shape matching. Depending on how small the error you are trying to find is, this could be very easy or very difficult, but those links should get you started.
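A minimal undistortion sketch; the camera matrix and distortion coefficients below are placeholders standing in for the output of a real cv2.calibrateCamera run on a checkerboard sequence:

```python
import cv2
import numpy as np

# Placeholder calibration data (normally produced by cv2.calibrateCamera).
camera_matrix = np.array([[1000.0, 0.0, 640.0],
                          [0.0, 1000.0, 480.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.array([-0.2, 0.05, 0.0, 0.0, 0.0])

img = cv2.imread("part_photo.png")  # hypothetical acquired image
undistorted = cv2.undistort(img, camera_matrix, dist_coeffs)

# With lens distortion removed, the remaining translation/scale/rotation can
# be estimated as a similarity transform between matched points (e.g. with
# cv2.estimateAffinePartial2D) before comparing against the DXF rendering.
```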
You may want to look at features that can discriminate the two. Are there simple features that can accurately distinguish a properly cut piece from an incorrectly cut piece? If so, you can use the same idea as the Hough transform/template matching, but reduce the template to certain distinguishing features (edges, corners, etc.) to reduce the memory required.
You may also want to look at SIFT/SURF features, which aim to match images by a certain set of features while being invariant to the rotation and scale of the objects within the image. There are libraries out there that implement these features (linked on the SURF page).
This, however, won't help with the distortion. If you're using the same camera for all images, then you should be able to de-skew them accordingly.
