What is the best method to template match an image with noise? - opencv

I have a large image (5400x3600) that has multiple CCTVs that I need to detect.
The detection takes a lot of time (4-7 minutes) with rotation, and it still fails to resolve certain CCTVs.
What is the best method to match a template like this?
I am using skimage - OpenCV is not an option for me, but I am open to suggestions on that too.
For example: in the images below, the template is correctly matched with the second image, but the first image is not matched - I guess due to the noise created by the text "BLDG..."
Template:
Source image:
Match result:

The fastest method is probably a cascade of boosted classifiers trained with several variations of your logo and possibly a few rotations and some negative examples too (non-logos). You have to roughly scale your overall image so the test and training examples are approximately matched by scale. Unlike SIFT or SURF that spend a lot of time in searching for interest points and creating descriptors for both learning and searching, binary classifiers shift most of the burden to a training stage while your testing or search will be much faster.
In short, the cascade would run in such a way that a very first test would discard a large portion of the image. If the first test passes, the others follow and refine. They will be super fast, consisting of just a few intensity comparisons on average around each point. Only a few locations will pass the whole cascade, and those can be verified with additional tests such as your rotation-correlation routine.
Thus, the classifiers are effective not only because they quickly detect your object but because they can also quickly discard non-object areas. To read more about boosted classifiers see the corresponding OpenCV documentation.
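To make the search-time side concrete, here is a minimal sketch of applying an already-trained cascade with OpenCV's Python bindings; the XML file name, the down-scaling factor and the detectMultiScale parameters are placeholders (nothing here ships with OpenCV), so treat it as an illustration of the workflow rather than a working detector.

import cv2

# Hypothetical cascade trained offline on symbol / non-symbol patches
# (the XML file name is a placeholder, not a model shipped with OpenCV).
cascade = cv2.CascadeClassifier("cctv_logo_cascade.xml")

# Down-scale the 5400x3600 plan so the search scale roughly matches
# the scale the cascade was trained at.
img = cv2.imread("floor_plan.png", cv2.IMREAD_GRAYSCALE)
small = cv2.resize(img, None, fx=0.25, fy=0.25, interpolation=cv2.INTER_AREA)

# Each candidate window passes the cheap early stages first; most of the
# image is rejected after only a few intensity comparisons.
hits = cascade.detectMultiScale(small, scaleFactor=1.1, minNeighbors=3)

for (x, y, w, h) in hits:
    # Map back to full-resolution coordinates and verify each hit with a
    # slower test, e.g. the rotation-correlation routine mentioned above.
    print(int(x / 0.25), int(y / 0.25), int(w / 0.25), int(h / 0.25))

The expensive part is building the training set and training the cascade offline (e.g. with opencv_traincascade); the search itself then reduces to the detectMultiScale call above.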

This problem in general is addressed by Logo Detection. See this for similar discussion.
There are many robust methods for template matching. See this or google for a very detailed discussion.
But from your example I can guess that the following approach would work.
Create a feature for your search image. It essentially has a rectangle enclosing the word "CCTV". So the width, height, angle, and individual character features for matching the textual information could be a suitable choice. (Or you may use the image containing "CCTV" itself; in that case the method will not be scale invariant.)
Now when searching, first detect rectangles. Then use the angle to prune your search space, and use an image transformation to align the rectangles parallel to the axes. (This should take care of the need for rotation.) Then, according to the feature chosen in step 1, match the text content. If you use individual character features, your template matching step is essentially a classification step. Otherwise, if you use the image for matching, you may use cv::matchTemplate; a rough sketch of this path is shown below.
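A rough Python sketch of the rectangle-then-matchTemplate path, assuming the symbol sits inside a contour that minAreaRect can find; file names, the size filter and the 0.7 acceptance threshold are placeholders.

import cv2
import numpy as np

img = cv2.imread("floor_plan.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("cctv_template.png", cv2.IMREAD_GRAYSCALE)

_, bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
# [-2] keeps this compatible with both OpenCV 3 and 4 return signatures.
contours = cv2.findContours(bw, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)[-2]

for c in contours:
    (cx, cy), (w, h), angle = cv2.minAreaRect(c)
    if w < 10 or h < 10:              # prune tiny contours (placeholder size)
        continue
    # De-rotate the image around the rectangle centre so the candidate is
    # axis-aligned, then crop it and compare against the upright template.
    M = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
    upright = cv2.warpAffine(img, M, (img.shape[1], img.shape[0]))
    x0, y0 = max(int(cx - w / 2), 0), max(int(cy - h / 2), 0)
    roi = upright[y0:y0 + int(h), x0:x0 + int(w)]
    if roi.shape[0] < template.shape[0] or roi.shape[1] < template.shape[1]:
        continue
    score = cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED).max()
    if score > 0.7:                    # placeholder acceptance threshold
        print("candidate at", (int(cx), int(cy)), "score", score)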
Hope it helps.

Symbol spotting is more complicated than logo spotting because interest points hardly work on document images such as architectural plans. Many conferences deal with pattern recognition, and each year there are many new algorithms for symbol spotting, so giving you the best method is not possible. You could check the IAPR conferences: ICPR, ICDAR, DAS, GREC (Workshop on Graphics Recognition), etc. These researchers focus on this topic: M. Rusiñol, J. Lladós, S. Tabbone, J.-Y. Ramel, M. Liwicki, etc. They work on several techniques for improving symbol spotting, such as vectorial signatures, graph-based signatures, and so on (check Google Scholar for more papers).
An easy way to start a new approach is to work with simple shapes such as lines, rectangles, and triangles instead of matching everything at once.

Your example can be recognized by shape matching (contour matching), much faster than 4 minutes.
For a good match, you need decent preprocessing and denoising.
Examples can be found at http://www.halcon.com/applications/application.pl?name=shapematch
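For what it's worth, here is a minimal contour-matching sketch in OpenCV Python (the question mentions skimage, so treat this as illustrative only); the file names, the median-blur denoising step, the area filter and the 0.1 distance threshold are all placeholders.

import cv2

def contours_of(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.medianBlur(img, 3)                     # simple denoising
    _, bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # [-2] works with both OpenCV 3 and 4 return signatures.
    return cv2.findContours(bw, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)[-2]

template_contour = max(contours_of("cctv_template.png"), key=cv2.contourArea)

for c in contours_of("floor_plan.png"):
    if cv2.contourArea(c) < 50:                      # skip tiny specks
        continue
    # Hu-moment based distance: lower means more similar, and it is
    # invariant to rotation and scale, so no brute-force rotation loop.
    d = cv2.matchShapes(template_contour, c, cv2.CONTOURS_MATCH_I1, 0)
    if d < 0.1:                                      # placeholder threshold
        x, y, w, h = cv2.boundingRect(c)
        print("possible CCTV symbol at", (x, y, w, h), "distance", d)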

Related

Use SIFT or ORB for template matching

Inspired by this tutorial:
Feature Matching, I'm trying to do template matching and clustering of an image set I have.
In most of my dataset the images are straight (maybe 10 degrees of rotation at most).
I would like to use this information to get better matches.
I have noticed that sometimes I get a false match: when I display it, I can see the match vectors are all at different angles (not straight lines). How can I check whether the match is straight or rotated?
Thanks for the help
I'm not sure I understand everything; what do you mean by a straight image?
As for the matches: when you compare two images, you will probably have many features that correspond between them, and you cannot ensure that they all describe a straight line. You can only assume roughly straight lines when you try to find an object in an image as in the example, but this is just a representation...
If you only want to do clustering, I advise you to compare features only, without doing any matching; you'll probably find a cluster of common features for some images that you can group together.
So ORB and SIFT try to match features in a pair of images. The reason you get mismatches is that some of the features are too similar and the system mistakes them for a match.
You will need to increase your detector's threshold and tighten what the matcher accepts as a match.
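One way to check this in code is to fit a single similarity transform to the matched keypoints and inspect its rotation angle; here is a rough ORB-based sketch (needs a reasonably recent OpenCV for estimateAffinePartial2D), where the 10-degree and inlier-ratio thresholds are placeholders.

import cv2
import numpy as np

img1 = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:50]

src = np.float32([kp1[m.queryIdx].pt for m in matches])
dst = np.float32([kp2[m.trainIdx].pt for m in matches])

# RANSAC throws away matches that do not fit one rigid motion.
M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
if M is None:
    print("no consistent transform found - reject this match")
else:
    angle = np.degrees(np.arctan2(M[1, 0], M[0, 0]))
    # With mostly upright images, a good match should have a small rotation
    # angle and a reasonable share of inliers.
    if abs(angle) < 10 and inliers.sum() > 0.5 * len(matches):
        print("accepted, rotation = %.1f degrees" % angle)
    else:
        print("rejected (rotation %.1f degrees or too few inliers)" % angle)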

Finding similar Images

I want to find images similar to another image. After researching, I found two methods. The first was to represent the image by its attributes, like
length = full
pattern = check
color = blue
but the limitation of this method is that I will not be able to get an exhaustive dataset with all the features marked.
The second approach I found was to extract features and do feature mapping.
So I decided to use deep convolutional neural networks with Caffe, so that by using any of the existing models I could learn the features and then perform feature matching or some other operation. I just wanted some general advice on what other methods are good and worth a try. And since I am just starting out with Caffe, can anyone give a general guideline on how to approach the problem with Caffe?
Thanks in advance
I looked at pHash; I was just curious whether it will only find images that are the same apart from minor intensity variations and other small differences. Will it also work to match the same type semantically? For example, for a t-shirt with blue and red stripes, will it consider a black-and-white striped one as similar, and would it consider things like the length of the shirt, collar style, etc.?
It's true that it's been empirically shown that the Euclidean distance between features extracted using ConvNets is smaller for images of the same class and larger for images of different classes, but it's important to understand what kind of similarity you're looking for.
One can define many types of similarity measures, and the type of features you use (in the case of ConvNets, the type of data it was trained on) affects the kind of similar images you'll get. For instance, maybe given an image of a dog, you want to find other pictures of dogs but not specifically that exact dog; alternatively, maybe you have a picture of a church and you want to find another image of the exact same church but from a different angle. These are two very different problems, with different methods you can use to solve them.
One particular kind of convolutional neural network you can look at is the Siamese network, which is built to learn similarities between two images, given a dataset of pairs of images with the labels same/not_same. You can look for a Caffe implementation of this method here.
A different method is to take a ConvNet trained on ImageNet data (see here for options), use the Python/MATLAB interface to run images through it, extract the second-to-last layer, and use that as the representation of each image. Now you can just take the Euclidean distance between those representations and use that as your similarity measure.
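A minimal sketch of that last idea with the Caffe Python interface; the deploy.prototxt/caffemodel paths and the 'fc7' layer name are assumptions that match the reference CaffeNet layout, and mean subtraction / BGR channel swapping are omitted for brevity.

import numpy as np
import caffe

# Load a network trained on ImageNet (paths are placeholders).
net = caffe.Net("deploy.prototxt", "bvlc_reference_caffenet.caffemodel", caffe.TEST)

def descriptor(image_path):
    img = caffe.io.load_image(image_path)
    # Resize to the network input and move channels to the front.
    img = caffe.io.resize_image(img, (227, 227))
    blob = img.transpose(2, 0, 1)[np.newaxis, ...].astype(np.float32)
    net.blobs["data"].reshape(*blob.shape)
    net.blobs["data"].data[...] = blob
    net.forward()
    # Second-to-last layer activations as the image representation.
    return net.blobs["fc7"].data[0].copy()

d = np.linalg.norm(descriptor("shirt_a.jpg") - descriptor("shirt_b.jpg"))
print("euclidean distance:", d)   # smaller = more similar under this measure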
Unrelated to Caffe, you can also use "old school" methods of feature matching, included in open source libraries like OpenCV (an example tutorial of such method).

object detection LEDs in simple scene

I am new to OpenCV, and I am guessing that this problem could be somewhat simple: I am trying to detect an object which is almost 25 by 15 pixels in an image which is 470 by 590 pixels.
I am attaching a zoomed image of this object. I have several options to go with:
1 - detection of the two close circles using the Hough transform
2 - histogram matching
3 - SURF feature detection
Any advice on which direction I should take? Please consider speed and real-time application. Thanks
I think it should go without explicitly saying so, but there are probably hundreds of things that could be tried, and with only one example image it is quite difficult to advise. For instance, are the LEDs always green? We don't know.
That aside, imho, two good places to start would be with the ol' faithful template matching, or blob detection.
Then if that is not robust enough, you will need to look at some alternative representations of the template/blob, like the classic HOG (good for shape, maybe a bit heavy for this app), or even your own bespoke one that encodes your domain-specific knowledge of this problem.
Then if that is not robust enough, build a dataset of representative +ve and -ve examples, as big as you can, and then train a classifier such as an SVM or a boosted classifier.
Template Matching:
http://docs.opencv.org/doc/tutorials/imgproc/histograms/template_matching/template_matching.html
Blob detection:
https://code.google.com/p/cvblob/
Machine Learning:
http://docs.opencv.org/modules/ml/doc/ml.html
TIPS:
Add as much domain knowledge as possible, i.e. if they are always green, use color in the representation, like HOG on the green channel for instance. If they are always circular, try to encode that, e.g. use a log-polar grid in the template rather than a regular grid... and so on.
Machine learning is not magic: a linear classifier will essentially weight different points in the feature space, so you still require a good representation. If the template matching was a total failure, it is unlikely that simple linear ML will help, but if the template matching was okay, then ML may well boost the performance to a good level.
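As a concrete starting point, a tiny sketch of the template-matching baseline with the colour tip baked in (matching on the green channel only); the file names and the 0.8 threshold are placeholders.

import cv2

scene = cv2.imread("scene.png")
template = cv2.imread("led_template.png")

scene_g = scene[:, :, 1]        # OpenCV stores BGR; index 1 is the green channel
templ_g = template[:, :, 1]

res = cv2.matchTemplate(scene_g, templ_g, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(res)

if max_val > 0.8:               # placeholder acceptance threshold
    print("LED pair found at", max_loc, "size", templ_g.shape, "score", max_val)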
step 1: Remove the black background.
step 2: A snake algorithm can be used to find the boundaries of your object

read numbers and letters from an image using openCV

I am developing an application to read the letters and numbers from an image using OpenCV in C++. I first converted the given colour image and colour template to binary images, then called the method cvMatchTemplate(). This method just highlighted the areas where the template matches, but not clearly. I don't just want to see the areas; I need to parse the characters (letters and numbers) from the image. I am new to OpenCV. Does anybody know any other method to get the result?
The image is taken from a camera; the sample image is shown above. I need to get all the text from the LED display (130 and Delft Tanthaf).
Friends, I tried the sample face detection application, and it detects faces; the Haar cascade file is provided with OpenCV. I just loaded that file and called the method cvHaarDetectObjects(). To detect the letters I created the XML file using the application letter_recog.cpp provided by OpenCV. But when I load this file, it shows an error (OpenCV error: Unspecified error > in unknown function, file ........\ocv\opencv\src\cxcore\cxpersistence.cpp, line 4720). I searched the web for this error and got information about the lib files used. I did so, but the error still remains. Is the error with my XML file or with the call that loads it ((CvHaarClassifierCascade*)cvLoad("built xml file name",0,0,0);)? Please help.
Thanks in advance
As of OpenCV 3.0 (in active development), you can use the built-in "scene text" object detection module:
Reference: http://docs.opencv.org/3.0-beta/modules/text/doc/erfilter.html
Example: https://github.com/Itseez/opencv_contrib/blob/master/modules/text/samples/textdetection.cpp
The text detection is built on these two papers:
[Neumann12] Neumann L., Matas J.: Real-Time Scene Text Localization and Recognition, CVPR 2012. The paper is available online at http://cmp.felk.cvut.cz/~neumalu1/neumann-cvpr2012.pdf
[Gomez13] Gomez L. and Karatzas D.: Multi-script Text Extraction from Natural Scenes, ICDAR 2013. The paper is available online at http://refbase.cvc.uab.es/files/GoK2013.pdf
Once you've found where the text in the scene is, you can run any sort of standard OCR against those slices (Tesseract OCR is common). And there's now an end-to-end sample using OpenCV's new interface to Tesseract:
https://github.com/Itseez/opencv_contrib/blob/master/modules/text/samples/end_to_end_recognition.cpp
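For the OCR step itself, a small sketch using pytesseract, one common Python wrapper for Tesseract (my choice here, not something the sample requires); the crop coordinates stand in for a box returned by whichever text detector you use.

import cv2
import pytesseract

img = cv2.imread("bus.jpg")
x, y, w, h = 100, 50, 200, 60          # placeholder box from the text detector
roi = img[y:y + h, x:x + w]

# LED text is usually easier to read after binarisation.
gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
_, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

print(pytesseract.image_to_string(bw))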
Template matching tends not to be robust for this sort of application because of lighting inconsistencies, orientation changes, scale changes, etc. The typical way of solving this problem is to bring in machine learning. What you are trying to do by training your own boosting classifier is one possible approach. However, I don't think you are doing the training correctly. You mentioned that you gave it 1 logo as a positive training image and 5 other images not containing the logo as negative examples? Generally you need training samples on the order of hundreds or thousands or more. You cannot possibly train with 6 training samples and expect it to work.
If you are unfamiliar with machine learning, here is roughly what you should do:
1) You need to collect many positive training samples (from hundreds onwards, but generally the more the merrier) of the object you are trying to detect. If you are trying to detect individual characters in the image, then get cropped images of individual characters. You can start with the MNIST database for this. Better yet, to train the classifier for your particular problem, get many cropped images of the characters on the bus from photos. If you are trying to detect the entire rectangular LED board panel, then use images of them as your positive training samples.
2) You will need to collect many negative training samples. Their number should be in the same order as the number of positive training samples you have. These could be images of the other objects that appear in the images you will run your detector on. For example, you could crop images of the front of the bus, road surfaces, trees along the road etc. and use them as negative examples. This is to help the classifier rule out these objects in the image you run your detector on. Hence, negative examples are not just any image containing objects you don't want to detect. They should be objects that could be mistaken for the object you are trying to detect in the images you run your detector on (at least for your case).
See the following link on how to train the cascade of classifier and produce the XML model file: http://note.sonots.com/SciSoftware/haartraining.html
Even though you mentioned you only want to detect the individual characters instead of the entire LED panel on the bus, I would recommend first detecting the LED panel so as to localize the region containing the characters of interest. After that, either perform template matching within this smaller region, or run a classifier trained to recognize individual characters on patches of pixels in this region obtained using a sliding window approach, possibly at multiple scales (a rough sliding-window sketch follows). (Note: the Haar cascade boosting classifier you mentioned above will detect characters, but it won't tell you which character it detected unless you only train it to detect that particular character...) Detecting characters in this region in a sliding-window manner will give you the order in which the characters appear, so you can string them into words etc.
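Here is a rough sketch of that sliding-window step; classify_patch, the window size and the stride are placeholders for whatever classifier and geometry you end up training.

import cv2

def classify_patch(patch):
    # Placeholder: return a character label or None.
    # Plug in your trained boosted cascade / SVM / etc. here.
    return None

panel = cv2.imread("led_panel_crop.png", cv2.IMREAD_GRAYSCALE)
win_w, win_h, step = 16, 24, 4          # placeholder window size and stride

detections = []
for y in range(0, panel.shape[0] - win_h, step):
    for x in range(0, panel.shape[1] - win_w, step):
        patch = panel[y:y + win_h, x:x + win_w]
        label = classify_patch(patch)
        if label is not None:
            detections.append((x, y, label))

# Sorting left-to-right recovers the order of the characters on the sign.
detections.sort(key=lambda d: d[0])
print("".join(str(d[2]) for d in detections))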
Hope this helps.
EDIT:
I happened to chance upon this old post of mine after separately discovering the scene text module in OpenCV 3 mentioned by #KaolinFire.
For those who are curious, this is the result of running that detector on the sample image given by the OP. Notice that the detector is able to localize the text region, even though it returns more than one bounding box.
Note that this method is not foolproof (at least this implementation in OpenCV with the default parameters). It tends to generate false-positives, especially when the input image contains many "distractors".
Here are more examples obtained using this OpenCV 3 text detector on the Google Street View dataset:
Notice that it has a tendency to find "text" between parallel lines (e.g., windows, walls etc). Since the OP's input image is likely going to contain outdoor scenes, this will be a problem especially if he/she does not restrict the region of interest to a smaller region around the LED signs.
It seems that if you are able to localize a "rough" region containing just the text (e.g., just the LED sign in the OP's sample image), then running this algorithm can help you get a tighter bounding box. But you will have to deal with the false-positives though (perhaps discarding small regions or picking among the overlapping bounding boxes using a heuristic based on knowledge about the way letters appear on the LED signs).
Here are more resources (discussion + code + datasets) on text detection.
Code
Extracting text OpenCV
http://libccv.org/doc/doc-swt/
Stroke Width Transform (SWT) implementation (Python)
https://github.com/subokita/Robust-Text-Detection
Datasets
You will find the google streetview and MSRA datasets here. Although the images in these datasets are not exactly the same as the ones for the LED signs on buses, they may be helpful either for picking the "best" performing algorithm from among several competing algorithms, or to train a machine learning algorithm from scratch.
http://www.iapr-tc11.org/mediawiki/index.php/Datasets_List
See my answer to How to read time from recorded surveillance camera video? You can/should use cvMatchTemplate() to do that.
If you are working with a fixed set of bus destinations, template matching will do.
However, if you want the system to be more flexible, I would imagine you would need some form of contour/shape analysis for each individual letter.
You can also look at EAST: Efficient Scene Text Detector - https://www.learnopencv.com/deep-learning-based-text-detection-using-opencv-c-python/
Under this link you'll find examples in C++ and Python. I used this code to detect bus numbers (after detecting that a given object is a bus).
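For reference, a minimal sketch of loading EAST through cv2.dnn as in that tutorial; the frozen_east_text_detection.pb model has to be downloaded separately, and decoding the geometry map into boxes plus non-maximum suppression is omitted here.

import cv2

net = cv2.dnn.readNet("frozen_east_text_detection.pb")

img = cv2.imread("bus.jpg")
# EAST expects input dimensions that are multiples of 32.
blob = cv2.dnn.blobFromImage(img, 1.0, (320, 320),
                             (123.68, 116.78, 103.94), swapRB=True, crop=False)
net.setInput(blob)
scores, geometry = net.forward(["feature_fusion/Conv_7/Sigmoid",
                                "feature_fusion/concat_3"])
# `scores` holds text/no-text confidences and `geometry` the rotated boxes;
# threshold, decode and run non-maximum suppression on them to get regions.
print(scores.shape, geometry.shape)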

How to interpolate between data points?

I am currently developing a piece of software using OpenCV and Qt that plots data points. I need to be able to fill in an image from incomplete data. I want to interpolate between the points I have. Can anyone recommend a library or function that could help me? I thought maybe the OpenCV remap method, but I can't seem to get that to work.
The data is a 2-D matrix of intensity values. I want to create an image of some sort. It's a school project.
Interpolation is a complex subject. There are infinitely many ways to interpolate a set of points, and this assumes that you truly wish to do interpolation, and not smoothing of any sort. (An interpolant reproduces the original data points exactly.) And of course, the 2-d nature of this problem makes things more difficult.
There are several common schemes for interpolation of scattered data in 2-d. Actually, for those who have access to it, a very nice paper is available (Richard Franke, "Scattered data interpolation: Tests of some methods", Mathematics of Computation, 1982.)
Perhaps the most common method used is based on a triangulation of your data. Merely build a triangulation of the domain from your data points. Then any point inside the convex hull of the data must lie inside exactly one of the triangles, or it will be on a shared edge. This allows you to interpolate linearly inside the triangle. If you are using MATLAB, then the function griddata is available for this express purpose.
The problem when trying to populate a complete rectangular image from scattered points is that very likely the data does not extend to the 4 corners of the array. In that event, a triangulation-based scheme will fail, since the corners of the array do not lie inside the convex hull of the scattered points. An alternative then is to use "radial basis functions" (often abbreviated RBF). There are many such schemes to be found, including Kriging, as used by the geostatistics community (a small SciPy sketch of both the triangulation and RBF options follows the link below).
http://en.wikipedia.org/wiki/Kriging
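For a Python/OpenCV project, the SciPy equivalents of both schemes look roughly like this; the sample data and the grid size are placeholders.

import numpy as np
from scipy.interpolate import griddata, Rbf

# Scattered known samples (placeholders standing in for your measured data).
x = np.random.rand(200) * 100
y = np.random.rand(200) * 100
z = np.sin(x / 10.0) + np.cos(y / 10.0)

gx, gy = np.mgrid[0:100:200j, 0:100:200j]   # target image grid

# Triangulation-based linear interpolation: NaN outside the convex hull.
img_linear = griddata((x, y), z, (gx, gy), method="linear")

# Radial basis functions: these also extrapolate to the corners of the array.
rbf = Rbf(x, y, z, function="thin_plate")
img_rbf = rbf(gx, gy)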
Finally, inpainting is the name for a scheme of interpolation where elements are given in an array, but where there are missing elements. The name obviously refers to that done by an art conservator who needs to repair a tear or rip in a valuable piece of artwork.
http://en.wikipedia.org/wiki/Inpainting
The idea behind inpainting is typically to formulate a boundary value problem. That is, define a partial differential equation on the region where there is a hole. Using the known boundary values, fill in the hole by solving the PDE for the unknown elements. This can be computationally intensive if there are a huge number of unknown elements, since it typically requires the solution of at least a massive sparse system of linear equations. If the PDE is a nonlinear one, then it becomes a more intensive problem yet. A simple, reasonably good choice for the PDE is the Laplacian, which results in a linear system that extrapolates well. Again, I can offer a solution for a MATLAB user.
http://www.mathworks.com/matlabcentral/fileexchange/4551
Better choices for the PDE may come from nonlinear PDEs. One such is the Navier-Stokes equation. It is well suited to modeling the types of surfaces typically seen, but it is also more difficult to deal with. As in many facets of life, you get what you pay for.
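OpenCV actually ships an inpainting routine that solves this kind of boundary value problem directly; a small sketch, assuming missing pixels are encoded as zeros (that encoding is my assumption, adapt the mask to your data). INPAINT_NS is the Navier-Stokes based variant mentioned above.

import cv2
import numpy as np

img = cv2.imread("intensity_image.png", cv2.IMREAD_GRAYSCALE)

# Mask is non-zero wherever data is missing; here "missing" is assumed to
# be encoded as zero-valued pixels (placeholder convention).
mask = np.uint8(img == 0) * 255

# Radius 3 and the Navier-Stokes flag; cv2.INPAINT_TELEA is the alternative.
filled = cv2.inpaint(img, mask, 3, cv2.INPAINT_NS)
cv2.imwrite("filled.png", filled)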
Phew! Big subject.
The "right" answer depends a lot on your problem domain and various details of what you're doing.
Interpolating in more than 1 dimension requires making some choices. I'll assume that you are plotting on a regular grid, but that some of your grid points have no data. Big question: are the missing points sparse, or do they make big blobs?
You can't add information, so you're just trying to establish something that will look OK.
Conceptually simple suggestion (but the implementation may be some work):
For each region of missing data, identify all the edge points. That is, find the x's in this figure
oooxxooo
oox..xoo
oox...xo
ox..xxoo
oox.xooo
oooxoooo
where the .'s are the points missing data, and the x's and o's have data (for a single missing point, this will be the four nearest neighbors). Fill in each missing data point with an average over the edge points around this blob. To make it smooth, weight each point by 1/d, where d is the taxicab distance (delta x + delta y) between the two points.
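A direct (unoptimised) sketch of that suggestion, assuming the grid is a 2-D float array with NaN marking the missing points (that encoding is my assumption); for simplicity it weights against every edge point in the grid rather than only those around each blob, which the 1/d weights make a reasonable approximation.

import numpy as np

def fill_missing(grid):
    out = grid.copy()
    missing = np.argwhere(np.isnan(grid))
    known = np.argwhere(~np.isnan(grid))

    # Keep only the known points that border a missing point ("the x's").
    edge = []
    for r, c in known:
        neighbours = grid[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
        if np.isnan(neighbours).any():
            edge.append((r, c))
    edge = np.array(edge)

    for r, c in missing:
        d = np.abs(edge[:, 0] - r) + np.abs(edge[:, 1] - c)   # taxicab distance
        w = 1.0 / d
        vals = grid[edge[:, 0], edge[:, 1]]
        out[r, c] = np.sum(w * vals) / np.sum(w)
    return out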
From before we had any details:
In the absence of that kind of information, have you tried straight ahead linear interpolation? If your data is reasonably dense this might do it for you, and it is simple enough to code in-line when you need it.
Next step is usually a cubic spline, but for that you'll probably want to grab an existing implementation.
When I need something more powerful than a quick linear interpolation, I usually use ROOT (and pick one of the TSpline classes), but this may be more overhead than you need.
As noted in the comments, ROOT is big, and while it is fast, it does try to force you to do things the ROOT way, so it can have a big effect on your program.
A linear interpolation between (or indeed extrapolation from) two points (x1, y1) and (x2, y2) gives you
y_i = y1 + (x_i - x1)*(y2 - y1)/(x2 - x1)
Considering this is a simple school project, probably the easiest interpolation technique to implement is the "Nearest Neighbors"
For each missing data point you find the nearest "filled" data point and use that as the value.
If you want to improve the results a little bit more, then you can, let's say, find the K nearest data points and use their weighted average as the value of your missing data point.
The weight could be inversely proportional to the distance of the point from the missing data point.
There are a zillion other techniques, but nearest neighbor is probably the easiest to implement.
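A small sketch of the K-nearest-neighbour version using SciPy's cKDTree, again assuming NaN marks the missing points (an assumed encoding); K = 4 is an arbitrary choice.

import numpy as np
from scipy.spatial import cKDTree

def knn_fill(grid, k=4):
    out = grid.copy()
    known = np.argwhere(~np.isnan(grid))
    missing = np.argwhere(np.isnan(grid))
    tree = cKDTree(known)

    # For each missing pixel, find the k nearest known pixels.
    dist, idx = tree.query(missing, k=k)
    vals = grid[known[idx][:, :, 0], known[idx][:, :, 1]]
    w = 1.0 / np.maximum(dist, 1e-9)        # closer neighbours weigh more
    out[missing[:, 0], missing[:, 1]] = np.sum(w * vals, axis=1) / np.sum(w, axis=1)
    return out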
If I understand correctly, your need is as follows.
I think you have a subset of (x, y, intensity) values for a domain of L by W, and you want to fill in values for all X ranging from 0 to L and Y ranging from 0 to W.
If this is your question, then one solution is to get the other intensities by using filters.
I think a Bayer filter or a Gaussian filter would do the job for you.
You can Google these filters and you will find implementations.
Best of luck.
