comparing 2 word images using OpenCV - image-processing

I am working on comparing 2 word images like these:
I am trying to use OpenCV's built-in feature detectors for this purpose. Using SIFT and BruteForceMatcher isn't proving very effective, as it reports a lot of matches even for non-matching words. What would be a good feature descriptor to use in this case?
Also, can someone suggest a good way to quantify the probability that these two images match?
Edit: OCR can't be used, as the text may be handwritten.

I think that image (contour) moments should help you match a selected symbol to a letter from your alphabet. After that, you can match two words as sequences of letters.
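A minimal sketch of that idea, assuming you already have segmented, binarized letter images and a hypothetical alphabet/ folder with one template image per letter (all file names here are placeholders):

```python
import glob
import os
import cv2
import numpy as np

def hu_signature(path):
    """Load a symbol image, binarize it, and return a log-scaled Hu-moment signature."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    hu = cv2.HuMoments(cv2.moments(bw, binaryImage=True)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)   # compress the dynamic range

# Hypothetical layout: one template per letter in ./alphabet, e.g. a.png, b.png, ...
templates = {os.path.splitext(os.path.basename(p))[0]: hu_signature(p)
             for p in glob.glob("alphabet/*.png")}

query = hu_signature("symbol.png")                        # the segmented symbol to identify
letter = min(templates, key=lambda k: np.linalg.norm(templates[k] - query))
print("best matching letter:", letter)
```

The Euclidean distance between Hu signatures is only one possible score; once each symbol is labelled, the two words can be compared as strings.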

Related

Get sentence vector for a K-means clustering task

I am working on a project which groups jobs posted on various job portals into clusters based on the description of the jobs using K-means.
I obtained word vectors using Word2Vec, but I guess this will not serve the purpose, as I need a vector for the whole job description.
I know that I can average the word vectors of a sentence to get a sentence vector, but I am worried about accuracy, since this loses the ordering of the words.
Is there any other way I can get such vectors?
The most commonly used approaches for text vectorization are:
Pure TF-IDF, which can still be useful, especially with n-grams.
Word2Vec to get vectors for the words, using the mean of all word vectors for the whole text.
A combination of the first two: a weighted mean of all word vectors in the text, using the TF-IDF coefficients as weights (sketched below).
I would suggest trying each and picking whichever performs best in your case; the results can differ slightly depending on the nature of the data.
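A rough sketch of the third option, assuming gensim and scikit-learn are available; the example documents and parameter values are placeholders:

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical job descriptions; in practice, load and clean your scraped postings.
docs = ["data engineer building etl pipelines in python and sql",
        "senior java backend developer for payment services"]
tokenized = [d.split() for d in docs]

w2v = Word2Vec(tokenized, vector_size=100, min_count=1, workers=4)   # gensim 4.x API
tfidf = TfidfVectorizer().fit(docs)
idf = dict(zip(tfidf.get_feature_names_out(), tfidf.idf_))

def doc_vector(tokens):
    """TF-IDF-weighted mean of the word vectors in one document."""
    known = [t for t in tokens if t in w2v.wv]
    if not known:
        return np.zeros(w2v.vector_size)
    vecs = [w2v.wv[t] for t in known]
    weights = [idf.get(t, 1.0) for t in known]
    return np.average(vecs, axis=0, weights=weights)

X = np.vstack([doc_vector(t) for t in tokenized])   # feed X to KMeans
```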
You can also leverage transfer learning via sentence-embedding methods such as bert-as-service, SentenceBERT, or the Universal Sentence Encoder. All of them are easy to use, with plenty of tutorials on the web, and they will work better than TF-IDF in most cases.
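For instance, a minimal sketch with the sentence-transformers package; the checkpoint name below is just one commonly available SentenceBERT model, not a recommendation specific to this task:

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

descriptions = ["Data engineer building ETL pipelines in Python.",
                "Frontend developer working with React and TypeScript."]

model = SentenceTransformer("all-MiniLM-L6-v2")       # any public SentenceBERT checkpoint
embeddings = model.encode(descriptions)               # shape: (n_docs, embedding_dim)

labels = KMeans(n_clusters=2, n_init=10).fit_predict(embeddings)
print(labels)
```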
You can also try doc2vec, an extension of word2vec that builds a representation of a whole document. There is an implementation available in gensim:
https://radimrehurek.com/gensim/models/doc2vec.html
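A short sketch with gensim's Doc2Vec (gensim 4.x API assumed; the documents are placeholders):

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

docs = ["data engineer with python and sql",
        "java backend developer for payment services"]
corpus = [TaggedDocument(words=d.split(), tags=[i]) for i, d in enumerate(docs)]

model = Doc2Vec(corpus, vector_size=100, window=5, min_count=1, epochs=40)

v_train = model.dv[0]                                          # vector of a training document
v_new = model.infer_vector("remote devops engineer".split())   # vector for unseen text
```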

Use SIFT or ORB for template matching

Inspired by this tutorial:
Feature Matching, I'm trying to do template matching and clustering on an image set I have.
In most of my dataset the images are upright (maybe 10 degrees of rotation at most).
I would like to use this information to get better matches.
I have noticed that I sometimes get a false match where, when I display it, the match lines all point in different directions (they are not roughly parallel). How can I check whether the matches form roughly parallel lines or are rotated?
Thanks for the help.
I'm not sure I understand everything; what do you mean by a straight image?
As for the matches: when you compare two images you will usually have many features that correspond between them, and you cannot guarantee that they all describe parallel lines. You can only expect roughly parallel lines when you are locating an object in an image, as in the example, but that is just a way of visualizing the matches...
If you only want to do clustering, I advise you to compare features directly without doing any matching; you'll probably find a cluster of common features for some images that you can then group together.
ORB and SIFT try to match features between a pair of images. The reason you get mismatches is that some of the features are too similar, and the matcher mistakes them for a match.
You will need to increase your detector's threshold or restrict which matches the matcher accepts, for example with a ratio test (sketched below).
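A hedged sketch of both ideas, assuming ORB with a brute-force matcher, Lowe's ratio test, and a simple angle-consistency check on the match displacement vectors; this last check only makes sense when the two images are roughly aligned, as described in the question, and the file names and thresholds are placeholders:

```python
import cv2
import numpy as np

img1 = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Lowe's ratio test: keep a match only if its best candidate is clearly
# better than the second-best one.
bf = cv2.BFMatcher(cv2.NORM_HAMMING)
pairs = [p for p in bf.knnMatch(des1, des2, k=2) if len(p) == 2]
good = [m for m, n in pairs if m.distance < 0.75 * n.distance]

# Angle-consistency check: if the two images are roughly aligned, the
# displacement vectors of correct matches should all point roughly the same way.
angles = np.array([np.arctan2(kp2[m.trainIdx].pt[1] - kp1[m.queryIdx].pt[1],
                              kp2[m.trainIdx].pt[0] - kp1[m.queryIdx].pt[0])
                   for m in good])
median = np.median(angles)
consistent = [m for m, a in zip(good, angles) if abs(np.degrees(a - median)) < 10]
print(len(good), "ratio-test matches,", len(consistent), "angle-consistent matches")
```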

Am I using word-embeddings correctly?

Core question: what is the right way to use word embeddings to represent text?
I am building a sentiment classification application for tweets, classifying tweets as negative, neutral, or positive.
I am doing this using Keras on top of Theano, with word embeddings (Google's word2vec or Stanford's GloVe).
To represent tweet text I have done as follows:
Use a pre-trained model (such as the word2vec-twitter model) [M] to map words to their embeddings.
Query M with the words in the text to get the corresponding vectors. So if the tweet (T) is "Hello world", M gives vectors V1 and V2 for the words 'Hello' and 'World'.
The tweet T can then be represented (V) as either V1+V2 (adding the vectors) or V1V2 (concatenating the vectors). [These are two different strategies.] [Concatenation means juxtaposition, so if V1 and V2 are d-dimensional vectors, in my example T is a 2d-dimensional vector.]
Then, the tweet T is represented by vector V.
If I follow the above, then my dataset is nothing but vectors (sums or concatenations of word vectors, depending on which strategy I use).
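A tiny illustration of the two strategies with made-up 4-dimensional vectors (real embeddings would be 100-300 dimensional):

```python
import numpy as np

# Hypothetical 4-dimensional embeddings for "Hello" and "world".
v1 = np.array([0.1, 0.3, -0.2, 0.5])
v2 = np.array([0.4, -0.1, 0.0, 0.2])

v_add = v1 + v2                       # strategy 1: stays d-dimensional, word order is lost
v_cat = np.concatenate([v1, v2])      # strategy 2: 2d-dimensional, only works for a
                                      # fixed number of words per tweet
print(v_add.shape, v_cat.shape)       # (4,) (8,)
```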
I am training deep nets such as an FFN or an LSTM on this dataset, but my results aren't coming out great.
Is this the right way to use word embeddings to represent text? Are there other, better ways?
Your feedback/critique will be of immense help.
I think that, for your purpose, it is better to think about another way of composing those vectors. The word-embedding literature contains criticisms of these kinds of composition (I will edit the answer with the correct references as soon as I find them).
I would suggest also considering other possible approaches, for instance:
Using the individual word vectors as input to your net (I do not know your architecture, but an LSTM is recurrent, so it can deal with sequences of words); see the sketch after this list.
Using a full paragraph embedding (e.g. https://cs.stanford.edu/~quocle/paragraph_vector.pdf).
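A minimal sketch of the first suggestion using the Keras 2-style API with a frozen Embedding layer; the vocabulary size, sequence length, and the random embedding_matrix are placeholders for values you would derive from your pre-trained word2vec/GloVe vectors:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense

vocab_size, embed_dim, max_len = 20000, 300, 40          # hypothetical sizes
# Replace the random matrix with rows taken from word2vec/GloVe for your vocabulary.
embedding_matrix = np.random.rand(vocab_size, embed_dim)

model = Sequential([
    Embedding(vocab_size, embed_dim, weights=[embedding_matrix],
              input_length=max_len, trainable=False),    # frozen pre-trained embeddings
    LSTM(128),
    Dense(3, activation="softmax"),                       # negative / neutral / positive
])
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])

# X: (n_tweets, max_len) padded integer word indices, y: one-hot labels
# model.fit(X, y, epochs=5, batch_size=32)
```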
Summing them doesn't really make sense, to be honest, because the sum is just another vector; I don't think it represents the semantics of "Hello World", and even if it does, that certainly won't hold for longer sentences in general.
Instead, it would be better to feed the vectors in as a sequence, since that at least preserves the word order in a meaningful way, which seems to fit your problem better.
For example, "A hates Apple" vs. "Apple hates A": this difference is captured when you feed the words as a sequence into an RNN, but the two sums are identical.
I hope you get my point!

What is the best method to template match an image with noise?

I have a large image (5400x3600) that has multiple CCTVs that I need to detect.
The detection takes a lot of time (4-7 minutes) with rotation, and it still fails to find certain CCTVs.
What is the best method to match a template like this?
I am using skimage - OpenCV is not an option for me, but I am open to suggestions on that too.
For example, in the images below the template is correctly matched in the second image, but the first image is not matched - I guess due to the noise created by the text "BLDG...".
Template:
Source image:
Match result:
The fastest method is probably a cascade of boosted classifiers trained with several variations of your logo, possibly a few rotations, and some negative examples too (non-logos). You have to roughly scale your overall image so the test and training examples are approximately matched in scale. Unlike SIFT or SURF, which spend a lot of time searching for interest points and creating descriptors for both learning and searching, binary classifiers shift most of the burden to the training stage, so your testing or search will be much faster.
In short, the cascade runs in such a way that the very first test discards a large portion of the image; if the first test passes, the others follow and refine. They are very fast, consisting on average of just a few intensity comparisons around each point. Only a few locations pass the whole cascade, and those can be verified with additional tests such as your rotation-correlation routine.
Thus, the classifiers are effective not only because they quickly detect your object but also because they quickly discard non-object areas. To read more about boosted classifiers, see the corresponding OpenCV documentation; a minimal usage sketch follows below.
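A sketch of running an already-trained cascade, assuming OpenCV's Python bindings; the cascade XML (which you would train beforehand with the opencv_traincascade tool) and the file names are placeholders:

```python
import cv2

# "cctv_cascade.xml" is a hypothetical cascade trained beforehand with
# opencv_traincascade on positive logo crops and negative (non-logo) patches.
cascade = cv2.CascadeClassifier("cctv_cascade.xml")

img = cv2.imread("floorplan.png", cv2.IMREAD_GRAYSCALE)
detections = cascade.detectMultiScale(img, scaleFactor=1.1, minNeighbors=5,
                                      minSize=(24, 24))
for (x, y, w, h) in detections:
    cv2.rectangle(img, (x, y), (x + w, y + h), 255, 2)   # mark candidate logo locations
cv2.imwrite("detections.png", img)
```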
This problem is generally addressed as logo detection; see this for a similar discussion.
There are many robust methods for template matching; see this, or Google for a very detailed discussion.
But from your example I would guess that the following approach could work.
Create a feature for your search image. It essentially has a rectangle enclosing the word "CCTV", so the width, height, angle, and individual character features for matching the textual information could be a suitable choice. (Or you may also use the image containing "CCTV"; in that case the method will not be scale invariant.)
Now, when searching, first detect rectangles. Then use the angle to prune your search space, and apply an image transformation to align the rectangles parallel to the axes (this should take care of the rotation). Then, according to the feature chosen in step 1, match the text content. If you use individual character features, your template matching step is essentially a classification step; otherwise, if you use the image for matching, you may use cv::matchTemplate, as sketched below.
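A short sketch of that final matching step through the Python bindings (skimage's feature.match_template offers a comparable normalised cross-correlation if OpenCV really isn't available); the file names and threshold are placeholders:

```python
import cv2

scene = cv2.imread("aligned_region.png", cv2.IMREAD_GRAYSCALE)  # deskewed rectangle crop
templ = cv2.imread("cctv_template.png", cv2.IMREAD_GRAYSCALE)

# Normalised cross-correlation: higher response means a better match.
res = cv2.matchTemplate(scene, templ, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(res)
if max_val > 0.7:                       # acceptance threshold chosen empirically
    print("match at", max_loc, "score", round(max_val, 3))
```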
Hope it helps.
Symbol spotting is more complicated than logo spotting because interest points hardly work on document images such as architectural plans. Many conferences deal with pattern recognition, and each year there are many new algorithms for symbol spotting, so giving you the single best method is not possible. You could check the IAPR conferences: ICPR, ICDAR, DAS, GREC (Workshop on Graphics Recognition), etc. These researchers focus on this topic: M. Rusiñol, J. Lladós, S. Tabbone, J.-Y. Ramel, M. Liwicki, etc. They work on several techniques for improving symbol spotting, such as vectorial signatures, graph-based signatures, and so on (check Google Scholar for more papers).
An easy way to start a new approach is to work with simple shapes such as lines, rectangles, and triangles, instead of matching everything at once.
Your example can be recognized by shape matching (contour matching), which is much faster than 4 minutes; a sketch follows below.
For a good match you need decent preprocessing and denoising.
Examples can be found at http://www.halcon.com/applications/application.pl?name=shapematch
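A hedged sketch of the contour-matching idea with basic denoising, using OpenCV for illustration; the file names are placeholders, and the largest-contour heuristic assumes the symbol dominates each crop:

```python
import cv2

def largest_contour(path):
    """Denoise, binarize, and return the largest external contour of an image."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    blur = cv2.medianBlur(gray, 5)                        # basic denoising
    _, bw = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)

# matchShapes compares Hu-moment invariants of the two contours:
# lower score = more similar, and the comparison is rotation/scale invariant.
score = cv2.matchShapes(largest_contour("cctv_template.png"),
                        largest_contour("candidate_crop.png"),
                        cv2.CONTOURS_MATCH_I1, 0.0)
print("shape distance:", score)
```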

Recognizing struck-out handwritten words

I am working on handwriting recognition and related tasks on the Visual Studio platform, using OpenCV libraries. The input is in the form of binary scanned .tif images.
I have currently hit a roadblock trying to figure out a way to recognize struck-out words, i.e. words cancelled with a straight or curved line. I am not going to do individual character recognition, because that would be a waste of computation power.
Is there an alternate way to recognize such occurrences?
Following are two ideas I've come up with, but I am not sure about them:
1> Use a mask like <0 0 0, 1 1 1, 0 0 0> to find all horizontal lines... but this rests on a very big assumption: the strike lines can be wavy and in any orientation.
2> Skeletonize the input and look for intersections. This will give me quite a few intersections, including those caused by the line used to strike out the word; using some approximation like least squares I can then fit an approximate line. But intersections also occur in many other places, e.g. two intersections in 'b', etc.
Any suggestions?
Have you considered using the Hough transform to detect the strike lines?
Here's an illustration of the use of the Hough transform on handwriting, which should give you an intuition for the approach:
You can quickly test it with OpenCV. The legacy C function is called cvHoughLines2; a quick sketch with the modern Python API follows below.
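A minimal sketch using cv2.HoughLinesP (the current Python-side equivalent); the file name and the thresholds are guesses you would tune on your data:

```python
import cv2
import numpy as np

img = cv2.imread("word.tif", cv2.IMREAD_GRAYSCALE)
_, bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Probabilistic Hough transform; a strike-out line should span most of the word,
# so require a minimum length relative to the image width.
lines = cv2.HoughLinesP(bw, rho=1, theta=np.pi / 180, threshold=50,
                        minLineLength=int(0.6 * bw.shape[1]), maxLineGap=10)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
        print("candidate strike line", (x1, y1, x2, y2), "angle", round(angle, 1))
```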
Why not process the contours? You could take advantage of the polygonal (Teh-Chin) chain approximation and analyze only the few vectors resulting from the chain reconstruction. If you want to do more, use a mixed pyramid/contour scheme to get vector approximations at different levels of detail, starting from a rough resolution up to the finest.
Stop the refinement when you get a "reasonable" number of unique segments, apply normalization (see moments, in particular Hu's moments) to make a fingerprint of your sample, and finally adopt a strong classification system.
I suggest you look at the ML (machine learning) part of the OpenCV suite for a better reference on this last part. For raster data, Haar wavelets + hidden Markov models work well; for vectors you could perhaps use something easier to set up (SOM, KNN, k-means).
I would go with individual character recognition. It may be a waste of computing power, but it could give the best results. Find a way to get a confidence value from the character recognizer that shows how well a character was recognized, then find a threshold below which something is probably not a character. I think the strike-out will corrupt a character enough that the recognizer will struggle to find anything, and maybe you can use this fact to spot the cancelled characters. To improve the results, look for many badly recognized characters in the same region of the text: often whole words are struck out, so the bad recognition results will cluster.
If performance turns out to be very bad in the end, you can always come back and improve the algorithm later.
