OpenCV Feature Extraction and Image Matching - image-processing

I want to develop an automatic image annotator and an image search application. I have tried using the color histogram from OpenCV's tutorials, but it is not giving good results. I used a color histogram so that I could easily distinguish between a night scene and a day scene.
I also want to incorporate shape and texture features for matching images, but I did not find anything on extracting shape and texture features using OpenCV. Please let me know how to extract these features with OpenCV, or whether there is any other library that can help me extract them.
I have tried SURF features, but they are not giving good matches for dissimilar images, like two images of a horse in completely different contexts.
I have a training set of 15K annotated images from the MIRFLICKR dataset and a set of around 100 tags. I have read many research papers that give a theoretical approach to this problem, but I am unable to implement it.
Thanks in advance.

You want to do a lot of difficult things without knowing exactly which approach you will take. I suggest you start by reading this tutorial on the topic.
You need to extract features and compute descriptors for those features so that they are recognizable, then do the matching. From the matches you can get 3D positions. That is the beginning; once you are able to do that, you can focus on more difficult problems. A minimal sketch of this pipeline follows below.
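For illustration, here is a minimal sketch of that extract → describe → match pipeline in Python with OpenCV. It uses ORB rather than SURF (SURF is patented and absent from default OpenCV builds), and the file names are placeholders:

import cv2

# Load the two images to compare (paths are placeholders)
img1 = cv2.imread("query.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("candidate.jpg", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute binary descriptors with ORB
orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force matching with Hamming distance (appropriate for ORB)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# A crude similarity score: count the low-distance ("good") matches
good = [m for m in matches if m.distance < 50]
print(f"{len(good)} good matches out of {len(matches)}")

Keep in mind that, as the question itself observed, local features like these match local appearance rather than semantic categories, so two horses in completely different contexts will still not match well.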

Related

Processing 20x20 images before training for machine learning

I have 10,000 example 20x20 PNG images (binary images) of triangles. My task is to build a program which predicts whether a new image is a triangle. I think I should convert each image into a 400-feature example, but I don't know the fastest way to do the conversion.
Can you show me the way?
Here is an example image.
Your question is too broad, as you don't specify which technologies you are using, but in general you need to create a vector from an array. How depends on your tools; for example, if you use Python (and the NumPy library) you could use flatten():
image_array.flatten()
If you want to do it manually, you just need to move every row onto a single row.
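For completeness, a minimal sketch of that flattening step, assuming the 20x20 PNGs can be read with OpenCV (the path is a placeholder):

import cv2
import numpy as np

# Read one 20x20 binary PNG as a grayscale array of shape (20, 20)
img = cv2.imread("triangle_0001.png", cv2.IMREAD_GRAYSCALE)

# Flatten the 2D array into a 400-element feature vector
features = img.flatten()  # shape (400,)

# Optionally map to {0, 1}, since the images are binary
features = (features > 127).astype(np.float32)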
The previous answer is correct. Yet I want to add something to it:
The example image that you provided is noisy. This is rather problematic, as you are working with only binary images. Therefore I want to suggest preprocessing, such as a Gaussian filter or edge detection; denoising will strongly improve the accuracy of your clustering algorithm (to my knowledge). A sketch of such preprocessing follows below.
One important question:
What are the other pictures showing? Do you have to separate triangles from circles? You will get much better answers if you provide more information.
Anyhow, my key message is: preprocessing is vital for image processing.
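A minimal sketch of the kind of preprocessing suggested above (a Gaussian filter followed by re-thresholding so the image stays binary); the kernel size and threshold are illustrative:

import cv2

# Placeholder path to one noisy binary example
img = cv2.imread("triangle_noisy.png", cv2.IMREAD_GRAYSCALE)

# Smooth out isolated noisy pixels, then re-binarize
blurred = cv2.GaussianBlur(img, (3, 3), 0)
_, clean = cv2.threshold(blurred, 127, 255, cv2.THRESH_BINARY)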

How can I identify / classify objects in a low-resolution image?

What image recognition technology is good at identifying a low resolution object?
Specifically I want to match a low resolution object to a particular item in my database.
Specific Example:
Given a picture with bottles of wine, I want to identify which wines are in it. Here's an example picture:
I already have a database of high resolution labels to match to.
Given a high-res picture of an individual bottle of wine, it was very easy to match it to its label using Vuforia (a service for some kinds of image recognition). However, the service doesn't work well for lower-resolution matching, like the bottles in the example image.
Research:
I'm new to this area of programming, so apologies for any ambiguities or obvious answers to this question. I've been researching, but there's a huge breadth of technologies out there for image recognition. Evaluating each one takes a significant amount of time, so I'll try to keep this question updated as I research them.
OpenCV: seems to be the most popular open source computer vision library. Many modules, not sure which are applicable yet.
Haar-cascade feature detection: helps with pre-processing an image by orienting a component correctly (e.g. making a wine label vertical)
OCR: good for reading text at decent resolutions - not good for low-resolution labels where a lot of text is not visible
Vuforia: a hosted service that does some types of image recognition. Mostly meant for augmented reality developers. Doesn't offer control over algorithm. Doesn't work for this kind of resolution

Searching interesting places on image for visual recognition process?

Imagine a huge 360° panoramic shot from a security camera. I need to find people in this image. Suppose I already have a great classifier that tells me whether some picture is a picture of a single human being.
What I don't understand is how to apply this classifier to the panoramic shot, which can contain multiple people. Should I apply it to all possible regions of the image? Or is there some way to search for "interesting points" and feed only the regions around those points to my classifier?
Which keywords should I google / which algorithms should I read about to find ways of searching the regions of an image that may (or may not) contain information for the subsequent classification?
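For reference, the brute-force version of "apply it to all possible regions" is usually called a sliding window; "sliding window detection", "object proposals", and "interest point detection" are useful search terms. A minimal sketch of the sliding-window idea, where the classify function, window size, and stride are hypothetical:

def sliding_windows(image, win=64, stride=32):
    """Yield (x, y, patch) for every window position in the image."""
    h, w = image.shape[:2]
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            yield x, y, image[y:y + win, x:x + win]

# `classify` stands in for the existing person/no-person classifier:
# detections = [(x, y) for x, y, patch in sliding_windows(panorama)
#               if classify(patch)]

In practice the window is run at several scales, and overlapping detections are merged with non-maximum suppression.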

Extract numbers from Image

I have an image of a mobile phone credit recharge card, and I want to extract the recharge number only (the gray area) as a sequence of digits that can be used to recharge the phone directly.
This is a sample photo only and cannot be considered standard: the rectangular area may differ in position and background, and the card may also differ in size. The scratch area may not be fully scratched, and the camera's depth and position may differ too. I have read lots and lots of papers on the internet, but I can't find anything applicable; most papers discuss detection of handwritten numbers.
Any links or algorithm names would be very useful.
You can look at papers on vehicle licence plate detection with machine learning methods. Basically you need to extract the number region first: you can use a Sobel filter to extract the vertical edges, then threshold (binary image) and apply morphological operations (removing the blank spaces between the vertical edge lines, and connecting all regions that have a high number of edges). Finally, retrieve the contours and fill in the connected components with a mask. A sketch of this pipeline follows below.
After you extract the numbers, you can use a machine learning method such as a neural network or an SVM to recognize them.
Hope it helps.
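A minimal sketch of that extraction pipeline in OpenCV; the kernel size and area threshold are illustrative and would need tuning to real card photos:

import cv2

img = cv2.imread("recharge_card.jpg")  # placeholder path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Vertical edges respond strongly to digit strokes
sobel = cv2.Sobel(gray, cv2.CV_8U, 1, 0, ksize=3)

# Binarize, then close the gaps so the digit row becomes one blob
_, thresh = cv2.threshold(sobel, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 3))
closed = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)

# Candidate number regions are the large external contours (OpenCV 4.x API)
contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
candidates = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]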
Extract the gray part from the image and then use Tesseract (OCR) to extract the text written on it; for example:
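A minimal sketch, assuming the gray area has already been cropped out and using the pytesseract Python wrapper for illustration (the path is a placeholder):

import cv2
import pytesseract

roi = cv2.imread("gray_area.png", cv2.IMREAD_GRAYSCALE)

# Treat the crop as a single text line and allow digits only
text = pytesseract.image_to_string(
    roi, config="--psm 7 -c tessedit_char_whitelist=0123456789")
print(text.strip())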
I think you may not find an algorithm for reading from this image on the internet; nobody will disclose that. But if you are a hardcore programmer you can crack it with your own code. I tried it on screenshots where the fonts were clearer and the algorithm was simple; here the algorithm has to be more complex, since you are reading from a photo instead of a screenshot.
Follow these steps:
Load the image.
Select the digits (by contour finding, and applying constraints on the area and height of the letters to avoid false detections). This splits the image and thus modularises the OCR operation you want to perform.
Use a simple k-nearest-neighbour algorithm for the identification and classification (a sketch follows below).
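A minimal sketch of the classification step with OpenCV's built-in k-NN; the .npy training files are hypothetical and stand in for whatever labelled digit crops you have prepared:

import cv2
import numpy as np

# Hypothetical training data: N flattened 20x20 digit crops with labels
train_samples = np.load("digit_samples.npy").astype(np.float32)  # (N, 400)
train_labels = np.load("digit_labels.npy").astype(np.float32)    # (N, 1)

knn = cv2.ml.KNearest_create()
knn.train(train_samples, cv2.ml.ROW_SAMPLE, train_labels)

# Classify one segmented digit crop, resized to the training size
crop = cv2.imread("digit_crop.png", cv2.IMREAD_GRAYSCALE)
sample = cv2.resize(crop, (20, 20)).reshape(1, -1).astype(np.float32)
_, result, _, _ = knn.findNearest(sample, k=5)
print(int(result[0][0]))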
If the end goal is just to make a bot, you could probably pull the text directly from the app rather than worrying about OCR; but if you want to learn more about machine learning and you haven't used them already, the MNIST and CIFAR-10 datasets are fantastic places to start.
If you preprocess your image so that yellow pixels become black and all others white, you will have a much cleaner source to work with; for example:
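A rough sketch of that colour masking; the HSV range used for "yellow" is an assumption and would need tuning to the actual card:

import cv2

img = cv2.imread("card.jpg")  # placeholder path
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Pixels inside the (assumed) yellow hue range become white in the mask
mask = cv2.inRange(hsv, (20, 100, 100), (35, 255, 255))

# Invert: yellow pixels -> black, everything else -> white
clean = cv2.bitwise_not(mask)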
If you want to push forward with Tesseract for this and the preprocessing isn't enough then you will probably have to retrain it for this font. You will need to prepare a corpus, process it similarly to how you expect your source data to look, and then use something like qt-box-editor to correct the data. This guide should be able to walk you through the basic steps of retraining.

Sparse Image matching in iOS

I am building an iOS app that, as a key feature, incorporates image matching. The problem is that the images I need to recognize are small 10x10 orienteering plaques with simple large text on them. They can be quite reflective and will be outdoors (so the light conditions will be variable). Sample image:
There will be up to 15 of these types of image in the pool, and really all I need to detect is the text, in order to log where the user has been.
The problem I am facing is that the image matching software I have tried, aurasma and (slightly more successfully) arlabs, can't distinguish between them, as they are primarily built to work with detailed images.
I need to accurately detect which plaque is being scanned, and have considered using GPS to refine the selection, but the only reliable way I have found is to get the user to enter the text manually. One of the key attractions we have based the product around is being able to detect these images that are already in place, without having to set up any additional material.
Can anyone suggest a piece of software that would work (and is iOS friendly), or a method of detection that would be effective and interactive/pleasing for the user?
Sample environment:
http://www.orienteeringcoach.com/wp-content/uploads/2012/08/startfinishscp.jpeg
The environment can change substantially: the plaques are basically anywhere a plaque could be positioned, on fences, walls, and posts, in either wooded or open areas, but overwhelmingly outdoors.
I'm not an iOS programmer, but I will try to answer from an algorithmic point of view. Essentially, you have a detection problem ("Where is the plaque?") and a classification problem ("Which one is it?"). Asking the user to keep the plaque in a pre-defined region is certainly a good idea. This solves the detection problem, which is often harder to solve with limited resources than the classification problem.
For classification, I see two alternatives:
The classic "computer vision" route would be feature extraction and classification. Local Binary Patterns and HOG are feature extractors known to be fast enough for mobile (the former more so than the latter), and they are not too complicated to implement. Classifiers, however, are non-trivial, and you would probably have to search for an appropriate iOS library.
Alternatively, you could try to binarize the image, i.e. classify pixels as "plate" / white or "text" / black. Then you can use an error-tolerant similarity measure for comparing your binarized image with a binarized reference image of the plaque. The chamfer distance measure is a good candidate. It essentially boils down to comparing the distance transforms of your two binarized images. This is more tolerant to misalignment than comparing binary images directly. The distance transforms of the reference images can be pre-computed and stored on the device.
Personally, I would try the second approach. A (non-mobile) prototype of it is relatively easy to code and evaluate with a good image processing library (OpenCV, Matlab + Image Processing Toolbox, Python, etc.); a minimal sketch of the chamfer comparison follows below.
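A minimal sketch of the chamfer comparison in Python with OpenCV, assuming both images are the same size, roughly aligned, and already binarized with text as 255 on a 0 background:

import cv2
import numpy as np

def chamfer_score(binary_query, binary_ref):
    # Distance transform measures distance to the nearest zero pixel,
    # so invert the query to put zeros at its text pixels
    inv = cv2.bitwise_not(binary_query)
    dist = cv2.distanceTransform(inv, cv2.DIST_L2, 3)
    # Average distance from each reference text pixel to the nearest
    # query text pixel: lower means more similar
    on = binary_ref > 0
    return float(dist[on].mean())

Swapping the roles so that the reference's distance transform is precomputed and stored on the device (as suggested above) works just as well; classification then picks the reference plaque with the lowest score.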
I managed to find a solution that is working quite well. It is not fully optimized yet, but I think it's just a matter of tweaking filters, as I'll explain below.
Initially I tried to set up OpenCV, but it was very time consuming with a steep learning curve. It did, however, give me an idea: the key to my problem is really detecting the characters within the image and ignoring the background, which is basically just noise. OCR was designed exactly for this purpose.
I found the free library tesseract (https://github.com/ldiqual/tesseract-ios-lib) easy to use and with plenty of customizability. At first the results were very random, but applying a sharpening filter, a monochromatic filter, and a color invert worked well to clean up the text. Next I marked out a target area on the UI and used it to cut out the rectangle of the image to process. Processing is slow on large images, and this cut the time dramatically. The OCR filter also let me restrict the allowable characters, and as the plaques follow a standard configuration this improved the accuracy. A rough sketch of this preprocessing is below.
So far it has been successful with the grey-background plaques, but I haven't found the right filters for the red and white editions. My goal is to add color detection and remove the need to feed in the data type.
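For reference, a rough sketch of that kind of preprocessing, written with the pytesseract Python wrapper for illustration (the original solution used the iOS Tesseract library); the crop rectangle and character whitelist are placeholders:

import cv2
import pytesseract

img = cv2.imread("plaque.jpg", cv2.IMREAD_GRAYSCALE)

# Crop to the on-screen target rectangle (coordinates are placeholders)
roi = img[100:300, 100:400]

# Binarize and invert so the text is dark on a light background
_, mono = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
mono = cv2.bitwise_not(mono)

# Restrict the character set, since the plaques follow a standard format
text = pytesseract.image_to_string(
    mono, config="-c tessedit_char_whitelist=0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ")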

Resources