Quick image search using histograms of colors - opencv

I want to search images using their color histograms. I will use OpenCV to extract the histograms, and I have also found examples that describe how to compare two images using them. But I have some questions:
Google and other search engines use these histograms to search by image, but I doubt they iteratively compare the query image with every image in the database (as is done in the OpenCV examples). So how can I implement fast image search using histograms?
Can I use a common RDBMS such as MySQL for this and other image-search purposes?
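A minimal sketch of the two building blocks, in NumPy (OpenCV's `calcHist`/`compareHist` provide the same operations). The key point for speed is that a histogram is a fixed-length feature vector, so a whole database of them can be stacked into one array and compared in a single vectorized pass (or indexed with a nearest-neighbour structure) instead of looping image by image. The toy "database" below is an assumption for illustration:

```python
import numpy as np

def color_histogram(image, bins=8):
    """Quantize each RGB channel into `bins` levels and build a
    normalized 3-D color histogram, flattened to a feature vector."""
    # image: H x W x 3 uint8 array
    quantized = (image // (256 // bins)).reshape(-1, 3).astype(np.int64)
    idx = (quantized[:, 0] * bins + quantized[:, 1]) * bins + quantized[:, 2]
    hist = np.bincount(idx, minlength=bins ** 3).astype(np.float64)
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1.0 means identical histograms."""
    return np.minimum(h1, h2).sum()

# toy "database": compare a query histogram against all stored vectors at once
rng = np.random.default_rng(0)
images = [rng.integers(0, 256, (32, 32, 3), dtype=np.uint8) for _ in range(5)]
db = np.stack([color_histogram(im) for im in images])   # shape: N x 512
query = color_histogram(images[2])
scores = np.minimum(db, query).sum(axis=1)              # vectorized intersection
best = int(np.argmax(scores))                           # index of best match
```

Since each histogram is just a numeric vector, it can also be stored as a row in an RDBMS, though databases with vector or nearest-neighbour indexing are a better fit for large collections.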

Related

How to segment ROI using SIFT/SURF

SIFT is used for feature extraction. Most of the tutorials I have seen only show the features detected with SIFT, but I need to identify an ROI using it. My images look like this one, only in worse condition (taken from different angles, some blurred, with more text and numbers in other places too).
I need to extract this part and then perform digit recognition:
What are the ways to segment this part? I was going for SIFT/SURF but couldn't find any tutorial on segmenting out the ROI. If there are any other suggestions, please provide a link.
Edit: the images I have are grayscale.
Edit 1: this is just an example image I found on Google; my dataset contains only grayscale images, not colored ones.

Is Image Stitching and Image overlapping same concept?

I am working on a project that will take multiple images (let's say 2 for the moment) and combine them to generate a better image. The resulting image will be a combination of those input images. As a requirement, I want to achieve this using OpenCV. I read about image stitching and saw some example images of the process, and now I am confused whether image overlapping is the same as image stitching, or whether the Stitcher class in OpenCV can do image overlapping. I would appreciate some clarity on how to approach this problem with OpenCV.
"Image overlapping" is not really a term used in the CV literature. The general concept of matching images via transformations is most often called image registration. Image registration is taking many images and inserting them all into one shared coordinate system. Image stitching relies on that same function, but additionally concerns itself with how to blend multiple images. Furthermore, image stitching tries to take into account multiple images at once and makes small adjustments to the paired image registrations.
But it seems you're interested in producing higher-quality images from multiple images of the same scene (or from a video feed of it, for example). The term for that is not image overlapping but super-resolution; specifically, super-resolution from multiple images. You'll want to look into specialized filters (applied after warping all images into the same coordinates) that combine the multiple views into one high-resolution image. There are many papers on this topic. Even mean or median filters (that is, taking the mean or median at every pixel location across the images) can work well, assuming your transformations are very good.
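The mean/median combination step mentioned above can be sketched in a few lines of NumPy. This assumes the frames have already been registered (warped into the same coordinate system); the synthetic "clean" scene and noise levels below are only for illustration:

```python
import numpy as np

# Assume the frames are already registered (warped to shared coordinates);
# this shows only the final combination step.
rng = np.random.default_rng(1)
clean = rng.integers(0, 256, (48, 64)).astype(np.float64)
# several noisy observations of the same scene
frames = np.stack([clean + rng.normal(0, 20, clean.shape) for _ in range(9)])

mean_img = frames.mean(axis=0)           # mean filter across the stack
median_img = np.median(frames, axis=0)   # median filter across the stack

# averaging N frames reduces zero-mean noise by roughly sqrt(N)
err_single = np.abs(frames[0] - clean).mean()
err_mean = np.abs(mean_img - clean).mean()
```

The median is the more robust choice when some frames contain outliers (e.g. moving objects); the mean gives the best noise reduction when the noise is zero-mean.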

Image processing: Recognise multiple instances of the same objects in an image

I am working on a project where I have to recognize objects on grocery shelves. You can see a sample image below:
I need to find which products exist in an image. An example result image is shown below:
OpenCV tools like SURF, SIFT and ORB detect only one occurrence of the object in an image. Can you suggest some papers or tools to solve this problem?
There are generally several techniques for detecting multiple instances of the same object in an image.
The most basic is template matching: you create a database of training images at multiple scales and rotations so that the object can be detected under varying conditions. However, many techniques outperform this legacy approach.
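The template-matching idea can be sketched as follows. This is a NumPy illustration of what `cv2.matchTemplate` with `TM_CCOEFF_NORMED` computes; the crucial detail for multiple instances is that you threshold the whole score map rather than taking only the single maximum. The toy image with two pasted copies of the pattern is an assumption for illustration:

```python
import numpy as np

def match_template(image, template):
    """Normalized cross-correlation score at every valid offset
    (a slow NumPy sketch of cv2.matchTemplate with TM_CCOEFF_NORMED)."""
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t ** 2).sum())
    h, w = image.shape
    scores = np.zeros((h - th + 1, w - tw + 1))
    for y in range(scores.shape[0]):
        for x in range(scores.shape[1]):
            patch = image[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * tnorm
            scores[y, x] = (p * t).sum() / denom if denom > 0 else 0.0
    return scores

# two copies of the same 4x4 pattern pasted into a blank image
rng = np.random.default_rng(2)
template = rng.integers(0, 256, (4, 4)).astype(np.float64)
image = np.zeros((20, 20))
image[2:6, 3:7] = template
image[12:16, 10:14] = template

scores = match_template(image, template)
# keep every location above a threshold -> multiple detections, not just one
ys, xs = np.where(scores > 0.99)
```

In practice you would use `cv2.matchTemplate` for speed, and add non-maximum suppression so that neighbouring high scores around the same object collapse to one detection.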
Other techniques use texture features that are invariant to scale, rotation, or both, for example GLCM, LBP, HOG, SIFT and ORB.
Your statement "OpenCV tools like SURF, SIFT, ORB detects only one occurrence of the object in an image" needs refinement.
These tools are not object detectors in themselves; they are means of extracting texture features.
It is up to you to adapt them to detect multiple objects.
There is also a more tailored way to solve your problem. It seems that all of the objects you need to detect contain the text TASSAY.
You can extract that text using a group of morphological operations and then locate it with a blob detector.
Once the text is found, its location is easy to measure,
and the object's bounding box can be inferred from it.
Hope it helps.

Compare similar images using OpenCV feature matching

I am new to OpenCV. Currently, I want to write a demo that uses OpenCV feature matching to find all the similar images within a set of images.
First, I want to try comparing two images. I use the code from this question:
openCV: creating a match of features, meaning of output array, java
I tested this code with two similar images, one of which is a rotated version of the other. Then I tried two totally different images, such as lena.jpeg vs. car.png (any car).
I see that the algorithm still returns a matches matrix for those two different images.
My question is: how can I tell that one case is a similar image and the other is not after I get this matrix:
flannDescriptorMatcher.match(descriptor1, descriptor2, matches.get(0));
because I don't want to draw the matching points between the two images; I want a factor that distinguishes the two cases.
Thanks!
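One common heuristic for exactly this (not part of the original question) is Lowe's ratio test: keep only matches whose best distance is clearly smaller than the second-best distance, then use the fraction of keypoints with such "good" matches as the similarity factor. In the OpenCV Java API, `DMatch.distance` plays the role of the distances computed below; this sketch uses synthetic descriptors as a stand-in, which is an assumption for illustration:

```python
import numpy as np

def good_match_ratio(desc1, desc2, ratio=0.75):
    """Lowe's ratio test on brute-force nearest neighbours: a match is
    'good' when its distance is clearly smaller than the second best.
    Returns the fraction of descriptors in desc1 with a good match."""
    good = 0
    for d in desc1:
        dists = np.sqrt(((desc2 - d) ** 2).sum(axis=1))
        first, second = np.partition(dists, 1)[:2]
        if first < ratio * second:
            good += 1
    return good / len(desc1)

rng = np.random.default_rng(3)
a = rng.normal(size=(50, 32))
similar = a + rng.normal(scale=0.01, size=a.shape)  # near-duplicate descriptors
different = rng.normal(size=(50, 32))               # unrelated descriptors

r_similar = good_match_ratio(a, similar)     # high: images likely similar
r_different = good_match_ratio(a, different)  # low: images likely unrelated
```

Thresholding `r_similar`-style ratios (e.g. "similar if more than 20% of keypoints have a good match") is a common rule of thumb; the exact threshold depends on your descriptors and images.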

Image transformation by clustering

I am doing a project on image binarisation where I need to transform an image so that it is divided into individual color layers using clustering. What I mean is that there will be no shades in the image; instead, the shades of the input image will be collapsed into a layer separating the two colors.
The input and output images are given:
I am trying to implement this using OpenCV but cannot figure out how to do it.
Thanks in advance.
Try using k-means clustering.
http://aishack.in/tutorials/kmeans-clustering-opencv/
You get as many colours as you have means.
Here is an example implemented with the Accord.NET C# library.
http://crsouza.blogspot.com.au/2010/10/k-means-clustering.html
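The suggestion above can be sketched directly: cluster the pixel intensities with k-means and replace each pixel by its cluster centre, so every shade collapses into one of k flat layers. This is a NumPy illustration of what `cv2.kmeans` does for a grayscale image (the gradient test image and the even-spread initialization are assumptions for illustration):

```python
import numpy as np

def kmeans_quantize(image, k=2, iters=20):
    """Cluster pixel intensities with k-means and replace each pixel
    by its cluster centre -> k flat colour layers, no shading."""
    pixels = image.reshape(-1, 1).astype(np.float64)
    # spread the initial centres evenly over the intensity range
    centres = np.linspace(pixels.min(), pixels.max(), k).reshape(k, 1)
    for _ in range(iters):
        # assign each pixel to the nearest centre
        labels = np.argmin(np.abs(pixels - centres.T), axis=1)
        # move each centre to the mean of its assigned pixels
        for j in range(k):
            if np.any(labels == j):
                centres[j] = pixels[labels == j].mean()
    return centres[labels].reshape(image.shape)

# a smooth gradient in -> exactly two flat layers out (binarisation)
image = np.tile(np.linspace(0, 255, 64), (64, 1))
quantized = kmeans_quantize(image, k=2)
```

With k = 2 this behaves like binarisation with an automatically chosen threshold; larger k gives more colour layers, matching "as many colours as you have means" above.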
