OpenCV Histogram Backprojection Alternative

I start by creating an initial mask of an object in an image. Using this mask, a histogram is created, which is then used to process subsequent images.
I use the calcBackProject function to find pixels in the image that belong to the histogram. The problem I am having is that too much of the image is accepted, because certain objects are similar in color to the initial object. Is there any alternative to calcBackProject? In my application, I can't afford to pick up objects that do not belong. All of this assumes that I have a perfect initial mask.
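For context, here is a minimal sketch in Python of the setup being described (mask, histogram, backprojection); the file names and the hue-saturation binning are placeholders:

import cv2

# Assumed inputs: the current frame, the initial object region, and its
# (assumed perfect) binary mask. File names are placeholders.
frame = cv2.imread("frame.jpg")
roi = cv2.imread("object_roi.jpg")
mask = cv2.imread("object_mask.png", cv2.IMREAD_GRAYSCALE)

# Build a hue-saturation histogram of the object from the masked ROI.
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([hsv_roi], [0, 1], mask, [180, 256], [0, 180, 0, 256])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

# Backproject onto a new frame: bright pixels are "object-colored", which
# is exactly where similarly colored clutter also lights up.
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
backproj = cv2.calcBackProject([hsv], [0, 1], hist, [0, 180, 0, 256], 1)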

There are many ways to track an object, and it can be very difficult. Within OpenCV you may want to try the meanshift/camshift trackers to see if they do any better. If not, you may have to stray outside the OpenCV world and try a tracking-learning-detection framework.
Meanshift/Camshift/etc in OpenCV
http://docs.opencv.org/modules/video/doc/video.html
http://docs.opencv.org/trunk/doc/py_tutorials/py_video/py_meanshift/py_meanshift.html
Tracking-Learning-Detection in C++:
STRUCK: http://www.samhare.net/research/struck (uses OpenCV)
Tracking-Learning-Detection in Matlab:
Predator: http://personal.ee.surrey.ac.uk/Personal/Z.Kalal/tld.html
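For reference, here is a minimal CamShift loop in Python along the lines of the tutorial above; it is a sketch, with the video source and initial window as placeholders:

import cv2
import numpy as np

cap = cv2.VideoCapture("video.avi")  # placeholder source
ok, frame = cap.read()

# Initial object window (x, y, w, h); assumed to come from your mask.
track_window = (200, 150, 80, 80)
x, y, w, h = track_window

# Hue histogram of the initial object region.
hsv_roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    # CamShift moves the window to the densest backprojection region
    # and adapts its size and orientation.
    rot_rect, track_window = cv2.CamShift(backproj, track_window, term)
    pts = cv2.boxPoints(rot_rect).astype(np.int32)
    cv2.polylines(frame, [pts], True, (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(30) == 27:  # Esc quits
        break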

Related

Image processing: Recognise multiple instances of the same objects in an image

I am working on a project where I have to recognize objects on grocery shelves. You can see a sample image below:
I need to find which products exist in an image. An example of the result image is shown below:
OpenCV tools like SURF, SIFT, and ORB detect only one occurrence of an object in an image. Can you suggest some papers or tools to solve this problem?
Normally there are multiple techniques to detect multiple instances of the same object in an image.
The most primitive is template matching: you create a database of training images at multiple scales and rotations so that such objects can be detected under varying conditions. But many techniques outperform this legacy approach.
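As a minimal single-scale sketch of that idea in Python (the file names and the 0.8 score threshold are illustrative; a real database would add scaled and rotated templates):

import cv2
import numpy as np

scene = cv2.imread("shelf.jpg", cv2.IMREAD_GRAYSCALE)       # placeholder
template = cv2.imread("product.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder
h, w = template.shape

# Normalized cross-correlation; every location scoring above the
# threshold is treated as one instance of the product.
result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
ys, xs = np.where(result >= 0.8)  # threshold needs tuning per dataset
for x, y in zip(xs, ys):
    cv2.rectangle(scene, (int(x), int(y)), (int(x + w), int(y + h)), 255, 2)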
Other techniques use texture features that are invariant to scale, rotation, or both, for example GLCM, LBP, HOG, SIFT, ORB, and others.
Your statement that OpenCV tools like SURF, SIFT, ORB detect only one occurrence of the object in an image needs refinement.
The listed tools are not object detectors in themselves; they are means of extracting texture features.
You are the one who has to adapt them to detect multiple objects.
There is a finer way to solve your problem. It seems that all of the objects you need to detect contain the text TASSAY.
You can easily extract that text using a group of morphological operations, and then detect its location using a blob detector.
Once the text is found, its location is easy to measure.
The object bounding box can then be inferred from the text location.
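A rough sketch of this morphology-plus-blob idea in Python (kernel sizes and the blob filter are guesses that would need tuning; it assumes the OpenCV 4.x API):

import cv2

gray = cv2.cvtColor(cv2.imread("shelf.jpg"), cv2.COLOR_BGR2GRAY)  # placeholder

# A morphological gradient highlights stroke edges, then a wide closing
# merges the characters of one word into a single blob.
grad = cv2.morphologyEx(gray, cv2.MORPH_GRADIENT,
                        cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3)))
_, bw = cv2.threshold(grad, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
closed = cv2.morphologyEx(bw, cv2.MORPH_CLOSE,
                          cv2.getStructuringElement(cv2.MORPH_RECT, (15, 1)))

# Each remaining blob is a candidate text region; keep word-shaped ones.
contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    if w > 2 * h and w * h > 300:  # crude filter, needs tuning
        cv2.rectangle(gray, (x, y), (x + w, y + h), 255, 2)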
Hope it helps.

Texture recognition within a specific area of a picture

I'm new to the texture recognition field, and I would like to know the possible ways to approach a texture problem in OpenCV.
I need to identify the texture within a region of the picture and tell whether it is uniform and homogeneous across the whole area or not.
More specifically, I need to be able to tell whether a possible fallen person is a person (with many different kinds of textures) or something else, like a pillow or a blanket.
Could anyone suggest a solution, please?
Is there some ready-made OpenCV code to adapt?
Thanks in advance!
Why not use Haralick features? They are also known as texture features. The basic idea is to compute a co-occurrence matrix from a given grayscale image, from which the Haralick features are then computed. You can pick between different features like contrast, correlation, entropy, etc. that can describe your texture. For the same texture, a given feature should have the same (or a similar) value, so that might be a way to distinguish textures.
Here are some links that may be helpful:
Co-occurrence matrix tutorial
Haralick features summary
Co-occurrence matrix in scikit-image
As far as I know, there is no implementation of Haralick features in OpenCV, but you can use Python with scikit-image (and of course you can use OpenCV from Python, if you don't mind using something other than C++).
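For example, with scikit-image the co-occurrence-matrix features look roughly like this; it is a sketch, and the chosen feature set and the averaging over angles are tuning decisions:

import numpy as np
from skimage.feature import graycomatrix, graycoprops
# (older scikit-image versions spell these greycomatrix / greycoprops)

def texture_features(patch):
    # patch: 2-D uint8 grayscale region to describe
    glcm = graycomatrix(patch, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "correlation", "energy", "homogeneity")}

# Similar textures should give similar feature values, so comparing two
# regions reduces to comparing these numbers.
region = (np.random.rand(64, 64) * 255).astype(np.uint8)  # placeholder patch
print(texture_features(region))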

Image Registration by Manual marking of corresponding points using OpenCV

I have a processed binary image of dimensions 300x300. This processed image contains a few objects (person or vehicle).
I also have another RGB image of the same scene, of dimensions 640x480. It is taken from a different position.
Note: the two cameras are not the same.
I can detect objects to some extent in the first image using background subtraction. I want to detect the corresponding objects in the 2nd image. I went through the OpenCV functions:
getAffineTransform
getPerspectiveTransform
findHomography
estimateRigidTransform
All of these functions require corresponding points (coordinates) in the two images.
In the 1st (binary) image, I only have the information that an object is present; it does not have features exactly similar to the 2nd (RGB) image.
I thought conventional feature matching to determine corresponding control points, which could then be used to estimate the transformation parameters, is not feasible, because I don't think I can detect and match features between a binary image and an RGB image (am I right?).
If I am wrong, what features could I use, and how should I proceed with feature matching, finding corresponding points, and estimating the transformation parameters?
The solution I tried is more of a manual marking of points to estimate the transformation parameters (please correct me if I am wrong):
Note: neither camera moves.
Manually marked rectangles around objects in the processed (binary) image
Noted down the coordinates of the rectangles
Manually marked rectangles around objects in the 2nd (RGB) image
Noted down the coordinates of the rectangles
Repeated the above steps for different samples of the 1st binary and 2nd RGB images
Now that I have some 20 corresponding points, I used them in the function as:
findHomography(src_pts, dst_pts, 0);
So once I detect an object in the 1st image,
I draw a bounding box around it,
transform the coordinates of its vertices using the transformation found above,
and finally draw a box in the 2nd RGB image with the transformed coordinates as vertices.
But this doesn't place the box in the 2nd RGB image exactly over the person/object; instead it is drawn somewhere else. Even though I took several sample images of the binary and RGB views and used several corresponding points to estimate the transformation parameters, they do not seem accurate enough.
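For clarity, the estimation and mapping steps I described look roughly like this as a Python sketch (the point lists and box coordinates are illustrative values):

import cv2
import numpy as np

# Some 20 manually marked correspondences: coordinates in the 300x300
# binary image paired with coordinates in the 640x480 RGB image.
src_list = [(50, 40), (250, 40), (250, 260), (50, 260)]   # illustrative
dst_list = [(120, 90), (560, 100), (550, 430), (110, 420)]
src_pts = np.float32(src_list).reshape(-1, 1, 2)
dst_pts = np.float32(dst_list).reshape(-1, 1, 2)

H, inlier_mask = cv2.findHomography(src_pts, dst_pts, 0)  # 0 = least squares

# Map a detected bounding box from the binary image into the RGB image.
x, y, w, h = 60, 50, 100, 150  # detected box, illustrative
box = np.float32([[x, y], [x + w, y], [x + w, y + h],
                  [x, y + h]]).reshape(-1, 1, 2)
box_rgb = cv2.perspectiveTransform(box, H)

rgb_image = np.zeros((480, 640, 3), np.uint8)  # stand-in for the RGB frame
cv2.polylines(rgb_image, [np.int32(box_rgb)], True, (0, 255, 0), 2)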
What do the CV_RANSAC and CV_LMEDS options and the ransacReprojThreshold parameter mean, and how do I use them?
Is my approach good? What should I modify or do to make the registration accurate?
Any alternative approach to be used?
I'm fairly new to OpenCV myself, but my suggestions would be:
Seeing as you have the objects identified in the first image, I shouldn't think it would be hard to get keypoints and extract features (or maybe you have this already?).
Identify features in the 2nd image
Match the features using OpenCV FlannBasedMatcher or similar
Highlight matching features in 2nd image or whatever you want to do.
I'd hope that because all your features in the first image should be true positives (you know they are the features you want), it'll be relatively straightforward to get accurate matches.
Like I said, I'm new to this so the ideas may need some elaboration.
It might be a little late to answer this and the asker might not see this, but if the 1st image is originally a grayscale then this could be done:
1. Convert the 2nd image to grayscale -> gray2ndimg
2. Find point-to-point correspondences between gray1stimg and gray2ndimg by matching features.
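A minimal sketch of step 2 using ORB (file names are placeholders; ORB is used here because its binary descriptors work with a simple Hamming-distance matcher):

import cv2

gray1stimg = cv2.imread("first.png", cv2.IMREAD_GRAYSCALE)   # placeholder
gray2ndimg = cv2.imread("second.png", cv2.IMREAD_GRAYSCALE)  # placeholder

orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(gray1stimg, None)
kp2, des2 = orb.detectAndCompute(gray2ndimg, None)

# Hamming distance suits ORB's binary descriptors; crossCheck keeps only
# mutual best matches, which prunes many outliers.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

vis = cv2.drawMatches(gray1stimg, kp1, gray2ndimg, kp2, matches[:20], None)
cv2.imwrite("matches.png", vis)

The matched point pairs could then replace the manually clicked points as input to findHomography.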

Shape context matching in OpenCV

Does OpenCV have an implementation of shape context matching? I've found only the matchShapes() function, which does not work for me. I want to get a set of corresponding features from shape context matching. Is it a good idea to compare and find the rotation and displacement of a detected contour in two different images?
Some example code would also be very helpful for me.
I want to detect, for example, a pink square, and in the second case a pen. Other examples could be squares with some holes, stars, etc.
The basic steps of image processing are:
Image Acquisition > Preprocessing > Segmentation > Representation > Recognition
What you are asking for seems to lie within the representation part of this general pipeline. You want some features that describe the objects you are interested in, right? Before sharing what I've done for simple hand-gesture recognition, I would like you to consider what you actually need. A lot of the time, simplicity will make things much easier. Consider a fixed color on your objects; consider background subtraction (these two mainly tie into preprocessing and segmentation). As for representation, what features are you interested in, and can you eliminate the need for some of them?
My project group and I took a simple approach to preprocessing and segmentation, choosing a green glove for our hand. Here's an example of the glove, camera, and detection on the screen:
We used a threshold on convexity defects, specified to find the defects between fingers, and we calculated the aspect ratio of a rotated rectangular bounding box to see how square our blob is. With only four different hand gestures chosen, we were able to distinguish them with just these two features.
The functions we used and the measurements are all available in the OpenCV documentation on structural analysis, and access to values in vectors (which we used a lot) is covered in the C++ documentation for vectors.
I hope you can use the train of thought put into this; if you want more specific info I'll be happy to comment. Enjoy.
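As a sketch of the two measurements described above, assuming a segmented binary mask of the glove and illustrative thresholds:

import cv2
import numpy as np

mask = cv2.imread("glove_mask.png", cv2.IMREAD_GRAYSCALE)  # placeholder
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
hand = max(contours, key=cv2.contourArea)

# Feature 1: count deep convexity defects (the valleys between fingers).
hull = cv2.convexHull(hand, returnPoints=False)
defects = cv2.convexityDefects(hand, hull)
finger_valleys = 0
if defects is not None:
    # depth is stored in fixed point: divide by 256 to get pixels
    finger_valleys = int(np.sum(defects[:, 0, 3] / 256.0 > 20))

# Feature 2: aspect ratio of the rotated bounding box ("how square is it").
(_, _), (bw, bh), _ = cv2.minAreaRect(hand)
ratio = min(bw, bh) / max(bw, bh)

print(finger_valleys, ratio)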

OpenCV shape matching

I'm new to OpenCV (I am actually using the Emgu CV C# wrapper) and am attempting to do some object detection.
I'm attempting to determine if an object matches a predefined set of objects (that I will have to define). The background is well lit and does not move. My objects that I am starting with are bottles and cans.
My current approach is:
Do absDiff with a previously taken background image to separate the background.
Then dilate 4x to make the lighter areas (in labels) shrink.
Then I do a binary threshold to get a big blob, followed by finding contours in this image.
I then take the largest contour and draw it, which becomes my shape to either save to the accepted set or compare with the accepted set.
Currently I'm using cvMatchShapes, but the double return value seems to vary widely. I'm guessing this is because it doesn't take rotation into account.
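In Python terms, my pipeline looks roughly like this (a sketch with illustrative thresholds; my actual code uses the Emgu CV equivalents):

import cv2

background = cv2.imread("background.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder

def largest_contour(gray):
    diff = cv2.absdiff(gray, background)         # separate the background
    diff = cv2.dilate(diff, None, iterations=4)  # dilate 4x
    _, blob = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(blob, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)    # biggest blob's contour

shape = largest_contour(cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE))
accepted = largest_contour(cv2.imread("bottle_ref.jpg", cv2.IMREAD_GRAYSCALE))

# matchShapes compares Hu moments: 0 means identical, larger is less similar.
score = cv2.matchShapes(shape, accepted, cv2.CONTOURS_MATCH_I1, 0.0)
print(score)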
Is this approach a good one? It isn't working well for glass bottles, since the edges are hard to find...
I've read about Haar classifiers, but I'm thinking that might be overkill for my task.
Maybe this link is useful too. You have the code and library for SIFT; you just need to compile it. Good luck.
http://blogs.oregonstate.edu/hess/sift-library-places-2nd-in-acm-mm-10-ossc/#more-176
