How to know if matchTemplate found an object or not? - opencv

I used this answer and wrote my own program, but I have a specific problem.
If the image does not contain the object, matchTemplate does not throw an error, and I do not know of any method to check whether matchTemplate found the object or not. Can anyone give me advice, or the name of a function that checks this?

matchTemplate() returns a matrix whose values indicate how well the template matches at each location. If you know the object (and only one object) is there, all you have to do is look for the location of the maximum value (or the minimum, for the SQDIFF methods).
If you don't know, you have to find the max value, and if it is above a certain threshold, your object should be there.
Now, selecting that threshold is tricky - it's up to you to find a good threshold for your specific app. And of course you'll have some false positives (there is no object, but the max is bigger than the threshold) and some false negatives (your object does not create a big enough peak).
The way to choose the threshold is to collect a fairly large database of images with and without your object, measure how big the peak is when the object is present and when it isn't, and choose the threshold that best separates the two classes.
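A minimal Python sketch of this check, assuming the OpenCV Python bindings and placeholder file names (the 0.8 threshold is just an example value you would tune on your own images):

```python
import cv2

# Hypothetical file names; replace with your own scene and template.
scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)

# TM_CCOEFF_NORMED gives scores roughly in [-1, 1]; higher is better.
result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(result)

THRESHOLD = 0.8  # placeholder; tune it on images with and without the object
if max_val >= THRESHOLD:
    print("Object likely found at", max_loc, "with score", max_val)
else:
    print("No object found (best score was only", max_val, ")")
```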

Related

Is there any chance to detect objects using only morphology operations?

As in the question: is there any chance to "create" an algorithm using only the functions morphologyEx, threshold, bitwise_xor, bitwise_or, bitwise_and, bitwise_not with different parameters to detect objects (shapes) in an image?
I wrote an MEP program to search for such an algorithm, using only these functions in the function set. Sometimes it finds a "not ideal" solution, but that solution only works on the training image.
EDIT:
Example:
Input image:
Reference image (what I want to achieve):
My result (it isn't great but close):
But it should find a solution for different shapes, not just this exact car.
Is there any chance it can find an algorithm that detects the same shape in other (untrained) pictures? I checked that when I enlarge or rotate the car, the found algorithm still works, but it doesn't work on another picture of a similar car.
Which operations (from the OpenCV library) can I add to the function set to achieve this?
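For illustration only, here is a hypothetical hand-written pipeline built from exactly that function set (plus image I/O). It is not the evolved MEP solution, just a sketch of the kind of program the search space contains, with arbitrary parameter values:

```python
import cv2
import numpy as np

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name

# threshold: everything above 128 becomes foreground (level chosen arbitrarily)
_, mask = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)

# morphologyEx: opening removes small speckles, closing fills small holes
kernel = np.ones((7, 7), np.uint8)
opened = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
closed = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)

# bitwise_and: keep only pixels that survived both the raw threshold and the cleanup
result = cv2.bitwise_and(mask, closed)
cv2.imwrite("result.png", result)
```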

Evaluating the confidence of an image registration process

Background:
Assume there are two shots of the same scene taken from two different perspectives. Applying a registration algorithm to them yields a homography matrix that represents the relation between them. Warping one of them with this homography matrix will (theoretically) produce two identical images (if the non-shared area is ignored).
Since nothing is perfect, the two images may not be absolutely identical; we may find some differences between them, and these differences show up clearly when subtracting them.
Example:
Furthermore, differing lighting conditions may result in large differences when subtracting.
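As a concrete illustration of this setup, a warp-and-subtract step might look like the sketch below, assuming the homography H has already been estimated and saved by your registration step (file names are placeholders):

```python
import cv2
import numpy as np

img1 = cv2.imread("view1.png")   # placeholder names for the two shots
img2 = cv2.imread("view2.png")
H = np.load("homography.npy")    # assumed output of your registration step

h, w = img1.shape[:2]
warped = cv2.warpPerspective(img2, H, (w, h))

# Absolute difference; the non-shared (black) border of the warp should be masked out.
diff = cv2.absdiff(img1, warped)
cv2.imwrite("diff.png", diff)
```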
Problem:
I am looking for a metric that I can evaluate the accuracy of the registration process. This metric should be:
Normalized: a 0-to-1 measure that does not depend on the image type (natural scene, text, human...). For example, if two completely different registration processes on completely different pairs of photos have the same confidence, say 0.5, it means the registration quality was equally good (or bad). This should hold even if one pair consists of very detail-rich photos and the other of a white background with "Hello" written in black.
Distinguishing between misregistration and different lighting conditions: although there are many ways to eliminate this difference and make the two images look approximately the same, I am looking for a measurement that simply does not count lighting differences rather than fixing them (a performance issue).
One of the first things that came to mind is to sum the absolute differences of the two images. However, this yields a number that represents the error, and that number has no meaning when compared with another registration process: another pair of images with better registration but more detail may give a larger error rather than a smaller one.
Sorry for the long post. I am glad to provide any further information and to collaborate in finding a solution.
P.S. Using OpenCV is acceptable and preferable.
You can always use features that are invariant to lighting, scale, and rotation in both images, for example SIFT features.
When you match these using the typical ratio test (between the nearest and next-nearest neighbor), you'll have a large set of matches. You can calculate the homography using your own method, or using RANSAC on these matches.
In any case, for any homography candidate, you can count how many of the feature matches (out of all of them) agree with the model.
That number divided by the total number of matches gives you a 0-1 metric of the model's quality.
If you use RANSAC on the matches to calculate the homography, this quality metric is already built in.
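A rough Python sketch of that inlier-ratio metric, assuming cv2.SIFT_create is available in your OpenCV build; the file names and the 0.75 ratio-test factor are placeholders:

```python
import cv2
import numpy as np

img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)  # placeholder names
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Ratio test between the nearest and next-nearest neighbor.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]

src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

# RANSAC returns a mask marking which matches agree with the homography.
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
quality = inlier_mask.sum() / len(good)  # 0..1 inlier ratio
print("registration quality:", quality)
```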
This problem is: given two images, decide how misaligned they are.
That's why we did the registration in the first place. The registration approach cannot itself tell how bad a job it did, because if it knew, it would have corrected it.
Only in the absolutely correct case do we know the result: 0.
You want a deterministic answer? Then add deterministic input:
a red square at a known fixed position, from which you can measure how rotated, translated, and scaled the result is. Under lab conditions this can be achieved.

Pedestrian Detection with unique identifier

Hi, I am currently using the OpenCV implementations of HOG and Haar cascades to perform pedestrian detection and draw bounding boxes around pedestrians in a video feed.
However, I want to assign a unique ID (number) to every pedestrian entering the video feed, with the ID remaining the same until the pedestrian leaves. Since frames are processed one after another without regard to the previous frame, I wasn't sure how to implement this in the simplest yet effective way possible.
Do I really need to use a tracking algorithm like CamShift or a Kalman filter, which I have no knowledge of and could really use some help with? Or is there a simpler way to achieve what I want?
P/S: This video shows what I want to achieve. In fact, I posted a similar question here before, but that was more about the detection techniques; this one is about the next step of assigning the unique identifier.
A simple solution:
Keep track of your objects in a vector.
When you process a new frame, for every detected object search for the nearest object stored in your vector. If the distance between the stored object and the current object is below a certain threshold, it is the same object.
If no match is found, the object is new. At the end, delete all objects in your vector that were not associated with an object in the current frame.
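A minimal sketch of that bookkeeping in Python; detections are assumed to be (x, y, w, h) rectangles from your detector, and the 50-pixel distance threshold is an arbitrary placeholder:

```python
import math

next_id = 0
tracks = {}  # id -> last known center (x, y)

def update_tracks(detections, max_dist=50.0):
    """Assign stable IDs to this frame's detections by nearest-center matching."""
    global next_id, tracks
    new_tracks = {}
    for (x, y, w, h) in detections:
        cx, cy = x + w / 2.0, y + h / 2.0
        # Find the closest existing track that is not yet taken this frame.
        best_id, best_dist = None, max_dist
        for tid, (tx, ty) in tracks.items():
            d = math.hypot(cx - tx, cy - ty)
            if d < best_dist and tid not in new_tracks:
                best_id, best_dist = tid, d
        if best_id is None:          # no match: this is a new pedestrian
            best_id = next_id
            next_id += 1
        new_tracks[best_id] = (cx, cy)
    tracks = new_tracks              # tracks not seen this frame are dropped
    return new_tracks
```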
When you use detectMultiScale to get the matches, you get a std::vector<cv::Rect> containing all the detected pedestrians. While iterating through them for drawing, you can assign a number to each unique cv::Rect being detected (you may need to write a slightly deeper test for this, e.g. checking for overlapping rectangles), which you can then draw, say at the top of the corresponding rectangle.
HTH
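For reference, a sketch of the detectMultiScale call with OpenCV's built-in HOG people detector in Python (parameter values are illustrative; the IDs here restart every frame, so persistent IDs still need a matching step like the one sketched above):

```python
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("frame.png")  # placeholder; normally a frame from your video feed
rects, weights = hog.detectMultiScale(frame, winStride=(8, 8),
                                      padding=(8, 8), scale=1.05)

# rects is the Python equivalent of the std::vector<cv::Rect> mentioned above.
for i, (x, y, w, h) in enumerate(rects):
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.putText(frame, str(i), (x, y - 5),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
```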

How to detect which object is present in an image?

I have been given a task to create an application in which, given an image, I have to detect which object (out of a finite list of objects) is present in that image.
Either exactly one of the objects is present in an image, or no object is present.
The application should be able to identify the object if present (any of the listed objects).
It would also suffice if the application (program) could calculate the probability that a particular object (from the list) is present in the image.
Can anyone suggest how to approach this problem? OpenCV?
Actually, the task is to identify a logo (of some company like Coke, Pepsi, Dell, etc.) in the image, if any logo from the list (which is finite, say 100 entries) is present.
How can I do this project? Please help!
There are many ways of doing that, but the one I like most is building a feature set for each object and then matching it in the image.
You can use SIFT to build the keypoint vector for each object. By applying SIFT to each picture you will get a set of descriptors per picture (i.e. per object).
When you get the image you want to process, detect keypoints (e.g. with FAST) and compute their descriptors, then match them against each object's descriptor set. The object with the highest match score tells you which object you detected. If all scores are too low, then you probably don't have any of the objects in the image.
This is just one approach I like, but it is fairly state-of-the-art, precise, and fast.
I recommend googling and reading up on the subject before trying to implement anything.
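A hedged Python sketch of that idea, using SIFT descriptors per logo and counting ratio-test matches; the logo file names and the 10-match cutoff are placeholders:

```python
import cv2

logo_files = {"coke": "coke_logo.png", "pepsi": "pepsi_logo.png"}  # placeholder list

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher()

# Precompute descriptors for every known logo.
logo_desc = {}
for name, path in logo_files.items():
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, des = sift.detectAndCompute(img, None)
    logo_desc[name] = des

def recognize(query_path, min_matches=10):
    """Return the best-matching logo name, or None if nothing matches well."""
    query = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    _, qdes = sift.detectAndCompute(query, None)
    scores = {}
    for name, des in logo_desc.items():
        pairs = matcher.knnMatch(qdes, des, k=2)
        good = [p for p in pairs
                if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
        scores[name] = len(good)
    best = max(scores, key=scores.get)
    return best if scores[best] >= min_matches else None
```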
You want to perform object recognition, or logo recognition. There are already SO questions about this.
Here is a starting point for OpenCV.
The whole search took me half a minute. Perhaps this is where you should start.

Detect the two highest Peaks from Histogram

I am trying to understand how to detect the two highest peaks in a histogram. There can be multiple peaks, but I need to pick the two highest. Although these peaks may be shifted left or right, I need to get hold of them. Their spread can vary and their peak values might change, so I have to find a way to locate these two peaks in Matlab.
What I have done so far is create a 5-value window. This window is populated with values from the histogram and a scan is performed: each time I move 5 steps ahead to the next values and compare the previous window value with the current one, keeping whichever is greater.
Is there a better way of doing this?
The simplest way to do this would be to first smooth the data using a Gaussian kernel to remove the high-frequency variations.
Then use the function localmax to find the local maxima.
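The same smooth-then-find-local-maxima idea, sketched in Python with NumPy/SciPy for anyone not tied to Matlab (the random counts and the sigma value are placeholders):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import find_peaks

counts = np.random.default_rng(0).integers(0, 100, size=256)  # placeholder histogram

# Smooth with a Gaussian kernel to suppress high-frequency wiggles.
smooth = gaussian_filter1d(counts.astype(float), sigma=3)

# Find all local maxima, then keep the two with the highest values.
peaks, _ = find_peaks(smooth)
top_two = peaks[np.argsort(smooth[peaks])[-2:]]
print("two highest peaks at bins:", sorted(top_two))
```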
Return the data from the hist (or histc) function into a variable (y = hist(x,bin);) and use the PEAKFINDER File Exchange submission to find the local maxima.
I have also used the PEAKDET function from Eli Billauer; it works great. You can check my answer here for a code example.
