OpenCV cvFindCornerSubPix precision - image-processing

I need to detect corners on grayscale images with the highest possible accuracy. Currently I am using the OpenCV function: cvFindCornerSubPix().
I prepared a simple test: an image with a corner formed by black/white edges, and then a series of images of the same object, each shifted by 1/16 pixel. I checked the pixel values manually; the test images are fine.
Detection results were disappointing:
Even though the termination criteria (TermCrit) are set to 100 iterations or a 0.005 threshold, the detection error gets as large as 0.08 pixel.
The graph shows the error as a function of position within a pixel. It does not look random at all. Another thing worth noting: for other angular positions of the corner (when the edges are not horizontal/vertical) the results are better, but still not perfect.
Any idea how to make this function work properly, why it does not, or what to use instead?
I would greatly appreciate any advice.
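For reference, a minimal cornerSubPix call with the termination criteria described above might look like the sketch below (the window size and the integer-pixel starting guess are illustrative assumptions, not the exact values from my test):

    #include <opencv2/imgproc.hpp>
    #include <vector>

    // Sketch: refine a single corner with the C++ cornerSubPix (equivalent to
    // cvFindCornerSubPix). Window size and initial guess are illustrative only.
    cv::Point2f refineCorner(const cv::Mat& gray, cv::Point2f integerGuess)
    {
        std::vector<cv::Point2f> corners{ integerGuess };
        cv::cornerSubPix(gray, corners, cv::Size(11, 11), cv::Size(-1, -1),
                         cv::TermCriteria(cv::TermCriteria::COUNT + cv::TermCriteria::EPS,
                                          100, 0.005));
        return corners[0];
    }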

Less than 10% of a pixel really isn't bad performance at all. For reference, a correlation peak detector suitable for producing 3D models from satellite images will have the same order of magnitude of error.
As pointed out in the comments, the exact error pattern will depend on the interpolation method that you use to generate the subpixel pattern. In order to avoid the non-monotonicity introduced by higher-order interpolation methods (beyond order 2), I would suggest the following protocol (sketched in code after the list):
Generate your input image at high resolution, 16 times bigger;
Move your target by 1 pixel at a time in this HR image;
Produce your test images by downsampling to the correct size (careful: apply an appropriate blurring function, such as a PSF, before brutal decimation in order to avoid aliasing).
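A rough sketch of that protocol in OpenCV C++ terms; the blur sigma and the use of warpAffine/INTER_AREA are illustrative choices, not the only reasonable ones:

    #include <opencv2/imgproc.hpp>

    // Sketch: take a corner pattern rendered at 16x resolution, shift it by a
    // whole number of high-res pixels (1 HR pixel = 1/16 of a final pixel),
    // blur with a PSF-like kernel, and decimate to the target size.
    cv::Mat makeShiftedTestImage(const cv::Mat& highRes, int shiftHrPixels, cv::Size outSize)
    {
        cv::Mat shifted;
        cv::Mat M = (cv::Mat_<double>(2, 3) << 1, 0, shiftHrPixels, 0, 1, 0);
        cv::warpAffine(highRes, shifted, M, highRes.size());

        cv::Mat blurred;                       // anti-aliasing blur before decimation
        cv::GaussianBlur(shifted, blurred, cv::Size(0, 0), 16.0 / 3.0);

        cv::Mat lowRes;
        cv::resize(blurred, lowRes, outSize, 0, 0, cv::INTER_AREA);
        return lowRes;
    }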
Finally, it is often not desirable to go to a smaller error magnitude. The subpixel corner detector was designed to be used in images where many (typically between 20 and 100) points are detected. These points are then used in a robust estimation process that should remove outliers and average the error on the valid point sets.

Related

which algorithm to choose for object detection?

I am interested in detecting a single object, more precisely a fire extinguisher, which has no inter-class variability (all fire extinguishers look the same). The application is supposed to run in real time: a robot is exploring the environment, and whenever it sees the object of interest it should be able to detect it and report its pixel coordinates.
My question is: which algorithm would be a good choice for this task?
1. Is this a classification problem, and should we use features (SIFT/SURF etc.) + BoW + SVM?
2. Some other solution (no idea yet).
Any kind of input will be appreciated.
Thanks.
(P.S. Bear with me, I am a newbie to computer vision and Stack Overflow.)
Update 1:
The height varies: all extinguishers are mounted on the wall, but at different heights. I tried SIFT features and BoW, but extracting BoW descriptors at test time is expensive. Moreover, I have no idea how to locate the object (pixel coordinates) inside the image after it has been classified as positive.
Update 2:
I finally used SIFT + BoW + SVM and am able to classify the object. But with this technique I only get an output in terms of whether the object is present in the scene or not.
How can I localize the object, i.e. get the bounding box or the centre of the object? What approach is compatible with the above method for achieving this?
Thank you all.
I would suggest using color as the main feature to look for, and only try other features as needed. The fire extinguisher red is very distinctive, and should not occur too often elsewhere in an office environment. Other, more computationally expensive tests can then be performed only in regions of the right color.
Here is a good tutorial for color detection that also explains how to find good thresholds for your desired color.
I would suggest the following approach (a rough code sketch follows at the end of this answer):
denoise your image with a median filter
convert the image to HSV format (Hue, Saturation, Value)
select pixels close to that particular shade of red with InRange()
Now you have a binary image that contains only the red pixels.
count the number of red pixels with CountNonZero()
If that number is too small, abort
remove noise from the binary image by morphological opening / closing
find contours of all blobs in your picture with findContours or the CvBlob library
check if there are blobs of the correct width, correct height and correct width/height ratio
since your fire extinguishers are vertical cylinders, the width/height ratio will be constant from every angle. The width and height will of course vary somewhat with distance to the camera.
if the width and height do not match, abort
repeat these steps to find the black-colored part on the bottom of the extinguisher,
abort if there is no black region with correct width/height below the red region
(perhaps also repeat these steps for the metallic top and the yellow rectangle)
These tests should all be very fast. If they are too slow, you could reduce the resolution of your input images.
Depending on your environment, it is possible that this is already a robust enough test. If not, you can proceed with SIFT/SURF feature matching, but only in a small region around the blobs with the correct color. You also do not necessarily have to do that for each frame; every n-th frame should be enough for confirmation.
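A rough sketch of the color-filtering steps above in OpenCV C++; the function names, the HSV range for the extinguisher red, the pixel-count threshold and the width/height limits are illustrative guesses that would need tuning:

    #include <opencv2/imgproc.hpp>
    #include <vector>

    // Denoise, convert to HSV, keep only "red" pixels, clean up with morphology.
    // Returns an empty Mat if there are too few red pixels (early abort).
    cv::Mat findRedRegions(const cv::Mat& bgr)
    {
        cv::Mat denoised, hsv;
        cv::medianBlur(bgr, denoised, 5);                       // denoise
        cv::cvtColor(denoised, hsv, cv::COLOR_BGR2HSV);         // to HSV

        // Red wraps around hue 0, so combine two InRange masks.
        cv::Mat lowRed, highRed, mask;
        cv::inRange(hsv, cv::Scalar(0, 120, 70),   cv::Scalar(10, 255, 255),  lowRed);
        cv::inRange(hsv, cv::Scalar(170, 120, 70), cv::Scalar(180, 255, 255), highRed);
        mask = lowRed | highRed;

        if (cv::countNonZero(mask) < 500)                       // too few red pixels: abort
            return cv::Mat();

        // Morphological opening then closing to remove speckle noise.
        cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5));
        cv::morphologyEx(mask, mask, cv::MORPH_OPEN, kernel);
        cv::morphologyEx(mask, mask, cv::MORPH_CLOSE, kernel);
        return mask;
    }

    // Keep only blobs whose bounding box looks like a tall, narrow cylinder.
    std::vector<cv::Rect> plausibleExtinguisherBlobs(const cv::Mat& mask)
    {
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(mask.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

        std::vector<cv::Rect> candidates;
        for (const auto& c : contours)
        {
            cv::Rect box = cv::boundingRect(c);
            double ratio = (double)box.width / box.height;
            if (box.height > 40 && ratio > 0.2 && ratio < 0.6)  // illustrative limits
                candidates.push_back(box);
        }
        return candidates;
    }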
This is an old question, but I would still like to recommend the YOLO algorithm for this problem.
YOLO fits this scenario very well.

Faster Alternatives to cvFindContours()

Contour detection takes up the majority of the time in my computer vision pipeline, and it needs to be faster. I've optimized everything else so heavily via NEON instructions and vectorization that contour detection now dominates the profile. Unfortunately, it isn't obvious to me how to optimize this.
I'm doing the classic rectangle detection process to find fiducial markers, i.e. cvFindContours(), followed by approximating squares from the contours. In cases with many, many markers visible (or catastrophically, when a dense grid of rectangles that aren't markers is visible), the call to cvFindContours() alone can take >30ms on an iPhone.
I've already replaced the incredibly expensive C++ cv::findContours() with the C cvFindContours(). Particularly when passed a vector<vector<Point>>, the C++ version spent longer allocating and populating vectors than its internal cvFindContours() call took!
Now, I'm bound completely by the time in cvFindContours, or more specifically in cvFindNextContour(). The code inside cvFindNextContour is branch-heavy, and not obviously easy to vectorize. It also implements a complex algorithm that I don't trust myself not to get wrong in any attempt to optimize.
I have already looked at cvBlobLib (for disambiguation, I mean this one: http://code.google.com/p/cvblob/) to see if it provided alternative algorithms that could do the same thing faster. A base download of the source is incredibly slow because it records the contours into a std::list() and spends almost all of its time in memory allocation. Replacing that list with a std::vector pre-sized to 256 elements to eliminate the initial copies on push_back() still leaves you with a function that takes 3x longer than cvFindContours(), 66% of that directly in cvb::cvLabel(). So it doesn't seem viable to go this way.
Does anyone have any ideas for how I can optimize detection of many rectangles? My vague handwaving includes:
Are there any fast implementations equivalent to cvFindContours() out there, ideally as source code since I'm multi-platform?
The majority of contours are not required, only the "successful" rectangles are useful. In particular, their internal contours are then not useful. Theoretically, could I not call cvFindContours at all, and instead call cvStartFindContours/cvFindNextContour, testing each contour as found, and not recursing if I found a rectangle I'm looking for, as subrectangles are then guaranteed to be useless?
Is there a completely different rectangle detection algorithm I can use from the classic FindContours()/ApproxPoly() approach?
Is there a way to "prime" cvFindContours with useful regions of interest? E.g. a FAST corner detection almost always returns my fiducial marker corners even with a very aggressive threshold. Is there a way to use that point set to limit detection? (Unfortunately, I'm not sure how much this helps, again in the case of many markers, or dense gridlines unrelated to the markers, which happens often in my app.)
In the same vein as above, since Blob detection can (if I understand correctly) be implemented as recursive flood-filling, are there any fast vectorized implementations of this that can then be used to somehow pull out interesting Blob rectangles, and seed contour detection from there?
Any ideas would be welcome!
Since your goal is rectangle detection and not contour detection, I would suggest making use of integral images for computation. An explanation of integral images can be found here. After computing the integral image of your desired image, calculating the pixel sum of a rectangle can be done with three operations.
Assuming you want to draw rectangles around every non-black object, you can use a method like the following. Recursively divide the image and its sub-images into 4 quadrants, and discard rectangles with a pixel sum below your desired threshold. You will be left with many small rectangles approximating your objects; merging the neighbouring rectangles then yields a fast approximation of your detected objects.
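A rough sketch of that idea; the threshold handling is simplified (in practice you would probably scale it with the area of each sub-rectangle), and the function names are illustrative:

    #include <opencv2/imgproc.hpp>
    #include <vector>

    // Sum of pixel values inside r, using the (rows+1)x(cols+1) CV_64F output
    // of cv::integral(): four lookups, three additions/subtractions.
    static double rectSum(const cv::Mat& integ, const cv::Rect& r)
    {
        return integ.at<double>(r.y + r.height, r.x + r.width)
             - integ.at<double>(r.y,            r.x + r.width)
             - integ.at<double>(r.y + r.height, r.x)
             + integ.at<double>(r.y,            r.x);
    }

    // Recursively split a rectangle into 4 quadrants, discarding quadrants whose
    // pixel sum is below the threshold; surviving leaves approximate the objects
    // and can then be merged with their neighbours.
    static void splitRect(const cv::Mat& integ, const cv::Rect& r, double thresh,
                          int minSize, std::vector<cv::Rect>& out)
    {
        if (rectSum(integ, r) < thresh)
            return;                                    // too little mass: discard
        if (r.width <= minSize || r.height <= minSize)
        {
            out.push_back(r);                          // small enough: keep as a leaf
            return;
        }
        const int hw = r.width / 2, hh = r.height / 2;
        splitRect(integ, cv::Rect(r.x,      r.y,      hw,           hh),            thresh, minSize, out);
        splitRect(integ, cv::Rect(r.x + hw, r.y,      r.width - hw, hh),            thresh, minSize, out);
        splitRect(integ, cv::Rect(r.x,      r.y + hh, hw,           r.height - hh), thresh, minSize, out);
        splitRect(integ, cv::Rect(r.x + hw, r.y + hh, r.width - hw, r.height - hh), thresh, minSize, out);
    }

    // Driver: binaryOrGray is the input image (non-black pixels carry the "mass").
    std::vector<cv::Rect> approximateObjects(const cv::Mat& binaryOrGray, double thresh, int minSize)
    {
        cv::Mat integ;
        cv::integral(binaryOrGray, integ, CV_64F);
        std::vector<cv::Rect> leaves;
        splitRect(integ, cv::Rect(0, 0, binaryOrGray.cols, binaryOrGray.rows), thresh, minSize, leaves);
        return leaves;
    }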

Detecting garbage homographies from findHomography in OpenCV?

I'm using findHomography on a list of points and sending the result to warpPerspective.
The problem is that sometimes the result is complete garbage and the resulting image is represented by weird gray rectangles.
How can I detect when findHomography sends me bad results?
There are several sanity tests you can perform on the output. Off the top of my head:
Compute the determinant of the homography, and see if it's too close to zero for comfort. Even better, compute its SVD, and verify that the ratio of the first-to-last singular value is sane (not too high). Either result will tell you whether the matrix is close to singular.
Compute the images of the image corners and of its center (i.e. the points you get when you apply the homography to those corners and center), and verify that they make sense, i.e. are they inside the image canvas (if you expect them to be)? Are they well separated from each other?
Plot in matlab/octave the output (data) points you fitted the homography to, along with their computed values from the input ones using the homography, and verify that they are close (i.e. the error is low).
A common mistake that leads to garbage results is incorrect ordering of the lists of input and output points, which causes the fitting routine to work with wrong correspondences. Check that your indices are correct.
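A rough sketch of the first checks above (determinant, singular-value ratio, corner mapping); the thresholds and the function name are illustrative and would need tuning for a real application:

    #include <opencv2/core.hpp>
    #include <cmath>
    #include <vector>

    // H is the 3x3 CV_64F matrix returned by findHomography; imgSize is the
    // size of the source image. Returns false if H looks degenerate.
    bool homographyLooksSane(const cv::Mat& H, cv::Size imgSize)
    {
        if (H.empty())
            return false;

        // Determinant too close to zero: near-singular mapping.
        if (std::abs(cv::determinant(H)) < 1e-6)
            return false;

        // Ratio of first to last singular value should not explode.
        cv::Mat w, u, vt;
        cv::SVD::compute(H, w, u, vt);
        if (w.at<double>(0) / w.at<double>(2) > 1e7)
            return false;

        // Map the image corners and check they land inside a generous canvas.
        std::vector<cv::Point2f> corners = {
            {0.f, 0.f},
            {(float)imgSize.width, 0.f},
            {(float)imgSize.width, (float)imgSize.height},
            {0.f, (float)imgSize.height}
        };
        std::vector<cv::Point2f> mapped;
        cv::perspectiveTransform(corners, mapped, H);
        for (const cv::Point2f& p : mapped)
            if (std::abs(p.x) > 10.f * imgSize.width || std::abs(p.y) > 10.f * imgSize.height)
                return false;
        return true;
    }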
Understanding the degenerate homography cases is the key. You cannot get a good homography if your points are collinear or close to collinear, for example. Also, huge gray squares may indicate extreme scaling. Both cases may arise from the fact that there are very few inliers in your final homography calculation or the mapping is wrong.
To ensure that this never happens:
1. Make sure that points are well spread in both images.
2. Make sure that there are at least 10-30 correspondences (4 is enough if noise is small).
3. Make sure that points are correctly matched and the transformation is a homography.
To find bad homographies, apply the found H to your original points and check the separation from your expected points, i.e. test |x2 - H*x1| < Tdist, where Tdist is your threshold for the distance error. If only a few points satisfy this threshold, your homography may be bad and you have probably violated one of the above-mentioned requirements.
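A rough sketch of that test; pts1/pts2 are assumed to be the matched points used to compute H, and Tdist (e.g. a few pixels) is the distance threshold:

    #include <opencv2/core.hpp>
    #include <cmath>
    #include <vector>

    // Count correspondences whose reprojection error |x2 - H*x1| is below Tdist.
    int countHomographyInliers(const cv::Mat& H,
                               const std::vector<cv::Point2f>& pts1,
                               const std::vector<cv::Point2f>& pts2,
                               double Tdist)
    {
        std::vector<cv::Point2f> projected;
        cv::perspectiveTransform(pts1, projected, H);   // H*x1, de-homogenised

        int inliers = 0;
        for (size_t i = 0; i < pts1.size(); ++i)
        {
            const cv::Point2f d = pts2[i] - projected[i];
            if (std::sqrt(d.x * d.x + d.y * d.y) < Tdist)
                ++inliers;
        }
        return inliers;
    }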
But this depends on the point-correspondences you use to compute the homography...
Just think that you are trying to find a transformation that maps lines to lines (from one plane to another), so not every possible configuration of point correspondences will give you a homography that creates nice images.
It is even possible that the homography maps some of the points to infinity.

EMGU OpenCV disparity only on certain pixels

I'm using the EMGU OpenCV wrapper for c#. I've got a disparity map being created nicely. However for my specific application I only need the disparity values of very few pixels, and I need them in real time. The calculation is taking about 100 ms now, I imagine that by getting disparity for hundreds of pixel values rather than thousands things would speed up considerably. I don't know much about what's going on "under the hood" of the stereo solver code, is there a way to speed things up by only calculating the disparity for the pixels that I need?
First of all, you fail to mention what you are really trying to accomplish, and moreover, what algorithm you are using. E.g. StereoGC is really slow (i.e. not real-time) but usually far more accurate than both StereoSGBM and StereoBM. Those last two can be used in real time, provided a few conditions are met:
The size of the input images is reasonably small;
You are not using an extravagant set of parameters (for instance, a larger value for numberOfDisparities will increase computation time).
Don't expect miracles when it comes to accuracy though.
Apart from that, there is the issue of "just a few pixels". As far as I understand, the algorithms implemented in OpenCV usually rely on information from more than one pixel to determine the disparity value. E.g. a neighborhood is needed to detect which pixel in image A maps to which pixel in image B. As a result, in general it is not possible to just discard every other pixel of the image (by the way, if you already knew the locations in both images, you would not need the stereo methods at all). So unless you can discard a large border of your input images, for which you know that you'll never find your pixels of interest there, I'd say the answer to this part of your question is "no".
If you happen to know that your pixels of interest will always be within a certain rectangle of the input images, you can restrict the input images to that rectangle via ROIs (regions of interest). Assuming OpenCV does not contain a bug here, this should speed up the computation a little.
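A rough sketch of the ROI idea in plain OpenCV C++ (EmguCV exposes equivalent wrappers); the ROI coordinates and the StereoBM parameters are illustrative placeholders:

    #include <opencv2/calib3d.hpp>

    // leftRect/rightRect are the rectified 8-bit grayscale input images.
    cv::Mat disparityInRoi(const cv::Mat& leftRect, const cv::Mat& rightRect)
    {
        cv::Rect roi(160, 120, 320, 240);             // region containing the pixels of interest
        cv::Mat leftRoi  = leftRect(roi).clone();
        cv::Mat rightRoi = rightRect(roi).clone();

        cv::Ptr<cv::StereoBM> matcher = cv::StereoBM::create(64 /*numDisparities*/, 15 /*blockSize*/);
        cv::Mat disparity;                            // CV_16S, fixed-point (scaled by 16)
        matcher->compute(leftRoi, rightRoi, disparity);
        return disparity;
    }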
With a bit of googling you can find real-time examples of finding stereo correspondences using EmguCV (or plain OpenCV) with the GPU on YouTube. Maybe this could help you.
Disclaimer: this may have been a more complete answer if your question contained more detail.

Fast and quick pixel matching algorithm

I am stuck on a pixel matching algorithm for finding symbols in an image. I have two images of symbols that I intend to find in an image with a large resolution.
Instead of a pixel-by-pixel matching algorithm, is there a fast algorithm that gives the same result? The result should be similar to: (number of matching pixels) divided by (total pixels).
My problem is that I wish to find certain symbols in a 1-bit image. The symbols appear with exact similarity in the target image, and 95% of the pixels match the target block in the image, but the iterations take hours. The image is 10k x 10k and the symbol size is 20 x 20, so a brute-force scan is on the order of 10^10 calculations, which is too much to handle. Is there any filter/NN combination or any other algorithm that can give the same results as pixel matching in a few minutes?
The point here is that the pixels are almost the same, but the problem is that the image is very large. I do not want complex features for noise handling, edges, fuzziness etc., just a simple algorithm that does pixel matching quickly, with the result similar to: (number of matching pixels) divided by (total pixels).
Object recognition is tricky in that any simple algorithm is generally going to be way too slow, as you've apparently realized.
Luckily, if you have a rather large collection of these images on hand that are already correctly labeled, then I have a very simple solution for you.
Simply make a 3-layer feedforward network with one input unit per pixel, all of which connect to a much smaller hidden layer, which in turn connects to one output unit (representing which symbol is present in the image). Then just run the backpropagation algorithm on your dataset until the network learns to identify the symbols.
Unfortunately, this doesn't scale very well, so you might have to look into convolutional NNs for better performance.
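A rough sketch of such a network using OpenCV's ml module (one possible implementation, not the only one); the hidden-layer size and training parameters are illustrative, and the input size assumes 20x20 symbol patches flattened to 400 values:

    #include <opencv2/ml.hpp>

    // samples:   one flattened 20x20 patch per row, CV_32F, 400 columns.
    // responses: CV_32F column vector of target values (e.g. +1 symbol / -1 not).
    cv::Ptr<cv::ml::ANN_MLP> trainSymbolNet(const cv::Mat& samples, const cv::Mat& responses)
    {
        cv::Ptr<cv::ml::ANN_MLP> mlp = cv::ml::ANN_MLP::create();

        cv::Mat layers = (cv::Mat_<int>(3, 1) << 400, 32, 1);   // input, hidden, output units
        mlp->setLayerSizes(layers);
        mlp->setActivationFunction(cv::ml::ANN_MLP::SIGMOID_SYM);
        mlp->setTrainMethod(cv::ml::ANN_MLP::BACKPROP, 0.001);  // backpropagation, small learning rate
        mlp->setTermCriteria(cv::TermCriteria(cv::TermCriteria::MAX_ITER + cv::TermCriteria::EPS,
                                              1000, 1e-6));

        mlp->train(cv::ml::TrainData::create(samples, cv::ml::ROW_SAMPLE, responses));
        return mlp;
    }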
Additionally, if you don't have any training data (i.e. labeled examples), then your best bet is probably to decompose your symbols into features and then sweep the image for those. If you can decompose them into lines, then a hough transform can do this quite rapidly.
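For the line-decomposition route, a rough sketch with the probabilistic Hough transform (the vote threshold, minimum length and gap are illustrative):

    #include <opencv2/imgproc.hpp>
    #include <vector>

    // binaryImage: 8-bit single-channel image with the symbol strokes as non-zero pixels.
    std::vector<cv::Vec4i> detectStrokes(const cv::Mat& binaryImage)
    {
        std::vector<cv::Vec4i> lines;                 // each entry: x1, y1, x2, y2
        cv::HoughLinesP(binaryImage, lines,
                        1,               // rho resolution (pixels)
                        CV_PI / 180,     // theta resolution (radians)
                        15,              // accumulator votes needed
                        10,              // minimum line length
                        3);              // maximum gap between segments
        return lines;
    }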
Maybe an ART-1 (Adaptive Resonance Theory) network could help.
The algorithm can also be written so that all prototypes are checked in parallel at the same time, and it can be blazingly fast because it essentially uses a lot of binary math.
