I'm using the EMGU OpenCV wrapper for C#. I've got a disparity map being created nicely. However, for my specific application I only need the disparity values of very few pixels, and I need them in real time. The calculation currently takes about 100 ms; I imagine that computing disparity for hundreds of pixels rather than thousands would speed things up considerably. I don't know much about what's going on "under the hood" of the stereo solver code. Is there a way to speed things up by only calculating the disparity for the pixels that I need?
First of all, you fail to mention what you are really trying to accomplish, and moreover, what algorithm you are using. For example, StereoGC is really slow (i.e. not real-time) but usually far more accurate than both StereoSGBM and StereoBM. Those last two can be used in real time, provided a few conditions are met:
The size of the input images is reasonably small;
You are not using an extravagant set of parameters (for instance, a larger value for numberOfDisparities will increase computation time).
Don't expect miracles when it comes to accuracy though.
Apart from that, there is the issue of "just a few pixels". As far as I understand, the algorithms implemented in OpenCV usually rely on information from more than one pixel to determine the disparity value. For example, a neighborhood is needed to detect which pixel from image A maps to which pixel in image B. (By the way, if you already know the locations in both images, you would not need the stereo methods at all.) As a result, in general it is not possible to simply discard every other pixel of the image. So unless you can discard a large border of your input images where you know your pixels of interest will never be found, I'd say the answer to this part of your question is "no".
If you happen to know that your pixels of interest will always be within a certain rectangle of the input images, you can restrict the input to this rectangle via image ROIs (regions of interest). Assuming OpenCV does not contain a bug here, this should speed up the computation a little.
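For illustration, a minimal C++ sketch of the ROI idea, assuming OpenCV 3+ and rectified grayscale inputs (EMGU exposes the equivalent StereoBM class); the `interest` rectangle is hypothetical, and the ROI is widened to the left by numDisparities because the block matcher searches that far to the left in the right image:

```cpp
#include <algorithm>
#include <opencv2/opencv.hpp>

// Sketch only: leftRect/rightRect are rectified 8-bit grayscale images, and `interest`
// is the (hypothetical) rectangle that always contains the pixels you care about.
cv::Mat disparityForRoi(const cv::Mat& leftRect, const cv::Mat& rightRect,
                        cv::Rect interest, int numDisparities = 64, int blockSize = 15)
{
    // Widen the ROI to the left: the matcher searches up to numDisparities pixels to the
    // left in the right image, so points near the ROI's left edge need that margin.
    cv::Rect roi = interest;
    roi.x = std::max(0, interest.x - numDisparities);
    roi.width = interest.x + interest.width - roi.x;
    roi &= cv::Rect(0, 0, leftRect.cols, leftRect.rows);   // clamp to image bounds

    cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(numDisparities, blockSize);
    cv::Mat disp;
    bm->compute(leftRect(roi), rightRect(roi), disp);   // CV_16S, values are disparity * 16
    return disp;                                        // index with (x - roi.x, y - roi.y)
}
```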
With a bit of googling you can find real-time examples of finding stereo correspondences using EmguCV (or plain OpenCV) on the GPU on YouTube. Maybe this could help you.
Disclaimer: this might have been a more complete answer had your question contained more detail.
Related
The situation is a bit different from anything I have been able to find already asked, and is as follows: if I take photos of two similar images, I'd like to be able to highlight the features that differ between them. For example, the following two halves of a children's spot-the-difference game:
The differences in the images will be bits missing/added and/or colour changes, the kind of differences that would be easily detectable from the original image files by doing nothing cleverer than a pixel-by-pixel comparison. However, because they're subject to the fluctuations of light and the imprecision of photography, I'll need a far more lenient/clever algorithm.
As you can see, the images won't necessarily line up perfectly if overlaid.
This question is tagged language-agnostic as I expect answers that point me towards relevant algorithms, however I'd also be interested in current implementations if they exist, particularly in Java, Ruby, or C.
The following approach should work; all of these functionalities are available in OpenCV. Take a look at this example for computing homographies. A short sketch follows the steps below.
Detect keypoints in the two images using a corner detector.
Extract descriptors (SIFT/SURF) for the keypoints.
Match the keypoints and compute, using RANSAC, a homography that aligns the second image to the first.
Apply the homography to the second image, so that it is aligned with the first.
Now simply compute the pixel-wise difference between the two images, and the difference image will highlight everything that has changed from the first to the second.
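For reference, a minimal C++ sketch of these steps, using ORB as a stand-in for SIFT/SURF since it ships with core OpenCV (assumes OpenCV 3+ and two roughly overlapping photos):

```cpp
#include <vector>
#include <opencv2/opencv.hpp>

// Align img2 to img1 with a RANSAC homography, then return the pixel-wise difference.
cv::Mat highlightDifferences(const cv::Mat& img1, const cv::Mat& img2)
{
    cv::Ptr<cv::ORB> orb = cv::ORB::create(2000);
    std::vector<cv::KeyPoint> k1, k2;
    cv::Mat d1, d2;
    orb->detectAndCompute(img1, cv::noArray(), k1, d1);
    orb->detectAndCompute(img2, cv::noArray(), k2, d2);

    cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
    std::vector<cv::DMatch> matches;
    matcher.match(d1, d2, matches);

    std::vector<cv::Point2f> p1, p2;
    for (const auto& m : matches) {
        p1.push_back(k1[m.queryIdx].pt);
        p2.push_back(k2[m.trainIdx].pt);
    }

    // Homography that maps image 2 onto image 1, estimated robustly with RANSAC.
    cv::Mat H = cv::findHomography(p2, p1, cv::RANSAC, 3.0);
    cv::Mat aligned;
    cv::warpPerspective(img2, aligned, H, img1.size());

    // Pixel-wise difference; a blur beforehand makes it more tolerant of small misalignments.
    cv::Mat a, b, diff;
    cv::GaussianBlur(img1, a, cv::Size(5, 5), 0);
    cv::GaussianBlur(aligned, b, cv::Size(5, 5), 0);
    cv::absdiff(a, b, diff);
    return diff;
}
```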
My general approach would be to use optical flow to align both images and perform a pixel-by-pixel comparison once they are aligned.
However, for the specifics, standard optical flow (OpenCV etc.) is likely to fail if the two images differ significantly, as in your case. If it does fail, there are more recent optical flow techniques that are supposed to work even when the images are drastically different. For instance, you might want to look at the paper on SIFT Flow by Ce Liu et al., which addresses this problem with sparse correspondences.
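A minimal sketch of "align with flow, then diff", using the dense Farneback flow that ships with OpenCV rather than SIFT Flow (the latter is a separate implementation); it assumes two 8-bit grayscale images of the same size:

```cpp
#include <opencv2/opencv.hpp>

// Warp img2 onto img1's grid using dense optical flow, then take the absolute difference.
cv::Mat flowAlignedDiff(const cv::Mat& img1, const cv::Mat& img2)
{
    cv::Mat flow;  // 2-channel float: flow from img1 to img2
    cv::calcOpticalFlowFarneback(img1, img2, flow, 0.5, 3, 15, 3, 5, 1.2, 0);

    // Build remap tables: sample img2 at (x, y) + flow(x, y) so it lands on img1's grid.
    cv::Mat mapX(img1.size(), CV_32F), mapY(img1.size(), CV_32F);
    for (int y = 0; y < img1.rows; ++y)
        for (int x = 0; x < img1.cols; ++x) {
            const cv::Point2f f = flow.at<cv::Point2f>(y, x);
            mapX.at<float>(y, x) = x + f.x;
            mapY.at<float>(y, x) = y + f.y;
        }

    cv::Mat aligned, diff;
    cv::remap(img2, aligned, mapX, mapY, cv::INTER_LINEAR);
    cv::absdiff(img1, aligned, diff);
    return diff;
}
```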
I have images of mosquitoes similar to these ones, and I would like to automatically circle the head of each mosquito in the images. They are obviously in different orientations, and there is a random number of them in different images. Some error is fine. Any ideas for algorithms to do this?
This problem resembles a face detection problem, so you could try a naïve approach first and refine it if necessary.
First you would need to create your training set. For this you would extract small images containing examples of what is and what is not a mosquito head.
Then you can use those images to train a classification algorithm. Be careful to have a balanced training set, since if your data is skewed towards one class it will hurt the performance of the algorithm. Since images are 2D and algorithms usually take 1D arrays as input, you will need to arrange your images into that format as well (for instance: http://en.wikipedia.org/wiki/Row-major_order).
I normally use support vector machines, but other algorithms such as logistic regression could do the trick too. If you decide to use support vector machines, I strongly recommend checking out libsvm (http://www.csie.ntu.edu.tw/~cjlin/libsvm/), since it's a very mature library with bindings for several programming languages. They also have a very easy-to-follow guide aimed at beginners (http://www.csie.ntu.edu.tw/~cjlin/papers/guide/guide.pdf).
If you have enough data, orientation should not be a problem. If you don't have enough data, you can create more training rows with some samples rotated, so that you have a more representative training set.
As for prediction, given an image, cut it using a grid where each cell has the same dimensions as the images you used in your training set. Then pass each of these cells to the classifier and mark the squares where the classifier gave a positive output. If you really need circles, take the center of each such square; the radius would be half the side of the square (sorry for stating the obvious).
After doing this you might still have problems with size (some mosquitoes may appear closer to the camera than others), since we have not trained the algorithm to be tolerant to scale. Moreover, even with all mosquitoes at the same scale, we might still miss some of them simply because they don't fit the grid perfectly. To address this, repeat the procedure (grid cut and predict) with the image rescaled to different sizes. How many sizes? Well, that you would have to determine through experimentation.
This approach is sensitive to the size of the "window" you are using, so that is also something I would recommend experimenting with.
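A minimal sketch of the train-then-grid-classify idea, using OpenCV's cv::ml::SVM as a stand-in for libsvm (assumes OpenCV 3+; the patch size and the 1/0 label convention are arbitrary choices):

```cpp
#include <opencv2/opencv.hpp>

// Train an SVM on flattened grayscale patches (one patch per row of `samples`).
cv::Ptr<cv::ml::SVM> trainHeadClassifier(const cv::Mat& samples /*CV_32F*/,
                                         const cv::Mat& labels  /*CV_32S, 1 = head, 0 = not*/)
{
    cv::Ptr<cv::ml::SVM> svm = cv::ml::SVM::create();
    svm->setType(cv::ml::SVM::C_SVC);
    svm->setKernel(cv::ml::SVM::RBF);
    svm->train(samples, cv::ml::ROW_SAMPLE, labels);
    return svm;
}

// Cut the image into a grid of patchSize x patchSize cells and circle the positive ones.
void markHeads(cv::Mat& gray, const cv::Ptr<cv::ml::SVM>& svm, int patchSize)
{
    for (int y = 0; y + patchSize <= gray.rows; y += patchSize)
        for (int x = 0; x + patchSize <= gray.cols; x += patchSize) {
            cv::Mat patch = gray(cv::Rect(x, y, patchSize, patchSize)).clone().reshape(1, 1);
            patch.convertTo(patch, CV_32F);
            if (svm->predict(patch) > 0.5f)   // positive cell: circle it, radius = half the side
                cv::circle(gray, cv::Point(x + patchSize / 2, y + patchSize / 2),
                           patchSize / 2, cv::Scalar(255), 2);
        }
    // For scale tolerance, repeat on resized copies of the image (cv::resize) as described above.
}
```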
Some research that may be useful:
A Multistep Approach for Shape Similarity Search in Image Databases
Representation and Detection of Shapes in Images
From the pictures you provided this seems to be an extremely hard image recognition problem, and I doubt you will get anywhere near acceptable recognition rates.
I would recommend a simpler approach:
First, if you have any control over the images, separate the mosquitoes before taking the picture, and use a white, unmarked background, perhaps even something illuminated from below. This will make separating the mosquitoes much easier.
Then threshold the image. For example, here I did a quick try: take the red channel, subtract five times the blue channel, then apply a threshold of 80:
Use morphological dilation and erosion to get rid of the small leg structures.
Identify blobs of the right size to be mosquitoes using connected component labeling. If a blob is large enough to be two mosquitoes, cut it out and apply some more dilation/erosion to it.
Once you have a single blob like this
you can find the direction of the body using Principal Component Analysis. The head should be the part of the body where the cross-section is the thickest.
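A rough sketch of this pipeline (threshold on red minus 5x blue, morphological clean-up, connected component labeling, then PCA per blob), assuming a BGR input and OpenCV 3+ for connectedComponentsWithStats; the blob-size limit is a guess to be tuned:

```cpp
#include <opencv2/opencv.hpp>

void findBodyAxes(const cv::Mat& bgr)
{
    std::vector<cv::Mat> ch;
    cv::split(bgr, ch);                       // ch[0]=B, ch[1]=G, ch[2]=R
    cv::Mat r, b;
    ch[2].convertTo(r, CV_16S);
    ch[0].convertTo(b, CV_16S);
    cv::Mat score = r - 5 * b;                // red minus 5*blue
    cv::Mat mask = score > 80;                // threshold at 80 -> 8-bit mask

    // Morphological opening removes the thin leg structures.
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5));
    cv::morphologyEx(mask, mask, cv::MORPH_OPEN, kernel);

    cv::Mat labels, stats, centroids;
    int n = cv::connectedComponentsWithStats(mask, labels, stats, centroids);
    for (int i = 1; i < n; ++i) {             // label 0 is the background
        if (stats.at<int>(i, cv::CC_STAT_AREA) < 200) continue;   // too small to be a mosquito

        // Collect the blob's pixel coordinates and run PCA; the first eigenvector is the body axis.
        cv::Mat pts(0, 2, CV_64F);
        for (int y = 0; y < labels.rows; ++y)
            for (int x = 0; x < labels.cols; ++x)
                if (labels.at<int>(y, x) == i)
                    pts.push_back(cv::Mat((cv::Mat_<double>(1, 2) << x, y)));
        cv::PCA pca(pts, cv::Mat(), cv::PCA::DATA_AS_ROW);
        cv::Point2d axis(pca.eigenvectors.at<double>(0, 0), pca.eigenvectors.at<double>(0, 1));
        // Walk along +/- axis from the centroid and measure the blob's cross-section;
        // the thicker end is where to look for the head.
    }
}
```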
Contour detection takes up the majority of the time in my computer vision pipeline, and it needs to be faster. I've optimized everything else so heavily via NEON instructions and vectorization that contour detection now dominates the profile. Unfortunately, it isn't obvious to me how to optimize this.
I'm doing the classic rectangle detection process to find fiducial markers, i.e. cvFindContours(), followed by approximating squares from the contours. In cases with many, many markers visible (or catastrophically, when a dense grid of rectangles that aren't markers is visible), the call to cvFindContours() alone can take >30ms on an iPhone.
I've already replaced the incredibly expensive C++ cv::findContours() with cvFindContours(). Particularly if passed a vector<vector<Point>>, the C++ version spent longer allocating and populating vectors than its internal cvFindContours() call took!
Now, I'm bound completely by the time in cvFindContours, or more specifically in cvFindNextContour(). The code inside cvFindNextContour is branch-heavy, and not obviously easy to vectorize. It also implements a complex algorithm that I don't trust myself not to get wrong in any attempt to optimize.
I have already looked at cvBlobLib (for disambiguation, I mean this one: http://code.google.com/p/cvblob/) to see if it provided alternative algorithms that could do the same thing faster. A stock build of the source is incredibly slow because it records the contours into a std::list and spends almost all of its time in memory allocation. Replacing that list with a std::vector pre-sized to 256 elements to eliminate the initial copies on push_back() still leaves you with a function that takes 3x longer than cvFindContours(), 66% of that directly in cvb::cvLabel(). So it doesn't seem viable to go this way.
Does anyone have any ideas for how I can optimize detection of many rectangles? My vague hand-waving includes:
Are there any fast implementations equivalent to cvFindContours() out there, ideally available as source code since I'm multi-platform?
The majority of contours are not required; only the "successful" rectangles are useful, and their internal contours are then not useful either. In theory, could I skip cvFindContours entirely and instead call cvStartFindContours/cvFindNextContour, testing each contour as it is found and not recursing once I've found a rectangle I'm looking for, since its sub-rectangles are then guaranteed to be useless? (A sketch of this idea follows the list.)
Is there a completely different rectangle detection algorithm I can use instead of the classic FindContours()/ApproxPoly() approach?
Is there a way to "prime" cvFindContours with useful regions of interest? E.g. FAST corner detection almost always returns my fiducial marker corners, even with a very aggressive threshold. Is there a way to use that point set to limit detection? (Unfortunately, I'm not sure how much this helps, again in the case of many markers, or dense gridlines unrelated to the markers, which happens often in my app.)
In the same vein as above, since blob detection can (if I understand correctly) be implemented as recursive flood-filling, are there any fast vectorized implementations of this that could be used to pull out interesting blob rectangles and seed contour detection from there?
Any ideas would be welcome!
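For what it's worth, here is a sketch of the second idea above (incremental scanning with early pruning) using the legacy OpenCV C API; cvSubstituteContour(scanner, NULL) is documented to drop the retrieved contour and its children from further processing, which is what skips the useless sub-rectangles:

```cpp
#include <math.h>
#include <opencv2/imgproc/imgproc_c.h>   // legacy C API (also compiles as C++)

// binaryImg is an 8-bit single-channel image; note that cvStartFindContours may modify it.
void scanForRectangles(IplImage* binaryImg, double minArea)
{
    CvMemStorage* storage = cvCreateMemStorage(0);
    CvContourScanner scanner = cvStartFindContours(binaryImg, storage, sizeof(CvContour),
                                                   CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE,
                                                   cvPoint(0, 0));
    CvSeq* c;
    while ((c = cvFindNextContour(scanner)) != NULL) {
        CvSeq* poly = cvApproxPoly(c, sizeof(CvContour), storage, CV_POLY_APPROX_DP,
                                   cvContourPerimeter(c) * 0.02, 0);
        if (poly->total == 4 && fabs(cvContourArea(poly, CV_WHOLE_SEQ, 0)) > minArea) {
            // Candidate marker rectangle found: record `poly`, then prune this subtree so the
            // scanner does not descend into its (useless) internal contours.
            cvSubstituteContour(scanner, NULL);
        }
    }
    cvEndFindContours(&scanner);
    cvReleaseMemStorage(&storage);
}
```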
Since your goal is rectangle detection and not contour detection, I would suggest making use of integral images for computation. An explanation of integral images can be found here. After computing the integral image of your desired image, calculating the pixel sum of a rectangle can be done with three operations.
Assuming you want to draw rectangles around every non-black object, you can use a method as follows. Recursively keep dividing the image and its sub-images into 4, discarding rectangles with a pixel sum below your desired threshold. You will be left with many small rectangles approximating your objects. Merging the neighbouring rectangles will yield a fast approximation of your detected objects.
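A minimal sketch of that recursive quad-split over an integral image (the threshold and minimum cell size are arbitrary and would need tuning):

```cpp
#include <vector>
#include <opencv2/opencv.hpp>

// Pixel sum of rectangle r from the (rows+1)x(cols+1) CV_32S integral image `sum`.
static int rectSum(const cv::Mat& sum, cv::Rect r)
{
    return sum.at<int>(r.y + r.height, r.x + r.width) - sum.at<int>(r.y, r.x + r.width)
         - sum.at<int>(r.y + r.height, r.x)           + sum.at<int>(r.y, r.x);
}

// Recursively split into quadrants, discarding dark rectangles and keeping small bright ones.
static void quadSplit(const cv::Mat& sum, cv::Rect r, int threshold, int minSize,
                      std::vector<cv::Rect>& out)
{
    if (rectSum(sum, r) < threshold) return;            // nothing bright enough in here
    if (r.width <= minSize || r.height <= minSize) {    // small enough: keep as approximation
        out.push_back(r);
        return;
    }
    int hw = r.width / 2, hh = r.height / 2;
    quadSplit(sum, cv::Rect(r.x,      r.y,      hw,           hh),            threshold, minSize, out);
    quadSplit(sum, cv::Rect(r.x + hw, r.y,      r.width - hw, hh),            threshold, minSize, out);
    quadSplit(sum, cv::Rect(r.x,      r.y + hh, hw,           r.height - hh), threshold, minSize, out);
    quadSplit(sum, cv::Rect(r.x + hw, r.y + hh, r.width - hw, r.height - hh), threshold, minSize, out);
}

std::vector<cv::Rect> approximateObjects(const cv::Mat& gray, int threshold, int minSize)
{
    cv::Mat sum;
    cv::integral(gray, sum, CV_32S);
    std::vector<cv::Rect> out;
    quadSplit(sum, cv::Rect(0, 0, gray.cols, gray.rows), threshold, minSize, out);
    return out;                                          // merge neighbouring rects afterwards
}
```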
I'm using OpenCV 2.3 for keypoint detection and matching, but I am a bit confused about the size and response parameters given by the detection algorithm. What exactly do they mean?
Based on the OpenCV manual, I can't figure it out:
float size: diameter of the meaningful keypoint neighborhood
float response: the response by which the most strong keypoints have been selected. Can be used for further sorting or subsampling
I thought the best points to track would be the ones with the highest response, but it seems that is not the case. So how could I subsample the set of keypoints returned by the SURF detector to keep only the best ones in terms of trackability?
Size and response
SURF is a blob detector; in short, the size of a feature is the size of the blob. To be more precise, the size returned by OpenCV is half the length of the approximated Hessian operator. The size is also known as scale; this comes from the way blob detectors work, which is functionally equivalent to first blurring the image with a Gaussian filter at several scales, then downsampling the images, and finally detecting blobs of a fixed size. See the image below showing the sizes of the SURF features. The size of each feature is the radius of the drawn circle. The lines going out from the center of the features to the circumference show the angles or orientations. In this image, the response strength of the blob detection filter is color coded. You can see that the majority of the detected features have a weak response. (See the full-size image here.)
This histogram shows the distribution of the response strengths of the features in the above image:
What features to track?
The most robust feature tracker tracks all the detected features: the more features, the more robustness. But it's impractical to track a large number of features, as we often want to limit the computation time. The number of features to track should usually be tuned empirically for each application. Often the image is divided into regular sub-regions and in each one the n strongest features are kept for tracking; n is usually chosen such that in total about 500~1000 features are detected per frame.
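A minimal sketch of that per-cell subsampling; the keypoints can come from any detector (e.g. SURF), and the grid size and n are arbitrary choices:

```cpp
#include <algorithm>
#include <vector>
#include <opencv2/opencv.hpp>

// Keep only the n strongest keypoints (by response) in each cell of a gridCols x gridRows grid.
std::vector<cv::KeyPoint> keepStrongestPerCell(const std::vector<cv::KeyPoint>& kps,
                                               cv::Size imageSize, int gridCols, int gridRows,
                                               size_t n)
{
    std::vector<std::vector<cv::KeyPoint>> cells(gridCols * gridRows);
    for (const auto& kp : kps) {
        int cx = std::min(gridCols - 1, int(kp.pt.x * gridCols / imageSize.width));
        int cy = std::min(gridRows - 1, int(kp.pt.y * gridRows / imageSize.height));
        cells[cy * gridCols + cx].push_back(kp);
    }
    std::vector<cv::KeyPoint> kept;
    for (auto& cell : cells) {
        std::sort(cell.begin(), cell.end(),
                  [](const cv::KeyPoint& a, const cv::KeyPoint& b) { return a.response > b.response; });
        if (cell.size() > n) cell.resize(n);
        kept.insert(kept.end(), cell.begin(), cell.end());
    }
    return kept;
}
```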
References
Reading the journal paper describing SURF will definitely give you a good idea of how it works. Just try not to get stuck in the details, especially if your background isn't in machine/computer vision or image processing. The SURF detector may seem extremely novel at first glance, but the whole idea is estimating the Hessian operator (a well-established filter) using integral images (which had been used by other methods long before SURF). If you want to understand SURF very well and you're not familiar with image processing, you need to go back and read some introductory material. Recently I came across a new and free book whose chapter 13 has a good, brief introduction to feature detection. Not everything said in there is technically correct, but it's a good starting point. Here you can find another good description of SURF, with several images showing how each step works. On that page you see this image:
You can see the white and black blobs; these are the blobs that SURF detects at several scales, estimating their sizes (the radius in the OpenCV code).
"size" is the size of the area covered by the descriptor in the original image (it is obtained by downsampling the original image in the scale space, hence it varies from key point to key point based on their scale).
"reponse" is indeed an indicator of "how good" (roughly speaking, in terms of corner-ness) a point is.
Good points are stable for static scene retrieval (this is the main purpose of SIFT/SURF descriptors). In the case of tracking, you can have good points appearing because the tracked object is on a well formed background, of half in the shadow... then disappearing because this condition has changed (change of light, occlusion...). So there is no guarantee for tracking tasks that a good point will always be there.
I'm working on software that checks whether some laser-cut parts were cut correctly, using AutoCAD data as a reference. I have parsed the DXF files, converted them to a BMP (and to an XML file that gives me all the information), and now I want to compare this to the real, acquired data.
I have applied enough preprocessing to get a reasonably thresholded, binary picture. This is, however, distorted (unfortunately, telecentric lenses are expensive, and the user places the object into a device, causing some translation, some scaling and a tiny amount of rotation, around 1-2 degrees).
I have considered Hough transform, but memory is an issue. I have played around with bounding box transformation, but the unknown shape makes this hard. I've read about TILT (no symmetry) and registration algorithms, but I'd like to get another opinion.
I'm looking for some papers, some ideas, some pointers on how to go on.
Thanks.
The first step is to undistort the image (see camera calibration; ignore the 3D part).
Then think about the shape matching. Depending on how small an error you are trying to find, this could be very easy or very, very difficult, but those links should get you started.
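As a concrete starting point, a sketch that undistorts with a previously computed cv::calibrateCamera result and then compares the part's outer contour against a contour taken from the DXF-derived reference bitmap, using Hu-moment matching (assumes OpenCV 3+ names; this only gives a coarse shape similarity, not a per-feature error map):

```cpp
#include <algorithm>
#include <vector>
#include <opencv2/opencv.hpp>

// rawBinary: the thresholded camera image; refContour: outer contour from the DXF reference.
double compareToReference(const cv::Mat& rawBinary, const std::vector<cv::Point>& refContour,
                          const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs)
{
    cv::Mat undistorted;
    cv::undistort(rawBinary, undistorted, cameraMatrix, distCoeffs);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(undistorted, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    if (contours.empty()) return -1.0;

    // Take the largest external contour as the part outline.
    auto largest = std::max_element(contours.begin(), contours.end(),
        [](const std::vector<cv::Point>& a, const std::vector<cv::Point>& b) {
            return cv::contourArea(a) < cv::contourArea(b);
        });

    // Hu-moment based similarity: translation/scale/rotation invariant, 0 means identical shapes.
    return cv::matchShapes(*largest, refContour, cv::CONTOURS_MATCH_I1, 0.0);
}
```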
You may want to look at features that can discriminate between the two. Are there simple features that can accurately distinguish a properly cut piece from an incorrectly cut piece? If so, you can use the same idea as the Hough transform/template matching, but reduce the template to those distinguishing features (edges, corners, etc.) to cut the memory required.
You may want to look at the SIFT/SURF features that aim to match images by a certain set of features while being invariant to the rotation and scale of the objects within the image. There are libraries out there that implement these features (shown on the SURF page).
This, however, won't help with the distortion. If you're using the same camera for all images, then you should be able to de-skew them accordingly.
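Since the distortion described is mostly translation plus scale plus a small rotation, a 4-DOF similarity transform estimated from matched features is usually enough for the de-skew; a sketch assuming OpenCV 3.2+ (estimateAffinePartial2D) with ORB as a stand-in for SIFT/SURF:

```cpp
#include <vector>
#include <opencv2/opencv.hpp>

// Align the measured image to the reference bitmap with a robust similarity transform.
cv::Mat alignToReference(const cv::Mat& measured, const cv::Mat& reference)
{
    cv::Ptr<cv::ORB> orb = cv::ORB::create(1500);
    std::vector<cv::KeyPoint> kM, kR;
    cv::Mat dM, dR;
    orb->detectAndCompute(measured, cv::noArray(), kM, dM);
    orb->detectAndCompute(reference, cv::noArray(), kR, dR);

    cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
    std::vector<cv::DMatch> matches;
    matcher.match(dM, dR, matches);

    std::vector<cv::Point2f> pM, pR;
    for (const auto& m : matches) {
        pM.push_back(kM[m.queryIdx].pt);
        pR.push_back(kR[m.trainIdx].pt);
    }

    // Rotation + uniform scale + translation, robust to bad matches via RANSAC.
    cv::Mat A = cv::estimateAffinePartial2D(pM, pR, cv::noArray(), cv::RANSAC);
    cv::Mat aligned;
    cv::warpAffine(measured, aligned, A, reference.size());
    return aligned;   // now a pixel-wise comparison against the reference bitmap is meaningful
}
```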