What are the possible fast ways to detect circles in an image? - image-processing

What are the possible fast ways to detect circles in an image?
For example:
I have an image with one big circle that has 6 small circles inside it.
I need to find the big circle without using Hough Circles (OpenCV).

Standard algorithms to find circles are Hough (which jamk mentioned in the comments) and RANSAC. Parameterizing these algorithms will set a baseline speed for your application.
http://en.wikipedia.org/wiki/Hough_transform
http://en.wikipedia.org/wiki/RANSAC
To speed up these algorithms, you can look at your collection of images and decide whether limiting the search ranges will help speed up the search. That's straightforward enough: only search within a reasonable range for the radius. Since they take edge points as inputs, you can also look at methods to reduce the number of edge points checked.
However, there are a few other tricks to speed up processing.
Carefully set the range or ranges over which radii are checked. For example, you might not simply check from the smallest possible radius to the largest possible radius, but instead you might split the search into two different ranges: from radius R1 to R2, and then from radius R3 to R4.
Ditch the Canny edge detection in favor of the fastest possible edge detection your application can tolerate. (You can ditch Canny for lots of applications; a Sobel-based sketch follows this list of tricks.)
Preprocess your image of edge points to eliminate outliers. The appropriate algorithm to eliminate outliers will be specific to your image set, but you'll probably be able to find an algorithm that eliminates obvious outliers and thereby saves some search time in the more expensive circle fit algorithms.
If your circles are very well defined, and all or nearly all points are present, figure out how you might match only a quarter circle or semicircle instead of a full circle.
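As an illustration of the "ditch Canny" suggestion, here is a minimal sketch of a cheaper edge map built from Sobel gradients plus a threshold; the file name and threshold value are placeholders you would tune for your images:

    import cv2
    import numpy as np

    # "circles.png" is a placeholder input; load as grayscale.
    gray = cv2.imread("circles.png", cv2.IMREAD_GRAYSCALE)

    # Sobel derivatives skip Canny's non-maximum suppression and hysteresis,
    # so they are cheaper per pixel.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    magnitude = cv2.magnitude(gx, gy)

    # Keep only strong gradients; the threshold value is application-specific.
    _, edges = cv2.threshold(magnitude, 100, 255, cv2.THRESH_BINARY)
    edges = edges.astype(np.uint8)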
Long story short: start with a complete implementation and benchmark it, then gradually tighten up parameter settings and limit search ranges while ensuring that you can still find circles for your application and your image set.
If your images are amenable to scaling, then one possibility is to create an image pyramid of images at different scales: 1/2 scale, 1/4 scale, 1/8 scale, etc. You'll need an edge-preserving scaling method at smaller scales.
Once you have your image pyramid, try the following:
Find circles at the very smallest scale. The image will be small and the range of possible radii will be limited, so this should be a quick operation.
If you find a circle using the initial fit at the small scale, improve the fit by testing in the next larger scale image -OR- go ahead and search in the full scale image.
Check the next largest scale. Circles that weren't visible in the smaller scale image may suddenly "appear" in the current scale.
Repeat the steps above through all scales in the image.
Image scaling will be a fast operation, and you can see that if at least one of your circles is present in a smaller scale image you should be able to reduce the total number of cycles by performing a rough circle fit in the small scale image and then optimizing the fit for those edge points alone in the full scale image.
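A rough sketch of the pyramid idea in OpenCV/Python, using pyrDown for the scaling and HoughCircles only as a stand-in for whatever circle fit you end up using (all parameters and the file name are placeholders):

    import cv2

    gray = cv2.imread("circles.png", cv2.IMREAD_GRAYSCALE)

    # Build a 3-level pyramid: full scale, 1/2 scale, 1/4 scale.
    pyramid = [gray]
    for _ in range(2):
        pyramid.append(cv2.pyrDown(pyramid[-1]))

    # Search the coarsest level first; it is the cheapest to scan.
    for level in reversed(range(len(pyramid))):
        scale = 2 ** level
        img = pyramid[level]
        circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1,
                                   minDist=max(1, img.shape[0] // 4),
                                   param1=100, param2=30,
                                   minRadius=5, maxRadius=img.shape[0] // 2)
        if circles is not None:
            # Map the coarse hit back to full-resolution coordinates and
            # refine the fit there (refinement omitted in this sketch).
            x, y, r = circles[0][0] * scale
            print("candidate circle at", (x, y), "radius", r)
            break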
Edge-preserving scaling can also make it possible to use correlation-type tools to find circles, but being able to do so depends on the content of your images, including the noise, how completely edge points represent circles, and so on.

Maybe detect contours and check their properties, e.g. try cv::isContourConvex. Another way could be to use the eigenvalues of the contour's covariance matrix and check whether the first eccentricity of its representative ellipse is ~0.
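A minimal sketch of the contour-property idea, using circularity (4*pi*area/perimeter^2, which is 1.0 for a perfect circle) rather than the covariance eigenvalues; the file name and thresholds are placeholders:

    import cv2
    import numpy as np

    gray = cv2.imread("circles.png", cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # OpenCV 4.x signature: findContours returns (contours, hierarchy).
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        area = cv2.contourArea(c)
        perimeter = cv2.arcLength(c, True)
        if perimeter == 0:
            continue
        circularity = 4 * np.pi * area / (perimeter * perimeter)
        if circularity > 0.8:
            (x, y), r = cv2.minEnclosingCircle(c)
            print("circle candidate at", (x, y), "radius", r)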

Related

which algorithm to choose for object detection?

I am interested in detecting a single object, more precisely a fire extinguisher, which has no inter-class variability (all fire extinguishers look the same). However, the application is supposed to run in real time, i.e. a robot is exploring the environment and whenever it sees the object of interest it should be able to detect it and give its pixel coordinates.
My question is: which algorithm would be a good choice for this task?
1. Is this a classification problem, and should we use features (SIFT/SURF etc.) + BoW + SVM?
2. Some other solution (no idea yet).
Any kind of input will be appreciated.
Thanks.
(P.S. bear with me, I am a newbie to computer vision and Stack Overflow)
Update 1:
The height varies; all extinguishers are mounted on the wall, but at different heights. I tried SIFT features with BoW, but extracting the BoW descriptors at test time is expensive. Moreover, I have no idea how to locate the object (pixel coordinates) inside the image once it has been classified as positive.
Update 2:
I finally used SIFT + BoW + SVM and am able to classify the object. But with this technique I only get an output saying whether the object is present in the scene or not.
How can I localize the object, i.e. get its bounding box or centre? What approach is compatible with the method above for achieving this?
Thank you all.
I would suggest using color as the main feature to look for, and only try other features as needed. The fire extinguisher red is very distinctive, and should not occur too often elsewhere in an office environment. Other, more computationally expensive tests can then be performed only in regions of the right color.
Here is a good tutorial for color detection that also explains how to find good thresholds for your desired color.
I would suggest the following approach (a rough Python sketch follows the steps below):
denoise your image with a median filter
convert the image to HSV format (Hue, Saturation, Value)
select pixels close to that particular shade of red with InRange()
Now you have a binary image that contains only the pixels that are red.
count the number of red pixels with CountNonZero()
If that number is too small, abort
remove noise from the binary image by morphological opening / closing
find contours of all blobs in your picture with findContours or the CvBlob library
check if there are blobs of the correct width, correct height and correct width/height ratio
since your fire extinguishers are vertical cylinders, the width/height ratio will be constant from every angle. The width and height will of course vary somewhat with distance to the camera.
if the width and height do not match, abort
repeat these steps to find the black-colored part on the bottom of the extinguisher,
abort if there is no black region with correct width/height below the red region
(perhaps also repeat these steps for the metallic top and the yellow rectangle)
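A minimal Python sketch of the color-filtering steps above; the HSV ranges, pixel-count cutoff, size limits, and the file name are all placeholder values you would have to tune for your own camera and lighting:

    import cv2
    import numpy as np

    frame = cv2.imread("frame.png")                  # placeholder input
    frame = cv2.medianBlur(frame, 5)                 # denoise
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)     # convert to HSV

    # Red wraps around hue 0 in OpenCV's 0-179 hue range, so combine two ranges.
    mask = cv2.bitwise_or(cv2.inRange(hsv, (0, 100, 80), (10, 255, 255)),
                          cv2.inRange(hsv, (170, 100, 80), (179, 255, 255)))

    if cv2.countNonZero(mask) < 500:                 # too few red pixels: abort
        print("no candidate")
    else:
        kernel = np.ones((5, 5), np.uint8)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # remove noise
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            x, y, w, h = cv2.boundingRect(c)
            ratio = w / float(h)
            # A fire extinguisher blob is tall and narrow; the band below is a guess.
            if h > 40 and 0.2 < ratio < 0.6:
                print("candidate centre:", (x + w // 2, y + h // 2))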
These tests should all be very fast. If they are too slow, you could reduce the resolution of your input images.
Depending on your environment, it is possible that this is already a robust enough test. If not, you can proceed with SIFT/SURF feature matching, but only in a small region around the blobs with the correct color. You also do not necessarily have to do that for each frame; every n-th frame should be enough for confirmation.
This is an old question, but I would still like to recommend the YOLO algorithm for this problem.
YOLO fits this scenario very well.

Get rectangle out of array of points

Using GPUImage, I am able to detect corners of a book/page in an image. But sometimes it will return more than 4 points, in which case I will need to process them and figure out the best rectangle from these points. Here's an example:
What's the most efficient way to figure out the best rectangle in this case?
Thanks
If you're using a corner detection algorithm, then you can filter results based on the relative strength of the detected corner. The contrast at the book corners relative to your current background appears to be much stronger than the contrast at the point found in the wood grain. Are there relative magnitudes associated with each point, or do you just get the points? Setting thresholds for edge strengths can mean a lot of fiddling unless the intensities of the foreground and background are relatively constant.
Your sample image could be blurred or morphed. For example, the right morphological "close" on light pixels could eliminate the texture in the wood grain without having an effect on the size and shape of the book. (http://en.wikipedia.org/wiki/Mathematical_morphology)
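For instance, a grayscale morphological close with a kernel larger than the grain but much smaller than the book (the sizes and file name here are guesses) might look like:

    import cv2

    gray = cv2.imread("book.png", cv2.IMREAD_GRAYSCALE)   # placeholder name

    # "Closing" on light pixels fills in small dark details such as wood grain
    # while leaving large light regions (the book) essentially unchanged.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    cleaned = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, kernel)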
Another possibility is to shrink the image to a much smaller size and then perform detection on that. Resizing the image will tend to wipe out tiny details such as whatever wood grain pattern is currently being detected.
Picking the right lens and lighting can make the image easier to process. Try to simplify the image as much as possible before processing it. As mentioned above, "dark field" lighting that would illuminate just the book edges would present a much simpler image for processing. Writing down the constraints can make it more obvious which solution will be most robust and simplest to implement. Finding any rectangle anywhere in an image is very difficult; it's much easier to find a light rectangle on a dark background if the rectangle is at least 100 x 100 pixels in size, rotated no more than 15 degrees from square to the image edges, etc.
More involved solutions can be split into two approaches:
Solving the problem given only 4 or more (x,y) points.
Using a different image processing technique altogether for the sample image.
1. Solving the problem given only the points
If you generally only have 5 or 6 points, and if you are confident that 4 of those points will belong to the corners of the rectangles that you want, then you can try this:
Find the convex hull of all points. The convex hull is the N-gon that completely encompasses all points. If the points were pegs sticking up, and if you stretched a rubber band around them and let it snap into place, then the final shape of the rubber band would be a convex hull. Algorithms that find convex hulls typically return a list of points ordered counterclockwise from the bottommost, leftmost point.
Make a copy of your point list and remove points from the copy until only four points remain. These four remaining points will still be ordered counterclockwise.
Calculate the angle formed by each set of three successive points: points 1, 2, 3, then 2, 3, 4, then 3, 4, 1, and so on.
If an angle is outside a reasonable tolerance--less than 70 degrees or greater than 110 degrees--skip back to step 2 and remove the next point (or set of points).
Store the min and max angles for each set of 4 points.
Repeat steps 2 - 6, removing a different point (or points) each time.
Track the set of points for which the min and max angles are closest to 90 degrees.
http://en.wikipedia.org/wiki/Convex_hull
There are a number of other checks and constraints that could be introduced. For example, if the point-to-point distances for 3 successive points in the convex hull (pts N to N+1, and N+1 to N+2) are close to the expected width and height of the book, then you might mark these as known good points and only test the remaining points to see which is the fourth point.
The technique above can get unwieldy if you get quite a few points, but it may work if two or three of the book corner points are expected to be found on the convex hull.
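A rough sketch of the hull-and-angles idea above, brute-forcing every 4-point subset of the hull (cheap when you only have 5 or 6 points); the sample coordinates are made up:

    import cv2
    import numpy as np
    from itertools import combinations

    def interior_angles(quad):
        """Interior angle in degrees at each vertex of a 4-point polygon."""
        angles = []
        for i in range(4):
            v1 = quad[i - 1] - quad[i]
            v2 = quad[(i + 1) % 4] - quad[i]
            cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
            angles.append(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
        return angles

    # Made-up detector output: four book corners plus one spurious point.
    points = np.array([[10, 10], [200, 15], [205, 300], [8, 310], [300, 160]], np.float32)
    hull = cv2.convexHull(points).reshape(-1, 2)   # hull points, consistently ordered

    best, best_err = None, None
    for quad in combinations(hull, 4):             # subsets keep the hull ordering
        errs = interior_angles(np.array(quad))
        err = max(abs(a - 90.0) for a in errs)     # worst deviation from a right angle
        if best_err is None or err < best_err:
            best, best_err = np.array(quad), err

    print("best 4 corners:", best.tolist(), "max deviation from 90 degrees:", best_err)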
For any geometric problem, I always recommend checking out GeometricTools.com, which has a lot of great, optimized source code for all sorts of problems. It's very handy to have the book as well, especially if you can find a cheap copy using AddAll.com.
http://www.geometrictools.com/
2. Other image processing techniques for your sample image
Although I could be wrong, it appears that GPUImage doesn't have many general-purpose image processing algorithms. Some other image processing algorithms could make this problem much simpler to solve.
Though there isn't space to go into it here, one of the keys to successful image processing is appropriate lighting. Make sure your lighting is consistent. A diffuse light that evenly illuminates the book and the background would work well. You can simplify the problem using funkier lighting: if you have four lights (or a special ring light), you can provide horizontal illumination from the top, bottom, left, and right that will cause the edges of the book to appear bright and other surfaces to appear dark.
http://www.benderassoc.com/mic/lighting/nerlite/Darkfield.htm
If you can use some other GPU libraries to do image processing, then one of the following techniques could work nicely:
Connected component labeling (a.k.a. finding blobs). It shouldn't be too hard to use either binary thresholding or a watershed algorithm to separate the white blob that is the book from the rest of the background. Once the blob for the book is identified, finding the corners is easier. (http://en.wikipedia.org/wiki/Connected-component_labeling) In OpenCV you can find the "contours."
Generate a list of edge points, then have four separate line-fitting tools search from top to bottom, right to left, bottom to top, and left to right to find the four strong (and mostly straight) edges associated with the book. In your sample image, though, either the book cover is slightly warped or the camera lens has introduced barrel distortion.
Use a corner detector designed to find light corners on a dark background. If you will always be looking for a white book on a wood grain background, you can create a detector to find white corners on a brown background.
Use a Hough technique to find the four strongest lines in the image. (http://en.wikipedia.org/wiki/Hough_transform)
The algorithmic technique that works best will depend on your constraints: are you looking for rectangles only of a certain size? is the contrast between foreground and background consistent? can you introduce lighting to simplify the appearance of the image? and so on.
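As an illustration of the Hough option from the list above (a sketch; the Canny thresholds and Hough parameters are guesses):

    import cv2
    import numpy as np

    gray = cv2.imread("book.png", cv2.IMREAD_GRAYSCALE)      # placeholder
    edges = cv2.Canny(gray, 50, 150)

    # Probabilistic Hough transform; keep the longest segments as candidate book edges.
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=100, maxLineGap=10)
    if lines is not None:
        segments = sorted(lines[:, 0, :],
                          key=lambda s: np.hypot(s[2] - s[0], s[3] - s[1]),
                          reverse=True)
        for x1, y1, x2, y2 in segments[:4]:
            print("candidate edge:", (x1, y1), "->", (x2, y2))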

Shape/Pattern Matching Approach in Computer Vision

I am currently facing what is, in my opinion, a rather common problem which should be quite easy to solve, but so far all my approaches have failed, so I am turning to you for help.
I think the problem is explained best with some illustrations. I have some Patterns like these two:
I also have an image like this (or rather, probably better, because the photo this one originated from was quite poorly lit):
(Note how the Template was scaled to kinda fit the size of the image)
The ultimate goal is a tool which determines whether the user shows a thumbs-up/thumbs-down gesture, and also some angles in between. So I want to match the patterns against the image and see which one resembles the picture the most (or, to be more precise, the angle the hand is showing). I know the direction in which the thumb points in each pattern, so if I find the pattern that looks identical I also have the angle.
I am working with OpenCV (with Python bindings) and already tried cvMatchTemplate and MatchShapes, but so far it's not really working reliably.
I can only guess why MatchTemplate failed, but I think that a smaller pattern with a smaller white area fits fully into the white area of the picture, thus creating the best matching score although it's obvious that they don't really look the same.
Are there some methods hidden in OpenCV I haven't found yet, or is there a known algorithm for this kind of problem that I should reimplement?
Happy New Year.
A few simple techniques could work:
After binarization and segmentation, find Feret's diameter of the blob (a.k.a. the farthest distance between points, or the major axis).
Find the convex hull of the point set, flood fill it, and treat it as a connected region. Subtract the original blob (with the thumb) from the filled hull. The difference will be the area between the thumb and fist, and the position of that area relative to the center of mass should give you an indication of rotation.
Use a watershed algorithm on the distances of each point to the blob edge. This can help identify the connected thin region (the thumb).
Fit the largest circle (or largest inscribed polygon) within the blob. Dilate this circle or polygon until some fraction of its edge overlaps the background. Subtract this dilated figure from the original image; only the thumb will remain.
If the size of the hand is consistent (or relatively consistent), then you could also perform N morphological erode operations until the thumb disappears, then N dilate operations to grow the fist back to approximately its original size. Subtract this fist-only blob from the original blob to get the thumb blob. Then use the thumb blob's direction (Feret's diameter) and/or its center of mass relative to the fist blob's center of mass to determine direction (a rough sketch follows below).
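A sketch of that erode/dilate trick, assuming the hand is already a binary mask and that N erosions are enough to remove the thumb (N and the file name are placeholders):

    import cv2
    import numpy as np

    hand = cv2.imread("hand_mask.png", cv2.IMREAD_GRAYSCALE)  # placeholder binary mask
    _, hand = cv2.threshold(hand, 127, 255, cv2.THRESH_BINARY)

    kernel = np.ones((3, 3), np.uint8)
    N = 10  # tune so the thumb disappears but the fist survives

    fist = cv2.erode(hand, kernel, iterations=N)    # thumb vanishes
    fist = cv2.dilate(fist, kernel, iterations=N)   # fist grows back to roughly its old size
    thumb = cv2.subtract(hand, fist)                # what remains is (roughly) the thumb

    # Direction from the fist's centre of mass to the thumb's centre of mass.
    m_fist, m_thumb = cv2.moments(fist), cv2.moments(thumb)
    if m_fist["m00"] > 0 and m_thumb["m00"] > 0:
        fx, fy = m_fist["m10"] / m_fist["m00"], m_fist["m01"] / m_fist["m00"]
        tx, ty = m_thumb["m10"] / m_thumb["m00"], m_thumb["m01"] / m_thumb["m00"]
        print("thumb angle (degrees, image coords):", np.degrees(np.arctan2(ty - fy, tx - fx)))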
Techniques to find critical points (regions of strong direction change) are trickier. At the simplest, you might also use corner detectors and then check the distance from one corner to another to identify the place where the inner edge of the thumb meets the fist.
For more complex methods, look into papers about shape decomposition by authors such as Kimia, Siddiqi, and Xiaofeng Mi.
MatchTemplate seems like a good fit for the problem you describe. In what way is it failing for you? If you are actually masking the thumbs-up/thumbs-down/thumbs-in-between signs as nicely as you show in your sample image then you have already done the most difficult part.
MatchTemplate does not include rotation and scaling in the search space, so you should generate more templates from your reference image at all rotations you'd like to detect, and you should scale your templates to match the general size of the found thumbs up/thumbs down signs.
[edit]
The result array for MatchTemplate contains, at each location, a value that specifies how good the fit of the template in the image is at that location. If you use CV_TM_SQDIFF then the lowest value in the result array is the location of best fit; if you use CV_TM_CCORR or CV_TM_CCOEFF then it is the highest value. If your scaled and rotated template images all have the same number of white pixels, then you can compare the best-fit value found for each template image, and the template image that has the best fit overall is the one you want to select.
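One way to organize that search, as a sketch: rotate a single reference template through the angles of interest and keep the best (lowest) normalized square-difference score over all of them. The file names and the 15-degree step are placeholders:

    import cv2

    image = cv2.imread("hand.png", cv2.IMREAD_GRAYSCALE)               # placeholder
    template = cv2.imread("thumb_template.png", cv2.IMREAD_GRAYSCALE)  # placeholder

    h, w = template.shape
    best = None

    for angle in range(0, 360, 15):
        rot = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, 1.0)
        rotated = cv2.warpAffine(template, rot, (w, h))
        result = cv2.matchTemplate(image, rotated, cv2.TM_SQDIFF_NORMED)
        min_val, _, min_loc, _ = cv2.minMaxLoc(result)                 # SQDIFF: lower is better
        if best is None or min_val < best[0]:
            best = (min_val, min_loc, angle)

    print("best score %.3f at %s with template rotated %d degrees" % best)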
There are tons of rotation/scaling independent detection functions that could conceivably help you, but normalizing your problem to work with MatchTemplate is by far the easiest.
For the more advanced stuff, check out SIFT, Haar feature based classifiers, or one of the others available in OpenCV
I think you can get excellent results if you just compute the two points that have the furthest shortest path going through white. The direction in which the thumb is pointing is just the direction of the line that joins the two points.
You can do this easily by sampling points on the white area and using Floyd-Warshall.
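A rough sketch of that, subsampling the white pixels onto a coarse grid and using scipy's Floyd-Warshall routine; the subsampling step and connection radius are arbitrary and the mask file name is a placeholder (keep the number of samples small, since Floyd-Warshall is cubic in the number of nodes):

    import cv2
    import numpy as np
    from scipy.sparse.csgraph import floyd_warshall
    from scipy.spatial.distance import cdist

    mask = cv2.imread("hand_mask.png", cv2.IMREAD_GRAYSCALE)   # placeholder binary mask
    step = 8                                                   # subsample to keep the graph small
    ys, xs = np.nonzero(mask[::step, ::step] > 127)
    pts = np.column_stack([xs, ys]) * step                     # back to image coordinates

    # Connect samples that are close together, so paths stay inside the white blob.
    d = cdist(pts, pts)
    d[d > 1.5 * step] = 0                                      # 0 means "no edge" for csgraph

    dist = floyd_warshall(d, directed=False)
    dist[np.isinf(dist)] = 0                                   # ignore disconnected pairs
    i, j = np.unravel_index(np.argmax(dist), dist.shape)
    print("extreme points:", pts[i], pts[j], "geodesic length:", dist[i, j])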

Remove high frequency vertical shear noise from image

I have some scanned images, where the scanner appears to have introduced a certain kind of noise that I've not encountered before. I would like to find a way to remove it automatically. The noise looks like high-frequency vertical shear. In other words, a horizontal line that should look like ------------ shows up as /\/\/\/\/\/\/\/\/\, where the amplitude and frequency of the shear seem pretty regular.
Can someone suggest a way of doing the following steps?
Given an image, identify the frequency and amplitude of the shear noise. One can assume that it is always vertical and the characteristic frequency is higher than other frequencies that naturally appear in the image.
Given the above parameters, apply an opposite, vertical, periodic shear to the image to cancel this noise.
It would also be helpful to know how these could be implemented using the tools implemented by a freely available image processing package. (Netpbm, ImageMagick, Gimp, some Python library are some examples.)
Update: Here's a sample from an image with this kind of distortion. Actually, this sample shows that the shear amplitude need not be uniform throughout the image. :-(
The original images are higher resolution (600 dpi).
My solution to the problem would be to convert the image to the frequency domain using the FFT. The result will be two matrices: the image signal amplitude and the image signal phase. These two matrices should have the same dimensions as the input image.
Now, you should use the amplitude matrix to detect a spike in the area that corresponds to the noise frequency. Note that the top left corner of this matrix corresponds to low-frequency components and the bottom right to high frequencies.
After you have identified the spike, you should set the corresponding coefficients (amplitude matrix entries) to zero. After you apply the inverse FFT you should get the input image without the noise.
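A minimal numpy sketch of that idea; here the spike location is hard-coded as a placeholder, whereas in practice you would find it by looking for an outlier in the magnitude matrix:

    import cv2
    import numpy as np

    img = cv2.imread("scan.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)  # placeholder

    # Forward FFT, shifted so the zero-frequency term sits at the centre.
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    magnitude = np.abs(spectrum)   # inspect this to locate the noise spike

    rows, cols = img.shape
    cy, cx = rows // 2, cols // 2
    dv, du = 40, 0                 # placeholder offsets of the spike from the centre

    # Zero a small neighbourhood around the spike and its mirror image.
    for v, u in [(cy + dv, cx + du), (cy - dv, cx - du)]:
        spectrum[v - 2:v + 3, u - 2:u + 3] = 0

    cleaned = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))
    cv2.imwrite("cleaned.png", np.clip(cleaned, 0, 255).astype(np.uint8))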
Please provide an example image for a more concrete (practical) solution to your problem.
You could use a Hough fit or RANSAC to fit lines first. For Hough to work you may need to "smear" the points using Gaussian blur or morphological dilation so that you get more hits for a given (rho, theta) line in parameter space.
Once you have line fits, you can determine the relative distance of the original points to each line. From that spatial information you can use an FFT to help find a "best fit" spatial frequency and then shift pixels up/down accordingly.
As a first take, you might even skip FFT and use more of a brute force method:
Find the best fit lines using Hough or RANSAC.
Determine the orientation of the lines.
Sampling perpendicular to the (nominally) horizontal lines, find the offsets of the points along that column with respect to their closest best-fit lines.
If the points along one sample are on average a distance +N away from their best fit lines, shift all the pixels in that column (or along that perpendicular sample) by -N.
This sort of technique should work if the shear is consistent along a vertical sample, but not necessarily from left to right. If the shear is always exactly vertical, then finding horizontal lines should be relatively easy.
Judging from your sample image, it looks as though the shear may be consistent across a horizontal line segment between a 3-way or 4-way intersection with a nominally vertical line segment. You could use corner detectors or other methods to find these intersections to limit the extent over which a pixel shifting operation takes place.
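The final pixel-shifting step is simple with numpy once you have a per-column offset from the line fits; a sketch (the offsets array here is a placeholder for values you would compute from the fitted lines):

    import cv2
    import numpy as np

    img = cv2.imread("scan.png", cv2.IMREAD_GRAYSCALE)     # placeholder
    rows, cols = img.shape

    # offsets[c] = signed vertical distance of column c's dark pixels from their
    # best-fit line, computed elsewhere; zeros here are just placeholders.
    offsets = np.zeros(cols, dtype=int)

    desheared = np.empty_like(img)
    for c in range(cols):
        # Shift each column up/down by the opposite of its measured offset.
        desheared[:, c] = np.roll(img[:, c], -offsets[c])

    cv2.imwrite("desheared.png", desheared)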
A technique I posted here is another way to find horizontal stretches of dark pixels in case they don't fall on a line:
Is there an efficient algorithm for segmentation of handwritten text?
All that aside, is there a chance you could have the scanner fixed?

Correlating a vector with edges in an image

I'm trying to implement user-assisted edge detection using OpenCV.
Assume you have an image in which we need to find a polygonal shape. For the sake of discussion, let's say we need to find the top of a rectangular table in a picture. The user will click on the four corners of the table to help us narrow things down. Connecting those four points gives us a polygon, or four vectors.
But the user is not very accurate when clicking on those corners. So I'd like to use edge information from the image to increase the accuracy.
I'm using a Canny edge detector with a fairly high threshold to determine important edges in my image (more precisely, I'm scaling down, blurring, converting to grayscale, then running Canny). How can I compute whether a vector aligns with an edge in my image? If I have a way to compute "alignment", my overall algorithm comes down to perturbing the location of the four corner points and computing the total "alignment" of my polygon with the edges in the image, until I find an optimum.
What is a good way to define and compute this "alignment" metric?
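For concreteness, the kind of "alignment" score I have in mind would be something like this rough sketch: sample points along each side of the polygon and count how many land on (or very near) a Canny edge pixel; the file name, corner coordinates, and sampling density are placeholders.

    import cv2
    import numpy as np

    def alignment_score(edge_img, polygon, samples_per_side=100):
        """Fraction of points sampled along the polygon's sides that hit an edge pixel."""
        h, w = edge_img.shape
        hits = total = 0
        for i in range(len(polygon)):
            p0 = np.array(polygon[i], dtype=float)
            p1 = np.array(polygon[(i + 1) % len(polygon)], dtype=float)
            for t in np.linspace(0.0, 1.0, samples_per_side):
                x, y = (1.0 - t) * p0 + t * p1
                xi, yi = int(round(x)), int(round(y))
                if 0 <= xi < w and 0 <= yi < h:
                    total += 1
                    if edge_img[yi, xi] > 0:
                        hits += 1
        return hits / float(total) if total else 0.0

    img = cv2.imread("table.png", cv2.IMREAD_GRAYSCALE)            # placeholder
    edges = cv2.Canny(cv2.GaussianBlur(img, (5, 5), 0), 50, 150)
    corners = [(100, 80), (520, 95), (540, 400), (90, 380)]        # user clicks (made up)
    print("alignment:", alignment_score(edges, corners))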
You may want to try using FindContours to detect your table or any other contour. Then also build a contour from the user's input points. After this you can read about contour moments, with which you can compare contours. You can compare all the contours from the image with the one built from the user points and then select the closest match.
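A rough sketch of that comparison using matchShapes (which is based on Hu moments); the file name, Canny thresholds, and the user's corner coordinates are placeholders:

    import cv2
    import numpy as np

    img = cv2.imread("table.png", cv2.IMREAD_GRAYSCALE)            # placeholder
    edges = cv2.Canny(cv2.GaussianBlur(img, (5, 5), 0), 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

    # Contour built from the four user-clicked corners (made-up coordinates).
    user_quad = np.array([[100, 80], [520, 95], [540, 400], [90, 380]], dtype=np.int32)

    best, best_score = None, None
    for c in contours:
        score = cv2.matchShapes(user_quad, c, cv2.CONTOURS_MATCH_I1, 0.0)
        if best_score is None or score < best_score:               # lower = more similar
            best, best_score = c, score

    print("closest contour similarity score:", best_score)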
