I'm trying to distinguish circular and rectangular objects in an image. I've labeled the round ones using contours and the formula:
metric = 4 * pi * contour.Area / (contour.Perimeter ^ 2)
Since I'm almost done, I wanted to know an alternative to this formula that works for rectangles. I'm a bit weak in mathematics.
There is a simple tutorial from Emgu CV on finding shapes in an image.
It lets you find circles, triangles and squares.
You can check it out here: shape tutorial.
Hope this helps you with your problem.
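If it helps, here is a rough sketch of the same idea in OpenCV's Python bindings (the file name and thresholds are assumptions to tune, and it assumes OpenCV 4, where findContours returns two values):

import cv2
import numpy as np

img = cv2.imread('shapes.png', cv2.IMREAD_GRAYSCALE)  # hypothetical input image
# use THRESH_BINARY_INV instead if the shapes are dark on a light background
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    area = cv2.contourArea(c)
    perimeter = cv2.arcLength(c, True)
    if perimeter == 0:
        continue
    # the circularity metric from the question: 4*pi*A / P^2, close to 1.0 for circles
    circularity = 4 * np.pi * area / (perimeter ** 2)
    # polygon approximation: rectangles/squares approximate to 4 vertices, triangles to 3
    approx = cv2.approxPolyDP(c, 0.04 * perimeter, True)
    if circularity > 0.8:
        label = 'circle'
    elif len(approx) == 4:
        label = 'rectangle or square'
    elif len(approx) == 3:
        label = 'triangle'
    else:
        label = 'other'
    print(label, circularity)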
Related
I have been coding with OpenCV lately. I have drawn a few shapes on paper (circle, triangle, rectangle, square, parallelogram) and would like to classify them according to their characteristics.
I found the contours and drew them, and also drew the minAreaRect for each shape.
I tried using the function matchShapes, but it didn't give me the result I had expected, and I tried polygon approximation, but it seems to fail here too.
(I went through all the questions here before posting this one.)
Original image
After some processing
What should be the correct approach for classifying hand-drawn shapes?
Any tips/suggestions would be great, I'm stuck :/.
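(For reference, a minimal matchShapes sketch in OpenCV Python; the file names are placeholders and the preprocessing is only illustrative:)

import cv2

drawn = cv2.imread('drawn_shape.png', cv2.IMREAD_GRAYSCALE)        # hypothetical file names
reference = cv2.imread('reference_shape.png', cv2.IMREAD_GRAYSCALE)

def largest_contour(gray):
    # binarize and keep the biggest external contour
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)

# lower score = more similar; hand-drawn wobble tends to inflate the score
score = cv2.matchShapes(largest_contour(drawn), largest_contour(reference),
                        cv2.CONTOURS_MATCH_I1, 0.0)
print(score)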
I am currently working on a method to extract colors from a Macbeth color chart. So far I have had moderate success by thresholding and then extracting square contours. Sadly though, colors that are too close to each other either mix together or do not get detected.
The code in its current form:
https://pastebin.com/mNi0TcDE
The image before any processing
After thresholding, you can see that there are areas where the lines are incomplete because the color differences are too small. I have tried using dilation to mitigate these issues, and it works to a degree, but not enough to detect all the squares.
Image after thresholding
This results in the following contours being detected
Detected contours
I have tried using:
Hough lines; sadly, no lines were detected here.
Centroids of the contours, but I was unable to find a way to use the centroids to draw lines and detect the centers of the missing contours.
Corner detection; corners were found, but I was unsuccessful in finding a real way to put them to use.
Can anyone point me in the right direction?
Thanks in advance,
Emil
Hmm, if your goal is color calibration, you really do not need to detect the squares in their entirety. A 10x10 pixel sample near the center of each physical square will give you 100 color samples, which is plenty for any reasonable calibration procedure.
There are many ways to approach this problem. If you can guarantee that the chart will cover the image, you could even just do k-means clustering, since you know in advance the exact number of clusters you seek.
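A minimal sketch along those lines, assuming the chart fills most of the frame and using OpenCV's Python bindings (a standard Macbeth chart has 24 patches, hence 24 clusters; the file name is a placeholder):

import cv2
import numpy as np

img = cv2.imread('chart.jpg')                    # hypothetical file name
pixels = img.reshape(-1, 3).astype(np.float32)

# 24 clusters, one per patch of a standard Macbeth chart
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 50, 1.0)
_, labels, centers = cv2.kmeans(pixels, 24, None, criteria, 5, cv2.KMEANS_PP_CENTERS)

# centers now holds one BGR color estimate per cluster
print(centers)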
If you insist on using geometry, I'd do template matching in scale+angle space. It is reasonable to assume that the chart will be mostly facing the camera and only slightly rotated, so you only need to estimate scale and a small rotation about the axis orthogonal to the chart.
I'm working on an algorithm that counts patterns (bars) in a specific image. It seemed very simple at first glance, but I quickly realized the complexity.
I have tried simple thresholding, template matching (small sliding windows), edge detection...
I have only a few images like this one, so I don't think a machine learning algorithm can give better results, but I still need suggestions.
I think you have enough data in your images. You need to crop only the bars from your images; that would give you several dozen small patches per image. After that you can resize all the patches to some predefined size (for example 24x24 pixels), compute a descriptor like HOG, and train an SVM on it. For the negative examples, just use any other areas from your images.
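A rough sketch of that pipeline with OpenCV's Python bindings (the patch file names, HOG geometry and SVM settings are all assumptions you would tune):

import cv2
import numpy as np

# hand-cropped example patches; the file names are placeholders
positives = [cv2.imread(f) for f in ['bar1.png', 'bar2.png']]   # crops containing a bar
negatives = [cv2.imread(f) for f in ['bg1.png', 'bg2.png']]     # crops of background

# 24x24 HOG descriptor: winSize, blockSize, blockStride, cellSize, nbins
hog = cv2.HOGDescriptor((24, 24), (8, 8), (4, 4), (4, 4), 9)

def describe(patch):
    resized = cv2.resize(patch, (24, 24))
    gray = cv2.cvtColor(resized, cv2.COLOR_BGR2GRAY)
    return hog.compute(gray).flatten()

samples = np.array([describe(p) for p in positives + negatives], dtype=np.float32)
labels = np.array([1] * len(positives) + [0] * len(negatives), dtype=np.int32)

svm = cv2.ml.SVM_create()
svm.setType(cv2.ml.SVM_C_SVC)
svm.setKernel(cv2.ml.SVM_LINEAR)
svm.train(samples, cv2.ml.ROW_SAMPLE, labels)
# at detection time, slide a 24x24 window over the image and call svm.predict() on its descriptor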
This may not work in all cases, but since these are round bars, you can also try circle detection. Both MATLAB (imfindcircles) and OpenCV (HoughCircles) support the Hough circle transform. One issue is that you have to play with the parameters a bit (MATLAB's interface is more simplistic than OpenCV's), but that is true of almost any method.
These methods work better with larger images, so I resized yours. You also need to know the radius of the circles to look for; if your camera position is constant, this shouldn't change much. The code below is adapted from the MATLAB documentation page I linked. It doesn't find all the circles, but some tuning may help.
im = imread('http://i.stack.imgur.com/NRwUq.jpg');
% imfindcircles doesn't work well on small images, so I made the image
% three times larger; if you have larger images you should use those for
% better results
bim = imresize(im, 3);
% find circles with radii between 8 and 20 pixels and display them
[centers, radii] = imfindcircles(bim, [8 20], 'ObjectPolarity', 'bright', ...
    'Sensitivity', 0.9);
imshow(bim);
h = viscircles(centers, radii);
% centers is an N-by-2 matrix, one row per detected circle
number_of_bars = size(centers, 1)
I added green dots over the circles the detector missed and blue X's over incorrect detections. I did these by hand, but the red circles were located by MATLAB.
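For the OpenCV route, the equivalent call is cv2.HoughCircles; a rough sketch (all parameter values are guesses that need tuning, and the radius range is scaled to match the 3x upsizing):

import cv2

img = cv2.imread('bars.jpg')                     # hypothetical file name
gray = cv2.medianBlur(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 5)
# upscaling helps on small images, as with the MATLAB version above
gray = cv2.resize(gray, None, fx=3, fy=3)

# radii of 8-20 in the original image become roughly 24-60 after the 3x resize
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1, 20,
                           param1=100, param2=30, minRadius=24, maxRadius=60)
number_of_bars = 0 if circles is None else circles.shape[1]
print(number_of_bars)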
I am looking for an efficient way to detect the small boxes around the numbers (see images).
I have already tried the Hough transform, with no success. Any ideas? I need some hints! I am using OpenCV...
For inspiration, you can have a look at:
the MATLAB video sudoku solver demo and explanation
Sudoku Grab, an iPhone app whose author explains the computer vision part on his blog
Alternatively, if you are always hunting for the same grid, you could deploy something like this:
Make a perfect artificial template of the grid and detect or save the coordinates of all its corners.
In the target image, do the same thing, for example with Harris points. Be creative; you might also be able to use the distinct triangles that can be found in your images.
Using the coordinates from the template and the found Harris points, determine the affine transformation x = Ax' between the template and the target image, as sketched below. That transformation can then be used to map the template grid onto the target image. At the very least this will give you some prior information to help guide further segmentation.
The gist of the idea, and examples of estimating the affine matrix A, can be found on the website of Zisserman's book Multiple View Geometry in Computer Vision and on Peter Kovesi's site.
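Once template corners have been paired with corners found in the target image, estimating A is a one-liner in OpenCV's Python bindings; a minimal sketch (the point coordinates below are placeholders):

import cv2
import numpy as np

# corresponding corner coordinates: template grid vs. points found in the image
# (placeholder values; in practice these come from your Harris / matching step)
template_pts = np.float32([[0, 0], [100, 0], [100, 100], [0, 100]])
image_pts = np.float32([[12, 8], [115, 14], [110, 118], [7, 112]])

# robust affine estimate; RANSAC rejects bad correspondences
A, inliers = cv2.estimateAffine2D(template_pts, image_pts)

# map any template grid coordinate into the target image
grid_points = np.float32([[[50, 50]], [[50, 100]]])   # shape (N, 1, 2)
mapped = cv2.transform(grid_points, A)
print(mapped)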
I'd start by trying to detect the rectangular boundary of the overall sheet, then apply a perspective transform to make it truly rectangular, and crop that portion of the image out. If possible, then try to make the alternating white and grey sub-rectangles have an equal background brightness, maybe with adaptive histogram equalization.
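A rough sketch of that rectification step, assuming the four sheet corners have already been found (the corner coordinates and output size below are placeholders):

import cv2
import numpy as np

img = cv2.imread('ticket.jpg')                   # hypothetical file name
# four detected sheet corners, ordered top-left, top-right, bottom-right, bottom-left
corners = np.float32([[40, 30], [620, 45], [610, 460], [35, 450]])
w, h = 600, 420                                  # assumed output size
target = np.float32([[0, 0], [w, 0], [w, h], [0, h]])

M = cv2.getPerspectiveTransform(corners, target)
sheet = cv2.warpPerspective(img, M, (w, h))

# even out the alternating white/grey backgrounds with adaptive histogram equalization
gray = cv2.cvtColor(sheet, cv2.COLOR_BGR2GRAY)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
equalized = clahe.apply(gray)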
Then the Hough transform might perform better. Alternatively, you could then take an approach that's broadly similar to this demonstration by Robert Bemis on MATLAB Central (it's analysing a DNA microarray image rather than Lotto cards, but it's essentially finding bounding boxes of items arranged in a grid). At a high level, the approach is to calculate the autocorrelation along columns and rows of pixels to detect the periodicity of the items in the grid, and use that to impose a bounding box on each item.
Sorry the above advice is mostly MATLAB-based; I'm afraid I'm not an OpenCV user, but hopefully it will give you some ideas at least.
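That said, the row/column autocorrelation idea is easy to prototype outside MATLAB as well; a rough numpy sketch of the periodicity step (the input file name is a placeholder for the rectified, equalized sheet):

import cv2
import numpy as np

sheet = cv2.imread('equalized_sheet.png', cv2.IMREAD_GRAYSCALE)   # hypothetical input

# mean intensity profile across columns, zero-centred
profile = sheet.mean(axis=0)
profile = profile - profile.mean()

# autocorrelation of the column profile (lags 0..N-1)
ac = np.correlate(profile, profile, mode='full')[len(profile) - 1:]

# the first strong peak after lag 0 estimates the horizontal period of the grid
peaks = [lag for lag in range(2, len(ac) - 1)
         if ac[lag] > ac[lag - 1] and ac[lag] > ac[lag + 1] and ac[lag] > 0.3 * ac[0]]
print('estimated column spacing:', peaks[0] if peaks else None)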
Given a match-3 game screenshot (for example http://www.gameplay3.com/images/games/jewel-quest-ii-01S.jpg), what would be the correct way to find the bounding box of the grid (the table with tiles)? The board doesn't have to be a perfect rectangle (as can be seen in the screenshot), but each cell is perfectly square.
I've tried several games and found that there are per-game image transformations that can be done to enhance the tiles inside the grid (for example, in this game it's enough to take the V channel of the HSV color space). Then I can enlarge the tiles so that they overlap, find the largest contour of the image, and get the bounding box from it, as sketched below.
The problem with the above approach is that every game (or even every level within the same game) may need a different transformation to get hold of the tiles. So the question is: is there a standard way to enhance either the tiles inside the grid or the grid's lines? (I've tried finding lines with the Hough transform, but, although the grid seems pretty visible to the eye, Hough doesn't find it.)
Also, what if the screenshot is obtained with a phone camera instead of capturing the desktop directly? In my experience, captured images have less defined colors (depending on lighting) and can also be slightly distorted, as there is no way to hold the phone exactly parallel to the screen.
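For reference, the V-channel + enlargement pipeline described above looks roughly like this in OpenCV's Python bindings (the dilation kernel size is a guess, and the file name is a placeholder):

import cv2
import numpy as np

img = cv2.imread('jewel_quest.jpg')              # hypothetical screenshot file
v = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)[:, :, 2]   # V channel of HSV

_, mask = cv2.threshold(v, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
# dilate so that neighbouring tiles merge into one blob
mask = cv2.dilate(mask, np.ones((15, 15), np.uint8))

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
board = max(contours, key=cv2.contourArea)
x, y, w, h = cv2.boundingRect(board)             # bounding box of the grid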
I would go with the following approach for a screenshot:
Find edges in the image using, for example, a Canny-like edge detector.
Perform a Hough line transform. This should work quite nicely on the edge image.
If you have some information about the size of the tiles, you can eliminate false positive lines using some sort of spatial model of the grid (e.g. keep only lines with a small angle to the x/y axes of the image, and/or with plausible distances/angles between tile borders).
Identify tile borders along the found Hough lines by looking for Canny edges under/next to the lines.
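A rough sketch of the edge + Hough-line steps in OpenCV Python (the thresholds, line lengths and angle tolerance are guesses to tune):

import cv2
import numpy as np

img = cv2.imread('board.png', cv2.IMREAD_GRAYSCALE)   # hypothetical screenshot
edges = cv2.Canny(img, 50, 150)

# probabilistic Hough transform on the edge image
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                        minLineLength=40, maxLineGap=10)

# keep only near-horizontal / near-vertical lines as a simple grid prior
grid_lines = []
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1))) % 90
        if angle < 5 or angle > 85:
            grid_lines.append((x1, y1, x2, y2))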
Which implementation of the Hough transform did you use? How did you preprocess the image?
Another approach would be to use machine learning. As you are working in OpenCV, you could use a Haar-like feature detector (a cascade classifier). An example of face detection using Haar-like features can be found here:
OpenCV Haar Face Detector example
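For reference, running a trained cascade in OpenCV Python looks like this (the shipped face cascade is used only because that is what the linked example detects; for tiles you would train your own cascade with opencv_traincascade):

import cv2

cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

img = cv2.imread('photo.jpg')                    # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
detections = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in detections:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)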
Another machine learning approach would be a Histogram of Oriented Gradients (HOG) descriptor in combination with a Support Vector Machine (SVM). An example is located here:
HOG example
You can find general information about HOG detection at:
Hog detection