After segmenting objects in noisy data, I need to fit the best possible rectangle to each of them.
Currently I just use OpenCV's findContours and minAreaRect, which gives me a rectangle around the whole object. I know that these objects will always be roughly horizontal in the image, with at most a small angle, like in this image.
This can be seen as the green rectangle in the images; however, I need something like the red drawn rectangles, or even just the middle line (blue), since that's what I actually need in the end.
Further, I also have some junctions, like seen in this image:
Here I want to detect only the horizontal part, and ideally also know that there could be a junction.
Any idea how to solve this problem? I need a fast approach and have not found anything feasible yet.
I got much better results using a distance transform (as mentioned by @Micka) on the masked image. Afterwards I find the line with the biggest distance as the middle of the rectangle (using some filters and cutting off the curve), and in the end fit a line through the middle estimate.
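In case it helps anyone, here is a minimal sketch of that pipeline; the input file name, the ridge threshold, and the filtering step are placeholders for my actual (more involved) code:

```python
import cv2
import numpy as np

# Binary segmentation mask, object = 255 (hypothetical input file).
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)

# Distance of every object pixel to the nearest background pixel.
dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)

# Per column, take the row with the largest distance: for a roughly
# horizontal bar this traces its center line.
cols = np.arange(dist.shape[1])
rows = np.argmax(dist, axis=0)
peaks = dist[rows, cols]

# Keep only columns where the ridge is strong, which cuts off thin
# curved parts and junction arms (0.5 is an illustrative threshold).
keep = peaks > 0.5 * peaks.max()
pts = np.column_stack((cols[keep], rows[keep])).astype(np.float32)

# Fit a line through the surviving center-line points.
vx, vy, x0, y0 = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
```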
Related
I am trying to detect a no-stop box, which is shaped something like below (this picture was taken and cleaned up after applying an edge detector), but its size can vary (sometimes it is wider in length, sometimes in breadth). All have a similar pattern, and it looks like a box. I have tried findContours as well as a Hough line detector to detect it.
However, the results do not look good at all, mainly because the bottom of the box is not closed (this happens when the box reaches a certain length).
Looking for suggestions.
Some other ideas I have:
1) Look for intersection points and determine whether the intersections are at 90 degrees
2) Look at Hu moments and compare against a template image
Regards
If your filtered data is always this clean, you could look into computing oriented bounding boxes with cv::minAreaRect. cv::boundingRect could also work, although it only gives upright bounding rectangles rather than oriented ones. Here's OpenCV's tutorial for oriented bounding boxes.
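A minimal sketch of the Python equivalent, assuming a clean binary mask (the file name and the OpenCV 4 return signature are assumptions):

```python
import cv2
import numpy as np

mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

for cnt in contours:
    rect = cv2.minAreaRect(cnt)          # ((cx, cy), (w, h), angle)
    box = np.intp(cv2.boxPoints(rect))   # the four oriented corners
    cv2.drawContours(mask, [box], 0, 127, 2)

    x, y, w, h = cv2.boundingRect(cnt)   # upright box, for comparison
```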
I'm just learning OpenCV, and have a question about line detection. I have a situation where I need to detect a horizontal black line on a white background. I am guaranteed that the line will always show up horizontally (within a few degrees) and need to detect where it is in the images from the camera.
My thought is, since it is always horizontal, I can just search vertically for the "edge" through a few columns of the image and call it good. Maybe even narrow the number of pixels I'm capturing from the camera as an extra boost in speed.
Is there a built-in function for this type of line detection, though?
I don't need the extra power and cannot afford the processing time of Canny or Hough; I just want to find a guaranteed horizontal line as fast as possible.
The images (with my solution running) look like this:
If you provide the type of image you are talking about, it will be easier to suggest a solution.
I also wonder why you can't use the Canny detector, which is not that computationally expensive. Furthermore, you can downscale your image, compute the edges, and filter for horizontal edges.
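A sketch of that downscale-then-Canny idea; the scale factor and thresholds are placeholders, and the input is assumed to be 8-bit grayscale:

```python
import cv2
import numpy as np

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
small = cv2.resize(img, None, fx=0.25, fy=0.25,
                   interpolation=cv2.INTER_AREA)
edges = cv2.Canny(small, 50, 150)

# A horizontal line produces a row full of edge pixels, so a simple
# row histogram already filters out non-horizontal structure.
row_counts = edges.sum(axis=1) / 255
candidate_rows = np.where(row_counts > 0.5 * edges.shape[1])[0]
```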
On the other hand, knowing that your edges are always horizontal in the image plane, you can use template matching.
The method I ended up going with is a for loop. After thresholding the image, I search along two columns to find all the "edges", or changes in value. Then I process this list to find only horizontal pairs of edges.
I then find all lines that are close enough together and have the desired infill (a boolean comparison against the thresholded image), which effectively finds only the strips of tape I am interested in tracking.
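A rough sketch of that column scan; the column positions, the pairing threshold, and the infill fraction are placeholders, not the values from the gist linked below:

```python
import cv2
import numpy as np

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
_, binary = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)

def edges_in_column(col):
    """Row indices where the thresholded value changes along one column."""
    d = np.diff(binary[:, col].astype(np.int16))
    return np.flatnonzero(d)

def strips(col):
    """Pairs of edges that are close together and mostly dark in between."""
    rows = edges_in_column(col)
    out = []
    for top, bottom in zip(rows[::2], rows[1::2]):
        band = binary[top:bottom + 1, col]
        if bottom - top < 20 and (band == 0).mean() > 0.9:
            out.append((top, bottom))
    return out

left = strips(binary.shape[1] // 4)
right = strips(3 * binary.shape[1] // 4)
```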
This takes about 1/50th the time of just the Canny call, not including the findContours etc. that would also be necessary. I have not tested against Hough, however, but I believe this will still be significantly faster.
Since the biggest issue was processing speed, I made several other optimizations as well.
Code can be found here (It's very well commented, I promise): Code as a gist
First, here is a little sketch:
I'm working on a project where I get an image with a quadrilateral on it (the red one). I know the positions of all four points.
Now I want to deskew this quadrilateral into a rectangle of the same size as the original image. It is only allowed to move the corners of the whole image (marked with blue circles), each one independently.
I tried a bit of math: I set up a system of linear equations, but I never got a solution.
I also tried moving the corners of the image a little bit and recalculating the corners of the quadrilateral. This hasn't worked so far and has consumed a lot of time.
Now I think there has to be an algorithm that solves this problem more efficiently.
I hope you know an algorithm or have an idea for me.
Sincerely,
Xean
P.S.: Only core frameworks should be used, not something like OpenCV.
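For reference, the system of linear equations that does have a solution is the standard perspective-transform (homography) setup: eight equations in eight unknowns. A sketch using only numpy (assuming that counts as a core framework); all coordinates are made-up examples:

```python
import numpy as np

def homography(src, dst):
    """3x3 perspective transform M with M(src[i]) = dst[i], four points each."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def transform(M, pts):
    p = np.column_stack((pts, np.ones(len(pts)))) @ M.T
    return p[:, :2] / p[:, 2:]

w, h = 320, 240                                      # image size (example)
quad = [(30, 20), (285, 35), (300, 215), (15, 200)]  # red quad corners (example)
rect = [(0, 0), (w, 0), (w, h), (0, h)]              # target = the full image

M = homography(quad, rect)
# Under this warp the quad lands on the rectangle, so the blue image
# corners (identical to `rect` here) must move to:
new_corners = transform(M, np.array(rect, float))
```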
Following up on my other question: do you know a good example in OpenCV with a simple black/white calibration picture and appropriate detection algorithms?
I just want to show some B&W image on a screen, take a picture of that image from afar, and calculate the size of the shown image, in order to calculate the distance to said screen.
Before I reinvent the wheel, I reckon this is so easy that it could be achieved in many different ways in OpenCV, yet I thought I'd ask if there's a preferred approach, possibly with some sample code.
(I got some face-detection code running using haarcascade-xml files already)
PS: I already have the resolution/DPI part of my screen covered, so I know how big a picture is in cm on my screen.
EDIT:
I'll make it really simple. I need:
1) A pattern that is easily recognizable in an image. Right now I'm experimenting with a checkerboard. The people who made ARDefender used this.
2) An appropriate algorithm to tell me the exact pixel coordinates of pattern 1) in a picture using OpenCV.
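For the checkerboard case, a minimal sketch (the pattern size and file name are assumptions you'd match to your displayed board):

```python
import cv2

img = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input
pattern = (7, 7)  # inner corners of an 8x8 checkerboard

found, corners = cv2.findChessboardCorners(img, pattern)
if found:
    # Refine to sub-pixel accuracy, then measure the board in pixels.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01)
    corners = cv2.cornerSubPix(img, corners, (11, 11), (-1, -1), criteria)
    xs = corners[:, 0, 0]
    width_px = xs.max() - xs.min()  # apparent board width in pixels
```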
Well, it's hard to say which image is best for recognition: under different illumination, any color can be interpreted as another color. A simple example:
As you can see, both traffic signs have a red border, but in one image the upper sign's border is obviously not red.
So in my opinion you should use an image with many different colors (like a rainbow). You also said it should be easily recognizable from different angles; that's why a circular shape is best.
That's why your image should look like this:
So the idea for detecting such an object is the following:
Do color segmentation for each color (blue, red, green, etc.). Use the HSV color space for this.
Detect circles of each specific color in the image.
The area with the biggest count of circles is likely your object.
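A sketch of this pipeline for one color (red); the HSV bounds and Hough parameters are rough guesses you'd tune, and you would repeat the masking per color:

```python
import cv2
import numpy as np

img = cv2.imread("scene.jpg")  # hypothetical input
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Red wraps around hue 0, so combine two ranges (bounds are approximate).
mask = cv2.inRange(hsv, (0, 100, 100), (10, 255, 255)) | \
       cv2.inRange(hsv, (170, 100, 100), (180, 255, 255))

circles = cv2.HoughCircles(cv2.medianBlur(mask, 5), cv2.HOUGH_GRADIENT,
                           dp=1, minDist=20, param1=100, param2=20,
                           minRadius=5, maxRadius=100)
count = 0 if circles is None else circles.shape[1]
```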
You just have to take pictures of your B&W object from several known distances (1 m, 2 m, 3 m, ...) and then, for each distance, check the size of your object in the corresponding image.
From that data you will be able to create a function giving you the distance from the size in pixels. Note that with a pinhole camera the apparent size is inversely proportional to the distance, so fit distance against 1/size (then y = ax + b should do ;) ), translate it into your code, and you're done.
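A tiny sketch of that fit (the measurements are made-up examples):

```python
import numpy as np

distances_m = np.array([1.0, 2.0, 3.0, 4.0])       # known set-up distances
sizes_px = np.array([400.0, 200.0, 133.0, 100.0])  # measured object sizes

a, b = np.polyfit(1.0 / sizes_px, distances_m, 1)  # distance = a/size + b

def distance_from_size(size_px):
    return a / size_px + b
```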
Cheers
I am currently facing what is, in my opinion, a rather common problem which should be quite easy to solve, but so far all my approaches have failed, so I am turning to you for help.
I think the problem is best explained with some illustrations. I have some patterns like these two:
I also have an image like this (probably better, because the photo this one originated from was quite poorly lit):
(Note how the template was scaled to roughly fit the size of the image.)
The ultimate goal is a tool that determines whether the user shows a thumbs-up/thumbs-down gesture, and also some angles in between. So I want to match the patterns against the image and see which one resembles the picture the most (or, more precisely, which angle the hand is showing). I know the direction in which the thumb points in each pattern, so if I find the pattern that looks identical, I also have the angle.
I am working with OpenCV (with Python bindings) and have already tried cvMatchTemplate and MatchShapes, but so far it's not really working reliably.
I can only guess why MatchTemplate failed, but I think a smaller pattern with a smaller white area fits entirely inside the white area of the picture, thus producing the best matching score even though they obviously don't really look the same.
Are there some methods hidden in OpenCV that I haven't found yet, or is there a known algorithm for this kind of problem that I should reimplement?
Happy New Year.
A few simple techniques could work:
After binarization and segmentation, find the Feret's diameter of the blob (a.k.a. the farthest distance between any two points, or the major axis).
Find the convex hull of the point set, flood-fill it, and treat it as a connected region. Subtract the original image (with the thumb) from it. The difference will be the area between the thumb and fist, and the position of that area relative to the center of mass should give you an indication of rotation.
Use a watershed algorithm on the distance of each point to the blob edge. This can help identify the connected thin region (the thumb).
Fit the largest circle (or largest inscribed polygon) within the blob. Dilate this circle or polygon until some fraction of its edge overlaps the background. Subtract this dilated figure from the original image; only the thumb will remain.
If the size of the hand is consistent (or relatively consistent), then you could also perform N morphological erode operations until the thumb disappears, then N dilate operations to grow the fist back to approximately its original size. Subtract this fist-only blob from the original blob to get the thumb blob. Then use the thumb blob's direction (Feret's diameter) and/or its center of mass relative to the fist blob's center of mass to determine direction; a sketch of this follows below.
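A sketch of that erode/grow-back idea, assuming a binary hand mask; the kernel size and N are guesses you would tune to the hand size:

```python
import cv2
import numpy as np

blob = cv2.imread("hand_mask.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
N = 6  # enough iterations to make the thumb disappear

fist = cv2.erode(blob, kernel, iterations=N)
fist = cv2.dilate(fist, kernel, iterations=N)  # fist roughly restored
thumb = cv2.subtract(blob, fist)               # thumb-only blob

# Direction: compare the centers of mass of the thumb and fist blobs.
m_t, m_f = cv2.moments(thumb), cv2.moments(fist)
if m_t["m00"] > 0 and m_f["m00"] > 0:
    dx = m_t["m10"] / m_t["m00"] - m_f["m10"] / m_f["m00"]
    dy = m_t["m01"] / m_t["m00"] - m_f["m01"] / m_f["m00"]
    angle = np.degrees(np.arctan2(-dy, dx))  # 90 = thumb up (image y points down)
```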
Techniques for finding critical points (regions of strong direction change) are trickier. At the simplest, you might use corner detectors and then check the distance from one corner to another to identify the place where the inner edge of the thumb meets the fist.
For more complex methods, look into papers about shape decomposition by authors such as Kimia, Siddiqi, and Xiaofing Mi.
MatchTemplate seems like a good fit for the problem you describe. In what way is it failing for you? If you are actually masking the thumbs-up/thumbs-down/thumbs-in-between signs as cleanly as you show in your sample image, then you have already done the most difficult part.
MatchTemplate does not include rotation and scaling in its search space, so you should generate more templates from your reference image at all rotations you'd like to detect, and you should scale your templates to match the general size of the found thumbs-up/thumbs-down signs.
[edit]
The result array of MatchTemplate contains a value that specifies how well the template fits the image at each location. If you use CV_TM_SQDIFF, then the lowest value in the result array marks the location of the best fit; if you use CV_TM_CCORR or CV_TM_CCOEFF, then it is the highest value. If your scaled and rotated template images all have the same number of white pixels, then you can compare the best-fit values you find for the different template images, and the template image with the best fit overall is the one you want to select.
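A sketch of that rotate-and-match loop; the rotation step, the file names, and the assumption that the templates fit inside the image are all placeholders:

```python
import cv2
import numpy as np

img = cv2.imread("hand_mask.png", cv2.IMREAD_GRAYSCALE)    # hypothetical
template = cv2.imread("thumb_up.png", cv2.IMREAD_GRAYSCALE)

h, w = template.shape
best_angle, best_score = None, np.inf  # lower is better for SQDIFF
for angle in range(0, 360, 15):
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    rotated = cv2.warpAffine(template, M, (w, h))
    result = cv2.matchTemplate(img, rotated, cv2.TM_SQDIFF)
    min_val, _, _, _ = cv2.minMaxLoc(result)
    if min_val < best_score:
        best_angle, best_score = angle, min_val
```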
There are tons of rotation/scale-independent detection methods that could conceivably help you, but normalizing your problem so it works with MatchTemplate is by far the easiest.
For the more advanced stuff, check out SIFT, Haar-feature-based classifiers, or one of the other methods available in OpenCV.
I think you can get excellent results if you just compute the two points that have the furthest shortest path going through white. The direction in which the thumb is pointing is just the direction of the line that joins the two points.
You can do this easily by sampling points on the white area and using Floyd-Warshall.
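A sketch of that furthest-shortest-path idea, using scipy's floyd_warshall on a coarse grid of samples (the stride and connectivity radius are guesses to keep the all-pairs computation small):

```python
import cv2
import numpy as np
from scipy.sparse.csgraph import floyd_warshall
from scipy.spatial.distance import cdist

mask = cv2.imread("hand_mask.png", cv2.IMREAD_GRAYSCALE)  # hypothetical
step = 8
ys, xs = np.nonzero(mask[::step, ::step])
pts = np.column_stack((xs, ys)) * step  # white samples in image coords

# Connect samples that are close enough that the segment between them
# (approximately) stays inside the white region; 0 means "no edge".
d = cdist(pts, pts)
graph = np.where(d <= 1.5 * step, d, 0)

dist = floyd_warshall(graph, directed=False)
dist[np.isinf(dist)] = -1                    # ignore unreachable pairs
i, j = np.unravel_index(np.argmax(dist), dist.shape)
direction = pts[j] - pts[i]                  # thumb axis estimate
```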