Let's say I have a 16x16 black & white bitmap image.
Here white pixels indicate empty space and black pixels indicate filled space.
I want to extract all of its contour lines that surround black pixels, including holes and nested contour lines (see the second image).
Let's define a coordinate space for pixels:
top-left pixel -> index (0,0)
top-right pixel -> index (15,0)
bottom-left pixel -> index (0,15)
bottom-right pixel -> index (15,15)
Contour lines also have their own coordinate space:
top-left corner of top-left pixel -> index (0,0)
top-right corner of top-right pixel -> index (16,0)
bottom-left corner of bottom-left pixel -> index (0,16)
bottom-right corner of bottom-right pixel -> index (16,16)
Finally, contour lines are defined as a sequence of points in that coordinate space.
On the second image I marked 3 contours to demonstrate what the desired output should look like.
Path1 (RED): 1(1,0) 2(2,0) 3(2, 3) 4(3,3) 5(0,3) ... 23(4,4) 24(1, 4)
Hole1 of Path1 (BLUE): 1(7,5) 2(7,6) 3(6,6) ... 13(11,6) 14(11,5)
Path2 (RED again): 1(8,6) 2(10,6) 3(10,8) 4(8,8)
...
Note that the order of points in a contour is important. The winding direction of holes is not that important, but we should somehow indicate the "hole" property of each contour.
I solved this problem using ClipperLib, but in my opinion it is more of a brute-force approach, if we ignore what happens inside ClipperLib.
Here's a brief description of the algorithm.
First, define a 16x16 subject polygon from which we will be subtracting all white pixels
Scan the image matrix row by row
On each row, extract every contiguous run of white pixels as a rectangular clipping polygon
Do the polygon clipping by subtracting all collected white rectangular polygons from the initial 16x16 subject polygon
Extract path data (including holes) from ClipperLib's PolyTree solution
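A minimal sketch of those steps, assuming the C++ Clipper 6 API (isWhite() is a placeholder for the actual pixel test):

#include "clipper.hpp"   // C++ Clipper 6
using namespace ClipperLib;

void extractContours(bool (*isWhite)(int, int), PolyTree& solution)
{
    Paths whiteRects;                        // one rectangle per horizontal white run
    for (int y = 0; y < 16; ++y) {
        for (int x = 0; x < 16; ) {
            if (!isWhite(x, y)) { ++x; continue; }
            int x0 = x;
            while (x < 16 && isWhite(x, y)) ++x;   // extend the run
            Path r;
            r.push_back(IntPoint(x0, y));
            r.push_back(IntPoint(x,  y));
            r.push_back(IntPoint(x,  y + 1));
            r.push_back(IntPoint(x0, y + 1));
            whiteRects.push_back(r);
        }
    }

    Path subject;                            // the full square in contour coordinates
    subject.push_back(IntPoint(0, 0));
    subject.push_back(IntPoint(16, 0));
    subject.push_back(IntPoint(16, 16));
    subject.push_back(IntPoint(0, 16));

    Clipper c;
    c.AddPath(subject, ptSubject, true);
    c.AddPaths(whiteRects, ptClip, true);
    c.Execute(ctDifference, solution, pftEvenOdd, pftEvenOdd);
    // Walk the PolyTree: PolyNode::IsHole() separates holes from outer contours.
}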
I'm wondering if there is a better way to solve this problem.
Using ClipperLib seems like overkill here: it addresses general polygons by means of complex intersection detection and topological reconstruction algorithms, whereas your problem is more "predictable".
You can proceed in two steps:
use a standard contouring algorithm, such as used by cv.findContours. (It is an implementation of "Satoshi Suzuki and others. Topological structural analysis of digitized binary images by border following. Computer Vision, Graphics, and Image Processing, 30(1):32–46, 1985.")
from the contours, which link pixel centers to pixel centers, derive the contours that follow the pixel edges. This can probably be achieved by studying the different configurations of sequences of three pixels along the outline.
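For the first step, a minimal sketch with OpenCV in C++, assuming a CV_8U binary image where filled pixels are non-zero (RETR_CCOMP builds a two-level hierarchy, so holes can be told apart by having a parent):

#include <opencv2/opencv.hpp>
#include <vector>

void findPixelCenterContours(const cv::Mat& binary)
{
    std::vector<std::vector<cv::Point>> contours;
    std::vector<cv::Vec4i> hierarchy;
    cv::findContours(binary, contours, hierarchy,
                     cv::RETR_CCOMP, cv::CHAIN_APPROX_NONE);

    for (size_t i = 0; i < contours.size(); ++i) {
        bool isHole = (hierarchy[i][3] >= 0);  // has a parent => hole contour
        // Step 2 (snapping these pixel-center points onto pixel-corner
        // coordinates) would go here; it is not shown.
        (void)isHole;
    }
}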
You can use boundary tracing algorithms for this. I personally use Moore-Neighbor tracing, because it's intuitive and straightforward to implement. You first find the boundary contours, and then come up with a hole-searching algorithm (you may need to combine it with parts of a scanline fill algorithm). Once you find a hole, you can apply the same boundary tracing algorithm, but in the opposite direction.
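A minimal sketch of Moore-Neighbor tracing for the outer boundary of the first blob found in raster order (my own outline, using the simple stop-at-start criterion; Jacob's stopping criterion is more robust):

#include <opencv2/opencv.hpp>
#include <vector>

// 'img' is CV_8U; non-zero pixels are filled.
std::vector<cv::Point> traceBoundary(const cv::Mat& img)
{
    // Moore neighborhood in clockwise order, starting east.
    const cv::Point nbr[8] = { {1,0}, {1,1}, {0,1}, {-1,1},
                               {-1,0}, {-1,-1}, {0,-1}, {1,-1} };
    auto filled = [&](cv::Point p) {
        return p.x >= 0 && p.y >= 0 && p.x < img.cols && p.y < img.rows
               && img.at<uchar>(p) != 0;
    };

    cv::Point start(-1, -1);
    for (int y = 0; y < img.rows && start.x < 0; ++y)   // raster scan
        for (int x = 0; x < img.cols; ++x)
            if (filled({x, y})) { start = {x, y}; break; }
    if (start.x < 0) return {};                          // empty image

    std::vector<cv::Point> boundary{start};
    cv::Point cur = start;
    int back = 4;  // raster order guarantees the west neighbor (index 4) is empty

    do {
        bool moved = false;
        for (int i = 1; i <= 8; ++i) {
            int d = (back + i) % 8;                      // clockwise from backtrack
            cv::Point next = cur + nbr[d];
            if (filled(next)) {
                if (next != start) boundary.push_back(next);
                back = (d + 4) % 8;   // direction from 'next' back to 'cur'
                cur = next;
                moved = true;
                break;
            }
        }
        if (!moved) break;                               // isolated single pixel
    } while (cur != start);

    return boundary;
}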
You can definitely use libraries like OpenCV to find contours, but in my experience it may produce degenerate output that is incompatible with other libraries, such as poly2tri, which is used to decompose polygons into triangles.
If we take your input sample image, the red path could be considered self-intersecting (vertices 7 and 23 are touching), which may lead to a failed polygon decomposition. You may need to figure out a way to find and treat those objects as separate, if that's a problem. However, the newest Clipper2 is going to have a triangulation unit that can handle such degenerate input, if you ever need to solve this problem down the road.
I have the following image as a test image:
I am trying to find the shapes on this image (and other images). My approach right now is the following:
Gaussian blur with a 3x3 kernel
Canny edge detection
Morphology with MorphOp.Close to close the edges
FindContours in "list" retrieval mode (to get all shapes)
Iteration over each contour:
Find ApproxPolyDP
Find ConvexHull
Discard if hull size < 2, approx area < 200, hull size > 50000, or arc length of the approx < 100
Draw the ConvexHull
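For reference, a rough OpenCV C++ equivalent of this pipeline (the question's identifiers suggest OpenCvSharp; the Canny thresholds and approxPolyDP epsilon here are placeholders, and the discard bounds are the ones listed above):

#include <opencv2/opencv.hpp>
#include <vector>

void findShapes(const cv::Mat& src)
{
    cv::Mat gray, edges;
    cv::cvtColor(src, gray, cv::COLOR_BGR2GRAY);
    cv::GaussianBlur(gray, gray, cv::Size(3, 3), 0);
    cv::Canny(gray, edges, 50, 150);                    // placeholder thresholds
    cv::morphologyEx(edges, edges, cv::MORPH_CLOSE,
                     cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3)));

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(edges, contours, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);

    cv::Mat out = src.clone();
    for (const auto& c : contours) {
        std::vector<cv::Point> approx, hull;
        cv::approxPolyDP(c, approx, 0.01 * cv::arcLength(c, true), true);
        cv::convexHull(c, hull);
        if (hull.size() < 2 || cv::contourArea(approx) < 200 ||
            hull.size() > 50000 || cv::arcLength(approx, true) < 100)
            continue;                                   // discard criteria from the list
        std::vector<std::vector<cv::Point>> draw{hull};
        cv::drawContours(out, draw, 0, cv::Scalar(0, 255, 0), 2);
    }
}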
This method yields the following images where the convex hulls are drawn:
This is almost perfect, but notice that the connecting lines are also seen as contours (e.g. events->suppliers and events->documents). When looking at the edge information, it becomes apparent why this is so:
The lines are detected as contours. How could I prepare the image or find the shapes so that the lines are not detected? I thought of some thinning algorithm, but since I also work on real-life images it is difficult to find a threshold that works. Here is an example of a real-life image where thinning is difficult, because thinning typically requires the image to be monochrome black and white.
How would you do it? Is there some method to determine whether a contour/convex hull is a line, a rectangle, or something else?
I ended up using a mix of an overlap test and a convexity scan. The convexity scan measures the error between the convex hull and the actual contour; if this error exceeds a certain amount, the hull is ignored. The overlap test simply uses bitwise AND to detect whether two convex hulls overlap; if they overlap by more than 95%, one of them is ignored.
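A sketch of the two tests (the 95% overlap figure is the one described above; the 0.3 convexity bound is only an example to tune):

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

bool convexityOk(const std::vector<cv::Point>& contour,
                 const std::vector<cv::Point>& hull)
{
    double hullArea = cv::contourArea(hull);
    if (hullArea <= 0) return false;
    // Relative error between the hull and the contour it encloses.
    double error = (hullArea - cv::contourArea(contour)) / hullArea;
    return error < 0.3;
}

bool overlaps(const std::vector<cv::Point>& a,
              const std::vector<cv::Point>& b, cv::Size size)
{
    // Rasterize both hulls and AND the masks to measure the intersection.
    cv::Mat ma = cv::Mat::zeros(size, CV_8U), mb = cv::Mat::zeros(size, CV_8U);
    cv::fillConvexPoly(ma, a, 255);
    cv::fillConvexPoly(mb, b, 255);
    int inter = cv::countNonZero(ma & mb);
    int smaller = std::min(cv::countNonZero(ma), cv::countNonZero(mb));
    return smaller > 0 && inter > 0.95 * smaller;   // >95% => treat as duplicate
}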
I am trying to detect the ROI of a fixed repetitive pattern in an image using OpenCV (C++).
The ROI I am trying to find is shown with a red boundary in the picture:
I tried Canny edge detection after blurring, but it detects the edges of the vertical/horizontal black and white lines. That is not what I am trying to detect.
What is the best approach to my problem?
Since you're starting with a binary image, you could use findContours() to get the contours for the individual strips. Since there are a couple of solitary pixels from noise, you should then filter for size using contourArea(contour) and merge the points of all contours meeting your size criteria into a combined contour. Then get the bounding box for the combined contour with boundingRect(combinedContour).
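Putting it together, a minimal sketch (minArea is an example noise threshold):

#include <opencv2/opencv.hpp>
#include <vector>

cv::Rect patternROI(const cv::Mat& binary, double minArea = 20.0)
{
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binary, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    std::vector<cv::Point> combined;
    for (const auto& c : contours)
        if (cv::contourArea(c) >= minArea)      // drop solitary noise pixels
            combined.insert(combined.end(), c.begin(), c.end());

    return combined.empty() ? cv::Rect() : cv::boundingRect(combined);
}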
I have to remove some lines from the sides of hundreds of grayscale images.
In this image, lines appear on three sides.
The lines are not consistent, though: they can appear above, below, left and/or right of the image, and they are of unequal length and width.
If you can assume that the borders are free of important information, you may crop the photo like this:
C++ code:
cv::Mat img;
// load your image into img
int padding = MAX_WIDTH_HEIGHT_OF_THE_LINES_AREA; // placeholder: widest/tallest expected line area
// subtract 2*padding so that both opposite borders are cropped
img = img(cv::Rect(padding, padding, img.cols - 2 * padding, img.rows - 2 * padding));
If not, you have to find a less dumb solution, like this one for example:
findContours
Delete contours that are far from the borders.
Draw the remaining contours on a blank image.
Apply the Hough line transform with suitable thresholds.
Delete contours that intersect with lines inside the image border.
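A sketch of the Hough step (all thresholds are placeholders to tune):

#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Vec4i> findBorderLines(const cv::Mat& contourImage)
{
    std::vector<cv::Vec4i> lines;
    // rho = 1 px, theta = 1 degree; a long minLineLength favours border
    // artifacts over handwriting strokes.
    cv::HoughLinesP(contourImage, lines, 1, CV_PI / 180,
                    80 /*votes*/, 100 /*minLineLength*/, 10 /*maxLineGap*/);
    return lines;
}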
Another solution, assuming the handwritten shape is connected:
findContours
Get the contour with the biggest area.
Draw it on a blank image, filled (pass -1 in the thickness argument).
bitwise_and between the original image and the one you made
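A sketch of this variant, assuming a binarized image bin where the handwriting is non-zero, alongside the original grayscale image img:

#include <opencv2/opencv.hpp>
#include <vector>

cv::Mat keepBiggestShape(const cv::Mat& img, const cv::Mat& bin)
{
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    int best = -1;
    double bestArea = 0;
    for (int i = 0; i < (int)contours.size(); ++i) {
        double a = cv::contourArea(contours[i]);
        if (a > bestArea) { bestArea = a; best = i; }
    }

    cv::Mat mask = cv::Mat::zeros(img.size(), CV_8U);
    if (best >= 0)
        cv::drawContours(mask, contours, best, 255, -1);  // -1 thickness = fill

    cv::Mat result = cv::Mat::zeros(img.size(), img.type());
    cv::bitwise_and(img, img, result, mask);              // keep only the shape
    return result;
}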
Another solution, assuming that the handwritten shape could be discontinuous:
findContours
Delete any contour whose points all lie very close to the border (using Euclidean distance with a threshold).
Draw all remaining contours on a blank image, filled (pass -1 in the thickness argument).
bitwise_and between the original image and the one you made
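A sketch of the border-distance check (margin is an example threshold):

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

bool isBorderContour(const std::vector<cv::Point>& c, cv::Size s, int margin = 15)
{
    // Treat the contour as a border artifact only if every one of its points
    // lies within 'margin' pixels of one of the four image borders.
    for (const cv::Point& p : c) {
        int d = std::min({p.x, s.width - 1 - p.x, p.y, s.height - 1 - p.y});
        if (d > margin) return false;
    }
    return true;
}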
P.S. I did not lean on the Hough line transform alone, since I do not know anything about the shapes; I assume that some of them may contain very straight lines.
I'm trying to create an app that applies an affine transform to only a part of an image (non-rectangular).
http://s29.postimg.org/k45fwbmsn/Untitled.png
Is there any way to transform only the selected (visible) part of the image?
I'm certain the overall transformation you described (applied only to part of an image) is not affine, so it isn't as easy as applying a matrix multiplication to some vectors.
But of course there are ways to define algorithms that detect black rectangles and apply an affine transformation to the coordinates of the detected rectangles. With the transformed coordinates you can draw a new quadrangle. Note: after an affine transformation it does not need to be a rectangle anymore.
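For illustration, a sketch that applies an example affine matrix to the four corners of a detected rectangle and draws the resulting quadrangle (the rotation matrix here is arbitrary; any 2x3 affine matrix works the same way):

#include <opencv2/opencv.hpp>
#include <vector>

void transformRectangle(cv::Mat& canvas, const std::vector<cv::Point2f>& corners)
{
    cv::Mat M = cv::getRotationMatrix2D(corners[0], 15.0, 1.0); // example transform
    std::vector<cv::Point2f> moved;
    cv::transform(corners, moved, M);          // multiplies each point by M

    std::vector<cv::Point> poly(moved.begin(), moved.end());
    std::vector<std::vector<cv::Point>> polys{poly};
    cv::polylines(canvas, polys, true, cv::Scalar(0, 0, 255), 2);
}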
Btw. you're contradicting yourself:
transform only for some part of image(non rectangular).
vs
to transform only black rectangle
I'd propose you clarify the following points about your input and expected output:
Which of the contradicting transformations do you want: Only rectangles or everything but rectangles?
Is it a binary black-and-white, a gray-level, or a colour image? This is a question of simple versus complex input, with quite some impact on the algorithm.
Is the image noiseless, i.e. is it true black, or all sorts of really dark colours? For true black you might be able to apply a simple heuristic to detect the rectangles. If it's a noisy image, you need to think about image filters/enhancements and colour space conversions.
Are the rectangles the only "black" areas in your image?
Are the rectangles parallel to the x and y axes? Again, this is simple heuristic versus pattern recognition.
Is the number of rectangles known? Are multiple rectangles related to each other (in size, proportions, parallelism)?
What should happen at the borders, or with image parts revealed by moving/shrinking the rectangles?
I'll edit the answer when you provide the required information in your question.