Extract contour path from an image - image-processing

Let's say I have a 16x16 black & white bitmap image, where white pixels indicate empty space and black pixels indicate filled space.
I want to extract all of its contour lines that surround black pixels, including holes and nested contour lines (see the second image).
Let's define a coordinate space for pixels:
top-left pixel -> index (0,0)
top-right pixel -> index (15,0)
bottom-left pixel -> index (0,15)
bottom-right pixel -> index (15,15)
Contour lines also have their own coordinate space:
top-left corner of top-left pixel -> index (0,0)
top-right corner of top-right pixel -> index (16,0)
bottom-left corner of bottom-left pixel -> index (0,16)
bottom-right corner of bottom-right pixel -> index (16,16)
Finally, contour lines are defined as a sequence of points in that coordinate space.
On the second image I marked 3 contours to demonstrate what the desired output should look like.
Path1 (RED): 1(1,0) 2(2,0) 3(2,3) 4(3,3) 5(0,3) ... 23(4,4) 24(1,4)
Hole1 of Path1 (BLUE): 1(7,5) 2(7,6) 3(6,6) ... 13(11,6) 14(11,5)
Path2 (RED again): 1(8,6) 2(10,6) 3(10,8) 4(8,8)
...
Note that the order of points in a contour is important. The winding direction of holes is not that important, but we should somehow indicate the "hole" property of that contour.
I solved this problem using ClipperLib, but it feels like a brute-force approach to me, if we ignore what happens inside ClipperLib.
Here's a brief description of the algorithm.
First, define a 16x16 subject polygon from which we will be subtracting all white pixels
Scan the image matrix row by row
On each row extract all contiguous white rectangle shapes as a clipping polygon
Do the polygon clipping by subtracting all collected white rectangular polygons from initial 16x16 subject polygon
Extract path data (including holes) from ClipperLib's PolyTree solution
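The row-scan step can be sketched in a few lines (pure Python; representing each white run as a 1-pixel-high rectangle in the corner coordinate space is my own choice, and ClipperLib itself is not involved here):

```python
def white_runs(row, y):
    """Contiguous white runs of one image row, each returned as an
    axis-aligned rectangle (x0, y, x1, y + 1) in corner coordinates.
    Here 0 means white (empty) and 1 means black (filled)."""
    rects, x, w = [], 0, len(row)
    while x < w:
        if row[x] == 0:
            x0 = x
            while x < w and row[x] == 0:
                x += 1
            rects.append((x0, y, x, y + 1))
        else:
            x += 1
    return rects
```

Each rectangle then becomes one clip polygon for the subtraction step.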
I'm wondering if there is a better way to solve this problem?

Using ClipperLib seems overkill here, as it addresses general polygons by means of complex intersection detection and topological reconstruction algorithms, whereas your problem is more "predictable".
You can proceed in two steps:
use a standard contouring algorithm, such as used by cv.findContours. (It is an implementation of "Satoshi Suzuki and others. Topological structural analysis of digitized binary images by border following. Computer Vision, Graphics, and Image Processing, 30(1):32–46, 1985.")
from the contours, which link pixel centers to pixel centers, derive the contours that follow the pixel edges. This can probably be achieved by studying the different configurations of sequences of three pixels along the outline.
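Alternatively, the edge-following contours can be built directly, without going through pixel-center contours first: collect the unit edges between filled and empty pixels with the filled region kept on a consistent side, then chain them into closed loops. A pure-Python sketch (function names are mine; the right-turn rule at touching corners, which keeps diagonally-touching regions separate, is an assumption):

```python
def edge_contours(grid):
    """Closed contours along pixel edges of a binary grid
    (list of rows, 1 = filled).  Pure-Python sketch, no OpenCV."""
    h, w = len(grid), len(grid[0])
    filled = lambda x, y: 0 <= x < w and 0 <= y < h and grid[y][x]
    # Collect directed unit edges keeping the filled region on the
    # right-hand side (image coordinates, y down): outer contours
    # then come out clockwise on screen, holes counter-clockwise.
    edges = {}  # start vertex -> list of end vertices
    for y in range(h):
        for x in range(w):
            if not grid[y][x]:
                continue
            if not filled(x, y - 1): edges.setdefault((x, y), []).append((x + 1, y))
            if not filled(x + 1, y): edges.setdefault((x + 1, y), []).append((x + 1, y + 1))
            if not filled(x, y + 1): edges.setdefault((x + 1, y + 1), []).append((x, y + 1))
            if not filled(x - 1, y): edges.setdefault((x, y + 1), []).append((x, y))
    contours = []
    while edges:
        start = next(iter(edges))
        path, point, prev_dir = [start], start, None
        while True:
            outs = edges[point]
            if len(outs) == 1 or prev_dir is None:
                nxt = outs[0]
            else:
                # At a touching corner prefer the right turn, so that
                # diagonally-touching regions stay separate contours.
                dx, dy = prev_dir
                want = (point[0] - dy, point[1] + dx)
                nxt = want if want in outs else outs[0]
            outs.remove(nxt)
            if not outs:
                del edges[point]
            prev_dir = (nxt[0] - point[0], nxt[1] - point[1])
            point = nxt
            if point == start:
                break
            path.append(point)
        contours.append(path)
    return contours

def compress(path):
    # Drop collinear midpoints so only corner vertices remain.
    out, n = [], len(path)
    for i in range(n):
        a, b, c = path[i - 1], path[i], path[(i + 1) % n]
        if (b[0] - a[0], b[1] - a[1]) != (c[0] - b[0], c[1] - b[1]):
            out.append(b)
    return out

def signed_area(path):
    # Shoelace formula over a closed path.
    return sum(a[0] * b[1] - b[0] * a[1]
               for a, b in zip(path, path[1:] + path[:1])) / 2.0
```

With this winding convention a positive signed area marks an outer contour and a negative one marks a hole, which provides the "hole" flag the question asks for.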

You can use boundary tracing algorithms for this. I personally use Moore-Neighbor tracing, because it's intuitive and straightforward to implement. You first find the boundary contours, and then come up with a hole-searching algorithm (you may need to combine it with parts of a scanline fill algorithm). Once you find a hole, you can apply the same boundary tracing algorithm, but in the opposite direction.
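A minimal sketch of Moore-Neighbor tracing itself (pure Python, single object; it uses the simple stop-at-start criterion rather than the more robust Jacob's stopping criterion, and that simplification can fail on some shapes):

```python
def moore_trace(grid):
    """Trace the boundary of the first object found by a row-major scan;
    returns the boundary pixel coordinates in clockwise screen order."""
    h, w = len(grid), len(grid[0])
    filled = lambda x, y: 0 <= x < w and 0 <= y < h and grid[y][x]
    # Moore neighborhood in clockwise order, starting at west (y down).
    nbrs = [(-1, 0), (-1, -1), (0, -1), (1, -1),
            (1, 0), (1, 1), (0, 1), (-1, 1)]
    start = next((x, y) for y in range(h) for x in range(w) if grid[y][x])
    contour, cur = [start], start
    back = (start[0] - 1, start[1])          # entered from the west
    while True:
        k = nbrs.index((back[0] - cur[0], back[1] - cur[1]))
        for i in range(1, 9):                # clockwise sweep from backtrack
            dx, dy = nbrs[(k + i) % 8]
            cand = (cur[0] + dx, cur[1] + dy)
            if filled(*cand):
                back = (cur[0] + nbrs[(k + i - 1) % 8][0],
                        cur[1] + nbrs[(k + i - 1) % 8][1])
                cur = cand
                break
        else:                                # isolated single pixel
            break
        if cur == start:                     # simple stopping criterion
            break
        contour.append(cur)
    return contour
```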
You can definitely use libraries like OpenCV to find contours, but in my experience they may produce degenerate output incompatible with other libraries, such as poly2tri, which is used to decompose polygons into triangles.
If we take your input sample image, the red path could be considered self-intersecting (vertices 7 and 23 are touching), which may lead to failed polygon decomposition. You may need to figure out a way to find and treat those objects as separate, if that's a problem. However, the newest Clipper2 is going to have a triangulation unit that can handle such degenerate input, if you ever need to solve this problem down the road.

Related

How to get the pixel coordinates of the edge position after the image is processed by the Canny operator

After a picture is processed by Canny, its edge features become more obvious. How can I find the position coordinates of the topmost, bottommost, and rightmost parts of the body? The grabcut algorithm requires a rectangle to frame the foreground part, and I need those coordinates to determine the rectangle.
I need to separate the images and extract the foreground, but the grabcut algorithm needs the coordinates of the rectangle to be entered manually. The rectangle's coordinates are of the form (x, y, w, h), where x and y are the coordinates of the upper left corner of the foreground, w is the width, and h is the height.
I tried to get the coordinates by clicking with the mouse, but this is too inefficient.
I painted it on the edge
The two points indicated by the arrows need to be ignored, because they are not part of the body (in most cases, only a small part of a Canny-processed picture does not belong to the subject, or all of it is the subject).
One approach that might work is:
Make a copy of the edge information.
Add lots and lots and lots of blur
Dilate X a large number of times e.g. 30
Threshold the image
Contract a large number of times e.g. 35
Despeckle/Denoise the image
Use the result as a template for the original.
Might work. Or maybe use the blob detector https://www.learnopencv.com/blob-detection-using-opencv-python-c/
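Whatever cleanup is used, the (x, y, w, h) rectangle that grabCut needs can then be read off the extreme nonzero coordinates of the mask. A minimal sketch, assuming the cleaned-up mask is available as a list of 0/1 rows:

```python
def bounding_rect(mask):
    """Bounding rectangle (x, y, w, h) of the nonzero pixels of a binary
    mask (list of 0/1 rows) - the format grabCut's rect argument uses."""
    pts = [(x, y) for y, row in enumerate(mask)
           for x, v in enumerate(row) if v]
    if not pts:
        return None  # nothing detected
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    x, y = min(xs), min(ys)
    return (x, y, max(xs) - x + 1, max(ys) - y + 1)
```

Note that the stray edge points the question mentions would still be included; they need to be removed first (e.g. by the despeckle step above) before taking the extremes.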

Straightening a curved contour

Given a contour with an easily-identifiable edge, how would one straighten it and its contents, as pictured?
Detect the black edge and fit a spline curve to it.
From that spline you will be able to draw normals, and mark points regularly along it. This forms a (u, v) mesh that is easy to straighten.
To compute the destination image, draw horizontal rows, which correspond to particular normals in the source. Then sampling along the horizontal corresponds to some fractional (x, y) coordinates in the source. You can perform bilinear interpolation around the neighboring pixels to achieve good quality resampling.
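The bilinear resampling step at the end can be sketched as follows (pure Python; the fractional source coordinates are assumed to have been computed already from the spline normals):

```python
def bilinear(img, x, y):
    """Sample an image at fractional (x, y) by bilinear interpolation
    of the four neighboring pixels; img is a list of rows of floats."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(img[0]) - 1)   # clamp at the right/bottom border
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bot * fy
```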

Opencv divide a contour in two sections

I have a contour in OpenCV with a convexity defect (the one in red) and I want to cut that contour into two parts along a horizontal line through that point. Is there any way to do it, so that I just get the contour marked in yellow?
Image describing the problem
That's an interesting question. There are some solutions depending on how the concavity points are distributed in your image.
1) If such points do not occur at the bottom of the contour (like in your simple example), here is a pseudo-code:
Find convex hull C of the image I.
Subtract I from C, that will give you the concavity areas (like the black triangle between two white triangles in your example).
The point with the minimum y value in that area gives you the horizontal line to cut.
2) If such points can occur anywhere, you need a more intelligent algorithm whose cut lines are not constrained to be horizontal (because the min-y point of that difference will be the min-y of the image). You can find the "inner-most" corner points and connect them to each other. You can recursively cut the remainder in the y-, x+, y+, x- directions. It really depends on the specs of your input.
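Case 1) above can be sketched on a rasterized shape without OpenCV (the monotone-chain hull, the lattice-cell representation of the shape, and the y-down image orientation are my assumptions; with y pointing up, swap max for min, matching the pseudo-code's "minimum y"):

```python
def convex_hull(points):
    # Andrew's monotone chain convex hull ("C" in the pseudo-code).
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def inside_hull(p, hull):
    # A point is inside a convex polygon iff it lies on the same side
    # of every edge (boundary counts as inside).
    sign = 0
    for i in range(len(hull)):
        a, b = hull[i], hull[(i + 1) % len(hull)]
        c = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
        if c != 0:
            if sign == 0:
                sign = 1 if c > 0 else -1
            elif (c > 0) != (sign > 0):
                return False
    return True

def concavity_cells(filled):
    # "Subtract I from C": lattice cells inside the hull but not filled.
    hull = convex_hull(list(filled))
    xs = [p[0] for p in hull]
    ys = [p[1] for p in hull]
    return [(x, y)
            for y in range(min(ys), max(ys) + 1)
            for x in range(min(xs), max(xs) + 1)
            if (x, y) not in filled and inside_hull((x, y), hull)]

def cut_row(filled):
    # Tip of the notch: with y increasing downward, a notch opening from
    # the top is deepest at the maximum y of the concavity area.
    return max(y for _, y in concavity_cells(filled))
```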

Detecting incomplete rectangles (missing corners / short edges) in OpenCV

I've been working off a variant of the OpenCV squares sample to detect rectangles. It works fine for closed rectangles, but I was wondering what approaches I could take to detect rectangles that have openings, i.e. missing corners or lines that are too short.
I perform some dilation, which closes small gaps but not these larger ones.
I considered using a convex hull or bounding rect to generate a contour for comparison but since the edges of the rectangle are disconnected, each would read as a separate contour.
I think the first step is to detect which lines are candidates for forming a complete rectangle, and then perform some sort of line extrapolation. This seems promising, but my rectangle edges won't lie perfectly horizontally or vertically.
I'm trying to detect the three leftmost rectangles in this image:
Perhaps this paper is of interest? Rectangle Detection based on a Windowed Hough Transform
Basically, take the Hough line transform of the image. You will get maximums at the locations in (theta, rho) space that correspond to lines in the image. The larger the value, the longer/straighter the line. Maybe apply a threshold to keep only the best lines. Then look for pairs of lines which are:
1) parallel: the maximums occur at similar theta values
2) similar length: the values of the maximums are similar
3) orthogonal to another pair of lines: theta values are 90 degrees away from other pairs' theta values
There are some more details in the paper, such as doing the transform in a sliding window, and then using an error metric to consolidate multiple matches.
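The pairing logic can be sketched as follows, assuming Hough peaks have already been extracted as (theta, rho, strength) triples; the tolerance values here are placeholders, not the paper's error metric:

```python
import math

def find_rectangles(peaks, theta_tol=0.1, strength_tol=0.2):
    """Group Hough peaks (theta, rho, strength) into rectangle candidates:
    two parallel pairs of similar strength, roughly 90 degrees apart."""
    def parallel_pairs():
        for i in range(len(peaks)):
            for j in range(i + 1, len(peaks)):
                t1, _, s1 = peaks[i]
                t2, _, s2 = peaks[j]
                # 1) parallel: similar theta; 2) similar length: similar strength
                if (abs(t1 - t2) < theta_tol
                        and abs(s1 - s2) <= strength_tol * max(s1, s2)):
                    yield (peaks[i], peaks[j])
    pairs = list(parallel_pairs())
    rects = []
    for a in range(len(pairs)):
        for b in range(a + 1, len(pairs)):
            ta, tb = pairs[a][0][0], pairs[b][0][0]
            # 3) orthogonal: theta values about 90 degrees apart
            if abs(abs(ta - tb) - math.pi / 2) < theta_tol:
                rects.append((pairs[a], pairs[b]))
    return rects
```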

Finding Area that CGPath Intersects CGRect

I'm working on an iOS app where I need to be able to see how much of a CGPath is within the screen bounds, to ensure that there is enough for the user to still touch. The problem is that all the methods I would normally use (and everything I can think to try) fail when the path is in the corner.
Here's a pic:
How can I calculate how much of that shape is on screen?
Obvious answers are to do it empirically by pixel painting or to do it analytically by polygon clipping.
So to proceed empirically you'd create a CGBitmapContext the size of your viewport, clear it to a known colour such as (0, 0, 0), paint on your polygon in another known colour, like say (1, 1, 1) then just run through all the pixels in the bitmap context and add up the total number you can find. That's probably quite expensive, but you can use lower resolution contexts to get more approximate results if that helps.
To proceed analytically you'd run a polygon clipping algorithm such as those described here to derive a new polygon from the original, which is just that portion of it that is actually on screen. Then you'd get the area of that using any of the normal formulas.
It's actually a lot easier to clip a convex polygon than a concave one, so if your polygons are of a fixed shape then you might consider using a triangulation algorithm, such as ear clipping or decomposition to monotone edges and then performing the clipping and area calculation on those rather than the original.
You could get an approximation using this approach:
Let reg be the intersection of the screen with the bounding box of the shape
Choose N random points in reg and check if they are contained in the shape
The area of the shape can be estimated by (area of reg)*(number of points contained in shape)/N
You can choose the parameter N based on the size of reg to get a good and fast approximation
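The sampling scheme above can be sketched as follows (pure Python; the quarter-disc example, the parameter values, and the function names are mine):

```python
import random

def estimate_area(contains, reg, n=20000, seed=1):
    """Monte Carlo area estimate.  reg = (x0, y0, x1, y1) is the
    intersection of the screen with the shape's bounding box, and
    contains(x, y) tests membership in the shape."""
    x0, y0, x1, y1 = reg
    rng = random.Random(seed)  # fixed seed for reproducibility
    hits = sum(contains(rng.uniform(x0, x1), rng.uniform(y0, y1))
               for _ in range(n))
    return (x1 - x0) * (y1 - y0) * hits / n

# Example: a unit disc centered at the origin, clipped to the screen
# quadrant [0, 1] x [0, 1]; the true visible area is pi/4, about 0.785.
in_disc = lambda x, y: x * x + y * y <= 1.0
```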
