Given a contour with an easily-identifiable edge, how would one straighten it and its contents, as pictured?
Detect the black edge and fit a spline curve to it.
From that spline you will be able to draw normals, and mark points regularly along it. This forms a (u, v) mesh that is easy to straighten.
To compute the destination image, fill it row by row; each row corresponds to a particular normal in the source. Sampling along the row then gives fractional (x, y) coordinates in the source, where bilinear interpolation of the neighboring pixels yields good-quality resampling.
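A minimal sketch of that resampling, assuming SciPy for the spline fit (the function names straighten/bilinear and the width/size parameters are purely illustrative):

    import numpy as np
    from scipy.interpolate import splprep, splev

    def straighten(src, edge_pts, width, n_u, n_v):
        # Fit a smoothing spline to the detected edge points ((N, 2) array of x, y)
        # and evaluate position and tangent at n_u regular parameter values.
        tck, _ = splprep([edge_pts[:, 0], edge_pts[:, 1]], s=len(edge_pts))
        u = np.linspace(0.0, 1.0, n_u)
        x, y = splev(u, tck)
        dx, dy = splev(u, tck, der=1)
        norm = np.hypot(dx, dy)
        nx, ny = -dy / norm, dx / norm           # unit normals of the spline
        t = np.linspace(0.0, width, n_v)         # sample distances along each normal
        out = np.zeros((n_u, n_v), dtype=np.float64)
        for i in range(n_u):                     # one destination row per normal
            out[i] = bilinear(src, x[i] + t * nx[i], y[i] + t * ny[i])
        return out

    def bilinear(img, xs, ys):
        # Bilinear sampling at fractional (x, y) positions, clamped to the image.
        xs = np.clip(xs, 0, img.shape[1] - 1.001)
        ys = np.clip(ys, 0, img.shape[0] - 1.001)
        x0, y0 = np.floor(xs).astype(int), np.floor(ys).astype(int)
        fx, fy = xs - x0, ys - y0
        return ((1 - fx) * (1 - fy) * img[y0, x0] + fx * (1 - fy) * img[y0, x0 + 1] +
                (1 - fx) * fy * img[y0 + 1, x0] + fx * fy * img[y0 + 1, x0 + 1])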
Let's say I have a 16x16 black & white bitmap image
Here white pixels indicate empty space and black pixels indicate filled space.
I want to extract all of its contour lines that surround black pixels, including holes and nested contour lines (see the second image).
Let's define a coordinate space for pixels
top-left pixel -> index (0,0)
top-right pixel -> index (15,0)
bottom-left pixel -> index (0,15)
bottom-right pixel -> index (15,15)
Contour lines also have their coordinate space
top-left corner of top-left pixel -> index (0,0)
top-right corner of top-right pixel -> index (16,0)
bottom-left corner of bottom-left pixel -> index (0,16)
bottom-right corner of bottom-right pixel -> index (16,16)
Finally, contour lines are defined as a sequence of points in that coordinate space.
On the second image I marked 3 contours to demonstrate what the desired output should look like.
Path1 (RED): 1(1,0) 2(2,0) 3(2, 3) 4(3,3) 5(0,3) ... 23(4,4) 24(1, 4)
Hole1 of Path1 (BLUE): 1(7,5) 2(7,6) 3(6,6) ... 13(11,6) 14(11,5)
Path2 (RED again): 1(8,6) 2(10,6) 3(10,8) 4(8,8)
...
Note that the order of points within a contour is important. The winding direction of holes is not that important, but we should somehow indicate the "hole" property of such a contour.
I solved this problem using ClipperLib, but it feels more like a brute-force approach to me, if we ignore what happens inside ClipperLib.
Here's a brief description of the algorithm.
First, define a 16x16 subject polygon from which we will be subtracting all white pixels
Scan the image matrix row by row
On each row, extract every contiguous run of white pixels as a rectangular clipping polygon (sketched after this list)
Do the polygon clipping by subtracting all collected white rectangular polygons from initial 16x16 subject polygon
Extract path data (including holes) from ClipperLib's PolyTree solution
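A minimal sketch of the row-scan step, assuming the bitmap is a 2D list of pixel values and WHITE is whatever value marks empty pixels (both are illustrative names):

    def white_runs(row, y, WHITE=1):
        # Collect each contiguous run of white pixels in one row as a closed
        # rectangular clip path in the corner coordinate space.
        rects, x = [], 0
        while x < len(row):
            if row[x] == WHITE:
                start = x
                while x < len(row) and row[x] == WHITE:
                    x += 1
                rects.append([(start, y), (x, y), (x, y + 1), (start, y + 1)])
            else:
                x += 1
        return rects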
I'm wondering if there is a better way to solve this problem?
Using ClipperLib seems overkill here, as it addresses general polygons by means of complex intersection detection and topological reconstruction algorithms, whereas your problem is more "predictable".
You can proceed in two steps:
use a standard contouring algorithm, such as used by cv.findContours. (It is an implementation of "Satoshi Suzuki and others. Topological structural analysis of digitized binary images by border following. Computer Vision, Graphics, and Image Processing, 30(1):32–46, 1985.")
from the contours, which link pixel centers to pixel centers, derive the contours that follow the pixel edges. This can probably be achieved by studying the different configurations of sequences of three pixels along the outline.
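For step 1, a minimal sketch with the OpenCV Python bindings (OpenCV ≥ 4 return signature; `img` is assumed to hold the 16x16 bitmap with black = filled). Step 2, shifting the pixel-centre contours onto the pixel edges, still has to be done on top of this:

    import cv2
    import numpy as np

    mask = (img == 0).astype(np.uint8) * 255      # black (filled) pixels -> foreground
    contours, hierarchy = cv2.findContours(mask, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)
    # RETR_CCOMP builds a two-level hierarchy: entries with no parent are outer
    # contours, entries with a parent are holes of that outer contour.
    for i, c in enumerate(contours):
        is_hole = hierarchy[0][i][3] != -1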
You can use boundary tracing algorithms for this. I personally use Moore-Neighbour tracing, because it's intuitive and straightforward to implement. You first find the boundary contours, then come up with a hole-searching algorithm (you may need to combine it with parts of a scanline-fill algorithm). Once you find a hole, you can apply the same boundary tracing algorithm, but in the opposite direction.
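A minimal sketch of Moore-Neighbour tracing on a binary NumPy array (nonzero = filled). It uses the simple return-to-start stopping rule rather than Jacob's criterion, and only traces the outer boundary of the first region it finds:

    import numpy as np

    # Clockwise Moore-neighbourhood offsets (row, col), starting at "west".
    OFFSETS = [(0, -1), (-1, -1), (-1, 0), (-1, 1),
               (0, 1), (1, 1), (1, 0), (1, -1)]

    def moore_trace(img):
        rows, cols = img.shape
        # Scan top-to-bottom, left-to-right for a start pixel, so the pixel to
        # its west is guaranteed to be empty.
        start = next(((r, c) for r in range(rows) for c in range(cols) if img[r, c]), None)
        if start is None:
            return []
        boundary = [start]
        current = start
        backtrack = (start[0], start[1] - 1)        # we "entered" from the west
        while True:
            i = OFFSETS.index((backtrack[0] - current[0], backtrack[1] - current[1]))
            found = None
            for k in range(1, 9):                   # walk the neighbourhood clockwise
                dr, dc = OFFSETS[(i + k) % 8]
                nr, nc = current[0] + dr, current[1] + dc
                if 0 <= nr < rows and 0 <= nc < cols and img[nr, nc]:
                    pr, pc = OFFSETS[(i + k - 1) % 8]
                    backtrack = (current[0] + pr, current[1] + pc)
                    found = (nr, nc)
                    break
            if found is None or found == start:     # isolated pixel, or back at start
                return boundary
            current = found
            boundary.append(current)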
You can definitely use libraries like OpenCV to find contours, but in my experience it may produce degenerate output incompatible with other libraries, such as poly2tri, which is used to decompose polygons into triangles.
If we take your input sample image, the red path could be considered self-intersecting (vertices 7 and 23 are touching), which may lead to a failed polygon decomposition. You may need to figure out a way to find and treat those objects as separate, if that's a problem. However, the newest Clipper2 is going to have a triangulation unit that can handle such degenerate input, should you ever need to solve this problem down the road.
I am trying to find corners of a square, potentially rotated shape, to determine the direction of its primary axes (horizontal and vertical) and be able to do a perspective transform (straighten it out).
From a prior processing stage I obtain the coordinates of a point (red dot in image) belonging to the shape. Next I do a flood-fill of the shape on a thresholded version of the image to determine its center (not shown) and area, by summing up X and Y of all filled pixels and dividing them by the area (number of pixels filled).
Given this information, what is an easy and reliable way to determine the corners of the shape (blue arrows)?
I was thinking about keeping track of P1, P2, P3, P4, where P1 is (minX, minY), P2 is (minX, maxY), P3 is (maxX, minY) and P4 is (maxX, maxY); so P1 is the point with the smallest X encountered and, among those, the one with the smallest Y. Then sort them to get a clockwise ordering. But I'm not sure whether this is correct in all cases and efficient.
PS: I can't use OpenCV.
Looking at your image, the directions of the two axes of the 2D pattern's coordinate system can be estimated from a histogram of gradient directions.
When you build such a histogram, 4 peaks show up clearly.
If the image was captured head-on (no perspective, which appears to be the case for your image), the angles between adjacent peaks are ideally all 90 degrees.
The directions of the two axes of the pattern coordinate system can then be estimated directly from those peaks.
After that, the 4 corners can simply be estimated from an axis-aligned bounding box (aligned with the estimated axes, of course).
If not (when the image has perspective), the 4 peaks indicate which edge lines run along the axes of the pattern coordinates.
So, for example, you can estimate each corner location as the intersection of the two lines fitted along its edges.
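A minimal NumPy-only sketch of that histogram (respecting the no-OpenCV constraint). Folding opposite gradient directions into the 0-180 degree range turns the four peaks into two, one per axis; the bin count and suppression window are illustrative choices:

    import numpy as np

    def estimate_axes(gray, mask):
        # Central-difference gradients and a magnitude-weighted direction histogram
        # over the flood-filled shape (mask is a boolean array of the same size).
        gy, gx = np.gradient(gray.astype(float))
        mag = np.hypot(gx, gy)
        ang = np.degrees(np.arctan2(gy, gx)) % 180
        sel = mask & (mag > 1e-3)
        hist, _ = np.histogram(ang[sel], bins=180, range=(0, 180), weights=mag[sel])
        first = int(np.argmax(hist))
        suppressed = hist.copy()
        for d in range(-20, 21):                 # ignore bins near the first peak
            suppressed[(first + d) % 180] = 0
        second = int(np.argmax(suppressed))
        return first, second                     # axis directions in degrees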
What I eventually ended up doing is the following:
Trace the edges of the contour using Moore-Neighbour tracing --> this gives me a sequence of points lying on the border of the rectangle.
During the trace, I observe changes in the rectangular distance between the first and last points of a sliding window. The idea is inspired by the paper "The outline corner filter" by C. A. Malcolm (https://spie.org/Publications/Proceedings/Paper/10.1117/12.939248?SSO=1).
This gives me accurate results with low computational overhead and little memory.
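A rough sketch of the sliding-window idea (the actual filter in the paper is more refined; the window size w and corner count k are illustrative). On a straight stretch the chord between the window's endpoints stays near its maximum length, and it shortens noticeably when the window straddles a corner:

    import numpy as np

    def find_corners(boundary, w=5, k=4):
        # boundary: list of (row, col) points from the boundary trace.
        pts = np.asarray(boundary, dtype=float)
        n = len(pts)
        # Chord length between the first and last point of a window centred at i.
        chord = np.array([np.linalg.norm(pts[(i + w) % n] - pts[(i - w) % n])
                          for i in range(n)])
        corners, taken = [], []
        for i in np.argsort(chord):              # shortest chords first
            if all(min(abs(i - j), n - abs(i - j)) > w for j in taken):
                taken.append(i)
                corners.append(tuple(int(v) for v in pts[i]))
            if len(corners) == k:
                break
        return corners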
I want to map a rectangular texture image onto a curved area. The curved area has an axis defined by a Bezier curve and a fixed width.
I can map the points on the axis to the texture proportionally (by percentage along the curve) and get a stripe of pixels to fill the region. But this way the left side of the region is "stretched", and I get unfilled gaps.
How can I map the texture to the curved area "smoothly"? Is there an algorithm for such a task?
To answer my own question:
My own naive solution is to fill the gaps (the triangular areas in the image) with pixel values interpolated between the adjacent points on the normal vectors.
Later I found a more mathematical solution to this problem in a paper:
http://www.stat.ucla.edu/~sczhu/papers/Conf_2011/portrait-painting-preprint.pdf
It maps the grid of the rectangular texture onto the spline-shaped area with a method called thin-plate spline (TPS) transformation:
we compute a thin-plate spline (TPS) transformation [Barrodale et al. 1993] between the pairs of source and target dot positions (e.g., between the corresponding backbone control points in Figs. 4a and 4b), and apply the transformation to the vertices of a quadrilateral mesh covering the source brush stroke to get the deformed mesh. Finally, we compute a texture mapping using the mesh, with a bilinear interpolation inside each quadrilateral.
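For reference, a tiny sketch of that TPS step using SciPy's RBFInterpolator with a thin-plate-spline kernel (the arrays of matching source/target control points and the quad mesh are assumed inputs):

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def tps_warp_mesh(src_pts, dst_pts, mesh_vertices):
        # src_pts / dst_pts: (N, 2) arrays of matching control points (e.g. points
        # on the straight backbone and on the Bezier backbone); mesh_vertices:
        # (M, 2) vertices of the quad mesh over the rectangular texture.
        tps = RBFInterpolator(src_pts, dst_pts, kernel='thin_plate_spline')
        return tps(mesh_vertices)                # deformed mesh vertices, (M, 2)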
I am thinking maybe the same transformation can be done with bezier curves.
Hope this is helpful.
I am currently working on cell contour detection with OpenCV. So far, I have been able to detect the cell contours and I want to find and draw the longest axis parallel to the y-axis of the contour.
What I did was create a bounding rectangle from the contour, which gives the center, height and width, and use this information to draw the axes. As it turns out, the major axis does not necessarily run through the center, so at times it pokes out beyond the cell contour.
My line of approach is to split the contour into two halves along the y-axis, acquire the perpendicular distance from each contour point to the y-axis, and then select the longest on each side, but I suppose this is computationally expensive.
Is there an easy way to find the longest axes of a contour (not a bounding rectangle), that are parallel to the x- or y- coordinate axis?
Here's an image - My cell contour is in thin black, major and minor axes are in red, and the blue "axes" are what I want to find.
I have written an algorithm to extract the points shown in the image. They form a convex shape and I know their order. How do I extract the corners (top 3 and bottom 3) from such points?
I'm using opencv.
If you already have the convex hull of the object, and that hull includes the corner points, then all you need to do is simplify the hull until it has only 6 points.
There are many ways to simplify polygons, for example you could just use this simple algorithm used in this answer: How to find corner coordinates of a rectangle in an image
do
    for each point P on the convex hull:
        measure its distance to the line AB
        between the point A before P and the point B after P
    remove the point with the smallest distance
repeat until 6 points left
If you do not know the exact number of points, then you could remove points until the minimum distance rises above a certain threshold
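A runnable Python version of that loop (plain (x, y) tuples; the helper name is illustrative):

    import math

    def point_line_dist(p, a, b):
        # Perpendicular distance from p to the infinite line through a and b.
        (px, py), (ax, ay), (bx, by) = p, a, b
        num = abs((bx - ax) * (ay - py) - (ax - px) * (by - ay))
        return num / math.hypot(bx - ax, by - ay)

    def simplify_hull(hull, target=6):
        pts = list(hull)
        while len(pts) > target:
            n = len(pts)
            d = [point_line_dist(pts[i], pts[i - 1], pts[(i + 1) % n]) for i in range(n)]
            del pts[d.index(min(d))]          # drop the point that matters least
        return pts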
You could also use Ramer-Douglas-Peucker to simplify the polygon; OpenCV already has that implemented in cv::approxPolyDP.
Just modify the OpenCV squares sample to use 6 points instead of 4.
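If the hull still has more than 6 points, a common trick is to grow the approxPolyDP tolerance until exactly 6 remain; a minimal sketch (the step size is arbitrary):

    import cv2

    def approx_to_n(hull, n=6):
        # hull: point array as returned by cv2.convexHull(contour).
        eps, approx = 0.0, hull
        while len(approx) > n:
            eps += 0.5
            approx = cv2.approxPolyDP(hull, eps, True)
        return approx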
Instead of trying to directly determine which of your feature points correspond to corners, how about applying a corner detection algorithm to the entire image and then looking for which of your feature points appear close to peaks in the corner detector's response?
I'd suggest starting with a Harris corner detector. The OpenCV implementation is cv::cornerHarris.
Essentially, the Harris algorithm applies both a horizontal and a vertical Sobel filter to the image (or some other approximation of the partial derivatives of the image in the x and y directions).
It then constructs a 2-by-2 structure matrix at each image pixel, looks at the eigenvalues of that matrix, and calls a point a corner if both eigenvalues are above some threshold (in practice, the Harris response det(M) - k*trace(M)^2 is used as a cheap proxy for this eigenvalue test).
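A minimal sketch of that filtering step, assuming `gray` is the grayscale image and `pts` is your existing (N, 2) array of feature point (x, y) coordinates; the Harris parameters and the threshold are illustrative:

    import cv2
    import numpy as np

    response = cv2.cornerHarris(np.float32(gray), 2, 3, 0.04)   # blockSize, ksize, k
    response = cv2.dilate(response, None)        # spread each peak over its neighbourhood
    threshold = 0.01 * response.max()
    corner_pts = [tuple(p) for p in pts if response[p[1], p[0]] > threshold]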