Removing the excess tails from contours - OpenCV

I am trying to find contours in grayscale images. My code is based on persistent homology, but that part is irrelevant here. The problem is that the contours I am picking up come with some tails.
So I need to post-process these contours by removing the tails. I came up with a method that flood-fills the outside of the contour and then removes the contour pixels that are not on the boundary of the original loop I am trying to capture.
#####################################
# Post-process the cycles(Get rid of the tails)
#####################################
def fill_mask(self, data, start_coords, fill_value):
    """
    Flood fill algorithm

    Parameters
    ----------
    data : (M, N) ndarray of uint8 type
        Image with flood to be filled. Modified inplace.
    start_coords : tuple
        Length-2 tuple of ints defining (row, col) start coordinates.
    fill_value : int
        Value the flooded area will take after the fill.

    Returns
    -------
    None, ``data`` is modified inplace.
    """
    xsize, ysize = data.shape
    orig_value = data[start_coords[0], start_coords[1]]
    stack = set(((start_coords[0], start_coords[1]),))
    if fill_value == orig_value:
        raise ValueError("Filling region with same value "
                         "already present is unsupported. "
                         "Did you already fill this region?")
    while stack:
        x, y = stack.pop()
        if data[x, y] == orig_value:
            data[x, y] = fill_value
            if x > 0:
                stack.add((x - 1, y))
            if x < (xsize - 1):
                stack.add((x + 1, y))
            if y > 0:
                stack.add((x, y - 1))
            if y < (ysize - 1):
                stack.add((x, y + 1))
def remove_non_boundary(self, good_cycles):
    # Helper function to remove tails from the contours.
    # If plot=True, it allows seeing individual cycles as a matrix.
    # We use fill_mask to flood-fill everywhere on the mask except the hole bounded by the loop.
    # We start flood-filling from (0,0), so we use an image 2 pixels bigger along left-right and
    # up-down, in case a cycle's coordinates go through (0,0).
    # Input: cycles with tails to be removed.
    # Returns: coordinates of the clean cycles and the corresponding matrix representation,
    # 1 pixel bigger than the original image in all four directions.
    good_cycles_cleaned = []
    masks = []
    for k in range(len(good_cycles)):
        mask = self.overlay(good_cycles[[k]])
        self.fill_mask(mask[:, :, 0], (0, 0), 0.5)
        for i in self.cycle2pixel(good_cycles[k]):
            if mask[i[0]+2, i[1]+1, 0] == 0: pass  # break
            elif mask[i[0]+1, i[1]+2, 0] == 0: pass  # break
            elif mask[i[0], i[1]+1, 0] == 0: pass  # break
            elif mask[i[0]+1, i[1], 0] == 0: pass  # break
            else: mask[i[0]+1, i[1]+1, 0] = 0.5
        if (mask[:, :, 0] == 0.5).all():
            good_cycles_cleaned.append(good_cycles[k])
            mask = self.overlay(good_cycles[[k]])
            masks.append(mask)
        else:
            self.fill_mask(mask[:, :, 0], (0, 0), 0)
            cycle = np.transpose(np.nonzero(mask[:, :, 0]))
            good_cycles_cleaned.append(cycle)
            masks.append(mask)
    pixels = np.vstack([cycle for cycle in good_cycles_cleaned])
    mask_good_clean = np.zeros((self.image.shape[0]+2, self.image.shape[1]+2, 4))
    mask_good_clean[pixels[:, 0]+1, pixels[:, 1]+1, 0] = 1
    mask_good_clean[pixels[:, 0]+1, pixels[:, 1]+1, 3] = 1
    return good_cycles_cleaned, mask_good_clean, masks
However, this method takes ages and I need a quicker one. I've tried almost everything in OpenCV, but nothing gives me exactly what I want: cv2.approxPolyDP misdraws the contours, and cv2.convexHull traces the tails and gives me a bigger contour than I need. This should be an easy task, so what am I missing?
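For what it's worth, the slow part (the pure-Python flood fill) can be replaced by cv2.floodFill, which runs in native code and is typically orders of magnitude faster. Below is only a rough sketch of the same idea (flood-fill the outside, then keep the cycle pixels that border the enclosed hole); it assumes the cycle is drawn as 255 on a single-channel uint8 mask, which is not exactly the data layout used above:

import cv2
import numpy as np

def remove_tails_fast(cycle_mask):
    # cycle_mask: uint8 image with 255 on the cycle (tails included), 0 elsewhere (assumption).
    h, w = cycle_mask.shape
    # Pad by 1 pixel so the whole outside is connected to (0, 0).
    padded = cv2.copyMakeBorder(cycle_mask, 1, 1, 1, 1, cv2.BORDER_CONSTANT, value=0)
    # floodFill requires a helper mask 2 pixels larger than the image it fills.
    ff_mask = np.zeros((h + 4, w + 4), np.uint8)
    outside = padded.copy()
    cv2.floodFill(outside, ff_mask, (0, 0), 255)
    # Pixels the outside fill never reached form the hole enclosed by the loop.
    hole = cv2.bitwise_not(outside)
    # Cycle pixels adjacent to the hole are the true loop boundary; tail pixels are not adjacent to it.
    ring = cv2.dilate(hole, np.ones((3, 3), np.uint8)) - hole
    cleaned = cv2.bitwise_and(ring, padded)
    return cleaned[1:-1, 1:-1]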


Simulating a simple optical flow

I am currently trying to simulate an optical flow using the following (continuity) equation: ∂I/∂t = −(v·∇I + I ∇·v), i.e. I(x, y, t+Δt) = I(x, y, t) − (v·∇I + I ∇·v).
Below is a basic example where I have a 7x7 image where the central pixel is illuminated. The velocity I am applying is a uniform x-velocity of 2.
using Interpolations
using PrettyTables
# Generate grid
nx = 7 # Image will be 7x7 pixels
x = zeros(nx, nx)
yy = repeat(1:nx, 1, nx) # grid of y-values
xx = yy' # grid of x-values
# In this example x is the image I in the above equation
x[(nx-1)÷2 + 1, (nx-1)÷2 + 1] = 1.0 # set central pixel equal to 1
# Initialize velocity
velocity = 2;
vx = velocity .* ones(nx, nx); # vx=2
vy = 0.0 .* ones(nx, nx); # vy=0
for t in 1:1
    # create 2d grid interpolator of the image
    itp = interpolate((collect(1:nx), collect(1:nx)), x, Gridded(Linear()));
    # create 2d grid interpolator of vx and vy
    itpvx = interpolate((collect(1:nx), collect(1:nx)), vx, Gridded(Linear()));
    itpvy = interpolate((collect(1:nx), collect(1:nx)), vy, Gridded(Linear()));
    ∇I_x = Array{Float64}(undef, nx, nx);   # Initialize array for ∇I_x
    ∇I_y = Array{Float64}(undef, nx, nx);   # Initialize array for ∇I_y
    ∇vx_x = Array{Float64}(undef, nx, nx);  # Initialize array for ∇vx_x
    ∇vy_y = Array{Float64}(undef, nx, nx);  # Initialize array for ∇vy_y
    for i = 1:nx
        for j = 1:nx
            # gradient of image in x and y directions
            Gx = Interpolations.gradient(itp, i, j);
            ∇I_x[i, j] = Gx[2];
            ∇I_y[i, j] = Gx[1];
            Gvx = Interpolations.gradient(itpvx, i, j)  # gradient of vx in both directions
            Gvy = Interpolations.gradient(itpvy, i, j)  # gradient of vy in both directions
            ∇vx_x[i, j] = Gvx[2];
            ∇vy_y[i, j] = Gvy[1];
        end
    end
    v∇I = (vx .* ∇I_x) .+ (vy .* ∇I_y)  # v dot ∇I
    I∇v = x .* (∇vx_x .+ ∇vy_y)         # I dot ∇v
    x = x .- (v∇I .+ I∇v)               # I(x, y, t+dt)
    pretty_table(x)
end
What I expect is that the illuminated pixel in x will shift two pixels to the right in x_predicted. What I am seeing is the following:
where the original illuminated pixel's value is moved to the neighboring pixel twice rather than being shifted two pixels to the right. I.e. the neighboring pixel goes from being 0 to 2 and the original pixel goes from a value of 1 to -1. I'm not sure if I'm messing up the equation or if I'm thinking of velocity in the wrong way here. Any ideas?
Without looking into it too deeply, I think there are a couple of potential issues here:
Violation of the Courant Condition
The code you originally posted (I've edited it now) simulates a single timestep. I would not expect a cell 2 units away from your source cell to be activated in a single timestep; doing so would violate the Courant condition. From Wikipedia:
The principle behind the condition is that, for example, if a wave is moving across a discrete spatial grid and we want to compute its amplitude at discrete time steps of equal duration, then this duration must be less than the time for the wave to travel to adjacent grid points.
The Courant condition requires that uΔt/Δx <= 1 (for an explicit time-marching solver such as the one you've implemented). Plugging in u=2, Δt=1, Δx=1 gives 2, which is greater than 1, so you have a mathematical problem. The general way of fixing this is to make Δt smaller: with u=2 and Δx=1 you need Δt <= 0.5, i.e. at least four sub-steps for the pixel to advect two cells. You probably want something like:
x = x .- Δt*(v∇I .+ I∇v) # I(x, y, t+dt)
Missing gradients?
I'm a little concerned about what's going on here:
Gvx = Interpolations.gradient(itpvx, i, j) # gradient of vx in both directions
Gvy = Interpolations.gradient(itpvy, i, j) # gradient of vy in both directions
∇vx_x[i, j] = Gvx[2];
∇vy_y[i, j] = Gvy[1];
You're able to pull two gradients out of both Gvx and Gvy, but you're only using one from each of them. Does that mean you're throwing information away?
https://scicomp.stackexchange.com/ is likely to provide better help with this.

Image Processing: Mapping a scanned image on a template image with many identical features

Problem description
We are trying to match a scanned image onto a template image:
Example of a scanned image:
Example of a template image:
The template image contains a collection of hearts varying in size and contour properties (closed, open left and open right). Each heart in the template is a Region of Interest for which we know the location, size, and contour type. Our goal is to map the scanned image onto the template so that we can extract these ROIs from the scanned image. In the scanned image, some of these hearts are crossed out, and they will be presented to a classifier that decides whether they are crossed or not.
Our approach
Following a tutorial on PyImageSearch, we have attempted to use ORB to find matching keypoints (code included below). This should allow us to compute a perspective transform matrix that maps the scanned image on the template image.
We have tried some preprocessing steps such as thresholding and/or blurring the scanned image. We have also tried to increase the maximum number of features as much as possible.
The problem
The method fails to work for our image set. This can be seen in the following image:
It appears that a lot of keypoints are mapped to the wrong part of the template image, so the transform matrix is not calculated correctly.
Is ORB the right technique to use here, or are there parameters of the algorithm that could be fine-tuned to improve performance? It feels like we are missing something simple that should make it work, but we really don't know how to move forward with this approach :).
We are also trying out an alternative technique where we cross-correlate the scan with individual heart shapes. This should give an image with peaks at the heart locations. By drawing a bounding box around these peaks we hope to map that bounding box onto the bounding box of the template (I can elaborate on this upon request).
Any suggestions are greatly appreciated!
import cv2 as cv
import matplotlib.pyplot as plt
import numpy as np
# Preprocessing parameters
THRESHOLD = True
BLUR = False
# ORB parameters
MAX_FEATURES = 4048
KEEP_PERCENT = .01
SHOW_DEBUG = True
# Convert both the input image and template to grayscale
scan_file = r'scan.jpg'
template_file = r'template.jpg'
scan = cv.imread(scan_file)
template = cv.imread(template_file)
scan_gray = cv.cvtColor(scan, cv.COLOR_BGR2GRAY)
template_gray = cv.cvtColor(template, cv.COLOR_BGR2GRAY)
if THRESHOLD:
    _, scan_gray = cv.threshold(scan_gray, 127, 255, cv.THRESH_BINARY)
    _, template_gray = cv.threshold(template_gray, 127, 255, cv.THRESH_BINARY)
if BLUR:
    scan_gray = cv.blur(scan_gray, (5, 5))
    template_gray = cv.blur(template_gray, (5, 5))
# Use ORB to detect keypoints and extract (binary) local invariant features
orb = cv.ORB_create(MAX_FEATURES)
(kps_template, desc_template) = orb.detectAndCompute(template_gray, None)
(kps_scan, desc_scan) = orb.detectAndCompute(scan_gray, None)
# Match the features
#method = cv.DESCRIPTOR_MATCHER_BRUTEFORCE_HAMMING
#matcher = cv.DescriptorMatcher_create(method)
#matches = matcher.match(desc_scan, desc_template)
bf = cv.BFMatcher(cv.NORM_HAMMING)
matches = bf.match(desc_scan, desc_template)
# Sort the matches by their distances
matches = sorted(matches, key = lambda x : x.distance)
# Keep only the top matches
keep = int(len(matches) * KEEP_PERCENT)
matches = matches[:keep]
if SHOW_DEBUG:
    matched_visualization = cv.drawMatches(scan, kps_scan, template, kps_template, matches, None)
    plt.imshow(matched_visualization)
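For reference, the step that would normally follow the matching (estimating the homography with RANSAC and warping the scan onto the template) is not shown in the code above; a rough sketch of it, reusing the variables defined there, could look like this (the 5.0 reprojection threshold is an arbitrary choice):

# Sketch: compute the perspective transform from the kept matches and align the scan.
pts_scan = np.float32([kps_scan[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
pts_template = np.float32([kps_template[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, inliers = cv.findHomography(pts_scan, pts_template, cv.RANSAC, 5.0)
aligned_scan = cv.warpPerspective(scan, H, (template.shape[1], template.shape[0]))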
Based on the clarifications provided by @it_guy, I have attempted to find all the crossed hearts using just the scanned image. I would have to try the algorithm on more images to check whether this approach generalizes.
Binarize the scanned image.
gray_image = cv2.cvtColor(rgb_image, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray_image, 180, 255, cv2.THRESH_BINARY_INV)
Perform dilation to close small gaps in the outline of the hearts and in the curves representing crosses. Note: the structuring element np.ones((1, 2), np.uint8) can be changed by running the algorithm on multiple images and finding the most suitable structuring element.
closing_original = cv2.morphologyEx(original_binary, cv2.MORPH_DILATE, np.ones((1,2), np.uint8))
Find all the contours in the image. The contours include all hearts and the triangle at the bottom. We eliminate other contours, like dots, by placing constraints on the height and width of the contours. Further, we also use contour hierarchies to eliminate the inner contours of crossed hearts.
contours_original, hierarchy_original = cv2.findContours(closing_original, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
We iterate through each of the filtered contours.
Contour with normal heart -
Contour with crossed heart -
Let us observe the difference between these two types of hearts. If we look at the white-to-black and black-to-white pixel transitions (from top to bottom) inside the normal heart, we see that for the majority of the image columns the number of such transitions is 4 (top border - 2 transitions, bottom border - 2 transitions).
white-to-black pixel - (255, 255, 0, 0, 0)
black-to-white pixel - (0, 0, 255, 255, 255)
But in the case of the crossed heart, the number of transitions for the majority of the columns is 6, since the crossed curve / line adds two more transitions inside the heart (black-to-white first, then white-to-black). Hence, among all image columns that have at least 4 such transitions, if more than 40% of the columns have 6 transitions, then the given contour represents a crossed heart (a small illustration follows below). Result -
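To make the transition counting concrete, here is a tiny illustration (added for clarity, not part of the original answer); the column values are made up, with 255 for the inverted-binary outline pixels:

import numpy as np

# One column through a normal heart: background, top border, interior, bottom border, background.
normal_col = np.array([0, 0, 255, 255, 0, 0, 0, 255, 255, 0], dtype=np.int16)
# One column through a crossed heart: the cross stroke adds an extra white run inside.
crossed_col = np.array([0, 0, 255, 0, 0, 255, 0, 0, 255, 0], dtype=np.int16)

def transitions(col):
    # Count black-to-white and white-to-black transitions down the column.
    return int(np.abs(np.diff(col) / 255).sum())

print(transitions(normal_col))   # 4
print(transitions(crossed_col))  # 6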
Code -
import cv2
import numpy as np
def convert_to_binary(rgb_image):
    gray_image = cv2.cvtColor(rgb_image, cv2.COLOR_BGR2GRAY)
    ret, thresh = cv2.threshold(gray_image, 180, 255, cv2.THRESH_BINARY_INV)
    return gray_image, thresh
original = cv2.imread('original.jpg')
height, width = original.shape[:2]
original_gray, original_binary = convert_to_binary(original) # Get binary image
cv2.imwrite("binary.jpg", original_binary)
closing_original = cv2.morphologyEx(original_binary, cv2.MORPH_DILATE, np.ones((1,2), np.uint8)) # Close small gaps in the binary image
cv2.imwrite("closed.jpg", closing_original)
contours_original, hierarchy_original = cv2.findContours(closing_original, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE) # Get all the contours
bounding_rects_original = [cv2.boundingRect(c) for c in contours_original] # Get all contour bounding boxes
orig_boxes = list()
all_contour_image = original.copy()
for i, (x, y, w, h) in enumerate(bounding_rects_original):
    if h > height / 2 or w > width / 2:  # Eliminate extremely large contours
        continue
    if h < w / 2 or w < h / 2:  # Eliminate vertical / horizontal lines
        continue
    if w * h < 200:  # Eliminate small area contours
        continue
    if hierarchy_original[0][i][3] != -1:  # Eliminate contours created by heart crosses
        continue
    orig_boxes.append((x, y, w, h))
    cv2.rectangle(all_contour_image, (x, y), (x + w, y + h), (0, 255, 0), 3)
# cv2.imshow("warped", closing_original)
cv2.imwrite("all_contours.jpg", all_contour_image)
final_image = original.copy()
for x, y, w, h in orig_boxes:
    cropped_image = closing_original[y - 2:y + h + 2, x:x + w]  # Get the heart binary image
    col_pixel_diffs = np.abs(np.diff(cropped_image.T.astype(np.int16)) / 255)  # Obtain all consecutive pixel differences in all the columns
    column_sums = np.sum(col_pixel_diffs, axis=1)  # Get the sum of each column's transitions. This results in an array of size equal
    # to the number of columns, each element representing the number of black-white and white-black transitions.
    percent_crosses = np.sum(column_sums >= 6) / np.sum(column_sums >= 4)  # Percentage of columns with 6 transitions among columns with at least 4 transitions
    if percent_crosses > 0.4:  # Crossed heart criterion
        cv2.rectangle(final_image, (x, y), (x + w, y + h), (0, 255, 0), 3)
        cv2.imwrite("crossed_heart.jpg", cropped_image)
    else:
        cv2.imwrite("normal_heart.jpg", cropped_image)
cv2.imwrite("all_crossed_hearts.jpg", final_image)
This approach can be tested on more images to find its accuracy.

Finding tables' contours with OpenCV

I am trying to detect a table's borders with OpenCV. I chose the findContours function; here is a minimal demo.
def contour_proposal(rgb_matrix, weight_threshold, height_threshold):
    """
    :return: list of proposal regions, each region is a tuple (ltx, lty, rbx, rby)
    """
    gray = cv.cvtColor(rgb_matrix, cv.COLOR_RGB2GRAY)
    _, binary = cv.threshold(gray, 0, 255, cv.THRESH_BINARY + cv.THRESH_OTSU)
    # binary = cv.adaptiveThreshold(gray, 255, cv.ADAPTIVE_THRESH_MEAN_C, cv.THRESH_BINARY, 11, 2)
    # binary = cv.adaptiveThreshold(gray, 255, cv.ADAPTIVE_THRESH_GAUSSIAN_C, cv.THRESH_BINARY, 11, 2)
    new_image_matrix, contours, hierarchy = cv.findContours(binary, cv.RETR_LIST, cv.CHAIN_APPROX_SIMPLE)
    proposals = list(filter(
        lambda x: x[2] > weight_threshold and x[3] > height_threshold,
        map(cv.boundingRect, contours)
    ))
    res = []
    for p in proposals:
        x, y, w, h = p
        res.append((x, y, x + w, y + h))
    return res
Here are two images for test
Here are visualizations of their detection results; I mark the detected contours with blue rectangles.
The table's border is correctly detected by findContours in the second image but not in the first one. However, these two tables seem to have similar features: both of them have top and bottom borders and lack left and right borders. My question is: why can the first table not be detected while the second one can? I've checked the binary image of the first image, and its border information is not lost.
So, how can I retrieve the best detection results with OpenCV? Besides, the different kinds of thresholding methods seem to have no influence on performance in this specific task (table detection), so which one should I choose?

How to check if a point is inside a set of contours

Hello everyone. The above image is the sum of two images in which I did feature matching and drew all the matching points. I also found the contours of the PCB parts in the first image (left half of the image, 3 contours). The question is: how could I draw only the matching points that are inside those contours in the first image, instead of this blue mess? I'm using Python 2.7 and OpenCV 2.4.12.
I wrote a function to draw matches because in OpenCV 2.4.12 there isn't any built-in method for that. If I didn't include something, please tell me. Thank you in advance!
import numpy as np
import cv2
def drawMatches(img1, kp1, img2, kp2, matches):
    # Create a new output image that concatenates the two images
    # (a.k.a) a montage
    rows1 = img1.shape[0]
    cols1 = img1.shape[1]
    rows2 = img2.shape[0]
    cols2 = img2.shape[1]

    # Create the output image
    # The rows of the output are the largest between the two images
    # and the columns are simply the sum of the two together
    # The intent is to make this a colour image, so make this 3 channels
    out = np.zeros((max([rows1, rows2]), cols1 + cols2, 3), dtype='uint8')

    # Place the first image to the left
    out[:rows1, :cols1] = np.dstack([img1, img1, img1])

    # Place the next image to the right of it
    out[:rows2, cols1:] = np.dstack([img2, img2, img2])

    # For each pair of points we have between both images
    # draw circles, then connect a line between them
    for mat in matches:
        # Get the matching keypoints for each of the images
        img1_idx = mat.queryIdx
        img2_idx = mat.trainIdx

        # x - columns, y - rows
        (x1, y1) = kp1[img1_idx].pt
        (x2, y2) = kp2[img2_idx].pt

        # Draw a small circle at both co-ordinates
        # radius 4, colour blue, thickness = 1
        cv2.circle(out, (int(x1), int(y1)), 4, (255, 0, 0), 1)
        cv2.circle(out, (int(x2) + cols1, int(y2)), 4, (255, 0, 0), 1)

        # Draw a line in between the two points
        # thickness = 1, colour blue
        cv2.line(out, (int(x1), int(y1)), (int(x2) + cols1, int(y2)), (255, 0, 0), 1)

    # Show the image
    cv2.imshow('Matched Features', out)
    cv2.imwrite("shift_points.png", out)
    cv2.waitKey(0)
    cv2.destroyWindow('Matched Features')

    # Also return the image if you'd like a copy
    return out
img1 = cv2.imread('pic3.png', 0) # Original image - ensure grayscale
img2 = cv2.imread('pic1.png', 0) # Rotated image - ensure grayscale
sift = cv2.SIFT()
# find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1,None)
kp2, des2 = sift.detectAndCompute(img2,None)
# Create matcher
bf = cv2.BFMatcher()
# Perform KNN matching
matches = bf.knnMatch(des1, des2, k=2)
# Apply ratio test
good = []
for m, n in matches:
    if m.distance < 0.75 * n.distance:
        # Add first matched keypoint to list
        # if ratio test passes
        good.append(m)
# Show only the top 10 matches - also save a copy for use later
out = drawMatches(img1, kp1, img2, kp2, good)
Based on what you are asking I am assuming you mean you have some sort of closed contour outlining the areas you want to bound your data point pairs to.
This is fairly simple for polygonal contours, and more math is required for more complex curved lines, but the solution is the same.
You draw a line from the point in question out to infinity. Most people draw the line towards +x infinity, but any direction works. If there is an odd number of intersections with the contour, the point is inside the contour.
See this article:
http://www.geeksforgeeks.org/how-to-check-if-a-given-point-lies-inside-a-polygon/
For point pairs, only pairs where both points are inside the contour are fully inside the contour. For complex contour shapes with concave sections, if you also want to test that the straight path between the two points does not cross the contour, you perform a similar test with just the line segment between them; if there are any intersections, the direct path between the points crosses outside the contour.
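For the contours in this question specifically, OpenCV already provides this test as cv2.pointPolygonTest, so the filtering could look roughly like the following sketch; it assumes contours holds the three PCB contours found with cv2.findContours on the first image, and reuses kp1, good and drawMatches from the code above:

import cv2

def inside_any(pt, contours):
    # pointPolygonTest returns +1 inside, 0 on the edge, -1 outside (measureDist=False).
    return any(cv2.pointPolygonTest(c, pt, False) >= 0 for c in contours)

# Keep only matches whose keypoint in the first image falls inside one of the contours.
filtered = [m for m in good if inside_any(kp1[m.queryIdx].pt, contours)]
out = drawMatches(img1, kp1, img2, kp2, filtered)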
Edit:
Since your contours are rectangles, a simpler approach will suffice for determining if your points are inside the rectangle.
If your rectangles are axis aligned (they are straight and not rotated), then you can use your values for top,left and bottom,right to check.
Let point A = Top,Left, point B = Bottom,Right, and point C = your test point.
I am assuming an image based coordinate system where 0,0 is the left,top of the image, and width,height is the bottom right. (I'm writing in C#)
bool PointIsInside(Point A, Point B, Point C)
{
    if (A.X <= C.X && B.X >= C.X && A.Y <= C.Y && B.Y >= C.Y)
        return true;
    return false;
}
If your rectangle is NOT axis-aligned, then you can perform four half-space tests to determine whether your point is inside the rectangle.
Let Point A = Top,Left, Point B = Bottom,Right, double W = Width, double H = Height, double N = rotation angle, and Point C = test point.
For an axis-aligned rectangle, Top,Right can be calculated by taking the vector (1,0), multiplying it by Width, and adding that vector to Top,Left. For Bottom,Right we take the vector (0,1), multiply it by Height, and add it to Top,Right.
(1,0) is the equivalent of a Unit Vector (length of 1) at Angle 0. Similarly, (0,1) is a unit vector at angle 90 degrees. These vectors can also be considered the direction the line is pointing. This also means these same vectors can be used to go from Bottom,Left to Bottom,Right, and from Top,Left to Bottom,Left as well.
We need to use different unit vectors, at the angle provided. To do this we simply need to take the Cosine and Sine of the angle provided.
Let Vector X = direction from Top,Left to Top,Right, Vector Y = direction from Top,Right to Bottom,Right.
I am using angles in degrees for this example.
Vector X = new Vector();
Vector Y = new Vector();
// Math.Cos and Math.Sin expect radians, so convert the angle from degrees first.
X.X = Math.Cos(R * Math.PI / 180);
X.Y = Math.Sin(R * Math.PI / 180);
Y.X = Math.Cos((R + 90) * Math.PI / 180);
Y.Y = Math.Sin((R + 90) * Math.PI / 180);
Since we started with Top,Left, we can find Bottom,Right by scaling the two direction vectors by the width and height and adding them to Top,Left:
Point B = new Point();
B = A + W * X + H * Y;
We now want to do a half-space test using the dot product for our test point. The first two tests will use the test point and Top,Left; the other two will use the test point and Bottom,Right.
The half-space test is inherently based on directionality: is the point in front of, behind, or perpendicular to a given direction? We have the two directions we need, but they are directions based on the top-left point of the rectangle, not the full space of the image, so we need one vector from the top-left to the point in question, and another from the bottom-right, since those are the two points we test against.
This is simple to calculate, as it is just Destination - Origin.
Let Vector D = Top,Left to test point C, and Vector E = Bottom,Right to test point.
Vector D = C - A;
Vector E = C - B;
The dot product of two vectors is x1*x2 + y1*y2. If the result is positive, the two directions have an absolute angle of less than 90 degrees between them, i.e. they roughly point in the same direction. A result of zero means they are perpendicular; in our case it means the test point lies exactly on a side of the rectangle we are testing against. Less than zero means an absolute angle of greater than 90 degrees, i.e. they roughly point in opposite directions.
If a point is inside the rectangle, then the dot products from top-left will be >= 0, while the dot products from bottom-right will be <= 0. In essence, the test point is on the bottom-right side of top-left along both directions, but when taking the same directions from bottom-right, it lies behind, back toward top-left.
double DotProd(Vector V1, Vector V2)
{
    return V1.X * V2.X + V1.Y * V2.Y;
}
and so our test ends up as:
if( DotProd(X, D) >= 0 && DotProd(Y, D) >= 0 && DotProd(X, E) <= 0 && DotProd(Y, E) <= 0)
then the point is inside the rectangle. Do this for both points, if both are true then the line is inside the rectangle.
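Since the question uses Python, here is a rough Python translation of the same half-space test (a sketch with illustrative names; the width/height scaling of the direction vectors is included as discussed above):

import math

def point_in_rotated_rect(test, top_left, width, height, angle_deg):
    ang = math.radians(angle_deg)
    # Direction vectors of the rectangle's sides (the X and Y vectors above).
    ux, uy = math.cos(ang), math.sin(ang)                              # along the top edge
    vx, vy = math.cos(ang + math.pi / 2), math.sin(ang + math.pi / 2)  # along the left edge
    ax, ay = top_left
    bx, by = ax + width * ux + height * vx, ay + width * uy + height * vy  # bottom-right corner
    dx, dy = test[0] - ax, test[1] - ay  # vector from top-left to the test point
    ex, ey = test[0] - bx, test[1] - by  # vector from bottom-right to the test point
    dot = lambda x1, y1, x2, y2: x1 * x2 + y1 * y2
    return (dot(ux, uy, dx, dy) >= 0 and dot(vx, vy, dx, dy) >= 0 and
            dot(ux, uy, ex, ey) <= 0 and dot(vx, vy, ex, ey) <= 0)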

OpenCV: detect flawed rectangle

Currently I'm working on a project where I try to find the corners of a rectangle's surface in a photo using OpenCV (Python, Java or C++).
I've selected the desired surface by filtering on color, obtained a mask, and passed it to cv2.findContours:
cnts, _ = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cnt = sorted(cnts, key = cv2.contourArea, reverse = True)[0]
peri = cv2.arcLength(cnt, True)
approx = cv2.approxPolyDP(cnt, 0.02*peri, True)
if len(approx) == 4:
    cv2.drawContours(mask, [approx], -1, (255, 0, 0), 2)
This gives me an inaccurate result:
Using cv2.HoughLines I've managed to get 4 straight lines that accurately describe the surface. Their intersections are exactly what I need:
edged = cv2.Canny(mask, 10, 200)
hLines = cv2.HoughLines(edged, 2, np.pi/180, 200)
lines = []
for rho, theta in hLines[0]:
    a = np.cos(theta)
    b = np.sin(theta)
    x0 = a * rho
    y0 = b * rho
    x1 = int(x0 + 1000 * (-b))
    y1 = int(y0 + 1000 * (a))
    x2 = int(x0 - 1000 * (-b))
    y2 = int(y0 - 1000 * (a))
    cv2.line(mask, (x1, y1), (x2, y2), (255, 0, 0), 2)
    lines.append([[x1, y1], [x2, y2]])
The question is: is it possible to somehow tweak findContours?
Another solution would be to find the coordinates of the intersections. Any clues for this approach are welcome :)
Can anybody give me a hint on how to solve this problem?
Finding the intersections is not as trivial a problem as it seems, but before the intersection points can be found, the following issues should be considered:
The most important thing is to choose the right parameters for the HoughLines function, since it can return anywhere from 0 to an infinite number of lines (we need exactly 4).
Since we do not know in what order these lines are returned, each needs to be compared with the others.
Because of the perspective, parallel lines are no longer parallel, so each line will have a point of intersection with the others. A simple solution would be to filter out the coordinates located outside the photo. But it may happen that an undesirable intersection still lies within the photo.
The coordinates should be sorted. Depending on the task, this could be done in different ways.
cv2.HoughLines returns an array with the values of rho and theta for each line.
Each line satisfies x*cos(theta) + y*sin(theta) = rho, so the problem becomes solving a 2x2 system of equations for every pair of lines:
def intersections(edged):
    # Height and width of a photo with a contour obtained by Canny
    h, w = edged.shape
    hl = cv2.HoughLines(edged, 2, np.pi/180, 190)[0]
    # Number of lines. If n != 4, the parameters should be tuned
    n = hl.shape[0]
    # Matrix with the values of cos(theta) and sin(theta) for each line
    T = np.zeros((n, 2), dtype=np.float32)
    # Vector with values of rho
    R = np.zeros((n), dtype=np.float32)
    T[:, 0] = np.cos(hl[:, 1])
    T[:, 1] = np.sin(hl[:, 1])
    R = hl[:, 0]
    # Number of combinations of all lines
    c = n * (n - 1) // 2
    # Matrix with the obtained intersections (x, y)
    XY = np.zeros((c, 2))
    # Finding intersections between all pairs of lines
    idx = 0
    for i in range(n):
        for j in range(i + 1, n):
            XY[idx, :] = np.linalg.inv(T[[i, j], :]).dot(R[[i, j]])
            idx += 1
    # Filtering out the coordinates outside the photo
    XY = XY[(XY[:, 0] > 0) & (XY[:, 0] <= w) & (XY[:, 1] > 0) & (XY[:, 1] <= h)]
    # XY = order_points(XY)  # obtained points should be sorted
    return XY
Here is the result:
It is possible to:
select the longest contour,
break it into segments and group them by gradient,
fit lines to the largest four groups,
find intersection points.
But then, Hough transform does nearly the same thing. Is there any particular reason for not using it?
Intersection points of lines are very easy to calculate. A high-school coordinate geometry lesson can provide you with the algorithm.
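For completeness, here is a minimal sketch of that calculation for two lines in the (rho, theta) form returned by cv2.HoughLines (an illustration added here, not part of the original answer):

import numpy as np

def intersection(rho1, theta1, rho2, theta2):
    # Each Hough line satisfies x*cos(theta) + y*sin(theta) = rho,
    # so two lines form a 2x2 linear system in (x, y).
    A = np.array([[np.cos(theta1), np.sin(theta1)],
                  [np.cos(theta2), np.sin(theta2)]])
    b = np.array([rho1, rho2])
    return np.linalg.solve(A, b)  # raises LinAlgError for (nearly) parallel lines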
