I have a processed image with text in it, and I want to find the coordinates of lines that would touch the edges of the text field but not cross it, and that would stretch along the whole side of the text. The image below shows what I need (the red lines I drew show an example of the coordinates I want to find on a raw image):
It is not so straightforward: I can't just take the corners of the processed text field (upper left, upper right, and so on), because the first line may be, e.g., the start of a paragraph (this is just one example of a possible scenario):
The sides of the text form straight lines; it is only the top and bottom edges that may be curved, so that should make things easier.
What is the best way to do this?
Every method I can think of is either impractical, inefficient, or likely to give false results.
The raw image, in case someone needs it for processing:
The idea is to find the convex hull of all of the text. Once we have the convex hull, we examine its sides: if a side has a large change in its y coordinate and a small change in its x coordinate (i.e. a steep slope), we consider it a side line.
The resulting image:
The code:
import cv2
import numpy as np

def getConvexCoord(convexH, ind):
    yLines = []
    xLine = []
    for index in range(len(ind[0])):
        convexIndex = ind[0][index]
        # Get the two endpoints of the hull side
        if convexIndex == len(convexH) - 1:
            p0 = convexH[0]
            p1 = convexH[convexIndex]
        else:
            p0 = convexH[convexIndex]
            p1 = convexH[convexIndex + 1]
        # Add y coordinate
        yLines.append(p0[0, 1])
        yLines.append(p1[0, 1])
        xLine.append(p0[0, 0])
        xLine.append(p1[0, 0])
    return yLines, xLine

def filterLine(line):
    sortX = sorted(line)
    # Find the median
    xMedian = np.median(sortX)
    while (sortX[-1] - sortX[0]) > I.shape[0]:
        # Find out which is farther from the median and discard
        lastValueDistance = np.abs(xMedian - sortX[-1])
        firstValueDistance = np.abs(xMedian - sortX[0])
        if lastValueDistance > firstValueDistance:
            # Discard last
            del sortX[-1]
        else:
            # Discard first
            del sortX[0]
    # Now return minX and maxX
    return max(sortX), min(sortX)
# Read image
Irgb = cv2.imread('text.jpg')
I = Irgb[:, :, 0]

# Threshold
ret, Ithresh = cv2.threshold(I, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Find the convex hull of the text
textPixels = np.nonzero(Ithresh)
textPixels = list(zip(textPixels[1], textPixels[0]))  # list() needed on Python 3
convexH = cv2.convexHull(np.asarray(textPixels))

# Find the side edges in the convex hull
m = []
for index in range(len(convexH) - 1):
    # Calculate the slope of the line
    point0 = convexH[index]
    point1 = convexH[index + 1]
    if (point1[0, 0] - point0[0, 0]) == 0:
        m.append(90)
    else:
        m.append(float(point1[0, 1] - point0[0, 1]) / float(point1[0, 0] - point0[0, 0]))
# Final line (closing the hull)
point0 = convexH[index + 1]
point1 = convexH[0]
if (point1[0, 0] - point0[0, 0]) == 0:
    m.append(90)
else:
    m.append(np.abs(float(point1[0, 1] - point0[0, 1]) / float(point1[0, 0] - point0[0, 0])))

# Take all the lines with a big slope m
ind1 = np.where(np.asarray(m) > 1)
ind2 = np.where(np.asarray(m) < -1)

# For both lines find min Y and max Y
yLines1, xLine1 = getConvexCoord(convexH, ind1)
yLines2, xLine2 = getConvexCoord(convexH, ind2)
yLines = yLines1 + yLines2

# Filter xLines: discard outliers far from the median (see filterLine)
minY = np.min(np.asarray(yLines))
maxY = np.max(np.asarray(yLines))
maxX1, minX1 = filterLine(xLine1)
maxX2, minX2 = filterLine(xLine2)

# Change final lines to have minY and maxY
line1 = ((minX1, minY), (maxX1, maxY))
line2 = ((maxX2, minY), (minX2, maxY))

# Plot lines
IrgbWithLines = Irgb
cv2.line(IrgbWithLines, line1[0], line1[1], (0, 0, 255), 2)
cv2.line(IrgbWithLines, line2[0], line2[1], (0, 0, 255), 2)
Remarks:
The algorithm assumes that the change in the y coordinate is bigger than the change in the x coordinate. This will not hold for very strong perspective distortion (around 45 degrees). In that case you could instead run k-means on the slopes and take the cluster with the steeper slopes as the vertical lines, as sketched below.
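A minimal sketch of that fallback, assuming the list of slopes m computed in the code above (the helper name is mine; cv2.kmeans clusters the absolute slopes into two groups and we keep the steeper one):

import cv2
import numpy as np

def steepEdgeIndices(slopes):
    # Cluster |slope| values into two groups and return the indices of the steeper group
    data = np.float32(np.abs(np.asarray(slopes))).reshape(-1, 1)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(data, 2, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
    steep = int(np.argmax(centers))
    return np.where(labels.ravel() == steep)[0]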
The lines marked in red on the sides can be found using an image closing operation.
Below is the MATLAB output after an imclose operation with a square structuring element of size 4.
The MATLAB code is as follows:
I = rgb2gray(imread('image.jpg'));
imshow(I); title('image');
Ibinary = im2bw(I);
figure,imshow(Ibinary);
se = strel('square',4);
Iclose = imclose(Ibinary,se);
figure,imshow(Iclose); title('side lines');
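For reference, a rough OpenCV (Python) equivalent of the same closing operation (using an Otsu threshold in place of im2bw is an assumption; im2bw uses its own default level):

import cv2

I = cv2.imread('image.jpg', cv2.IMREAD_GRAYSCALE)
# im2bw analogue (Otsu threshold assumed)
_, Ibinary = cv2.threshold(I, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# strel('square', 4) analogue
se = cv2.getStructuringElement(cv2.MORPH_RECT, (4, 4))
Iclose = cv2.morphologyEx(Ibinary, cv2.MORPH_CLOSE, se)
cv2.imshow('side lines', Iclose)
cv2.waitKey(0)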
(There is a solid line at C and a faint line at T)
I want to detect the line at T. Currently I am using OpenCV to locate the QR code and rotate the image until the QR code is upright. Then I calculate the approximate location of the C and T marks using the coordinates of the QR code. Then my code scans down along the y axis and detects differences in the green and blue values.
My problem is that even when the T line is as faint as shown, it should still be counted as positive. How can I make the detection more reliable?
I cropped out just the white strip, since I assume you have a way of finding it already. Since we're looking for red, I changed to the LAB colorspace and looked at the "a" channel.
Note: all images of the strip have been transposed (np.transpose) for viewing convenience; it's not that way in the code.
The "a" channel:
I did a linear reframe to improve the contrast:
The image is super noisy. Again, I'm not sure if this is from the camera or the jpg compression. I averaged each row to smooth out some of the nonsense.
I graphed the intensities (the x values are the row indices):
I used a mean filter to smooth out the graph:
I ran a mountain climber algorithm to look for peaks and valleys
And then I filtered for peaks with a climb greater than 10 (the second highest peak has a climb of 25.5, the third highest is 4.4).
Using these peaks we can determine that there are two lines and they are (about) here:
import cv2
import numpy as np
import matplotlib.pyplot as plt

# returns direction of gradient
# 1 if positive, -1 if negative, 0 if flat
def getDirection(one, two):
    dx = two - one;
    if dx == 0:
        return 0;
    if dx > 0:
        return 1;
    return -1;

# detects and returns peaks and valleys
def mountainClimber(vals, minClimb):
    # init trackers
    last_valley = vals[0];
    last_peak = vals[0];
    last_val = vals[0];
    last_dir = getDirection(vals[0], vals[1]);

    # get climbing
    peak_valley = []; # index, height, climb (positive for peaks, negative for valleys)
    for a in range(1, len(vals)):
        # get current direction
        sign = getDirection(last_val, vals[a]);
        last_val = vals[a];

        # if not equal, check gradient
        if sign != 0:
            if sign != last_dir:
                # change in gradient, record peak or valley
                # peak
                if last_dir > 0:
                    last_peak = vals[a];
                    climb = last_peak - last_valley;
                    climb = round(climb, 2);
                    peak_valley.append([a, vals[a], climb]);
                else:
                    # valley
                    last_valley = vals[a];
                    climb = last_valley - last_peak;
                    climb = round(climb, 2);
                    peak_valley.append([a, vals[a], climb]);
                # change direction
                last_dir = sign;

    # filter out very small climbs
    filtered_pv = [];
    for dot in peak_valley:
        if abs(dot[2]) > minClimb:
            filtered_pv.append(dot);
    return filtered_pv;

# run a mean filter over the graph values
def meanFilter(vals, size):
    fil = [];
    filtered_vals = [];
    for val in vals:
        fil.append(val);

        # check if full
        if len(fil) >= size:
            # pop front
            fil = fil[1:];
            filtered_vals.append(sum(fil) / size);
    return filtered_vals;

# averages each row (also gets graph values while we're here)
def smushRows(img):
    vals = [];
    h,w = img.shape[:2];
    for y in range(h):
        ave = np.average(img[y, :]);
        img[y, :] = ave;
        vals.append(ave);
    return vals;

# linear reframe [min1, max1] -> [min2, max2]
def reframe(img, min1, max1, min2, max2):
    copy = img.astype(np.float32);
    copy -= min1;
    copy /= (max1 - min1);
    copy *= (max2 - min2);
    copy += min2;
    return copy.astype(np.uint8);

# load image
img = cv2.imread("strip.png");

# resize
scale = 2;
h,w = img.shape[:2];
h = int(h*scale);
w = int(w*scale);
img = cv2.resize(img, (w,h));

# lab colorspace
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB);
l,a,b = cv2.split(lab);

# stretch contrast
low = np.min(a);
high = np.max(a);
a = reframe(a, low, high, 0, 255);

# smush and get graph values
vals = smushRows(a);

# filter and round values
mean_filter_size = 20;
filtered_vals = meanFilter(vals, mean_filter_size);
for ind in range(len(filtered_vals)):
    filtered_vals[ind] = round(filtered_vals[ind], 2);

# get peaks and valleys
pv = mountainClimber(filtered_vals, 1);

# pull x and y values
pv_x = [ind[0] for ind in pv];
pv_y = [ind[1] for ind in pv];

# find big peaks
big_peaks = [];
for dot in pv:
    if dot[2] > 10: # climb filter size
        big_peaks.append(dot);
print(big_peaks);

# make plot points for the two best
tops_x = [dot[0] for dot in big_peaks];
tops_y = [dot[1] for dot in big_peaks];

# plot
x = [index for index in range(len(filtered_vals))];
fig, ax = plt.subplots()
ax.plot(x, filtered_vals);
ax.plot(pv_x, pv_y, 'og');
ax.plot(tops_x, tops_y, 'vr');
plt.show();

# draw on original image
h,w = img.shape[:2];
for dot in big_peaks:
    y = int(dot[0] + mean_filter_size / 2.0); # adjust for mean filter cutting
    cv2.line(img, (0, y), (w,y), (100,200,0), 2);

# show
cv2.imshow("a", a);
cv2.imshow("strip", img);
cv2.waitKey(0);
Edit:
I was wondering why the lines seemed so far off; then I realized that I had forgotten to account for the fact that the meanFilter reduces the size of the list (it cuts from the front and back). I've updated the code to take that into account.
I have some code, largely taken from the various sources linked at the bottom of this post and written in Python, that takes an image of shape [height, width] and some bounding boxes in [x_min, y_min, x_max, y_max] format (both numpy arrays) and rotates the image and its bounding boxes counterclockwise. Since after rotation a bounding box becomes more of a "diamond shape", i.e. no longer axis-aligned, I perform some calculations to make it axis-aligned again. The purpose of this code is to perform data augmentation for training an object detection neural network with rotated data (flipping horizontally or vertically is already common). Rotations by other angles seem common for image classification, where there are no bounding boxes, but when boxes are involved the resources on how to rotate them along with the images are relatively sparse/niche.
When I input an angle of 45 degrees, I get bounding boxes that are less than "tight": the four corners are no longer a very good annotation, whereas the original one was close to perfect.
The image shown below is the first image in the MS COCO 2014 object detection dataset (training image), with its first bounding box annotation. My code is as follows:
import math
import cv2
import numpy as np

# angle assumed to be in degrees
# bbs is a list of bounding boxes in x_min, y_min, x_max, y_max format
def rotateImageAndBoundingBoxes(im, bbs, angle):
    h, w = im.shape[0], im.shape[1]
    (cX, cY) = (w//2, h//2) # original image center
    M = cv2.getRotationMatrix2D((cX, cY), angle, 1.0) # 2 by 3 rotation matrix
    cos = np.abs(M[0, 0])
    sin = np.abs(M[0, 1])

    # compute the dimensions of the rotated image
    nW = int((h * sin) + (w * cos))
    nH = int((h * cos) + (w * sin))

    # adjust the rotation matrix to take into account translation of the new centre
    M[0, 2] += (nW / 2) - cX
    M[1, 2] += (nH / 2) - cY

    rotated_im = cv2.warpAffine(im, M, (nW, nH))

    rotated_bbs = []
    for bb in bbs:
        # get the four rotated corners of the bounding box
        vec1 = np.matmul(M, np.array([bb[0], bb[1], 1], dtype=np.float64)) # top left corner transformed
        vec2 = np.matmul(M, np.array([bb[2], bb[1], 1], dtype=np.float64)) # top right corner transformed
        vec3 = np.matmul(M, np.array([bb[0], bb[3], 1], dtype=np.float64)) # bottom left corner transformed
        vec4 = np.matmul(M, np.array([bb[2], bb[3], 1], dtype=np.float64)) # bottom right corner transformed
        x_vals = [vec1[0], vec2[0], vec3[0], vec4[0]]
        y_vals = [vec1[1], vec2[1], vec3[1], vec4[1]]
        x_min = math.ceil(np.min(x_vals))
        x_max = math.floor(np.max(x_vals))
        y_min = math.ceil(np.min(y_vals))
        y_max = math.floor(np.max(y_vals))
        bb = [x_min, y_min, x_max, y_max]
        rotated_bbs.append(bb)

    # my function to resize the image and bbs to the original image size
    rotated_im, rotated_bbs = resizeImageAndBoxes(rotated_im, w, h, rotated_bbs)

    return rotated_im, rotated_bbs
The good bounding box looks like:
The not-so-good bounding box looks like:
I am trying to determine if this is an error of my code, or this is expected behavior? It seems like this problem is less apparent at integer multiples of pi/2 radians (90 degrees), but I would like to achieve tight bounding boxes at any angle of rotation. Any insights at all appreciated.
Sources:
[OpenCV documentation]: https://docs.opencv.org/3.4/da/d54/group__imgproc__transform.html#gafbbc470ce83812914a70abfb604f4326
[Data Augmentation Discussion]: https://blog.paperspace.com/data-augmentation-for-object-detection-rotation-and-shearing/
[Mathematics of rotation around an arbitrary point in 2 dimensions]: https://math.stackexchange.com/questions/2093314/rotation-matrix-of-rotation-around-a-point-other-than-the-origin
It seems that for the most part this is expected behavior, as per the comments. I do have a somewhat hacky solution to the problem, where you can write a function like:
# assuming box coords = [x_min, y_min, x_max, y_max]
def cropBoxByPercentage(box_coords, image_width, image_height, x_percentage=0.05, y_percentage=0.05):
    box_xmin = box_coords[0]
    box_ymin = box_coords[1]
    box_xmax = box_coords[2]
    box_ymax = box_coords[3]
    box_width = box_xmax - box_xmin + 1
    box_height = box_ymax - box_ymin + 1
    dx = int(x_percentage * box_width)
    dy = int(y_percentage * box_height)
    box_xmin = max(0, box_xmin - dx)
    box_xmax = min(image_width - 1, box_xmax + dx)
    box_ymin = max(0, box_ymin - dy)
    box_ymax = min(image_height - 1, box_ymax + dy)
    return np.array([box_xmin, box_ymin, box_xmax, box_ymax])
Here x_percentage and y_percentage can be fixed values or can be computed using some heuristic.
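As a usage sketch (the 45-degree angle and the 5% defaults are arbitrary; rotateImageAndBoundingBoxes is the function from the question):

rotated_im, rotated_bbs = rotateImageAndBoundingBoxes(im, bbs, 45)
h, w = rotated_im.shape[:2]
adjusted_bbs = [cropBoxByPercentage(bb, w, h) for bb in rotated_bbs]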
Hello everyone. The above image is the combination of two images on which I did feature matching and drew all of the matching points. I also found the contours of the PCB parts in the first image (left half of the image, 3 contours). The question is: how could I draw only the matching points that lie inside those contours in the first image, instead of this blue mess? I'm using Python 2.7 and OpenCV 2.4.12.
I wrote a function to draw matches because OpenCV 2.4.12 does not have one built in. If I didn't include something, please tell me. Thank you in advance!
import numpy as np
import cv2

def drawMatches(img1, kp1, img2, kp2, matches):
    # Create a new output image that concatenates the two images
    # (a.k.a) a montage
    rows1 = img1.shape[0]
    cols1 = img1.shape[1]
    rows2 = img2.shape[0]
    cols2 = img2.shape[1]

    # Create the output image
    # The rows of the output are the largest between the two images
    # and the columns are simply the sum of the two together
    # The intent is to make this a colour image, so make this 3 channels
    out = np.zeros((max([rows1,rows2]),cols1+cols2,3), dtype='uint8')

    # Place the first image to the left
    out[:rows1,:cols1] = np.dstack([img1, img1, img1])

    # Place the next image to the right of it
    out[:rows2,cols1:] = np.dstack([img2, img2, img2])

    # For each pair of points we have between both images
    # draw circles, then connect a line between them
    for mat in matches:
        # Get the matching keypoints for each of the images
        img1_idx = mat.queryIdx
        img2_idx = mat.trainIdx

        # x - columns
        # y - rows
        (x1,y1) = kp1[img1_idx].pt
        (x2,y2) = kp2[img2_idx].pt

        # Draw a small circle at both co-ordinates
        # radius 4
        # colour blue
        # thickness = 1
        cv2.circle(out, (int(x1),int(y1)), 4, (255, 0, 0), 1)
        cv2.circle(out, (int(x2)+cols1,int(y2)), 4, (255, 0, 0), 1)

        # Draw a line in between the two points
        # thickness = 1
        # colour blue
        cv2.line(out, (int(x1),int(y1)), (int(x2)+cols1,int(y2)), (255,0,0), 1)

    # Show the image
    cv2.imshow('Matched Features', out)
    cv2.imwrite("shift_points.png", out)
    cv2.waitKey(0)
    cv2.destroyWindow('Matched Features')

    # Also return the image if you'd like a copy
    return out

img1 = cv2.imread('pic3.png', 0) # Original image - ensure grayscale
img2 = cv2.imread('pic1.png', 0) # Rotated image - ensure grayscale

sift = cv2.SIFT()

# find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1,None)
kp2, des2 = sift.detectAndCompute(img2,None)

# Create matcher
bf = cv2.BFMatcher()

# Perform KNN matching
matches = bf.knnMatch(des1, des2, k=2)

# Apply ratio test
good = []
for m,n in matches:
    if m.distance < 0.75*n.distance:
        # Add first matched keypoint to list
        # if ratio test passes
        good.append(m)

# Draw the good matches - also save a copy for use later
out = drawMatches(img1, kp1, img2, kp2, good)
Based on what you are asking, I assume you have some sort of closed contour outlining the areas you want to restrict your point pairs to.
This is fairly simple for polygonal contours; more math is required for complex curved lines, but the approach is the same.
You draw a ray from the point in question out to infinity. Most people cast it toward +x infinity, but any direction works. If the ray crosses the contour an odd number of times, the point is inside the contour.
See this article:
http://www.geeksforgeeks.org/how-to-check-if-a-given-point-lies-inside-a-polygon/
For point pairs, only pairs where both points are inside the contour are fully inside it. For complex contour shapes with concave sections, if you also want to verify that the straight path between the two points does not cross the contour, you perform a similar test with just the line segment between them: if there is any intersection, the direct path between the points leaves the contour.
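In OpenCV specifically, cv2.pointPolygonTest already implements the inside/outside test, so a rough sketch of the filtering (assuming cnts holds the contours of the PCB parts in the first image, and good/kp1 come from the question's code) could be:

def matchesInsideContours(matches, kp1, contours):
    kept = []
    for mat in matches:
        x, y = kp1[mat.queryIdx].pt
        # a non-negative result means the keypoint is inside (or on) the contour
        if any(cv2.pointPolygonTest(cnt, (x, y), False) >= 0 for cnt in contours):
            kept.append(mat)
    return kept

good_inside = matchesInsideContours(good, kp1, cnts)
out = drawMatches(img1, kp1, img2, kp2, good_inside)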
Edit:
Since your contours are rectangles, a simpler approach will suffice for determining if your points are inside the rectangle.
If your rectangles are axis aligned (they are straight and not rotated), then you can use your values for top,left and bottom,right to check.
Let point A = Top,Left, point B = Bottom,Right, and point C = your test point.
I am assuming an image based coordinate system where 0,0 is the left,top of the image, and width,height is the bottom right. (I'm writing in C#)
bool PointIsInside(Point A, Point B, Point C)
{
    if (A.X <= C.X && B.X >= C.X && A.Y <= C.Y && B.Y >= C.Y)
        return true;
    return false;
}
If your rectangle is NOT axis aligned, then you can perform four half-space tests to determine whether your point is inside the rectangle.
Let Point A = Top,Left, Point B = Bottom,Right, double W = Width, double H = Height, double R = rotation angle, and Point C = test point.
For an axis-aligned rectangle, Top,Right can be calculated by taking the vector (1,0), multiplying it by Width, and adding that vector to Top,Left. For Bottom,Right we take the vector (0,1), multiply it by Height, and add it to Top,Right.
(1,0) is the equivalent of a Unit Vector (length of 1) at Angle 0. Similarly, (0,1) is a unit vector at angle 90 degrees. These vectors can also be considered the direction the line is pointing. This also means these same vectors can be used to go from Bottom,Left to Bottom,Right, and from Top,Left to Bottom,Left as well.
We need to use different unit vectors, at the angle provided. To do this we simply need to take the Cosine and Sine of the angle provided.
Let Vector X = direction from Top,Left to Top,Right, Vector Y = direction from Top,Right to Bottom,Right.
I am using angles in degrees for this example.
Vector X = new Vector();
Vector Y = new Vector();
X.X = Math.Cos(R * Math.PI / 180);   // Math.Cos/Sin take radians, so convert from degrees
X.Y = Math.Sin(R * Math.PI / 180);
Y.X = Math.Cos((R + 90) * Math.PI / 180);
Y.Y = Math.Sin((R + 90) * Math.PI / 180);
Since we started with Top,Left, we can find Bottom,Right by adding the two vectors, scaled by the rectangle's width and height, to Top,Left:
Point B = new Point();
B = A + X * W + Y * H;
We now want to do a half-space test using the dot product for our test point. The first two tests will use the test point and Top,Left; the other two will use the test point and Bottom,Right.
The half-space test is inherently based on directionality. Is the point in front, behind, or perpendicular to a given direction? We have the two directions we need, but they are directions based on the top,left point of the rectangle, not the full space of the image, so we need to get a vector from the top,left, to the point in question, and another from the bottom, right, since those are the two points we test against.
This is simple to calculate, as it is just Destination - Origin.
Let Vector D = Top,Left to test point C, and Vector E = Bottom,Right to test point.
Vector D = C - A;
Vector E = C - B;
The dot product of the two vectors is x1 * x2 + y1 * y2. If the result is positive, the two directions have an absolute angle of less than 90 degrees between them, i.e. they point in roughly the same direction. A result of zero means they are perpendicular; in our case it means the test point is directly on a side of the rectangle we are testing against. Less than zero means an absolute angle of greater than 90 degrees, i.e. they point in roughly opposite directions.
If a point is inside the rectangle, then the dot products from top left will be >= 0, while the dot products from bottom right will be <= 0. In essence the test point is closer to bottom right when testing from top left, but when taking the same directions when we are already at bottom right, it will be going away, back toward top,left.
double DotProd(Vector V1, Vector V2)
{
    return V1.X * V2.X + V1.Y * V2.Y;
}
and so our test ends up as:
if( DotProd(X, D) >= 0 && DotProd(Y, D) >= 0 && DotProd(X, E) <= 0 && DotProd(Y, E) <= 0)
then the point is inside the rectangle. Do this for both points; if both are inside, then the line segment between them is inside the rectangle.
I have tried to extract the dark line inside very noisy images, without success. Any tips?
My current steps for the first example (a rough code sketch of these steps follows the list):
1) Clahe: with clip_limit = 10 and grid_size = (8,8)
2) Box Filter: with size = (5,5)
3) Inverted Image: 255 - image
4) Threshold: when inverted_image < 64
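A rough OpenCV sketch of those steps (parameter values taken from the list; the file name is a placeholder):

import cv2
import numpy as np

image = cv2.imread('example1.png', cv2.IMREAD_GRAYSCALE)  # placeholder file name
# 1) CLAHE
clahe = cv2.createCLAHE(clipLimit=10, tileGridSize=(8, 8))
image = clahe.apply(image)
# 2) Box filter
image = cv2.blur(image, (5, 5))
# 3) Inverted image
inverted = 255 - image
# 4) Threshold: keep pixels where inverted_image < 64
_, mask = cv2.threshold(inverted, 64, 255, cv2.THRESH_BINARY_INV)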
UPDATE
I have performed some preprocessing steps to improve the quality of the tested images. I adjusted my ROI mask to crop the top and bottom (because they have low intensities) and added an illumination correction to see the line better. The current images are below:
Even though the images are noisy, you are only looking for straight lines oriented roughly towards the north of the image. So why not use some kind of matched filter with morphological operations?
EDIT: I have modified it.
1) Use a median filter along the x and y axes, and normalize the images.
2) Apply a matched filter with all possible orientations of lines.
% im=imread('JwXON.png');
% im=imread('Fiy72.png');
% im=imread('Ya9AN.png');
im=imread('OcgaIt8.png');
imOrig=im;
matchesx = fl(im, 1);
matchesy = fl(im, 0);
matches = matchesx + matchesy;
[x, y] = find(matches);
figure(1);
imagesc(imOrig), axis image
hold on, plot(y, x, 'r.', 'MarkerSize',5)
colormap gray
%----------
function matches = fl(im, direc)
if size(im,3)~=1
im = double(rgb2gray(im));
else
im=double(im);
end
[n, m] = size(im);
mask = bwmorph(imfill(im>0,'holes'),'thin',10);
indNaN=find(im==0); im=255-im; im(indNaN)=0;
N = n - numel(find(im(:,ceil(m/2))==0));
N = ceil(N*0.8); % possible line length
% Normalize the image with median filter
if direc
background= medfilt2(im,[1,30],'symmetric');
thetas = 31:149;
else
background= medfilt2(im,[30,1],'symmetric');
thetas = [1:30 150:179];
end
normIm = im - background;
normIm(normIm<0)=0;
% initialize matched filter result
matches=im*0;
% search for different angles of lines
for theta=thetas
normIm2 = imclose(normIm>0,strel('line',5,theta));
normIm3 = imopen(normIm2>0,strel('line',N,theta));
matches = matches + normIm3;
end
% eliminate false alarms
matches = imclose(matches,strel('disk',2));
matches = matches>3 & mask;
matches = bwareaopen(matches,100);
Currently I'm working on a project where I try to find the corners of a rectangle's surface in a photo using OpenCV (Python, Java or C++).
I've selected the desired surface by filtering on color; then I obtained a mask and passed it to cv2.findContours:
cnts, _ = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cnt = sorted(cnts, key=cv2.contourArea, reverse=True)[0]
peri = cv2.arcLength(cnt, True)
approx = cv2.approxPolyDP(cnt, 0.02*peri, True)
if len(approx) == 4:
    cv2.drawContours(mask, [approx], -1, (255, 0, 0), 2)
This gives me an inaccurate result:
Using cv2.HoughLines I've managed to get 4 straight lines that accurately describe the surface. Their intersections are exactly what I need:
edged = cv2.Canny(mask, 10, 200)
hLines = cv2.HoughLines(edged, 2, np.pi/180, 200)
lines = []
for rho, theta in hLines[0]:
    a = np.cos(theta)
    b = np.sin(theta)
    x0 = a*rho
    y0 = b*rho
    x1 = int(x0 + 1000*(-b))
    y1 = int(y0 + 1000*(a))
    x2 = int(x0 - 1000*(-b))
    y2 = int(y0 - 1000*(a))
    cv2.line(mask, (x1,y1), (x2,y2), (255, 0, 0), 2)
    lines.append([[x1,y1],[x2,y2]])
The question is: is it possible to somehow tweak findContours?
Another solution would be to find coordinates of intersections. Any clues for this approach are welcome :)
Can anybody give me a hint on how to solve this problem?
Finding the intersections is not as trivial a problem as it seems, but before the intersection points can be found, the following problems should be considered:
The most important thing is to choose the right parameters for the HoughLines function, since it can return anywhere from 0 to an infinite number of lines (we need exactly 4).
Since we do not know in what order these lines go, they need to be compared with each other
Because of the perspective, parallel lines are no longer parallel, so each line will have a point of intersection with the others. A simple solution would be to filter the coordinates located outside the photo. But it may happen that an undesirable intersection will be within the photo.
The coordinates should be sorted. Depending on the task, it could be done in different ways.
cv2.HoughLines will return an array with the values of rho and theta for each line.
Now the problem becomes a system of equations for all lines in pairs:
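Each Hough line satisfies x*cos(theta) + y*sin(theta) = rho, so for a pair of lines i and j the intersection (x, y) solves

    cos(theta_i)*x + sin(theta_i)*y = rho_i
    cos(theta_j)*x + sin(theta_j)*y = rho_j

i.e. a 2x2 linear system, which is what the np.linalg.inv(T).dot(R) call in the function below computes for each pair: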
def intersections(edged):
    # Height and width of a photo with a contour obtained by Canny
    h, w = edged.shape
    hl = cv2.HoughLines(edged, 2, np.pi/180, 190)[0]
    # Number of lines. If n != 4, the parameters should be tuned
    n = hl.shape[0]
    # Matrix with the values of cos(theta) and sin(theta) for each line
    T = np.zeros((n, 2), dtype=np.float32)
    # Vector with values of rho
    R = np.zeros((n), dtype=np.float32)
    T[:, 0] = np.cos(hl[:, 1])
    T[:, 1] = np.sin(hl[:, 1])
    R = hl[:, 0]
    # Number of combinations of all lines
    c = n*(n-1)//2
    # Matrix with the obtained intersections (x, y)
    XY = np.zeros((c, 2))
    # Finding intersections between all pairs of lines
    k = 0
    for i in range(n):
        for j in range(i+1, n):
            XY[k, :] = np.linalg.inv(T[[i, j], :]).dot(R[[i, j]])
            k += 1
    # filtering out the coordinates outside the photo
    XY = XY[(XY[:, 0] > 0) & (XY[:, 0] <= w) & (XY[:, 1] > 0) & (XY[:, 1] <= h)]
    # XY = order_points(XY) # obtained points should be sorted
    return XY
Here is the result:
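The order_points call is left commented out above; a minimal sketch of one possible ordering (top-left, top-right, bottom-right, bottom-left), assuming exactly four intersection points were kept:

def order_points(pts):
    pts = np.asarray(pts, dtype=np.float32)
    s = pts.sum(axis=1)                # x + y: smallest at top-left, largest at bottom-right
    d = np.diff(pts, axis=1).ravel()   # y - x: smallest at top-right, largest at bottom-left
    ordered = np.zeros((4, 2), dtype=np.float32)
    ordered[0] = pts[np.argmin(s)]     # top-left
    ordered[1] = pts[np.argmin(d)]     # top-right
    ordered[2] = pts[np.argmax(s)]     # bottom-right
    ordered[3] = pts[np.argmax(d)]     # bottom-left
    return ordered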
It is possible to:
select the longest contour
break it into segments and group them by gradient
fit lines to the largest four groups
find intersection points
But then, Hough transform does nearly the same thing. Is there any particular reason for not using it?
Intersection points of lines are very easy to calculate. A high-school coordinate geometry lesson can provide you with the algorithm.
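For completeness, a minimal sketch of that calculation in Python, assuming each line is given as two points ((x1, y1), (x2, y2)), e.g. the entries of the lines list built from HoughLines in the question:

def line_intersection(l1, l2):
    # Solve the two line equations with Cramer's rule
    (x1, y1), (x2, y2) = l1
    (x3, y3), (x4, y4) = l2
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if denom == 0:
        return None  # parallel (or coincident) lines
    det1 = x1 * y2 - y1 * x2
    det2 = x3 * y4 - y3 * x4
    px = (det1 * (x3 - x4) - (x1 - x2) * det2) / denom
    py = (det1 * (y3 - y4) - (y1 - y2) * det2) / denom
    return px, py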