How to get points from a line in OpenCV? - image-processing

The cvLine() function can draw a straight line given two points P1(x1,y1) and P2(x2,y2). What I'm stuck at is getting the points on this line instead of drawing it straight away.
Suppose I draw a line (in green) AB and another line AC. If I follow all the pixels on line AB there will be a point where I encounter black pixels (the border of the circle that encloses A) before I reach B.
Again when traveling along the pixels on line AC black pixels will be encountered twice.
Basically I'm trying to get the points on the (green) lines, but cvLine() doesn't seem to return any point sequence structure. Is there any way to get these points using OpenCV?
A rather dumb approach would be to draw the line using cvLine() on a separate scratch image, then find contours on it, and traverse that contour's CvSeq* (the drawn line) for the points. Since the scratch image and the original image are the same size, the point positions would carry over. Like I said, kinda dumb. Any enlightened approach would be great!

I think a CvLineIterator (cv::LineIterator in the C++ API) does what you want.
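If LineIterator isn't exposed in your Python bindings, here is a minimal NumPy sketch that walks the pixels between two points (the names img, A and B are assumptions for the example):

import numpy as np

def line_points(p1, p2):
    # Sample every pixel on the segment p1 -> p2 by uniform sampling
    x1, y1 = p1
    x2, y2 = p2
    n = int(max(abs(x2 - x1), abs(y2 - y1))) + 1
    xs = np.linspace(x1, x2, n).round().astype(int)
    ys = np.linspace(y1, y2, n).round().astype(int)
    return list(zip(xs, ys))

# e.g. stop at the first black pixel along A -> B:
# for x, y in line_points(A, B):
#     if img[y, x] == 0:
#         break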

Another dirty but efficient way to find the number of intersection points between the circles and the line, without iterating over all pixels of the line, is as follows:
import cv2
import numpy as np

# First, create a single-channel image with the circle drawn on it.
CircleImage = np.zeros((Height, Width), dtype=np.uint8)
CircleImage = cv2.circle(CircleImage, Center, Radius, 255, 1)  # color=255, thickness=1
# Then create an image of the same size with only the line drawn on it.
LineImage = np.zeros((Height, Width), dtype=np.uint8)
LineImage = cv2.line(LineImage, PointA, PointB, 255, 1)  # color=255, thickness=1
# Perform a bitwise AND; only pixels that are white in both images survive.
IntersectionImage = cv2.bitwise_and(CircleImage, LineImage)
# Count the white pixels to get the number of intersection points.
Num = np.sum(IntersectionImage == 255)
This method is also fast: instead of iterating over pixels in Python, it uses vectorized OpenCV and NumPy operations.
By drawing another circle on "CircleImage", you can count the intersection points of both circles with the line AC in the same way.

Related

Rectangle detection in noisy contours

I'm trying to build an algorithm that calculates the dimensions of slabs (in pixel units as of now). I tried masking, but there is no one HSV color range that will work for all the test cases, as the slabs are of varying colors. I tried Otsu thresholding as well, but it didn't work very well...
Now I'm trying my hand with canny edge detection. The original image, and the image after canny-edge look like this:
I used dilation to make the central region a uniform white region, and then used contour detection. I identified the contour having the maximum area as the contour of interest. The resulting contours are a bit noisy, because the canny edge detection also included some background stuff that was irrelevant:
I used cv2.boundingRect() to estimate the height and width of the rectangle, but it keeps returning the height and width of the entire image. I presume this is because it works by calculating (max(x)-min(x), max(y)-min(y)) over all (x,y) in the contour, and in my case the resulting contour has some pixels touching the edges of the image, so this calculation simply results in (image width, image height).
I am trying to get better images to work with, but assuming all images are like this, i.e. have noisy contours, what could be an alternative approach to detect the dimensions of the white rectangular region obtained after dilating?
To get the corner points of the rectangle, use this:
p = cv2.arcLength(cnt, True)  # cnt is the rectangle's contour
appr = cv2.approxPolyDP(cnt, 0.01 * p, True)  # appr contains the 4 corner points
# draw the rect
cv2.drawContours(img, [appr], 0, (0, 255, 0), 2)
The appr variable contains the turning points of the rectangle. You still need to do some more cleaning to get better results, but cv2.boundingRect() is not a good solution for your case.

Filling gaps between borders/contours in images

I have two images that I would like to combine together, filling/removing the gaps between the borders after combining. The image on the left is the edge, while the image on the right is the mask. (Ignore the little patch in the right picture, but it would be nice to be able to remove it too.)
The expected result after combination is
but this is the current result achieved so far
I have tried different strategies from scikit-image apis, which includes:
ndi.binary_opening, ndi.binary_closing, morphology.{erosion, dilation, opening, closing} but none of them seem to work.
I think this may be a basis for a strategy...
Step 1:
Start with the "edge image". Randomly choose any white pixel. Flood fill with black using that pixel as the seed, or start point. This should fill one side of the edge. Remember the seed, and get centroid (and maybe area) of the new, filled in black area. Invert the filled-in image and get the centroid of the other part of the image.
You now know the centroids of both sides of the edge - as marked in red below:
Step 2:
Now move to the mask image. Maybe use dilation/erosion to fill in any smallish holes. Then run "labelling" on the image to get a list of the contiguous black blobs and their centroids and areas. Select the largest blob by area.
You should now have the centroid of the largest blob as marked in green below:
Step 3:
Now choose the nearer of the two red points to the green one and use the corresponding seed to do your fill.
It may be better at Step 1 to repeatedly choose white seed points at random until you get a different centroid, rather than doing the inversion. That's because if you just invert and get the centroid, you don't know a good seed pixel: it's not certain that the centroid itself is a good seed.
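For Step 2, a sketch using cv2.connectedComponentsWithStats, which labels the blobs and returns their areas and centroids in one call (the filename is hypothetical, and the inversion assumes black blobs on a white background):

import cv2
import numpy as np

mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)  # hypothetical filename
blobs = cv2.bitwise_not(mask)  # connectedComponents labels nonzero pixels, so invert

num, labels, stats, centroids = cv2.connectedComponentsWithStats(blobs)
# Row 0 is the background; pick the largest remaining blob by area
largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
green_centroid = centroids[largest]  # compare this against the two red centroids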
Seems like you need to find the Center of Mass (CoM) of the mask image to determine which side to fill, and then floodFill the edge image using the CoM as the seed point.
You could try something like this:
# Calculate the Center of Mass (intensity-weighted)
com = np.zeros(2)
total = 0.0
for x in range(0, num_of_rows):
    for y in range(0, num_of_cols):
        com += image[x][y] * np.array([x, y])
        total += image[x][y]
com /= total

# FloodFill a new image
h, w = mask_img.shape[:2]
new_image = mask_img.copy()
temp = np.zeros((h + 2, w + 2), np.uint8)  # mask needs to be 2 pixels wider/higher
seed = (int(com[1]), int(com[0]))          # floodFill expects (x, y), i.e. (col, row)
cv2.floodFill(new_image, temp, seed, 255)
This will work if the line in your edge and mask images has values 255 and the backgrounds 0. If this is not the case, first invert your two images with the following command:
inverted_img = cv2.bitwise_not(img)
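As an aside, the CoM double loop above can be vectorized with NumPy; a sketch, assuming image is a single-channel array:

import numpy as np

coords = np.argwhere(image > 0)           # (row, col) of every nonzero pixel
weights = image[image > 0].astype(float)  # pixel intensities as weights
com = np.average(coords, axis=0, weights=weights)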
Note: I did not test this with your image, but rather with one of mine. So you might have to change something here and there. Here is my rough example working:

Opencv Subdiv2d (Delaunay Triangulation) remove vertices at infinity

I am using the Subdiv2D class of OpenCV for Delaunay triangulation, and I am particularly facing a problem with vertices at infinity. I do not need any triangle outside the image, so I inserted the corner points of the image, which works nicely in many cases. But in certain cases I still get triangles outside the image, so I want to remove those vertices at infinity. Any help with removing the vertices, or any other way to ensure the triangles are always within the image?
Here is the code for inserting feature points in subdiv. I have inserted the corner points of the image.
Mat img = imread(...);
Rect rect(0, 0, 600, 600);
Subdiv2D subdiv(rect);
// inserting corners of the image
subdiv.insert( Point2f(0,0));
subdiv.insert( Point2f(img.cols-1, 0));
subdiv.insert( Point2f(img.cols-1, img.rows-1));
subdiv.insert( Point2f(0, img.rows-1));
// inserting N feature points
// ...
// further processing
Here is an example where corner points at infinity are creating problems:
5 feature points, one in the middle and 4 corners
http://i.stack.imgur.com/VONsN.jpg
5 feature points, one near the bottom and 4 corner points
http://i.stack.imgur.com/kjxgm.jpg
You can see in the 2nd image that a triangle is outside the image.
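For what it's worth, one common workaround is to filter the output of getTriangleList() and drop any triangle with a vertex outside the bounding rect; those are the triangles connected to Subdiv2D's virtual outer vertices. A Python sketch, assuming the same 600x600 rect as the code above:

import cv2

rect = (0, 0, 600, 600)
subdiv = cv2.Subdiv2D(rect)
# ... insert the corner and feature points as above ...

x0, y0, w, h = rect
kept = []
for x1, y1, x2, y2, x3, y3 in subdiv.getTriangleList():
    vertices = ((x1, y1), (x2, y2), (x3, y3))
    # Keep only triangles whose vertices all lie inside the image rect
    if all(x0 <= x < x0 + w and y0 <= y < y0 + h for x, y in vertices):
        kept.append(vertices)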

Extract single line contours from Canny edges

I'd like to extract the contours of an image, expressed as a sequence of point coordinates.
With Canny I'm able to produce a binary image that contains only the edges of the image. Then, I'm trying to use findContours to extract the contours. The results are not OK, though.
For each edge I often get 2 contours, as if the edge were considered a very thin area.
I would like to simplify my contours so I can draw them as single lines. Or, even better, extract them with a different function that directly produces the correct result.
I had a look at the OpenCV documentation but wasn't able to find anything useful, though I guess I'm not the first one with a similar problem. Is there any function or method I could use?
Here is the Python code I've written so far:
import cv2
import numpy as np

def main():
    img = cv2.imread("lena-mono.png", 0)
    if img is None:
        raise Exception("Error while loading the image")
    canny_img = cv2.Canny(img, 80, 150)
    contours, hierarchy = cv2.findContours(canny_img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contours_img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
    scale = 10
    contours_img = cv2.resize(contours_img, (0, 0), fx=scale, fy=scale)
    for cnt in contours:
        color = np.random.randint(0, 255, (3)).tolist()  # random color per contour
        cv2.drawContours(contours_img, [cnt * scale], 0, color, 1)
    cv2.imwrite("canny.png", canny_img)
    cv2.imwrite("contours.png", contours_img)
The scale factor is used to highlight the double lines of the contours.
Here are the links to the images:
Lena greyscale
Edges extracted with Canny
Contours: 10x zoom where you can see the wrong results produced by findContours
Any suggestion will be greatly appreciated.
If I understand you right, your question has nothing to do with finding lines in a parametric (Hough transform) sense.
Rather, it is an issue with the findContours method returning multiple contours for a single line.
This is because Canny is an edge detector - that is, a filter attuned to the image intensity gradient, which occurs on both sides of a line.
So your question is more akin to: "how can I convert low-level edge features to single lines?", or perhaps: "how can I navigate the contour hierarchy to detect single lines?"
This is a fairly common topic - and here is a previous post which proposed one solution:
OpenCV converting Canny edges to contours
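Another option, sketched here on the assumption that the opencv-contrib package (cv2.ximgproc) is available, is to thin the Canny output to 1-pixel-wide lines before tracing contours; each line then yields a single out-and-back contour instead of two separate ones:

import cv2

img = cv2.imread("lena-mono.png", 0)
edges = cv2.Canny(img, 80, 150)

skeleton = cv2.ximgproc.thinning(edges)  # collapse double edges to 1-px lines
contours, hierarchy = cv2.findContours(skeleton, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)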

Finding location of rectangles in an image with OpenCV

I'm trying to use OpenCV to "parse" screenshots from the iPhone game Blocked. The screenshots are cropped to look like this:
I suppose for right now I'm just trying to find the coordinates of each of the 4 points that make up each rectangle. I did see the sample file squares.c that comes with OpenCV, but when I run that algorithm on this picture, it comes up with 72 rectangles, including the rectangular areas of whitespace that I obviously don't want to count as one of my rectangles. What is a better way to approach this? I tried doing some Google research, but for all of the search results, there is very little relevant usable information.
A similar issue has already been discussed:
How to recognize rectangles in this image?
As for your data, the rectangles you are trying to find are the only black objects. So you can try a threshold binarization: black pixels are those whose three RGB values are ALL less than 40 (I found this empirically). This simple operation makes your picture look like this:
After that you could apply a Hough transform to find lines (discussed in the topic I referred to), or you can do it more simply. Compute integral projections of the black pixels onto the X and Y axes. (The projection onto X is a vector whose entry for x_i is the number of black pixels whose first coordinate equals x_i.) The peaks of the projections give you the candidate x and y values. Then look through all the possible segments restricted by the found x and y values (if there are a lot of black pixels between (x_i, y_j) and (x_i, y_k), there is probably a line there). Finally, compose the line segments into rectangles!
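A minimal NumPy sketch of the binarization and the integral projections (the threshold of 40 is the empirical value above; the filename is hypothetical):

import cv2
import numpy as np

img = cv2.imread("screenshot.png")                 # hypothetical filename
black = np.all(img < 40, axis=2).astype(np.uint8)  # all three channels below 40

proj_x = black.sum(axis=0)  # black-pixel count per column (projection onto X)
proj_y = black.sum(axis=1)  # black-pixel count per row (projection onto Y)
# Peaks in proj_x / proj_y give the candidate rectangle edge coordinates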
Here's a complete Python solution. The main idea is:
Apply pyramid mean shift filtering to help threshold accuracy
Otsu's threshold to get a binary image
Find contours and filter using contour approximation
Here's a visualization of each detected rectangle contour
Results
import cv2

image = cv2.imread('1.png')

# Mean shift filtering smooths color regions and helps the threshold
blur = cv2.pyrMeanShiftFiltering(image, 11, 21)
gray = cv2.cvtColor(blur, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]

# findContours returns 2 or 3 values depending on the OpenCV version
cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]

for c in cnts:
    # Keep only contours that approximate to four points, i.e. rectangles
    peri = cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, 0.015 * peri, True)
    if len(approx) == 4:
        x, y, w, h = cv2.boundingRect(approx)
        cv2.rectangle(image, (x, y), (x + w, y + h), (36, 255, 12), 2)

cv2.imshow('thresh', thresh)
cv2.imshow('image', image)
cv2.waitKey()
I wound up just building on my original method and doing as Robert suggested in his comment on my question. After I get my list of rectangles, I run through and calculate the average color over each rectangle. I check whether the red, green, and blue components of the average color are each within 10% of the gray and blue rectangle colors; if they are I keep the rectangle, and if they aren't I discard it. This process gives me something like this:
From this, it's trivial to get the information I need (orientation, starting point, and length of each rectangle, considering the game window as a 6x6 grid).
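A hedged sketch of that color check; the tolerance interpretation (10% of the full 0-255 range) and the helper name are assumptions:

import cv2

def rect_matches(image, rect, reference_bgr, tol=0.10):
    # Average color inside the rectangle ROI, compared channel-wise to a reference
    x, y, w, h = rect
    mean_bgr = cv2.mean(image[y:y + h, x:x + w])[:3]
    return all(abs(m - r) <= tol * 255 for m, r in zip(mean_bgr, reference_bgr))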
The blocks look like bitmaps - why don't you use simple template matching with different templates for each block size/color/orientation?
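A sketch of that idea with cv2.matchTemplate (the template filename and the 0.9 score threshold are assumptions):

import cv2
import numpy as np

image = cv2.imread('screenshot.png')
template = cv2.imread('block_blue_2x1.png')  # hypothetical template image
th, tw = template.shape[:2]

res = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
ys, xs = np.where(res >= 0.9)  # locations where the template matches well
for x, y in zip(xs, ys):
    cv2.rectangle(image, (x, y), (x + tw, y + th), (0, 255, 0), 1)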
Since your problem is the small rectangles, I would start by removing them. Those lines are much thinner than the borders of the rectangles, so morphological operations on the image should do it.
Using a structuring element that looks like this:
element = [ 1 1
            1 1 ]
should remove lines that are less than two pixels wide. After the small lines are removed, the rectangle-finding algorithm of OpenCV will most likely do the rest of the job for you.
The erosion can be done in OpenCV with the function cvErode.
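In the Python API the same idea looks like this (a sketch; binary_img is assumed to be your thresholded image):

import cv2
import numpy as np

kernel = np.ones((2, 2), np.uint8)       # the 2x2 structuring element above
eroded = cv2.erode(binary_img, kernel)   # removes lines thinner than 2 pixels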
Try one of the many corner detectors, like the Harris corner detector. Also, it is in general a good idea to try it at multiple resolutions, so do some preprocessing at varying magnifications.
It appears that you want some sort of color-dominated square. You could suppress the other colors by first using something like cvSplit and then thresholding the color, so only that region remains; follow that with a cropping operation. I think that could work as well.