OpenCV: How to detect nested geometrical shapes?

Apologies, I'm new to OpenCV. How do I detect nested geometrical shapes in OpenCV?
I found an answer about detecting outer shapes, but I need something like a triangle inside a square. Also, is there a way to make it work with rounded corners? Example:

Try this code for finding contours
import cv2
img = cv2.imread('shapes.png', 0)
thresh = cv2.threshold(img, 60, 255, cv2.THRESH_BINARY_INV)[1]
cnts, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
In the code above the image is read in grayscale. The threshold uses THRESH_BINARY_INV because findContours expects a black background and a white foreground, and your test image has the opposite. When finding contours you need RETR_TREE rather than RETR_EXTERNAL: the latter only returns the outermost contours, whereas the former returns all contours together with their nesting hierarchy. From there you can use any of the linked approaches to count the sides of each shape.
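To pick out the nested shapes explicitly, you can walk the hierarchy that RETR_TREE returns: each entry is [next, previous, first_child, parent], so any contour whose parent index is not -1 lies inside another shape. A minimal sketch along those lines (the file name and the approxPolyDP epsilon are assumptions to tune):
import cv2

img = cv2.imread('shapes.png', 0)
thresh = cv2.threshold(img, 60, 255, cv2.THRESH_BINARY_INV)[1]
cnts, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

for i, cnt in enumerate(cnts):
    parent = hierarchy[0][i][3]   # index of the enclosing contour, -1 if there is none
    approx = cv2.approxPolyDP(cnt, 0.02 * cv2.arcLength(cnt, True), True)
    label = 'nested' if parent != -1 else 'outer'
    print(i, label, 'sides:', len(approx))
For rounded corners, a larger epsilon makes approxPolyDP ignore the curvature, at the cost of some positional accuracy of the vertices.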

Related

Detect separate figures that intersects each other with Opencv

I am trying to determine the coordinates of multiple rectangles appearing randomly on the screen. The width of the rectangles is fixed (though with the contour method I noticed a bit of inaccuracy in determining it).
With my python code:
import cv2

yellow = (5, 242, 206)
capture = cv2.VideoCapture(0)   # placeholder: the actual capture source is not shown in the question

while True:
    isFrameValid, frame = capture.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    roi = gray[0:300, 0:1920]
    threshold, thresh_image = cv2.threshold(roi, 30, 255, cv2.THRESH_BINARY)

    # Select contours
    contours, _ = cv2.findContours(thresh_image, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cv2.drawContours(frame, contours, -1, yellow, 1)
I can only detect the entire block, even when trying different options instead of RETR_EXTERNAL. Looking at my example images, what I'd like to achieve is to detect the 3 rectangles (appearing at random positions on the screen) so I can correctly determine their coordinates. Are there any ideas or methods I don't know about, since I'm new to OpenCV?
Example to reproduce the problem with an image
import cv2

img = cv2.imread('./IwOXW.png')
yellow = (5, 242, 206)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
roi = gray[0:300, 0:1920]
threshold, thresh_image = cv2.threshold(roi, 30, 255, cv2.THRESH_BINARY)

# Select contours
contours, _ = cv2.findContours(thresh_image, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(img, contours, -1, yellow, 1)
cv2.imshow('Frame', img)
cv2.waitKey(0)
with this image
The reason findContours only detects the entire block is that there is no line on the inside of the block for it to detect.
I can think of two options for you to try:
Use the contours that you have, and write some smarts to find 90 degree bends, and thus build your rectangles
You could investigate using HoughLines to detect the lines. You would probably still have to write some code to take the detected lines and figure out which of them form rectangles, but it might be simpler with HoughLines as it gives you straight lines to work with (a rough sketch follows below). Look for HoughLines in the docs: https://docs.opencv.org/4.x/
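A minimal sketch of the HoughLinesP route, reusing the threshold from the question; the Canny thresholds, vote count, minimum length, and gap values are guesses that would need tuning for your frames:
import cv2
import numpy as np

img = cv2.imread('./IwOXW.png')
yellow = (5, 242, 206)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, thresh_image = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)
edges = cv2.Canny(thresh_image, 50, 150)

# Probabilistic Hough transform: returns line segments as (x1, y1, x2, y2)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50, minLineLength=30, maxLineGap=5)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(img, (int(x1), int(y1)), (int(x2), int(y2)), yellow, 1)
cv2.imshow('lines', img)
cv2.waitKey(0)
From the segments you would then group nearly collinear horizontal and vertical lines and pair them up into rectangle candidates.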

Remove circles from technical drawing image

When OCRing a technical drawing, most (all?) OCR engines have problems with the surrounding geometry and sometimes falsely interpret a line as a letter.
To improve the quality of the OCR, I first want to remove certain elements from the drawing, mainly circles and rectangles.
The drawings are all black & white and look very similar to the example below.
What is the best way to achieve this? I've played around with ImageMagick and OpenCV with little success...
Here's a partial solution. This problem can be broken up into two steps:
1) Remove rectangles by removing horizontal + vertical lines
We create vertical and horizontal kernels then perform morph close to detect the lines. From here we use bitwise operations to remove the lines.
Detected vertical lines (left) and horizontal lines (right)
Removed lines
import cv2

image = cv2.imread('1.jpg')

# Close with a tall, thin kernel to isolate the vertical lines, then invert
# so the detected lines become white on a black background
vertical_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, 15))
remove_vertical = 255 - cv2.morphologyEx(image, cv2.MORPH_CLOSE, vertical_kernel)

# Same idea with a short, wide kernel for the horizontal lines
horizontal_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 1))
remove_horizontal = 255 - cv2.morphologyEx(image, cv2.MORPH_CLOSE, horizontal_kernel)

# Adding the inverted line masks back onto the image paints the line pixels white
result = cv2.add(cv2.add(remove_vertical, remove_horizontal), image)
cv2.imshow('result', result)
cv2.waitKey()
2) Detect/remove circles
There are several approaches to remove the circles
Use cv2.HoughCircles(). Here's a good tutorial to detect circles in images using Hough Circles (a rough sketch follows after this list).
Construct a cv2.MORPH_ELLIPSE kernel using cv2.getStructuringElement() then perform morphological operations to isolate the circle contours
Use simple shape detection with contour approximation and contour filtering to detect the circles. This method uses cv2.arcLength() and cv2.approxPolyDP() for contour approximation. One tradeoff with this method is that it only works with "perfect" shapes. Take a look at detect simple geometric shapes and opencv shape detection
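As a starting point for the cv2.HoughCircles option, a hedged sketch; the blur size, radius range, and accumulator parameters are guesses for a drawing like the example and will need tuning:
import cv2
import numpy as np

image = cv2.imread('1.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)

# Detect circles; param2 is the accumulator threshold (lower -> more detections)
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=100, param2=40, minRadius=5, maxRadius=60)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        # Paint over the detected circle outline with white to erase it
        cv2.circle(image, (int(x), int(y)), int(r), (255, 255, 255), 3)
cv2.imwrite('no_circles.jpg', image)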

Billboard corner detection

I was trying to detect billboard images on a random background. I was able to localize the billboard using SSD, which gives me an approximate bounding box around the billboard. Now I want to find the exact corners of the billboard for my application. I tried different strategies that I came across, such as Harris corner detection (using OpenCV), finding intersections of Hough lines, and Canny + morphological operations + contours. The details of the output are given below.
Harris corner detection
The pseudocode for the Harris corner detection is as follows:
img_patch_gray = np.float32(img_patch_gray)   # img_patch_gray: grayscale crop of the SSD bounding box
harris_point = cv2.cornerHarris(img_patch_gray, 2, 3, 0.04)
img_patch[harris_point > 0.01 * harris_point.max()] = [255, 0, 0]   # mark strong corners in red
plt.figure(figsize=IMAGE_SIZE)
plt.imshow(img_patch)
Here the red dots are the corners detected by the Harris corner detection algorithm and the points of interest are encircled in green.
Using Hough line detection
Here I was trying to find the intersections of the lines and then choosing the corner points, similar to this stackoverflow link, but it is very difficult to get the exact lines since billboards have text and graphics on them.
Contour based
In this approach I used the Canny edge detector, followed by dilation (3×3 kernel), followed by findContours.
bin_img = cv2.Canny(gray_img_patch, 100, 250)
bin_img = cv2.dilate(bin_img, np.ones((3, 3), np.uint8))   # 3x3 dilation, as described above
plt.imshow(bin_img, cmap='gray')
# OpenCV 3.x findContours signature (three return values)
(_, cnts, _) = cv2.findContours(bin_img.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)[:10]
cv2.drawContours(img_patch, [cnts[0]], 0, (0, 255, 0), 1)
I had also tried the approxPolyDP function from OpenCV, but it was not as expected, since it can approximate larger or smaller contours by four points as well, and in some images it might not form a contour around the billboard frame at all.
I have used OpenCV 3.4 for all the image processing operations. The image used can be found here. Please note that the image discussed here is just for illustration purposes; in general the image can be of any billboard.
Thanks in advance, any help is appreciated.
This is a very difficult task because the image contains a lot of noise. You can get an approximation of the contour, but the specific corners would be very hard. I have made an example of how I would make the approximation. It may not work on other images. Maybe it will help a bit or give you a new idea. Cheers!
import cv2
import numpy as np
# Read the image
img = cv2.imread('billboard.png')
# Blur the image with a big kernel and then transform to gray colorspace
blur = cv2.GaussianBlur(img,(19,19),0)
gray = cv2.cvtColor(blur,cv2.COLOR_BGR2GRAY)
# Perform histogram equalization on the blur and then perform Otsu threshold
equ = cv2.equalizeHist(gray)
_, thresh = cv2.threshold(equ,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
# Perform opening on threshold with a big kernel (erosion followed by dilation)
kernel = np.ones((20,20),np.uint8)
opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel)
# Search for contours and select the biggest one
_, contours, hierarchy = cv2.findContours(opening,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
cnt = max(contours, key=cv2.contourArea)
# Make a hull arround the contour and draw it on the original image
mask = np.zeros((img.shape[:2]), np.uint8)
hull = cv2.convexHull(cnt)
cv2.drawContours(mask, [hull], 0, (255,255,255),-1)
# Search for contours and select the biggest one again
_, thresh = cv2.threshold(mask,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
_, contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
cnt = max(contours, key=cv2.contourArea)
# Approximate the hull contour with approxPolyDP and draw it on the image
epsilon = 0.008 * cv2.arcLength(cnt, True)
approx = cv2.approxPolyDP(cnt, epsilon, True)
cv2.drawContours(img, [approx], 0, (0, 255, 0), 5)
# Display the image
cv2.imshow('img', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result:

Extract single line contours from Canny edges

I'd like to extract the contours of an image, expressed as a sequence of point coordinates.
With Canny I'm able to produce a binary image that contains only the edges of the image. Then, I'm trying to use findContours to extract the contours. The results are not OK, though.
For each edge I often get two lines, as if it were considered a very thin area.
I would like to simplify my contours so I can draw them as single lines. Or, even better, extract them with a different function that directly produces the correct result.
I had a look at the OpenCV documentation but wasn't able to find anything useful; I guess I'm not the first one with a similar problem. Is there any function or method I could use?
Here is the Python code I've written so far:
import cv2
import numpy as np

def main():
    img = cv2.imread("lena-mono.png", 0)
    if img is None:
        raise Exception("Error while loading the image")
    canny_img = cv2.Canny(img, 80, 150)
    contours, hierarchy = cv2.findContours(canny_img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contours_img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
    scale = 10
    contours_img = cv2.resize(contours_img, (0, 0), fx=scale, fy=scale)
    for cnt in contours:
        color = np.random.randint(0, 255, (3)).tolist()
        cv2.drawContours(contours_img, [cnt * scale], 0, color, 1)
    cv2.imwrite("canny.png", canny_img)
    cv2.imwrite("contours.png", contours_img)

if __name__ == "__main__":
    main()
The scale factor is used to highlight the double lines of the contours.
Here are the links to the images:
Lena greyscale
Edges extracted with Canny
Contours: 10x zoom where you can see the wrong results produced by findContours
Any suggestion will be greatly appreciated.
If I understand you right, your question has nothing to do with finding lines in a parametric (Hough transform) sense.
Rather, it is an issue with the findContours method returning multiple contours for a single line.
This is because Canny is an edge detector - that means it is a filter attuned to the image intensity gradient, which occurs on both sides of a line.
So your question is more akin to: "how can I convert low-level edge features to single lines?", or perhaps: "how can I navigate the contour hierarchy to detect single lines?"
This is a fairly common topic - and here is a previous post which proposed one solution:
OpenCV converting Canny edges to contours
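Beyond that link, one rough workaround: because each one-pixel-wide Canny edge produces a contour that traces down one side and back up the other, keeping roughly the first half of its points gives a single ordered pass. This is only a heuristic sketch; it suits simple open curves and would cut closed loops in half:
import cv2

img = cv2.imread("lena-mono.png", 0)
canny = cv2.Canny(img, 80, 150)
contours, _ = cv2.findContours(canny, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

vis = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
for cnt in contours:
    half = cnt[:max(len(cnt) // 2, 1)]   # keep roughly one pass along the edge
    cv2.polylines(vis, [half], False, (0, 255, 0), 1)
cv2.imwrite("single_lines.png", vis)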

Finding location of rectangles in an image with OpenCV

I'm trying to use OpenCV to "parse" screenshots from the iPhone game Blocked. The screenshots are cropped to look like this:
I suppose for right now I'm just trying to find the coordinates of each of the 4 points that make up each rectangle. I did see the sample file squares.c that comes with OpenCV, but when I run that algorithm on this picture, it comes up with 72 rectangles, including the rectangular areas of whitespace that I obviously don't want to count as one of my rectangles. What is a better way to approach this? I tried doing some Google research, but for all of the search results, there is very little relevant usable information.
A similar issue has already been discussed:
How to recognize rectangles in this image?
As for your data, the rectangles you are trying to find are the only black objects. So you can try a threshold binarization: black pixels are the ones whose three RGB values are ALL less than 40 (I found that empirically). This simple operation makes your picture look like this:
After that you could apply a Hough transform to find lines (discussed in the topic I referred to), or you can do it more simply. Compute integral projections of the black pixels onto the X and Y axes. (The projection onto X is a vector whose entry for x_i is the number of black pixels with first coordinate equal to x_i.) The possible x and y values then show up as peaks of the projections. Then look through all the segments bounded by the found x and y values: if there are a lot of black pixels between (x_i, y_j) and (x_i, y_k), there is probably a line there. Finally, compose the line segments into rectangles!
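A minimal sketch of the projection idea, assuming the per-channel threshold of 40 mentioned above; the file name and the 0.5-of-maximum peak cutoff are placeholders:
import cv2
import numpy as np

img = cv2.imread('blocked.png')                      # hypothetical screenshot name
black = np.all(img < 40, axis=2).astype(np.uint8)    # 1 where all three channels are below 40

# Integral projections: number of black pixels in each column / row
proj_x = black.sum(axis=0)
proj_y = black.sum(axis=1)

# Candidate x / y coordinates are the columns / rows with many black pixels
x_peaks = np.where(proj_x > 0.5 * proj_x.max())[0]
y_peaks = np.where(proj_y > 0.5 * proj_y.max())[0]
print(x_peaks, y_peaks)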
Here's a complete Python solution. The main idea is:
Apply pyramid mean shift filtering to help threshold accuracy
Otsu's threshold to get a binary image
Find contours and filter using contour approximation
Here's a visualization of each detected rectangle contour
Results
import cv2

image = cv2.imread('1.png')
blur = cv2.pyrMeanShiftFiltering(image, 11, 21)
gray = cv2.cvtColor(blur, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]

cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    peri = cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, 0.015 * peri, True)
    if len(approx) == 4:
        x, y, w, h = cv2.boundingRect(approx)
        cv2.rectangle(image, (x, y), (x + w, y + h), (36, 255, 12), 2)

cv2.imshow('thresh', thresh)
cv2.imshow('image', image)
cv2.waitKey()
I wound up building on my original method and doing as Robert suggested in his comment on my question. After I get my list of rectangles, I run through them and calculate the average color over each rectangle. I check whether the red, green, and blue components of the average color are each within 10% of the gray and blue rectangle colors; if they are I keep the rectangle, and if they aren't I discard it. This process gives me something like this:
From this, it's trivial to get the information I need (orientation, starting point, and length of each rectangle, treating the game window as a 6x6 grid).
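A hedged sketch of that color-filtering step; the reference BGR colors, the rectangle list, and the interpretation of the 10% tolerance are assumptions standing in for the values the author actually used:
import cv2

def keep_game_rectangles(image, rects, refs=((128, 128, 128), (180, 100, 40)), tol=0.10):
    # Keep rectangles whose mean BGR color is within tol of one of the reference colors
    kept = []
    for x, y, w, h in rects:
        mean = cv2.mean(image[y:y + h, x:x + w])[:3]
        for ref in refs:
            if all(abs(m - r) <= tol * 255 for m, r in zip(mean, ref)):
                kept.append((x, y, w, h))
                break
    return kept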
The blocks look like bitmaps - why don't you use simple template matching with different templates for each block size/color/orientation?
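If you go the template-matching route, a minimal sketch with cv2.matchTemplate; the file names and the 0.8 score cutoff are assumptions, and you would need one template per block size/color/orientation as suggested:
import cv2
import numpy as np

image = cv2.imread('board.png')          # hypothetical cropped screenshot
template = cv2.imread('block.png')       # hypothetical template for one block type
h, w = template.shape[:2]

res = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
for y, x in zip(*np.where(res >= 0.8)):  # every location scoring above the cutoff
    cv2.rectangle(image, (int(x), int(y)), (int(x) + w, int(y) + h), (0, 255, 0), 1)
cv2.imwrite('matches.png', image)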
Since your problem is the small rectangles, I would start by removing them.
Since those lines are much thinner than the borders of the rectangles, I would start by applying morphological operations to the image.
Using a structuring element that looks like this:
element = [ 1 1
            1 1 ]
should remove lines that are less than two pixels wide. After the small lines are removed, the rectangle-finding algorithm of OpenCV will most likely do the rest of the job for you.
The erosion can be done in OpenCV with the function cvErode.
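In the Python API the same erosion looks roughly like this; it assumes the image has been thresholded so the shapes are white on black (the file name and the threshold value are placeholders):
import cv2
import numpy as np

img = cv2.imread('board.png', 0)                                 # hypothetical screenshot
_, binary = cv2.threshold(img, 40, 255, cv2.THRESH_BINARY_INV)   # shapes become white

kernel = np.ones((2, 2), np.uint8)        # the 2x2 structuring element from above
eroded = cv2.erode(binary, kernel)        # strips structures thinner than two pixels
cv2.imwrite('eroded.png', eroded)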
Try one of the many corner detectors, like the Harris corner detector. Also, it is in general a good idea to try that at multiple resolutions: so do some preprocessing at varying magnification.
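A minimal sketch of running the Harris detector at a few magnifications; the scale factors and the response threshold are placeholders:
import cv2
import numpy as np

img = cv2.imread('board.png', 0)          # hypothetical screenshot
for scale in (1.0, 0.5, 0.25):            # a few different magnifications
    resized = cv2.resize(img, (0, 0), fx=scale, fy=scale)
    harris = cv2.cornerHarris(np.float32(resized), 2, 3, 0.04)
    corners = np.argwhere(harris > 0.01 * harris.max())
    print(scale, len(corners), 'corner candidates')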
It appears that you want some sort of color-dominated square. You could suppress the other colors by first using something like cvSplit and then thresholding on the color, so only that region remains, and follow that with a cropping operation. I think that could work as well.
