Extracting different coloured arrows with noise - OpenCV

Given an image like this, how would I go about extracting only the arrows? I'm having trouble filtering out all the noise, and it's difficult to filter by colour since the arrows can be in either the blue or the red HSV range. My goal is to have just the 4 arrows showing in the image and nothing else.
Here are some of the different photos:
What I've tried so far with OpenCV-Python:
Applying GaussianBlur
Grabbing the hue channel and then applying Canny
Drawing contours
Filtering by colour, then applying Canny (difficult because the arrows can be different colours)
In the end, none of these did a good job of isolating the arrows.
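(For reference, colour filtering can still cover both arrow colours by building one inRange mask per colour and OR-ing the masks together; red needs two ranges in OpenCV because its hue wraps around 0. A minimal sketch, with hue/saturation/value bounds that are assumptions to tune for these images:)
import cv2
import numpy as np
img = cv2.imread('game_arrows.png')
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# red hue wraps around 0 on OpenCV's 0-179 hue scale, so it needs two ranges
red1 = cv2.inRange(hsv, (0, 70, 50), (10, 255, 255))
red2 = cv2.inRange(hsv, (170, 70, 50), (179, 255, 255))
blue = cv2.inRange(hsv, (100, 70, 50), (130, 255, 255))
# one mask covering arrows of either colour
mask = red1 | red2 | blue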

Here is one approach in Python/OpenCV that works for the image provided. I am not sure if it works for all the images; perhaps the threshold needs adjusting for each.
Read the input
Convert to LAB colorspace
Separate LAB channels
Get pixel-by-pixel maximum between the A and B channels
Threshold
Apply morphology close
Get contours
Filter the contours on area and perimeter
Draw the remaining contours as white filled on a black background
Save the results
Input:
import cv2
import numpy as np
# read the input
img = cv2.imread('game_arrows.png')
# convert to LAB
LAB = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
# separate channels
L,A,B = cv2.split(LAB)
# get the maximum between the A and B pixel by pixel
ABmax = np.maximum(A, B)
# threshold
thresh = cv2.threshold(ABmax, 180, 255, cv2.THRESH_BINARY)[1]
# morphology close
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7,7))
morph = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
# get contours
contours = cv2.findContours(morph, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
# filter contours
result = np.zeros_like(img)
for cntr in contours:
    area = cv2.contourArea(cntr)
    perimeter = cv2.arcLength(cntr, True)
    if area > 300 and perimeter < 200:
        cv2.drawContours(result, [cntr], 0, (255,255,255), -1)
# save results
cv2.imwrite('game_arrows_ABmax.jpg', ABmax)
cv2.imwrite('game_arrows_thresh.jpg', thresh)
cv2.imwrite('game_arrows_morph.jpg', morph)
cv2.imwrite('game_arrows_result.jpg', result)
# show results
cv2.imshow('ABmax', ABmax)
cv2.imshow('thresh', thresh)
cv2.imshow('morph', morph)
cv2.imshow('result', result)
cv2.waitKey(0)
Maximum of A and B channels:
Threshold Image:
Morphology Closed Image:
Filtered Contours:

Related

Detecting a sheet of paper inside an image like cam-scanner app

Out of an image, I need to extract a sheet of paper, just like the CamScanner app does (https://www.camscanner.com/).
I know that I can do this by detecting the edges of the sheet of paper I want to detect, and later performing a perspective transform. I use the OpenCV library in Python.
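(For the later perspective-transform step, here is a minimal sketch assuming the four corner points have already been found and ordered; warp_sheet and its parameters are hypothetical names:)
import cv2
import numpy as np
def warp_sheet(image, corners, width, height):
    # corners: 4x2 array ordered top-left, top-right, bottom-right, bottom-left
    dst = np.float32([[0, 0], [width - 1, 0],
                      [width - 1, height - 1], [0, height - 1]])
    M = cv2.getPerspectiveTransform(np.float32(corners), dst)
    return cv2.warpPerspective(image, M, (width, height))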
This is the image in which I'm trying to find the sheet of paper:
Here is what I already tried:
Method 1:
(using thresholding)
Preprocessing the image with image smoothing (Gaussian blur/bilateral blur)
Splitting the image into H, S, V channels
Adaptive thresholding on the saturation channel
Some morphological operations like dilation and erosion
Finding contours, identifying the largest contour, and finding the corner points
I've implemented this method based on a stackoverflow answer:
Detecting a sheet of paper / Square Detection
I'm able to find the paper sheet for some images, but it fails for images like this:
Method 2:
(using sobel gradient operator)
Preprocessing the image by converting to grayscale and smoothing (Gaussian blur/bilateral blur)
Finding the gradients of the image
Downsampling and upsampling the image
After this I don't know how to find the appropriate boundary enclosing the image.
I've implemented this method based on a stackoverflow answer:
detect paper from background almost same as paper color
Here's how far I got with the image:
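(For reference, a minimal sketch of the gradient step in method 2, assuming Sobel as the gradient operator and pyrDown/pyrUp for the down/upsampling; the kernel sizes are assumptions:)
import cv2
img = cv2.imread('./images/sheet3.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (5, 5), 0)
# Sobel gradients in x and y, combined into one magnitude image
gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
mag = cv2.convertScaleAbs(cv2.magnitude(gx, gy))
# downsample then upsample to suppress fine texture while keeping the strong page edges
mag = cv2.pyrUp(cv2.pyrDown(mag))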
Method 3:
(using canny edge detector)
According to the posts I've read in this community, it seems that everyone prefers the Canny edge method to extract the edges, but in my case the results are not satisfactory. Here's what I did:
Preprocessing the image by converting to grayscale and smoothing (Gaussian blur/bilateral blur)
Finding the edges using the Canny edge detector
Some morphological operations like dilation and erosion
But the edges obtained from Canny are really not up to the mark.
I've implemented this method based on a stackoverflow answer:
Detecting a sheet of paper / Square Detection; also, I didn't quite understand what he does by iterating over multiple channels in this answer.
Here's how far I got with the image:
Here's some code for method 1 (thresholding):
import cv2
#READING IMAGE INTO BGR SPACE
image = cv2.imread("./images/sheet3.png")
#BILATERAL FILTERING TO SMOOTHEN THE IMAGE BUT NOT THE EDGES
img = cv2.bilateralFilter(image,20,75,75)
#CONVERTING BGR TO HSV
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
#SPLITTING THE HSV CHANNELS
h,s,v = cv2.split(hsv)
#DOUBLING THE SATURATION CHANNEL
gray_s = cv2.addWeighted(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 0.0, s, 2.0, 0)
#THRESHOLDING USING ADAPTIVETHRESHOLDING
threshed = cv2.adaptiveThreshold(gray_s, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, 109, 10)
#APPLYING MORPHOLOGICAL OPERATIONS OF DILATION AND EROSION
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (2, 2))
morph = cv2.morphologyEx(threshed, cv2.MORPH_OPEN, kernel)
#FINDING ALL THE CONTOURS
cnts = cv2.findContours(morph, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)[-2]
canvas = img.copy()
#SORTING THE CONTOURS AND TAKING THE LARGEST CONTOUR
cnts = sorted(cnts, key = cv2.contourArea)
cnt = cnts[-1]
#FINDING THE PERIMETER OF THE CONTOUR
arclen = cv2.arcLength(cnt, True)
#FINDING THE END POINTS OF THE CONTOUR BY APPROX POLY DP
approx = cv2.approxPolyDP(cnt, 0.02* arclen, True)
cv2.drawContours(canvas, [cnt], -1, (255,0,0), 1, cv2.LINE_AA)
cv2.drawContours(canvas, [approx], -1, (0, 0, 255), 1, cv2.LINE_AA)
cv2.imwrite("detected.png", canvas)
I'm kind of new to image processing and OpenCV.
Please share some insights on how to take this further and obtain results more accurately. TIA.

Billboard corner detection

I was trying to detect billboard images on a random background. I was able to localize the billboard using SSD, which gives me an approximate bounding box around the billboard. Now I want to find the exact corners of the billboard for my application. I tried different strategies I came across, such as Harris corner detection (using OpenCV), finding intersections of Hough lines, and Canny + morphological operations + contours. The details of the outputs are given below.
Harris corner detection
The pseudocode for the Harris corner detection is as follows:
import cv2
import numpy as np
import matplotlib.pyplot as plt
img_patch_gray = np.float32(img_patch_gray)
harris_point = cv2.cornerHarris(img_patch_gray,2,3,0.04)
img_patch[harris_point>0.01*harris_point.max()]=[255,0,0]
plt.figure(figsize=IMAGE_SIZE)
plt.imshow(img_patch)
Here the red dots are the corners detected by the Harris corner detection algorithm and the points of interest are encircled in green.
Using Hough line detection
Here I was trying to find the intersections of the lines and then choosing the points, something similar to this stackoverflow link, but it is very difficult to get the exact lines since billboards have text and graphics on them.
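(For reference, a minimal sketch of the line-intersection idea, assuming cv2.HoughLines on a Canny edge map; the thresholds are assumptions. Each line comes back as (rho, theta) with x*cos(theta) + y*sin(theta) = rho, so a pair of lines is a 2x2 linear system:)
import cv2
import numpy as np
gray = cv2.cvtColor(cv2.imread('billboard.png'), cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 100, 250)
lines = cv2.HoughLines(edges, 1, np.pi / 180, 150)
def intersection(l1, l2):
    # solve the 2x2 system for the crossing point of two (rho, theta) lines
    (r1, t1), (r2, t2) = l1[0], l2[0]
    A = np.array([[np.cos(t1), np.sin(t1)],
                  [np.cos(t2), np.sin(t2)]])
    if abs(np.linalg.det(A)) < 1e-6:  # near-parallel lines have no useful crossing
        return None
    x, y = np.linalg.solve(A, np.array([r1, r2]))
    return int(round(x)), int(round(y))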
Contour based
In this approach I have used the Canny edge detector, followed by dilation (3x3 kernel), followed by contour detection.
bin_img = cv2.Canny(gray_img_patch, 100, 250)
# dilate the edges with a 3x3 kernel (the original called a custom helper, dilate(bin_img, 3))
bin_img = cv2.dilate(bin_img, np.ones((3, 3), np.uint8))
plt.imshow(bin_img, cmap='gray')
(_, cnts, _) = cv2.findContours(bin_img.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)[:10]
cv2.drawContours(img_patch, [cnts[0]], 0, (0, 255, 0), 1)
I had tried using the approxPolyDP function from OpenCV, but it was not as expected, since it can also approximate larger or smaller contours by four points, and in some images it might not form contours around the billboard frame.
I have used OpenCV 3.4 for all the image processing operations. The code used can be found here. Please note that the image discussed here is just for illustration purposes, and in general the image can be of any billboard.
Thanks in advance, any help is appreciated.
This is a very difficult task because the image contains a lot of noise. You can get an approximation of the contour, but the specific corners would be very hard. I have made an example of how I would make an approximation. It may not work on other images. Maybe it will help a bit or give you a new idea. Cheers!
import cv2
import numpy as np
# Read the image
img = cv2.imread('billboard.png')
# Blur the image with a big kernel and then transform to gray colorspace
blur = cv2.GaussianBlur(img,(19,19),0)
gray = cv2.cvtColor(blur,cv2.COLOR_BGR2GRAY)
# Perform histogram equalization on the blur and then perform Otsu threshold
equ = cv2.equalizeHist(gray)
_, thresh = cv2.threshold(equ,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
# Perform opening on threshold with a big kernel (erosion followed by dilation)
kernel = np.ones((20,20),np.uint8)
opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel)
# Search for contours and select the biggest one
_, contours, hierarchy = cv2.findContours(opening,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
cnt = max(contours, key=cv2.contourArea)
# Make a hull around the contour and draw it on the original image
mask = np.zeros((img.shape[:2]), np.uint8)
hull = cv2.convexHull(cnt)
cv2.drawContours(mask, [hull], 0, (255,255,255),-1)
# Search for contours and select the biggest one again
_, thresh = cv2.threshold(mask,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
_, contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
cnt = max(contours, key=cv2.contourArea)
# Approximate the contour with approxPolyDP and draw the approximation on the image
epsilon = 0.008*cv2.arcLength(cnt,True)
approx = cv2.approxPolyDP(cnt,epsilon,True)
cv2.drawContours(img, [approx], 0, (0,255,0), 5)
# Display the image
cv2.imshow('img', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result:

Find the coordinate of a specific text in an image

I am trying to segment the questions in the image below. The only clue I have is the numbers in bold, which are indented by a tab space. I am trying to find the bold numbering (4, 5, 6 in this case) so that I can get their x and y coordinates and segment the image into 3 separate questions. How do I get these, or how should I approach this problem?
I am using scikit-image for image processing.
Your image looks quite simple, so the text can be segmented quite easily with contour detection around the dilated components. Here are the detailed steps:
1) Binarize the image and invert it for easy morphological operations.
2) Dilate the image in the horizontal direction only, using a long horizontal kernel, say a (20, 1) shaped kernel.
3) Find contours of all the connected components and get their coordinates.
4) Use the dimensions and coordinates of these bounding boxes to segment the questions (a sketch of this step follows below).
Here is the Python implementation of the same:
# Text segmentation
import cv2
import numpy as np
rgb = cv2.imread(r'D:\Image\st4.png')
small = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
#threshold the image
_, bw = cv2.threshold(small, 0.0, 255.0, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)
# get horizontal mask of large size since text are horizontal components
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (20, 1))
connected = cv2.morphologyEx(bw, cv2.MORPH_CLOSE, kernel)
# find all the contours
_, contours, hierarchy = cv2.findContours(connected.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
#Segment the text lines
for idx in range(len(contours)):
    x, y, w, h = cv2.boundingRect(contours[idx])
    cv2.rectangle(rgb, (x, y), (x+w-1, y+h-1), (0, 255, 0), 2)
Output image:
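(Step 4 is not shown in the code above. Here is a minimal sketch of one way to finish it, assuming the bold question numbers can be told apart from ordinary text lines by the x coordinate of their bounding boxes; the x threshold of 50 is an assumption to tune:)
# sort the detected line boxes top-to-bottom
boxes = sorted([cv2.boundingRect(c) for c in contours], key=lambda b: b[1])
# boxes starting at the question-number indent mark where each question begins
starts = [y for (x, y, w, h) in boxes if x < 50]
starts.append(rgb.shape[0])
# crop the page into one image per question
questions = [rgb[y0:y1] for y0, y1 in zip(starts, starts[1:])]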

Removing Contour around a certain color

So I have a skin color range that I'm using to draw the contours of any skin color my camera sees, i.e., it'll draw contours on hands or faces, etc.
However, the range of colors that I am using to draw the contours has a red range of 0-255, so it basically draws contours around everything that is red. The red channel is, however, important for good skin detection, so I can't change that.
So I was wondering how I could tweak my contour code so that it doesn't draw contours around the color red.
My code is as follows:
import cv2
import numpy as np
min_YCrCb = np.array([0,133,77],np.uint8) # Create a lower bound for the skin color
max_YCrCb = np.array([255,173,127],np.uint8) # Create an upper bound for skin color
skinRegion = cv2.inRange(converted,min_YCrCb,max_YCrCb) # Create a mask with boundaries
contours, hierarchy = cv2.findContours(skinRegion, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) # Find the contour on the skin detection
for i, c in enumerate(contours): # Draw the contour on the source frame
    area = cv2.contourArea(c)
    if area > 10000:
        cv2.drawContours(img, contours, i, (255, 255, 0), 2)
In my experience with OpenCV, what many others would recommend is converting your RGB color scheme into HSV.
HSV stands for Hue, Saturation, and Value and is just another way of expressing different colors in the white light spectrum. In RGB it can be very difficult to find bright or dull colors but this is not the case in the HSV color range.
In the Python documentation of OpenCV one can see an example of converting BGR to HSV. NOTE: the H in HSV has a range of 0-179 while the others are 0-255.
You can switch to HSV by using the function hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV) and then having your code find the contours within that image stream instead of your original. To figure out what your HSV range should be set to, you can google an RGB to HSV converter such as this one. NOTE: OpenCV commonly uses BGR instead of RGB, so be careful not to get them mixed up.
This should help you find the exact range of colors you need to make sure your program is properly working.
min_HSV = np.array([0,0,0],np.uint8) # Create a lower bound in HSV
max_HSV = np.array([179,255,255],np.uint8) # Create an upper bound in HSV
hsv = cv2.cvtColor(converted, cv2.COLOR_BGR2HSV) # assuming converted is your original stream...
skinRegion = cv2.inRange(hsv,min_HSV,max_HSV) # Create a mask with boundaries
contours, hierarchy = cv2.findContours(skinRegion, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) # Find the contour on the skin detection
for i, c in enumerate(contours): # Draw the contour on the source frame
    area = cv2.contourArea(c)
    if area > 10000:
        cv2.drawContours(img, contours, i, (255, 255, 0), 2)
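(If the goal is specifically to keep the YCrCb skin mask but drop red objects, another option, not shown above, is to build a red mask in HSV and subtract it from the skin mask; the red hue/saturation/value bounds here are assumptions to tune:)
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV) # img is assumed to be the source frame
# strongly red pixels; red hue wraps around 0, hence the two ranges
red = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | cv2.inRange(hsv, (170, 120, 70), (179, 255, 255))
# keep the skin mask everywhere except where the red mask fires
skinRegion = cv2.bitwise_and(skinRegion, cv2.bitwise_not(red))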

OpenCV Hu moments extraction

I'm trying to do fire detection using machine learning. My features are RGB mean, RGB variance, and Hu moments.
So what I'm doing right now is first segmenting the image based on this paper.
According to the paper, I use the rules:
r > g && g > b
r > 190 && g > 100 && b < 140
Here is the result of my color segmentation for the negative and positive images:
The pictures on the right are now in
vector<Mat> processedImage
After that I get the Hu moments of each picture by converting it into grayscale and blurring it.
cvtColor(processedImage[x], gray_image, CV_BGR2GRAY);
blur(gray_image, gray_image, Size(3, 3));
Canny(gray_image, canny_output, thresh, thresh * 2, 3);
findContours(canny_output, contours, hierarchy, CV_RETR_TREE,CV_CHAIN_APPROX_SIMPLE, Point(0, 0));
cv::Moments mom = cv::moments(contours[0]);
cv::HuMoments(mom, hu); // now in hu are your 7 Hu-Moments
Now I am stuck. I'm not sure if my images are okay for obtaining useful Hu moments, because the negative images are so scattered.
Am I on the right track with regards to Hu moment extraction? Will I do the same during testing, where I do color segmentation before extracting Hu moments?
I think you should follow these steps (code in Python):
1. Create a binary image by iterating through the original. If a pixel is identified as fire, it will be turned white, otherwise black. (Be careful whether you are using BGR or RGB: OpenCV reads images in BGR, so you may need to convert first.)
rows, cols = im2.shape[:2]
for i in xrange(rows):
    for j in xrange(cols):
        # channel order assumes RGB (index 0 = red), per the r/g/b rules above
        if im2[i,j,0] > im2[i,j,1] and im2[i,j,1] > im2[i,j,2] and im2[i,j,0] > 190 and im2[i,j,1] > 100 and im2[i,j,2] < 140:
            im2[i,j,:] = 255
        else:
            im2[i,j,:] = 0
Result:
2. Use morphological operators and blurring to reduce noise/small contours.
# Convert to greyscale-monochromatic
gray = cv2.cvtColor(im2,cv2.COLOR_RGB2GRAY)
#Apply Gaussian Blur
blur= cv2.GaussianBlur(gray,(7,7),0)
# Threshold again since after gaussian blur the image is no longer binary
(thresh, bw_image) = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY| cv2.THRESH_OTSU)
# Define kernel size and apply dilation followed by erosion (i.e. morphological closing)
element = cv2.getStructuringElement(cv2.MORPH_RECT,(7,7))
dilated=cv2.dilate(bw_image,element)
eroded=cv2.erode(dilated,element)
3. Afterwards, you can detect contours using the cv2.RETR_EXTERNAL flag, so you can ignore all inner contours (you are interested only in the outer contours of the fire regions). You can also retain only the contours whose area is bigger than, e.g., 500 px, or just choose the biggest one if you know there is only one "fire".
g, contours, hierarchy = cv2.findContours(eroded, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours_retain = []
for cnt in contours:
    if cv2.contourArea(cnt) > 500:
        contours_retain.append(cnt)
cv2.drawContours(im_cp, contours_retain, -1, (255,0,255), 3)
Here is the fire region:
4. Finally, calculate your Hu moments:
for cnt in contours_retain:
    print cv2.HuMoments(cv2.moments(cnt)).flatten()
I hope this helps! Sorry I am not familiar with C++!
