How to conditionally apply a threshold in OpenCV

I have two types of images to deal with: one with a white background and the other with a dark background. My requirement is to apply a different threshold to each type.
For example, for a white background:
(thresh, img_bin) = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
and for a dark background:
(thresh, img_bin) = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
I am reading images using cv2.imread(img, 0).
I am doing a morphological transformation, so I need to invert the white-background image, but for the dark background I don't want to invert.

To expand on @nathancy's comment, you could use one of the OpenCV functions sum or mean.
For a plain old 24-bit color image, black and white are represented by
(0, 0, 0) black
(255, 255, 255) white
Here is a sample image that looks like a page out of a book:
Now let's run some code on it:
import cv2 as cv
import numpy as np
img = cv.imread('lorem_ipsum.png', cv.IMREAD_COLOR)
# Mean of each channel: (blue, green, red, alpha)
ret = cv.mean(img)
print(ret)
# Mean of the four channel means (the alpha slot is zero)
ret = cv.mean(ret)
print(ret)
# Take the first element and rescale by 4/3 to undo the zero alpha slot
ret = ret[0] * 4 / 3
print(ret)
# The same computation in one line
ret = cv.mean(cv.mean(img))[0] * 4 / 3
print(ret)
which gives output:
(229.78, 228.28, 228.95, 0.0)
(171.74, 0.0, 0.0, 0.0)
228.98
228.98
The first line gives the mean of the blue, green, red, and alpha channels. The second line is the mean of those four means; because the alpha entry is zero, this mean of means comes out too low. We want to ignore the alpha channel, so we pick off just the first element of the mean of means and scale it by 4/3 to get an answer on the 0 to 255 scale. Our answer is 228.98, so the picture is mostly white. The last line does all of the operations in one go.
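Putting this together for the original question, a minimal sketch might look like the following (the brightness cutoff of 128 and the filename are assumptions to tune, not part of the original post):
import cv2
img = cv2.imread('page.png', 0)  # hypothetical filename, read as grayscale
brightness = cv2.mean(img)[0]  # mean brightness, 0 (black) to 255 (white)
if brightness > 128:  # assumed cutoff: mostly white background
    flag = cv2.THRESH_BINARY + cv2.THRESH_OTSU
else:  # mostly dark background
    flag = cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU
(thresh, img_bin) = cv2.threshold(img, 128, 255, flag)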

Related

Why can't I draw lines after Canny Edge Detection?

Actually, I am a noob at computer vision. Sorry in advance.
I want to detect the edges of a tram lane. Mostly the code works well, but sometimes it cannot draw a line at all. I don't know why.
The cropped_image function just crops the polygonal area out of the current frame.
The display_lines function draws lines whose absolute angle is between 30 and 90 degrees. It uses cv2.line to draw the lines.
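For reference, a hypothetical display_lines consistent with that description (not the actual code from the post) could look like:
import cv2
import numpy as np
def display_lines(image, lines):
    # Draw only lines whose absolute angle is between 30 and 90 degrees
    line_image = np.zeros_like(image)
    if lines is not None:
        for x1, y1, x2, y2 in np.reshape(lines, (-1, 4)):
            angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
            if 30 <= abs(angle) <= 90:
                cv2.line(line_image, (x1, y1), (x2, y2), (255, 0, 0), 10)
    return line_image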
Here is the code:
_, frame = cap.read()
gray = cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY)  # convert image to gray, one channel
blur = cv2.GaussianBlur(gray, (1, 1), 0)  # to reduce noise in the grayscale image
canny = cv2.Canny(blur, 150, 200, apertureSize=3)
cropped_image = region_of_interest(canny)  # simply crops the bottom of the image
lines = cv2.HoughLinesP(cropped_image, 1, np.pi / 180, 100, np.array([]),
                        minLineLength=5, maxLineGap=5)
hough_bundler = HoughBundler()
lines_merged = hough_bundler.process_lines(lines, cropped_image)
line_image = display_lines(frame, lines_merged)
combo_image = cv2.addWeighted(frame, 0.8, line_image, 1, 1)
cv2.imshow('test', combo_image)
To see it: HoughBundler
Expected: expected img
Canny: canny img of wrong result
Result: wrong result
First of all, I'd start by fixing the cv2.GaussianBlur() line. You've used a 1x1 kernel, which doesn't do anything; you need at least a 3x3 kernel. Look into how convolutions are applied if you want to know why a 1x1 filter has no effect.
Secondly, I would play with the Canny aperture size to suit my needs. Also, after edge detection you can dilate the edges with cv2.dilate() and a 3x3 or 5x5 kernel so that you don't get broken lines in the image.
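A minimal sketch of those two fixes applied to the pipeline above (the 5x5 kernel and the dilation step are suggestions, not the original code):
import cv2
import numpy as np
_, frame = cap.read()  # cap is the existing cv2.VideoCapture
gray = cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)  # a 5x5 kernel actually smooths; 1x1 is a no-op
canny = cv2.Canny(blur, 150, 200, apertureSize=3)
canny = cv2.dilate(canny, np.ones((3, 3), np.uint8))  # bridge small gaps before the Hough transform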

How to get the area of the contours?

I have a picture like this:
And then I transform it into a binary image and use Canny to detect the edges of the picture:
gray = cv.cvtColor(image, cv.COLOR_RGB2GRAY)
edges = cv.Canny(gray, 50, 150)  # step missing from the original snippet; thresholds assumed
edge = Image.fromarray(edges)
And then I get the result as:
I want to get the area of region 2, like this:
My solution is to use HoughLines to find lines in the picture and calculate the area of the triangle formed by the lines. However, this way is not precise, because the closed area is not a perfect triangle. How do I get the area of region 2?
A simple approach using floodFill and countNonZero could be the following code snippet. My standard quote on contourArea from the help:
The function computes a contour area. Similarly to moments, the area is computed using the Green formula. Thus, the returned area and the number of non-zero pixels, if you draw the contour using drawContours or fillPoly, can be different. Also, the function will most certainly give a wrong results for contours with self-intersections.
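To see the discrepancy the quote warns about, here is a quick synthetic check (the square and its coordinates are illustrative, not from the question):
import cv2
import numpy as np
canvas = np.zeros((100, 100), np.uint8)
cv2.rectangle(canvas, (20, 20), (40, 40), 255, cv2.FILLED)
cnts, _ = cv2.findContours(canvas, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
print(cv2.contourArea(cnts[0]))  # Green-formula area: 400.0
print(cv2.countNonZero(canvas))  # actual pixel count: 441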
Code:
import cv2
import numpy as np
# Input image
img = cv2.imread('images/YMMEE.jpg', cv2.IMREAD_GRAYSCALE)
# Needed due to JPG artifacts
_, temp = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)
# Dilate to better detect contours
temp = cv2.dilate(temp, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3)))
# Find largest contour
cnts, _ = cv2.findContours(temp, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
largestCnt = []
for cnt in cnts:
    if len(cnt) > len(largestCnt):
        largestCnt = cnt
# Determine center of area of largest contour
M = cv2.moments(largestCnt)
x = int(M["m10"] / M["m00"])
y = int(M["m01"] / M["m00"])
# Initialize mask for flood filling
height, width = temp.shape
mask = np.ones((height + 2, width + 2), np.uint8) * 255
mask[1:-1, 1:-1] = 0  # interior is fillable; the one-pixel border stays blocked
# Generate intermediate image, draw largest contour, flood filled
temp = np.zeros(temp.shape, np.uint8)
temp = cv2.drawContours(temp, largestCnt, -1, 255, cv2.FILLED)
_, temp, mask, _ = cv2.floodFill(temp, mask, (x, y), 255)
temp = cv2.morphologyEx(temp, cv2.MORPH_OPEN, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3)))
# Count pixels in desired region
area = cv2.countNonZero(temp)
# Put result on original image
img = cv2.putText(img, str(area), (x, y), cv2.FONT_HERSHEY_COMPLEX_SMALL, 1, 255)
cv2.imshow('Input', img)
cv2.imshow('Temp image', temp)
cv2.waitKey(0)
Temporary image:
Result image:
Caveat: findContours has some problems on the right side, where the line is very close to the bottom image border, which may result in some pixels being omitted.
Disclaimer: I'm new to Python in general, and especially to the Python API of OpenCV (C++ for the win). Comments, improvements, and highlighting of Python no-gos are highly welcome!
There is a very simple way to find this area, if you make some assumptions that are met in the example image:
The area to be found is bounded on top by a line
Any additional lines in the image are above the line of interest
There are no discontinuities in the line
In this case, the area of the region of interest is given by summing, for each column, the distance from the bottom of the image to the first set pixel. We can compute this with:
import numpy as np
import matplotlib.pyplot as pp
img = pp.imread('/home/cris/tmp/YMMEE.jpg')
img = np.flip(img, axis=0)
pos = np.argmax(img, axis=0)
area = np.sum(pos)
print('Area = %d\n'%area)
This prints Area = 22040.
np.argmax finds the first set pixel in each column of the image, returning its index. By first using np.flip, we flip the vertical axis so that the "first" pixel is the one at the bottom. The index then corresponds to the number of pixels between the bottom of the image and the line (not including the set pixel itself).
Thus, we're computing the area under the line. If you need to include the line itself in the area, add pos.shape[0] (the number of columns) to the area, as shown below.
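For example, to also count the line pixels (one set pixel per column, assuming the line has no gaps):
area = area + pos.shape[0]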

OpenCV - Extracting lines on a graph

I would like to create a program that is able to extract lines from a graph.
For example, if a graph like this is input, I would want just the red line to be output.
Below I have tried to do this using a Hough line transformation; however, I do not get very promising results.
import cv2
import numpy as np
graph_img = cv2.imread("/Users/2020shatgiskessell/Desktop/Graph1.png")
gray = cv2.cvtColor(graph_img, cv2.COLOR_BGR2GRAY)
kernel_size = 5
#grayscale image
blur_gray = cv2.GaussianBlur(gray,(kernel_size, kernel_size),0)
#Canny edge detecion
edges = cv2.Canny(blur_gray, 50, 150)
#Hough Lines Transformation
#distance resoltion of hough grid (pixels)
rho = 1
#angular resolution of hough grid (radians)
theta = np.pi/180
#minimum number of votes
threshold = 15
#play around with these
min_line_length = 25
max_line_gap = 20
#make new image
line_image = np.copy(graph_img)
#returns array of lines
lines = cv2.HoughLinesP(edges, rho, theta, threshold, np.array([]),
                        min_line_length, max_line_gap)
for line in lines:
    for x1, y1, x2, y2 in line:
        cv2.line(line_image, (x1, y1), (x2, y2), (255, 0, 0), 2)
lines_edges = cv2.addWeighted(graph_img, 0.8, line_image, 1, 0)
cv2.imshow("denoised image",edges)
if cv2.waitKey(0) & 0xff == 27:
cv2.destroyAllWindows()
This produces the output image below, which does not accurately recognize the graph line. How might I go about doing this?
Note: for now, I am not concerned about the graph titles or any other text.
I would also like the code to work for other graph images as well, such as:
etc.
If the graph does not have much noise around it (like your example), I would suggest thresholding your image with an Otsu threshold instead of looking for edges. Then you simply find the contours, select the biggest one (the graph), and draw it on a blank mask. After that you can perform a bitwise operation on the image with the mask, and you will get a black image with the graph. If you like the white background better, simply change all black pixels to white. The steps are written in the example. Hope it helps a bit. Cheers!
Example:
import numpy as np
import cv2
# Read the image and create a blank mask
img = cv2.imread('graph.png')
h,w = img.shape[:2]
mask = np.zeros((h,w), np.uint8)
# Transform to gray colorspace and threshold the image
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray,0,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
# Search for contours and select the biggest one and draw it on mask
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)  # OpenCV 4.x returns (contours, hierarchy)
cnt = max(contours, key=cv2.contourArea)
cv2.drawContours(mask, [cnt], 0, 255, -1)
# Perform a bitwise operation
res = cv2.bitwise_and(img, img, mask=mask)
# Convert black pixels back to white
black = np.where(res==0)
res[black[0], black[1], :] = [255, 255, 255]
# Display the image
cv2.imshow('img', res)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result:
EDIT:
For noisier pictures you could try the code below. Note that different graphs have different noise, and this may not work on every graph image, since the denoising step is specific to each case. For different noise you can use different ways to remove it, for example histogram equalization, erosion, blurring, etc. This code works well for all 3 graphs. The steps are written in the comments. Hope it helps. Cheers!
import numpy as np
import cv2
# Read the image and create a blank mask
img = cv2.imread('graph.png')
h,w = img.shape[:2]
mask = np.zeros((h,w), np.uint8)
# Transform to gray colorspace and threshold the image
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray,0,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
# Perform opening on the thresholded image (erosion followed by dilation)
kernel = np.ones((2,2),np.uint8)
opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel)
# Search for contours and select the biggest one and draw it on mask
contours, hierarchy = cv2.findContours(opening, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)  # OpenCV 4.x returns (contours, hierarchy)
cnt = max(contours, key=cv2.contourArea)
cv2.drawContours(mask, [cnt], 0, 255, -1)
# Perform a bitwise operation
res = cv2.bitwise_and(img, img, mask=mask)
# Threshold the image again
gray = cv2.cvtColor(res,cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray,0,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
# Find all non-white pixels (non-zero in the inverted threshold)
non_zero = cv2.findNonZero(thresh)
# Turn those pixels white
for i in range(0, len(non_zero)):
    first_x = non_zero[i][0][0]
    first_y = non_zero[i][0][1]
    res[first_y, first_x] = 255
# Display the image
cv2.imshow('img', res)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result:

Reduce the image to the text contents using scikit-image

Here is the image from which I want to take the text out.
How do I remove the black border and reduce the image to only the "50"?
The approach I took:
I tried to use corner detectors (corner_peaks and corner_harris) and pick the first 2 coordinates from the left and the last 2 coordinates from the right.
With those 4 coordinates I cropped the image, and then reduced it further by 5 pixels on all sides.
Certainly not an efficient way of doing it. I also looked at a few segmentation approaches but was not able to get them right. I am using scikit-image to solve this.
Using corners might not work, since corner points can also be present in characters.
Here is what I tried with Hough lines, as described below:
1) First erode the image to minimize the gap between lines and characters
2) Use the Hough line detection algorithm to detect and delete the lines
3) Dilate the image to get clear characters
4) Now that the characters and lines are separated, delete the lines by finding the connected components.
Here is the Python implementation of the same:
img = cv2.imread(r'D:\Image\st1.png', 0)
ret, thresh = cv2.threshold(img, 150, 255, cv2.THRESH_BINARY_INV)
# erode the image to reduce the gap between characters and lines and get the Hough lines correctly
kernel = np.ones((3,3),np.uint8)
erosion = cv2.erode(thresh,kernel,iterations = 1)
#find canny edge image
canny = cv2.Canny(erosion,100,200)
minLineLength=img.shape[1]/4
lines = cv2.HoughLinesP(image=canny,rho=0.02,theta=np.pi/500, threshold=10,lines=np.array([]), minLineLength=minLineLength,maxLineGap=10)
a,b,c = lines.shape
# delete the lines
for i in range(a):
    cv2.line(erosion, (lines[i][0][0], lines[i][0][1]), (lines[i][0][2], lines[i][0][3]), 0, 3, cv2.LINE_AA)
# dilate the image to get clear characters back
kernel = np.ones((3,3),np.uint8)
erosion = cv2.dilate(erosion, kernel, iterations=1)
# find connected components
connectivity = 4
nb_components, output, stats, centroids = cv2.connectedComponentsWithStats(erosion, connectivity, cv2.CV_32S)
sizes = stats[1:, -1]; nb_components = nb_components - 1
min_size = 250  # threshold on component size: components smaller than this are dropped
img2 = np.zeros((output.shape), np.uint8)
for i in range(0, nb_components):
    if sizes[i] >= min_size:
        img2[output == i + 1] = 255  # keep only the large components (characters)
img = cv2.bitwise_not(img2)
Output image:

Define black region in HSV color space

I want to find the boundaries of the black region:
http://i40.tinypic.com/2lbi9s9.jpg
http://i44.tinypic.com/ka4vuc.jpg
I tried different values for black, but couldn't find an average value such that the region is thresholded correctly in both pictures.
One of the ranges is
inRange(src_HSV, Scalar(0, 0, 0), Scalar(180, 150, 50), src_HSV);
Another is
inRange(src_HSV, Scalar(100, 40, 140), Scalar(140, 160, 255), src_HSV);
I tried to search the Internet for values of black, but couldn't find anything suitable for this case, with its different tones of black.
Note that in HSV, black is defined as V=0, independently of H and S (in your case, you probably need to look for small values of V and S). I would ignore the H component.
inRange(src_HSV, Scalar(0, 0, 0), Scalar(179, 50, 100), src_HSV);
for black and grey shades.
Anyway, it is application specific.
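In Python, an equivalent sketch of that range check (the bounds are the same guesses as above, to be tuned per application):
import cv2
img = cv2.imread('scene.jpg')  # hypothetical filename
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# Black and grey shades: any hue, low saturation, low value
mask = cv2.inRange(hsv, (0, 0, 0), (179, 50, 100))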
Would this be an option (in RGB):
if ((red + green + blue) <= 64) {
    // black
} else {
    // not black
}
If not, you could try HSL (hue, saturation, lightness) values and treat a pixel as black if its lightness is below 10% ...
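A quick sketch of that lightness check using OpenCV's HLS conversion (the 10% cutoff is the suggestion above; note OpenCV scales lightness to 0-255):
import cv2
import numpy as np
img = cv2.imread('scene.jpg')  # hypothetical filename
hls = cv2.cvtColor(img, cv2.COLOR_BGR2HLS)
lightness = hls[:, :, 1]  # L channel: 0 (black) to 255 (white)
black_mask = (lightness < 0.10 * 255).astype(np.uint8) * 255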
