Detecting and removing tilt from barcode images using OpenCV

I want to create a generic tilt detection and correction program for barcode images using Python and OpenCV. Does anyone have an idea of how to achieve this?
See some example images below:
(example barcode images omitted)
I will greatly appreciate any help or guidance to achieve this.

I think you want a combination of Canny() and HoughLines(). Canny detects edges, and the Hough transform fits lines to those edges; each detected line comes with its angle (theta). You could then rotate the image by the average detected line angle, or something like that.
Example taken from:
https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_houghlines/py_houghlines.html
import cv2
import numpy as np
img = cv2.imread('dave.jpg')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray,50,150,apertureSize = 3)
lines = cv2.HoughLines(edges,1,np.pi/180,200)
for line in lines:
    # each entry is [[rho, theta]]; iterate over every detected line
    for rho, theta in line:
        a = np.cos(theta)
        b = np.sin(theta)
        x0 = a*rho
        y0 = b*rho
        # extend the line 1000 px in both directions for drawing
        x1 = int(x0 + 1000*(-b))
        y1 = int(y0 + 1000*(a))
        x2 = int(x0 - 1000*(-b))
        y2 = int(y0 - 1000*(a))
        cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 2)
cv2.imwrite('houghlines3.jpg',img)
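To actually remove the tilt, here is a minimal sketch of the deskew step (my own addition, not part of the tutorial): average the detected angles and rotate with warpAffine. The input path and the assumption that the bars are roughly vertical are both mine, and the rotation sign may need flipping for your images.

import cv2
import numpy as np

img = cv2.imread('barcode.jpg')  # hypothetical input path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150, apertureSize=3)
lines = cv2.HoughLines(edges, 1, np.pi/180, 200)
if lines is not None:
    # theta in [0, pi) is the angle of each line's normal; map values near pi
    # to small negative tilts so nearly-vertical bars average around zero
    angles = [t if t < np.pi/2 else t - np.pi
              for line in lines for rho, t in line]
    tilt_deg = np.rad2deg(np.mean(angles))
    h, w = img.shape[:2]
    M = cv2.getRotationMatrix2D((w/2, h/2), tilt_deg, 1.0)  # sign may need flipping
    deskewed = cv2.warpAffine(img, M, (w, h), borderValue=(255, 255, 255))
    cv2.imwrite('deskewed.jpg', deskewed)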

Related

HoughLinesP not detecting simple edges

I've been struggling with a program to get an Anki Vector robot to follow lines on the ground. I've narrowed the problem down to the fact that HoughLinesP won't detect two simple lines (see the edges image generated by the code). I've cut the program down to basics so as to offer it for comment. Any suggestions are most welcome. And yes, I've read the similar posts, but they don't seem to help.
import cv2
import os
import numpy as np
import time
dev = 1
img = cv2.imread('temp.png')
grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(grey,50,150,apertureSize = 3)
blank = np.zeros_like(grey)
maskArea = np.array([[(150, 250), (150,160), (450, 160), (450, 250)]], dtype=np.int32)
mask = cv2.fillPoly(blank, maskArea, 255)
maskedImage = cv2.bitwise_and(edges, mask)
cv2.imwrite('maskedImage.jpg',maskedImage) #save masked edges for diag
while True:
    lines = cv2.HoughLinesP(maskedImage, rho=6,
                            theta=np.pi/180,
                            threshold=100,
                            lines=np.array([]),
                            minLineLength=10,
                            maxLineGap=40)
    print("============")
    if lines is not None:
        radAngle = 0
        for i in range(0, len(lines)):
            for x1, y1, x2, y2 in lines[i]:
                if abs(y2 - y1) > 10:  # select verticals
                    #print("====", x1, y1, x2, y2)
                    cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 2)  # add line to image
                    radAngle += np.arctan2(x2 - x1, y2 - y1)
        #if len(lines) > 2: radAngle = radAngle/len(lines)
        degAngle = int(np.rad2deg(radAngle))
        if y1 > y2:
            degAngle -= 180
        print("degrees = ", degAngle)
        cv2.imwrite('houghlines.jpg', img)  # save image to disc
        disp = cv2.imread('houghlines.jpg')
        cv2.imshow('hough_lines', disp)  # display overlays on laptop
        cv2.waitKey(100)  # refresh display
    else:
        print("None")
    time.sleep(1)

Extract face rectangle from ID card

I’m researching how to extract information from ID cards and have found a suitable algorithm to locate the face on the front. As it is, OpenCV has Haar cascades for that, but I’m unsure what can be used to extract the full rectangle the person is in, instead of just the face (as is done in https://github.com/deepc94/photo-id-ocr). The few ideas that I’m yet to test are:
Find second largest rectangle that’s inside the card containing the face rect
“Explode” the face rectangle until it hits the card boundary
Play around with filters to see what can be seen
What can be recommended to try here as well? Any thoughts, ideas or even existing examples are fine.
Normal approach:
import cv2
import numpy as np
import matplotlib.pyplot as plt
image = cv2.imread("a.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
_,thresh = cv2.threshold(gray,128,255,cv2.THRESH_BINARY)
cv2.imshow("thresh",thresh)
thresh = cv2.bitwise_not(thresh)
element = cv2.getStructuringElement(shape=cv2.MORPH_RECT, ksize=(7, 7))
dilate = cv2.dilate(thresh, element, iterations=6)
cv2.imshow("dilate", dilate)
erode = cv2.erode(dilate, element, iterations=6)
cv2.imshow("erode", erode)
morph_img = thresh.copy()
cv2.morphologyEx(src=erode, op=cv2.MORPH_CLOSE, kernel=element, dst=morph_img)
cv2.imshow("morph_img", morph_img)
# OpenCV 3.x API; OpenCV 4.x returns just (contours, hierarchy)
_, contours, _ = cv2.findContours(morph_img, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
areas = [cv2.contourArea(c) for c in contours]
sorted_areas = np.sort(areas)
cnt=contours[areas.index(sorted_areas[-3])] #the third biggest contour is the face
r = cv2.boundingRect(cnt)
cv2.rectangle(image,(r[0],r[1]),(r[0]+r[2],r[1]+r[3]),(0,0,255),2)
cv2.imshow("img",image)
cv2.waitKey(0)
cv2.destroyAllWindows()
I found that the two biggest contours are the card boundary, and the third biggest is the face. Result:
There is also another way to investigate the image, using the sums of pixel values along each axis:
x_hist = np.sum(morph_img,axis=0).tolist()
plt.plot(x_hist)
plt.ylabel('sum of pixel values by X-axis')
plt.show()
y_hist = np.sum(morph_img,axis=1).tolist()
plt.plot(y_hist)
plt.ylabel('sum of pixel values by Y-axis')
plt.show()
Based on those pixel sums over the two axes, you can crop the region you want by setting thresholds on them.
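For example, continuing from the snippet above, a minimal sketch (my own addition; the 0.2 factor is a guess to tune) of turning the projections into crop bounds:

# find the span where each projection exceeds a fraction of its peak,
# then crop to the bounding box of those spans
x_hist = np.sum(morph_img, axis=0)
y_hist = np.sum(morph_img, axis=1)
x_idx = np.where(x_hist > x_hist.max() * 0.2)[0]
y_idx = np.where(y_hist > y_hist.max() * 0.2)[0]
crop = image[y_idx[0]:y_idx[-1] + 1, x_idx[0]:x_idx[-1] + 1]
cv2.imshow("crop", crop)
cv2.waitKey(0)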
Haar cascades approach (the simplest)
# Using cascade Classifiers
import numpy as np
import cv2
# We point OpenCV's CascadeClassifier function to where our
# classifier (XML file format) is stored
face_classifier = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
# Load our image then convert it to grayscale
image = cv2.imread('./your/image/path.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
cv2.imshow('Original image', image)
# Our classifier returns the ROI of the detected face as a tuple
# storing the top-left and bottom-right coordinates
faces = face_classifier.detectMultiScale(gray, 1.3, 5)
# When no faces are detected, detectMultiScale returns an empty tuple
if len(faces) == 0:
    print("No faces found")
# We iterate through our faces array and draw a rectangle
# over each face in faces
for (x, y, w, h) in faces:
    x = x - 25  # padding trick to take the whole face, not just the Haar cascade ROI
    y = y - 40  # same here...
    cv2.rectangle(image, (x, y), (x + w + 50, y + h + 70), (27, 200, 10), 2)
    cv2.imshow('Face Detection', image)
cv2.waitKey(0)
cv2.destroyAllWindows()
Link to the haarcascade_frontalface_default file
An update to @Sanix darker's code:
# Using cascade Classifiers
import numpy as np
import cv2
img = cv2.imread('link_to_your_image')
face_classifier = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
scale_percent = 60 # percent of original size
width = int(img.shape[1] * scale_percent / 100)
height = int(img.shape[0] * scale_percent / 100)
dim = (width, height)
# resize image
image = cv2.resize(img, dim, interpolation = cv2.INTER_AREA)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# face classifier
faces = face_classifier.detectMultiScale(gray, 1.3, 5)
# When no faces are detected, detectMultiScale returns an empty tuple
if len(faces) == 0:
    print("No faces found")
# We iterate through our faces array and draw a rectangle
# over each face in faces
for (x, y, w, h) in faces:
    x = x - 25  # padding trick to take the whole face, not just the Haar cascade ROI
    y = y - 40  # same here...
    cv2.rectangle(image, (x, y), (x + w + 50, y + h + 70), (27, 200, 10), 2)
    cv2.imshow('Face Detection', image)
cv2.waitKey(0)
cv2.destroyAllWindows()
# if you want to crop the face, use the code below
for (x, y, width, height) in faces:
    roi = image[y:y+height, x:x+width]
    cv2.imwrite("face.png", roi)

How to detect test strips with OpenCV?

I'm a newbie to computer vision, and I'm trying to detect all the test strips in this image:
The result I'm trying to get:
I assume it should be very easy, because all the target objects are rectangular and have a fixed aspect ratio. But I have no idea which algorithm or function I should use.
I've tried edge detection and the 2D feature detection example in OpenCV, but the result is not ideal. How should I detect these similar objects, which differ only in small details?
Update:
The test strips can vary in color, and of course in the shade of the result lines. But they all have the same reference lines, as shown in the picture:
I don't know how I should describe these simple features for object detection, as most examples I found online are for complex objects like a building or a face.
The solution is not exact, but it provides a good starting point. You have to play with the parameters, though. It would greatly help if you partitioned the strips using some thresholding method and then applied Hough lines to each one individually, as @api55 mentioned.
Here are the results I got.
Code:
import cv2
import numpy as np
# read image
img = cv2.imread('KbxN6.jpg')
# filter it
img = cv2.GaussianBlur(img, (11, 11), 0)
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# get edges using laplacian
laplacian_val = cv2.Laplacian(gray_img, cv2.CV_32F)
# lap_img = np.zeros_like(laplacian_val, dtype=np.float32)
# cv2.normalize(laplacian_val, lap_img, 1, 255, cv2.NORM_MINMAX)
# cv2.imwrite('laplacian_val.jpg', lap_img)
# apply threshold to edges
ret, laplacian_th = cv2.threshold(laplacian_val, thresh=2, maxval=255, type=cv2.THRESH_BINARY)
# filter out salt and pepper noise
laplacian_med = cv2.medianBlur(laplacian_th, 5)
# cv2.imwrite('laplacian_blur.jpg', laplacian_med)
laplacian_fin = np.array(laplacian_med, dtype=np.uint8)
# get lines in the filtered laplacian using Hough lines
lines = cv2.HoughLines(laplacian_fin,1,np.pi/180,480)
for line in lines:
    for rho, theta in line:
        a = np.cos(theta)
        b = np.sin(theta)
        x0 = a*rho
        y0 = b*rho
        x1 = int(x0 + 1000*(-b))
        y1 = int(y0 + 1000*(a))
        x2 = int(x0 - 1000*(-b))
        y2 = int(y0 - 1000*(a))
        # overlay line on original image
        cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 2)
# cv2.imwrite('processed.jpg', img)
# cv2.imshow('Window', img)
# cv2.waitKey(0)
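As a rough illustration of the partition-then-Hough idea mentioned above, here is a minimal sketch of my own (the area threshold, Hough threshold, and threshold polarity are all assumptions to tune): isolate each strip with Otsu thresholding plus connected components, then run HoughLines on each strip separately.

import cv2
import numpy as np

img = cv2.imread('KbxN6.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# may need THRESH_BINARY_INV instead, depending on which side is brighter
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
for i in range(1, n):                      # label 0 is the background
    x, y, w, h, area = stats[i]
    if area < 1000:                        # skip small blobs (tunable)
        continue
    strip_edges = cv2.Canny(gray[y:y+h, x:x+w], 50, 150)
    lines = cv2.HoughLines(strip_edges, 1, np.pi/180, 100)
    print(i, 0 if lines is None else len(lines))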
This is an alternative solution using the function findContours in combination with Canny edge detection. The code is loosely based on this tutorial.
import cv2
import numpy as np
import imutils
image = cv2.imread('test.jpg')
resized = imutils.resize(image, width=300)
ratio = image.shape[0] / float(resized.shape[0])
# convert the resized image to grayscale, blur it slightly,
# and threshold it
gray = cv2.cvtColor(resized, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(resized,100,200)
cv2.imshow('dsd2', edges)
cv2.waitKey(0)
cnts = cv2.findContours(edges.copy(), cv2.RETR_EXTERNAL,
                        cv2.CHAIN_APPROX_NONE)
cnts = imutils.grab_contours(cnts)  # handles the differing return formats of OpenCV 2/3/4
# loop over the contours
for c in cnts:
    # compute the center of the contour (skip degenerate contours with zero area)
    M = cv2.moments(c)
    if M["m00"] == 0:
        continue
    cX = int((M["m10"] / M["m00"]) * ratio)
    cY = int((M["m01"] / M["m00"]) * ratio)
    # multiply the contour (x, y)-coordinates by the resize ratio,
    # then draw the contour on the original image
    c = c.astype("float")
    c *= ratio
    c = c.astype("int")
    cv2.drawContours(image, [c], -1, (0, 255, 0), 2)
#show the output image
#cv2.imshow("Image", image)
#cv2.waitKey(0)
cv2.imwrite("erg.jpg",image)
Result:
I guess it can be improved by tuning the following parameters:
image resizing width
CHAIN_APPROX_NONE (findContours docs)
It may also be useful to filter out small contours, or to merge contours that are close to each other; a minimal filter is sketched below.
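A one-liner along those lines (my suggestion; the area threshold of 500 is a hypothetical starting value to tune):

# keep only contours above a minimum area before the drawing loop
cnts = [c for c in cnts if cv2.contourArea(c) > 500]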

Hough space for cv2 houghlines

I am having some trouble with cv2.HoughLines() showing vertical lines when I believe that the real fit should produce horizontal lines.
Here is a clip of the code I am using:
rho_resolution = 1
theta_resolution = np.pi/180
threshold = 200
lines = cv2.HoughLines(image, rho_resolution, theta_resolution, threshold)
# print(lines)
for line in lines:
    rho, theta = line[0]
    a = np.cos(theta)
    b = np.sin(theta)
    x0 = a*rho
    y0 = b*rho
    x1 = int(x0 + 1000*(-b))
    y1 = int(y0 + 1000*(a))
    x2 = int(x0 - 1000*(-b))
    y2 = int(y0 - 1000*(a))
    cv2.line(image, (x1, y1), (x2, y2), (255, 255, 255), 1)
cv2.namedWindow('thing', cv2.WINDOW_NORMAL)
cv2.imshow("thing", image)
cv2.waitKey(0)
This is the input and output:
I think it would be easier to extract out what is occurring if the Hough space image could be viewed.
However, the documentation does not provide information for how to show the full hough space.
How would one show the whole Hough transform space?
I attempted reducing the threshold to 1 but it did not provide an image.
Maybe you got something wrong when calculating the angles. Feel free to show some code.
Here is an example of how to show all Hough lines in an image:
import cv2
import numpy as np
img = cv2.imread('sudoku.jpg')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray,50,150,apertureSize = 3)
lines = cv2.HoughLines(edges,1,np.pi/180,200)
for line in lines:
    for rho, theta in line:
        a = np.cos(theta)
        b = np.sin(theta)
        x0 = a*rho
        y0 = b*rho
        x1 = int(x0 + 1000*(-b))
        y1 = int(y0 + 1000*(a))
        x2 = int(x0 - 1000*(-b))
        y2 = int(y0 - 1000*(a))
        cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 2)
cv2.imshow('Houghlines', img)
if cv2.waitKey(0) & 0xff == 27:
    cv2.destroyAllWindows()
Original image:
Result:
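Since the question also asks how to view the Hough space itself, here is a minimal sketch of my own (the standard OpenCV API does not expose its accumulator, so this builds the rho-theta vote space with NumPy and shows it as an image; 'sudoku.jpg' is the example image from the answer above):

import cv2
import numpy as np

edges = cv2.Canny(cv2.imread('sudoku.jpg', cv2.IMREAD_GRAYSCALE), 50, 150)
ys, xs = np.nonzero(edges)                   # edge pixel coordinates
thetas = np.deg2rad(np.arange(180))          # 1-degree theta resolution
diag = int(np.ceil(np.hypot(*edges.shape)))  # largest possible |rho|
acc = np.zeros((2 * diag, len(thetas)), dtype=np.int32)
cos_t, sin_t = np.cos(thetas), np.sin(thetas)
for x, y in zip(xs, ys):
    rhos = np.round(x * cos_t + y * sin_t).astype(int) + diag  # shift rho >= 0
    acc[rhos, np.arange(len(thetas))] += 1   # one vote per (rho, theta)
acc_img = cv2.normalize(acc, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imshow('Hough space (rows = rho, cols = theta)', acc_img)
cv2.waitKey(0)

Bright spots in this accumulator image correspond to the lines that HoughLines returns, which makes it easier to see why vertical rather than horizontal fits are winning.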

Hough Line: Detect ticks on the image

I am trying to detect the ticks on the following image using the Hough line transformation:
I am using the following simple OpenCV code:
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
cv2.imwrite('original.jpg',img)
edges = cv2.Canny(gray,50,150,apertureSize = 3)
lines = cv2.HoughLines(edges, 1, np.pi/180, 200)
for line in lines:
    for rho, theta in line:
        a = np.cos(theta)
        b = np.sin(theta)
        x0 = a*rho
        y0 = b*rho
        x1 = int(x0 + 1000*(-b))
        y1 = int(y0 + 1000*(a))
        x2 = int(x0 - 1000*(-b))
        y2 = int(y0 - 1000*(a))
        cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 2)
I am getting the following output:
I wanted to detect the ticks, but instead it detected the lines. How can I solve it? Any help or suggestion will be appreciated.
I'm not sure what you mean by "ticks"; I guess the green and red lines?
Using the C++ API and HoughLinesP:
function call:
cv::HoughLinesP(edges, lines, 1, CV_PI/720.0, 30, 10 /* min-length */, 1 /* max gap */);
Canny:
cv::Mat edges;
cv::Canny(gray, edges, 50, 150, 3);
I get this result for Canny (edge image omitted), and that is why the result (image omitted) looks the way it does.
But using edges from a simple threshold instead:
edges = gray > 50;
Edge image and result: (images omitted)
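For readers following the rest of this thread in Python, a rough equivalent of the above (my translation, untested against the original images; the input path is hypothetical):

import cv2
import numpy as np

img = cv2.imread('ticks.png')                  # hypothetical input path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = np.uint8(gray > 50) * 255              # threshold-based edges, as above
lines = cv2.HoughLinesP(edges, 1, np.pi/720.0, 30,
                        minLineLength=10, maxLineGap=1)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 1)
cv2.imwrite('ticks_detected.png', img)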
