Hough Lines Detection inconsistent from frame to frame - opencv

The Hough lines (red and white) I get vary from one video frame to the next, even though the scene is static.
There is also a lot of variation in the Canny results from frame to frame. The problem is not so bad here with my test case, but for a real street scene the Canny-detected edges really go nuts from frame to frame.
As can be seen, many lines also simply get missed.
I realize that the noise is different from frame to frame, but the conversion to grayscale and the subsequent blur make the input images very close (at least to my eye).
What is going on, and is there any way to fix this?
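To check whether the blurred inputs really are as close frame-to-frame as they look, one quick sanity check (a minimal sketch, not part of the original script) is to difference consecutive pre-processed frames and look at the residual noise:

import cv2

cap = cv2.VideoCapture(0)

_, prev = cap.read()
prev = cv2.medianBlur(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY), 5)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    cur = cv2.medianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 5)
    diff = cv2.absdiff(cur, prev)  # per-pixel difference between consecutive blurred frames
    print("mean abs diff: %.2f, max: %d" % (diff.mean(), diff.max()))
    cv2.imshow("frame-to-frame difference", diff)
    prev = cur
    if cv2.waitKey(1) == 27:
        break

cap.release()
cv2.destroyAllWindows()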
# Python 2/3 compatibility
import sys
PY3 = sys.version_info[0] == 3
if PY3:
    xrange = range

import numpy as np
import cv2
import math
from time import sleep

cap = cv2.VideoCapture(0)

if __name__ == '__main__':
    SLOPE = 2.0
    while True:
        sleep(0.2)
        ret, src = cap.read()
        gray = cv2.cvtColor(src, cv2.COLOR_BGR2GRAY)
        gray_blur = cv2.medianBlur(gray, 5)
        gray_blur_canny = cv2.Canny(gray_blur, 25, 150)
        cv2.imshow("src", src)
        cv2.imshow("gray_blur", gray_blur)
        cv2.imshow("gray_blur_canny", gray_blur_canny)
        cimg = src.copy()  # numpy function
        lines = cv2.HoughLinesP(
            gray_blur_canny,
            1,
            math.pi/180.0,
            40,
            np.array([]),
            50,
            10)
        if lines is not None:
            a, b, c = lines.shape
            for i in range(a):
                numer = lines[i][0][3] - lines[i][0][1] + 0.001
                denom = lines[i][0][2] - lines[i][0][0]
                if denom == 0:
                    denom = 0.001
                slope = abs(numer/denom)
                print(slope)
                if slope > SLOPE:
                    cv2.line(
                        cimg,
                        (lines[i][0][0], lines[i][0][1]),
                        (lines[i][0][2], lines[i][0][3]),
                        (0, 0, 255),
                        3,
                        cv2.LINE_AA)
                if slope < (1.0/SLOPE):
                    cv2.line(
                        cimg,
                        (lines[i][0][0], lines[i][0][1]),
                        (lines[i][0][2], lines[i][0][3]),
                        (200, 200, 200),
                        3,
                        cv2.LINE_AA)
        cv2.imshow("hough lines", cimg)
        ch = cv2.waitKey(1)
        if ch == 27:
            break
    cv2.destroyAllWindows()


Adjusting pytesseract parameters

Note: I am migrating this question from Data Science Stack Exchange, where it received little exposure.
I am trying to implement an OCR solution to identify the numbers read from the picture of a screen.
I am adapting this pyimagesearch tutorial to my problem.
Because I am dealing with a dark background, I first invert the image, before converting it to grayscale and thresholding it:
inverted_cropped_image = cv2.bitwise_not(cropped_image)
gray = get_grayscale(inverted_cropped_image)
thresholded_image = cv2.threshold(gray, 100, 255, cv2.THRESH_BINARY)[1]
Then I call pytesseract's image_to_data function to output a dictionary containing the different text regions and their confidence scores:
from pytesseract import Output
results = pytesseract.image_to_data(thresholded_image, output_type=Output.DICT)
Finally, I iterate over results and plot them when their confidence exceeds a user-defined threshold (70%). What bothers me is that my script identifies everything in the image except the number that I would like to recognize (1227.938).
My first guess is that the image_to_data parameters are not set properly.
Checking this website, I selected a page segmentation mode (psm) of 11 (sparse text) and tried whitelisting numbers only (tessedit_char_whitelist=0123456789m.):
results = pytesseract.image_to_data(thresholded_image, config='--psm 11 --oem 3 -c tessedit_char_whitelist=0123456789m.', output_type=Output.DICT)
Alas, this is even worse, and the script now identifies nothing at all!
Do you have any suggestion? Am I missing something obvious here?
EDIT #1:
At Ann Zen's request, here's the code used to obtain the first image:
import imutils
import cv2
import matplotlib.pyplot as plt
import numpy as np
import pytesseract
from pytesseract import Output

def get_grayscale(image):
    return cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

filename = "IMAGE.JPG"
cropped_image = cv2.imread(filename)
inverted_cropped_image = cv2.bitwise_not(cropped_image)
gray = get_grayscale(inverted_cropped_image)
thresholded_image = cv2.threshold(gray, 100, 255, cv2.THRESH_BINARY)[1]
results = pytesseract.image_to_data(thresholded_image, config='--psm 11 --oem 3 -c tessedit_char_whitelist=0123456789m.', output_type=Output.DICT)
color = (255, 255, 255)

for i in range(0, len(results["text"])):
    x = results["left"][i]
    y = results["top"][i]
    w = results["width"][i]
    h = results["height"][i]
    text = results["text"][i]
    conf = int(results["conf"][i])
    print("Confidence: {}".format(conf))
    if conf > 70:
        print("Confidence: {}".format(conf))
        print("Text: {}".format(text))
        print("")
        text = "".join([c if ord(c) < 128 else "" for c in text]).strip()
        cv2.rectangle(cropped_image, (x, y), (x + w, y + h), color, 2)
        cv2.putText(cropped_image, text, (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 1.2, color, 3)

cv2.imshow('Image', cropped_image)
cv2.waitKey(0)
EDIT #2:
Rarely have I spent reputation points so well! All three replies posted so far helped me refine my algorithm.
First, I wrote a Tkinter program allowing me to manually crop the image around the number of interest (modifying the one found in this SO post).
Then I used Ann Zen's idea of narrowing down the search area around the fractional part. I am using her nifty process function to prepare my grayscale image for contour extraction: contours, _ = cv2.findContours(process(img_gray), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE). I am using RETR_EXTERNAL to avoid dealing with overlapping bounding rectangles.
I then sorted my contours from left to right. Bounding rectangles exceeding a user-defined threshold are associated with the integral part (white rectangles); otherwise they are associated with the fractional part (black rectangles).
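For reference, here is a minimal sketch of that left-to-right sort and height-based split. The file name, threshold value, and HEIGHT_THRESH cutoff are placeholders (my actual pipeline uses Ann Zen's process() helper shown further down rather than the plain threshold here):

import cv2

HEIGHT_THRESH = 40  # made-up cutoff: tall boxes = integral digits, short boxes = fractional digits

img_gray = cv2.cvtColor(cv2.imread("cropped.png"), cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(img_gray, 100, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Sort bounding rectangles left to right, then split them by height.
boxes = sorted((cv2.boundingRect(c) for c in contours), key=lambda b: b[0])
integral = [b for b in boxes if b[3] > HEIGHT_THRESH]      # white rectangles
fractional = [b for b in boxes if b[3] <= HEIGHT_THRESH]   # black rectangles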
I then extracted the characters using Esraa's approach, i.e. applying a Gaussian blur prior to calling Tesseract. I used a much larger kernel (15x15 vs. 3x3) to achieve this, roughly as sketched below.
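A minimal sketch of that last step (the crop file name, threshold value, and PSM are placeholders; the 15x15 kernel is the one mentioned above):

import cv2
import pytesseract

# Hypothetical crop around the digits of interest, loaded as grayscale.
roi = cv2.imread("fractional_part.png", cv2.IMREAD_GRAYSCALE)

# Heavy 15x15 Gaussian blur before binarizing, per Esraa's suggestion.
blurred = cv2.GaussianBlur(roi, (15, 15), 0)
_, binarized = cv2.threshold(blurred, 100, 255, cv2.THRESH_BINARY)

text = pytesseract.image_to_string(
    binarized, config='--psm 6 --oem 3 -c tessedit_char_whitelist=0123456789.')
print(text.strip())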
I am not out of the woods yet, but hopefully I will get better results by using Ahx's adaptive thresholding.
The Concept
As you have probably heard, pytesseract is not good at detecting text of different sizes on the same line as one piece of text. In your case, you want to detect the 1227.938, where the 1227 is much larger than the .938.
One way to go about solving this is to have the program estimate where the .938 is and enlarge that part of the image. After that, pytesseract will have no problem returning the text.
The Code
import cv2
import numpy as np
import pytesseract

def process(img):
    img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    _, thresh = cv2.threshold(img_gray, 200, 255, cv2.THRESH_BINARY)
    img_canny = cv2.Canny(thresh, 100, 100)
    kernel = np.ones((3, 3))
    img_dilate = cv2.dilate(img_canny, kernel, iterations=2)
    return cv2.erode(img_dilate, kernel, iterations=2)

img = cv2.imread("image.png")
img_copy = img.copy()
hh = 50

contours, _ = cv2.findContours(process(img), cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
for cnt in contours:
    if 20 * hh < cv2.contourArea(cnt) < 30 * hh:
        x, y, w, h = cv2.boundingRect(cnt)
        ww = int(hh / h * w)
        src_seg = img[y: y + h, x: x + w]
        dst_seg = img_copy[y: y + hh, x: x + ww]
        h_seg, w_seg = dst_seg.shape[:2]
        dst_seg[:] = cv2.resize(src_seg, (ww, hh))[:h_seg, :w_seg]

gray = cv2.cvtColor(img_copy, cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray, 180, 255, cv2.THRESH_BINARY)

results = pytesseract.image_to_data(thresh)
for b in map(str.split, results.splitlines()[1:]):
    if len(b) == 12:
        x, y, w, h = map(int, b[6: 10])
        cv2.putText(img, b[11], (x, y + h + 15), cv2.FONT_HERSHEY_COMPLEX, 0.6, 0)

cv2.imshow("Result", img)
cv2.waitKey(0)
The Output
Here is the input image:
And here is the output image:
As you have said in your post, the only part you need is the decimal 1227.938. If you want to filter out the rest of the detected text, you can try tweaking some parameters. For example, replacing the 180 in _, thresh = cv2.threshold(gray, 180, 255, cv2.THRESH_BINARY) with 230 will result in the output image:
The Explanation
Import the necessary libraries:
import cv2
import numpy as np
import pytesseract
Define a function, process(), that takes in an image array and returns a binary version of it, processed so that contours can be detected properly:
def process(img):
    img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    _, thresh = cv2.threshold(img_gray, 200, 255, cv2.THRESH_BINARY)
    img_canny = cv2.Canny(thresh, 100, 100)
    kernel = np.ones((3, 3))
    img_dilate = cv2.dilate(img_canny, kernel, iterations=2)
    return cv2.erode(img_dilate, kernel, iterations=2)
I'm sure that you don't have to do this, but due to a problem in my environment, I have to add pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe' before I can call the pytesseract.image_to_data() method, or it throws an error:
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe'
Read in the original image, make a copy of it, and define the rough height of the large part of the decimal:
img = cv2.imread("image.png")
img_copy = img.copy()
hh = 50
Detect the contours of the processed version of the image, and add a rough area filter so that only the contours of the small text remain:
contours, _ = cv2.findContours(process(img), cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
for cnt in contours:
    if 20 * hh < cv2.contourArea(cnt) < 30 * hh:
Define the bounding box of each contour that didn't get filtered out, and use the properties to enlarge those parts of the image to the height defined for the large text (making sure to also scale the width accordingly):
x, y, w, h = cv2.boundingRect(cnt)
ww = int(hh / h * w)
src_seg = img[y: y + h, x: x + w]
dst_seg = img_copy[y: y + hh, x: x + ww]
h_seg, w_seg = dst_seg.shape[:2]
dst_seg[:] = cv2.resize(src_seg, (ww, hh))[:h_seg, :w_seg]
Finally, we can use the pytesseract.image_to_data() method to detect the text. Of course, we'll need to threshold the image again:
gray = cv2.cvtColor(img_copy, cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray, 180, 255, cv2.THRESH_BINARY)
results = pytesseract.image_to_data(thresh)
for b in map(str.split, results.splitlines()[1:]):
    if len(b) == 12:
        x, y, w, h = map(int, b[6: 10])
        cv2.putText(img, b[11], (x, y + h + 15), cv2.FONT_HERSHEY_COMPLEX, 0.6, 0)

cv2.imshow("Result", img)
cv2.waitKey(0)
I have been working with Tesseract for quite some time, so let me clarify something for you. Tesseract is most helpful for recognizing text in documents, more so than for other computer-vision tasks. It usually needs a binarized image to produce good output, so you will always need some image pre-processing.
However, after several trials in the past with all page segmentation modes, I realized that it fails when the font size differs on the same line without a space between the parts. Sometimes PSM 6 helps if the difference is small, but in your case you may want an alternative. If you don't care about the decimals, you can try the following solution:
import cv2
import pytesseract

img = cv2.imread(r'E:\Downloads\Iwzrg.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
img_blur = cv2.GaussianBlur(gray, (3, 3), 0)
_, thresh = cv2.threshold(img_blur, 200, 255, cv2.THRESH_BINARY_INV)

# If using a fixed camera, crop to the region containing the integral part
new_img = thresh[0:100, 80:320]

text = pytesseract.image_to_string(new_img, lang='eng', config='--psm 6 --oem 3 -c tessedit_char_whitelist=0123456789')
OUTPUT: 1227
I would like to recommend applying another image processing method.
Because I am dealing with a dark background, I first invert the image, before converting it to grayscale and thresholding it:
You applied global thresholding but couldn't achieve the desired result.
You can instead apply either adaptive thresholding or inRange.
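For the adaptive-thresholding route, a minimal sketch (the block size and offset below are guesses you would need to tune):

import cv2

gray = cv2.cvtColor(cv2.imread("Iwzrg.png"), cv2.COLOR_BGR2GRAY)

# blockSize=11 and C=2 are placeholder values; tune them to the screen image.
adaptive = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY_INV, 11, 2)

cv2.imshow("adaptive", adaptive)
cv2.waitKey(0)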
For the given image, if we apply the inRange threshold:
To recognize the text as accurately as possible, we can add a border to the top of the image and resize the image (optional).
In the OCR section, check whether the detected region contains a digit:
if text.isdigit():
Then display on the image:
The result is nearly the desired value. Now you can try with the other suggested methods to find the exact value.
The problem is that .938 is recognized as 235; resizing with different values might improve the result.
Code:
from cv2 import imread, cvtColor, COLOR_BGR2HSV as HSV, inRange, getStructuringElement, resize
from cv2 import imshow, waitKey, MORPH_RECT, dilate, bitwise_and, rectangle, putText
from cv2 import copyMakeBorder as addBorder, BORDER_CONSTANT as CONSTANT, FONT_HERSHEY_SIMPLEX
from numpy import array
from pytesseract import image_to_data, Output
bgr = imread("Iwzrg.png")
resized = resize(bgr, (800, 600), fx=0.75, fy=0.75)
bordered = addBorder(resized, 200, 0, 0, 0, CONSTANT, value=0)
hsv = cvtColor(bordered, HSV)
mask = inRange(hsv, array([0, 0, 250]), array([179, 255, 255]))
kernel = getStructuringElement(MORPH_RECT, (50, 30))
dilated = dilate(mask, kernel, iterations=1)
thresh = 255 - bitwise_and(dilated, mask)
data = image_to_data(thresh, output_type=Output.DICT)
for i in range(0, len(data["text"])):
    x = data["left"][i]
    y = data["top"][i]
    w = data["width"][i]
    h = data["height"][i]
    text = data["text"][i]
    if text.isdigit():
        print("Text: {}".format(text))
        print("")
        text = "".join([c if ord(c) < 128 else "" for c in text]).strip()
        rectangle(thresh, (x, y), (x + w, y + h), (0, 255, 0), 2)
        putText(thresh, text, (x, y - 10), FONT_HERSHEY_SIMPLEX, 1.2, (0, 0, 255), 3)

imshow("", thresh)
waitKey(0)

OpenCv Get edge distance to circle center

A bit of an intro: I need to make a visual aid for aligning sheets against fixed points.
My setup has 3 points; a sheet-metal plate needs to be positioned against these points using a forklift.
It's a delicate task. We can't use brute force to align the sheet, so I want to install cameras to help the operators align their sheet-metal plate.
Code so far:
import sys
import cv2 as cv
import numpy as np

cap = cv.VideoCapture(0)
val = 50

while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()
    gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
    gray = cv.GaussianBlur(gray, (5,5), 0)
    rows = gray.shape[0]
    circles = cv.HoughCircles(gray, cv.HOUGH_GRADIENT, 1, rows / 8,
                              param1=100, param2=30,
                              minRadius=1, maxRadius=30)
    edges = cv.Canny(gray, val, val*3, apertureSize=3)
    lines = cv.HoughLines(edges, 1.2, np.pi/180, 200)

    font = cv.FONT_HERSHEY_SIMPLEX
    color = (255, 255, 255)
    thickness = 2
    index = 1

    if len(circles[0]) > 2:
        circles = np.uint16(np.floor(circles))
        circles2 = sorted(circles[0], key=lambda x: x[0], reverse=False)
        print(circles2)
        for i in circles2:
            center = (i[0], i[1])
            cv.circle(frame, center, 1, (0, 255, 0), 3)
            text = str(index) + ' ' + str(i[0]) + ' ' + str(i[1])
            cv.putText(frame, text, center, font, 1, color, thickness, cv.LINE_AA)
            index += 1

    cv.imshow("detected circles", frame)
    cv.imshow("detected edges", edges)

    if cv.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv.destroyAllWindows()
So the points are found. Now I somehow need to find the first 255 value in edges directly 'above' the 2nd and 3rd points, and the last 255 value next to the first point.
I'm struggling to slice the array, find the value 255, and return its index, so that I can calculate the distance between the point and the plate.
Any ideas on how to get that distance?
Thank you in advance.
I got it.
The code:
import sys
import cv2 as cv
import numpy as np

cap = cv.VideoCapture(0)
val = 50
singleprint = 0

# Dots per millimeter
dpmm = 2

def distance(circle):
    # Calculating the distance with np.where(array[row, column])
    p = 0
    if axis == 1:
        p = np.where(edges[:, circle[0]] == 255)[0][0]
        return (circle[1] - p - circle[2]) / dpmm
    else:
        p = np.where(edges[circle[1], :] == 255)[0][-1]
        return (p - circle[0] - circle[2]) / dpmm

while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()
    gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
    gray = cv.GaussianBlur(gray, (5,5), 0)
    rows = gray.shape[0]
    circles = cv.HoughCircles(gray, cv.HOUGH_GRADIENT, 1, rows / 8,
                              param1=100, param2=30,
                              minRadius=1, maxRadius=30)
    edges = cv.Canny(gray, val, val*3, apertureSize=3)
    lines = cv.HoughLines(edges, 1.2, np.pi/180, 200)

    # Text properties
    font = cv.FONT_HERSHEY_SIMPLEX
    color = (255, 255, 255)
    thickness = 2
    index = 1
    axis = 0

    if len(circles[0]) > 2:
        circles = np.uint16(np.floor(circles))
        circles2 = sorted(circles[0], key=lambda x: x[0], reverse=False)
        for i in circles2:
            center = (i[0], i[1])
            cv.circle(frame, center, 1, (0, 255, 0), 3)
            text = str(distance(i))
            cv.putText(frame, text, center, font, 1, color, thickness, cv.LINE_AA)
            index += 1
            axis = 1

    cv.imshow("detected circles", frame)

    if cv.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
cap.release()
cv.destroyAllWindows()
The key was learning to use NumPy on a specific row or column:
np.where(edges[:,circle[0]] == 255)[0][0]
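In other words (a tiny self-contained illustration with a made-up edge image):

import numpy as np

# Tiny fake edge image: 0 = background, 255 = detected edge pixel.
edges = np.zeros((10, 10), dtype=np.uint8)
edges[3, :] = 255          # a horizontal edge on row 3
edges[:, 7] = 255          # a vertical edge on column 7

cx, cy = 5, 8              # a circle centre below/left of those edges

# First white pixel in column cx, scanning from the top: row index 3.
first_above = np.where(edges[:, cx] == 255)[0][0]

# Last white pixel in row cy, scanning from the left: column index 7.
last_right = np.where(edges[cy, :] == 255)[0][-1]

print(first_above, last_right)   # -> 3 7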
resource: https://youtu.be/GB9ByFAIAH4?t=1103
Hope this helps others.
Thanks all

How to get the face temperature in opencv python

I'm new to the forum and I certainly don't know the rules.
But I have a question: I want to determine the temperature of a face with a Raspberry Pi 4 and two cameras, one normal and the other thermal (MLX90640). My question is how I can determine the temperature of the face with the thermal camera every time the normal camera detects a face. I wrote some code, but it measures the temperature of the environment, not that of the face. I tried a correction, but I get an error message, which I have posted below. Thank you.
import cv2,io,imutils
import numpy as np
from imutils.video import VideoStream
import time
import board
import busio
import adafruit_mlx90640
import random
import math
PRINT_TEMPERATURES = True
i2c = busio.I2C(board.SCL, board.SDA, frequency=800000)
mlx = adafruit_mlx90640.MLX90640(i2c)
print("MLX addr detected on I2C")
print([hex(i) for i in mlx.serial_number])
frame = [0]*768
mlx.refresh_rate = adafruit_mlx90640.RefreshRate.REFRESH_8_HZ
print("Starting Camera...........")
detector= cv2.CascadeClassifier('/home/pi/opencv/data/haarcascades/haarcascade_frontalface_alt.xml')
cap = cv2.VideoCapture(0)
max_t=0
height = cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 24)
width = cap.set(cv2.CAP_PROP_FRAME_WIDTH, 32)
fps = cap.set(cv2.CAP_PROP_FPS, 10)
height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
while(True):
    ret, img = cap.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, 1.3, 5)
    mlx.getFrame(frame)
    for h in range(24):
        for w in range(32):
            for (x, y, w, h) in faces:
                cv2.rectangle(img, (x, y), (x+w, y+h), (255, 0, 0), 2)
                roi = frame[y:y+h, x:x+w]
                max_t = max(roi)
                min_t = int(min(frame))
                roi_gray = roi.astype(np.uint8)
                roi = cv2.applyColorMap(roi, cv2.COLORMAP_JET)
                text = "Min : " + str(min_t) + "C Max :" + str(max_t) + "C"
                org = (10, 40)
                font = cv2.FONT_HERSHEY_SIMPLEX
                cv2.putText(roi, text, org, font, 1, (255, 255, 255), 2, cv2.LINE_AA)
                cv2.imshow('Screen1', roi)
                if PRINT_TEMPERATURES:
                    print("%d", max_t)
                text = "occupe"
    cv2.imshow('Screen', img)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
Traceback (most recent call last):
  File "/home/pi/Downloads/ta.py", line 39, in <module>
    roi = frame [y:y+h,x:x+w]
TypeError: list indices must be integers or slices, not tuple
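For what it's worth, the TypeError happens because frame is a plain Python list of 768 values, so it cannot be sliced with a 2-D index. A minimal sketch of one possible fix is to reshape the thermal frame into the MLX90640's 24x32 grid and scale the face box into it; the assumption that both cameras share roughly the same field of view is mine, not something guaranteed by this setup:

import numpy as np

# Sketch only: the MLX90640 returns a flat list of 24*32 = 768 temperatures,
# so reshape it into a 2-D array before slicing.
thermal = np.array(frame, dtype=np.float32).reshape((24, 32))

# Map the face box from the visible image (width x height pixels, as read above)
# into the 32x24 thermal grid.
for (x, y, w, h) in faces:
    tx = int(x * 32 / width)
    ty = int(y * 24 / height)
    tw = max(1, int(w * 32 / width))
    th = max(1, int(h * 24 / height))
    face_temps = thermal[ty:ty + th, tx:tx + tw]
    print("face max temperature: %.1f C" % face_temps.max())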

issue of the recognize people by their clothes color with not severe illumination environments

I am interested in human following using a real robot.
I'd like to use the color of clothes as a key feature to identify the target person in front of the robot and follow him/her, but I am struggling because it is a weak feature: it breaks with even a simple illumination change. So I need to replace this algorithm with another one, or update the (RGB) values online in real time, but I don't have much experience with image processing.
This is my full code for color detection:
import cv2
import numpy as np
from imutils.video import FPS
import time

# capturing video through webcam
cap = cv2.VideoCapture(0)
width = cap.get(3)   # float
height = cap.get(4)  # float
print(width, height)
time.sleep(2.0)
fps = FPS().start()

while (1):
    _, img = cap.read()
    if _ is True:
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    else:
        continue

    # blue color
    blue_lower = np.array([99, 115, 150], np.uint8)
    blue_upper = np.array([110, 255, 255], np.uint8)
    blue = cv2.inRange(hsv, blue_lower, blue_upper)
    kernal = np.ones((5, 5), "uint8")
    blue = cv2.dilate(blue, kernal)
    res_blue = cv2.bitwise_and(img, img, mask=blue)

    # Tracking blue (OpenCV 3.x findContours signature; OpenCV 4 returns two values)
    (_, contours, hierarchy) = cv2.findContours(blue, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    for pic, contour in enumerate(contours):
        area = cv2.contourArea(contour)
        if (area > 300):
            x, y, w, h = cv2.boundingRect(contour)
            img = cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
            cv2.putText(img, "Blue Colour", (x, y), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 0, 0))

    cv2.imshow("Color Tracking", img)
    if cv2.waitKey(10) & 0xFF == ord('q'):
        cap.release()
        cv2.destroyAllWindows()
        break
    fps.update()

# stop the timer and display FPS information
fps.stop()
# print("[INFO] elapsed time: {:.2f}".format(fps.elapsed()))
# print("[INFO] approx. FPS: {:.2f}".format(fps.fps()))
These are the outputs:
1- The person is recognized by his clothes' color.
2- The target is lost; the illumination change is very mild, not severe.
Any ideas or suggestions will be appreciated.
It does look like you need a somewhat more advanced color-similarity function to handle complex cases. Delta E is the right starting point.
A proper threshold, or several colors with associated thresholds, will help achieve pretty accurate results:
See the list of colours on the right side
Complete example.
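To make the Delta E suggestion concrete, here is a minimal sketch using the simple CIE76 formula (Euclidean distance in Lab space); the reference colour and the threshold of 20 are placeholders to tune:

import cv2
import numpy as np

# Hypothetical reference colour of the target person's clothes (BGR).
REF_BGR = np.uint8([[[120, 60, 20]]])

def delta_e_map(img_bgr, ref_bgr=REF_BGR):
    # OpenCV's 8-bit Lab conversion rescales L to 0..255 and offsets a/b by 128,
    # so undo that before computing the CIE76 distance.
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    ref = cv2.cvtColor(ref_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    for arr in (lab, ref):
        arr[..., 0] *= 100.0 / 255.0
        arr[..., 1] -= 128.0
        arr[..., 2] -= 128.0
    return np.linalg.norm(lab - ref.reshape(1, 1, 3), axis=2)

img = cv2.imread("frame.png")
de = delta_e_map(img)
mask = np.uint8(de < 20) * 255     # threshold of ~20 is a guess; tune per scene
cv2.imshow("Delta E mask", mask)
cv2.waitKey(0)

A per-pixel mask like this could then replace the fixed inRange bounds in the tracking loop above, making the match less sensitive to moderate illumination changes.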

Detecting circles in OpenCV

Though I realize that there is no "one size fits all" setting for OpenCV's HoughCircles, I'm having quite a bit of trouble finding even one reasonable set of parameters.
My input image is the following photo, which contains some pretty obvious big black circles, as well as some noise around it:
I tried playing with the p1 and p2 arguments, to try and get precisely the four black circles detected (and optionally the tape roll at the top -- that's not required but I wouldn't mind if it matched either).
import numpy as np
import cv2

gray = frame = cv2.imread('testframe2.png')
gray = cv2.cvtColor(gray, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (5, 5), 0)
# gray = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 5, 2)

p1 = 200
p2 = 55

while True:
    out = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1, 10, param1=p1, param2=p2, minRadius=10, maxRadius=0)
    if circles is not None:
        for (x, y, r) in circles[0]:
            cv2.rectangle(out, (int(x - r), int(y - r)), (int(x + r), int(y + r)), (255, 0, 0))
            cv2.putText(out, "r = %d" % int(r), (int(x + r), int(y)), cv2.FONT_HERSHEY_SIMPLEX, 0.3, (255, 0, 0))
    cv2.putText(out, "p: (%d, %d)" % (p1, p2), (0, 100), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 4)
    cv2.imshow('debug', out)

    if cv2.waitKey(0) & 0xFF == ord('x'):
        break
    elif cv2.waitKey(0) & 0xFF == ord('q'):
        p1 += 5
    elif cv2.waitKey(0) & 0xFF == ord('a'):
        p1 -= 5
    elif cv2.waitKey(0) & 0xFF == ord('w'):
        p2 += 5
    elif cv2.waitKey(0) & 0xFF == ord('s'):
        p2 -= 5

cv2.destroyAllWindows()
It seems the best I can do is detect the big circle several times but not the small one at all, or get a lot of false positives:
I've Read The F** Manual but it does not help me further: how do I somewhat reliably detect the circles and nothing but the circles in this image?
There was a bit of manual tweaking with the HoughCircles params, but this gives the result you're looking for. I've used the OpenCV Wrapper library which just simplifies some things.
import cv2
import opencv_wrapper as cvw
import numpy as np

frame = cv2.imread("tape.png")
gray = cvw.bgr2gray(frame)
thresh = cvw.threshold_otsu(gray, inverse=True)
opened = cvw.morph_open(thresh, 9)

circles = cv2.HoughCircles(
    opened, cv2.HOUGH_GRADIENT, 1, 10, param1=100, param2=17, minRadius=5, maxRadius=-1
)

if circles is not None:
    circles = np.around(circles).astype(int)
    for circle in circles[0]:
        cv2.floodFill(thresh, None, (circle[0], circle[1]), 155)

only_circles = thresh.copy()
only_circles[only_circles != 155] = 0

contours = cvw.find_external_contours(only_circles)
cvw.draw_contours(frame, contours, (255, 0, 255), thickness=2)

cv2.imwrite("tape_result.png", frame)
I used HoughCircles to find just the centers, as suggested in the documentation note.
I then used floodFill to fill the circles. Note that the left-most circle is very close to the edge. If the image was blurred, the flood filling would go into the background.
Disclosure: I'm the author of OpenCV Wrapper. Haven't added Hough Circles and flood filling yet.
