Detecting a very indistinct triangle on a dial using OpenCV

So I have a temperature box where I am trying to pinpoint the coordinate location of a small triangle on each temperature dial. Here are the examples of the box with slight variations:
I have been able to isolate each dial and get its outline and center. I then have an algorithm that will generate an angle measure from the center point to the eventually found point on the triangle. However, I have been unable to "find" just the triangle using OpenCV. I've been able to outline it, but I cannot figure out how to isolate only its lines. I have tried multiple shape-detection and edge-detection blocks of code but have had no luck, because the triangle is so lightly raised from the actual dial. Even just getting a single point on it would be good enough.

There are several possible approaches you can try to find the direction of the dial. In this answer I will use classic contour detection. A well-trained ML model could be much more robust and reliable under varying lighting conditions, but it is of course more effort to set up.
Let's say you have already isolated the dial and know its radius and center. Starting from there, the straightforward approach would be:
Prepare the image for thresholding:
If the image is of low resolution as in our case, scale it up by some reasonable factor
If the image is of high resolution, blur it to reduce noise
Convert it to grayscale
Apply adaptive thresholding or Canny; in this case we use the former
Only keep areas that are of interest:
In this case only keep the features in a circular range where the triangle is supposed to be
In this case only keep the contour with the largest area
Derive the result:
In this case just get the centroid of the largest contour
Code:
import cv2
import numpy as np

# read image, scale it up by some factor and apply adaptive thresholding
img = cv2.imread("img_red.jpg")
h, w, _ = img.shape
f = 8
img = cv2.resize(img, (w * f, h * f))
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
thresh = cv2.adaptiveThreshold(gray, 255,
                               cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, 71, 5)
cv2.imwrite("thresh.png", thresh)

# only examine the circular band where the triangle is supposed to be
mask = np.zeros_like(thresh)
cv2.circle(mask, (int(w * f / 2), int(h * f / 2)), int(w * f / 3), 255, int(w * f / 6))
thresh = cv2.bitwise_and(thresh, mask)
cv2.imwrite("thresh_mask.png", thresh)

# get contours, take the contour with the largest area and get its centroid
contours, _ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    m = max([(c, cv2.contourArea(c)) for c in contours], key=lambda i: i[1])[0]
    M = cv2.moments(m)
    if M['m00'] > 0:
        x = round(M['m10'] / M['m00'])
        y = round(M['m01'] / M['m00'])
        # draw small red circle at centroid
        cv2.circle(img, (x, y), 2 * f, (0, 0, 255), f)
        cv2.imwrite("out.png", img)
Results:

Related

Why do I still get child bounding boxes while I'm using RETR_EXTERNAL?

I'm struggling to get my contours right on a relatively simple image.
I'm using RETR_EXTERNAL, so if my understanding is correct, this setting should ignore any contour that's nested inside a parent contour, yet I still get child contours.
They are very noticeable in the last digit (the 8) and less noticeable in the first digit (upper left corner).
So what am I missing here? Or are there better ways to only get the parent bounding box?
Below is a slightly simplified script, mainly to show the problem.
img_tmp = sample.copy()
img_rgb = cv2.cvtColor(sample, cv2.COLOR_GRAY2RGB)  # Original to RGB to distinguish bounding boxes
print('original image\n')
showImage(img_tmp, 8, cmap=cm.gray)

# Threshold and get contours
img_tmp = cv2.threshold(img_tmp, 230, 255, cv2.THRESH_BINARY)[1]
edged = cv2.Canny(img_tmp, 100, 250)  # low_threshold, high_threshold
cnts = cv2.findContours(edged.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
cnts = sort_contours(cnts, method="left-to-right")[0]
print('\nthreshed and edged image\n')
showImage(edged, 8, cmap=cm.gray)

for c in cnts:
    # compute the bounding box of the contour and isolate ROI
    (x, y, w, h) = cv2.boundingRect(c)
    roi = img_tmp[y:y + h, x:x + w]
    # append to rgb original
    cv2.rectangle(img_rgb, (x, y), (x + w, y + h), (0, 255, 0), 2)

print('\nOuter boxes on original, but also inner...\n')
showImage(img_rgb, 8)
In your case, RETR_EXTERNAL is correctly doing what it is expected to do.
CV_RETR_EXTERNAL retrieves only the extreme outer contours.
Actually, it is ignoring the inner contours it has detected, but the contours you want to ignore belong to a different segment. They are new outer contours, so RETR_EXTERNAL has nothing to do with them.
What you need to use is RETR_TREE:
RETR_TREE retrieves all of the contours and reconstructs a full hierarchy of
nested contours.
With the help of this, you will be able to inspect the whole contour hierarchy. Here is a well-explained example of it.
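As a rough sketch of how the hierarchy can be used (reusing edged and img_rgb from the script above; the two-value return of findContours assumes OpenCV 4):
# With RETR_TREE, hierarchy[0][i] = [next, previous, first_child, parent].
# Contours whose parent index is -1 are the outermost ones.
cnts, hierarchy = cv2.findContours(edged, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
for c, hrow in zip(cnts, hierarchy[0]):
    if hrow[3] == -1:  # no parent -> top-level contour
        x, y, w, h = cv2.boundingRect(c)
        cv2.rectangle(img_rgb, (x, y), (x + w, y + h), (0, 255, 0), 2)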

Identifying imperfect shapes with noisy backgrounds with OpenCV

I am trying to identify a rectangle underwater in a noisy environment. I implemented Canny to find the edges and drew the found edges using cv2.circle. From here, I am trying to identify the imperfect rectangle in the image (the black one below the long rectangle that covers the top of the frame).
I have attempted multiple solutions, including thresholds, blurs and resizing the image to detect the rectangle. Below is the barebones code with just drawing the identified edges.
import numpy as np
import cv2
import imutils
img_text = 'img5.png'
img = cv2.imread(img_text)
original = img.copy()
min_value = 50
max_value = 100
# draw image and return coordinates of drawn pixels
image = cv2.Canny(img, min_value, max_value)
indices = np.where(image != 0)
coordinates = zip(indices[1], indices[0])
for point in coordinates:
    cv2.circle(original, point, 1, (0, 0, 255), -1)
cv2.imshow('original', original)
cv2.waitKey(0)
cv2.destroyAllWindows()
Where the output displays this:
From here I want to be able to separately detect just the rectangle and draw another rectangle on top of the output in green, but I haven't been able to find a way to detect the original rectangle on its own.
For your specific image, I obtained quite good results with a simple thresholding on the blue channel.
image = cv2.imread("test.png")
t, img = cv2.threshold(image[:,:,0], 80, 255, cv2.THRESH_BINARY)
In order to adapt the threshold, I propose a simple way of varying the threshold until you get one component. I have also implemented the rectangle drawing:
import cv2
import numpy as np

def find_square(image):
    markers = 0
    threshold = 10
    # raise the threshold until at least one connected component appears
    while np.amax(markers) == 0:
        threshold += 5
        t, img = cv2.threshold(image[:, :, 0], threshold, 255, cv2.THRESH_BINARY_INV)
        _, markers = cv2.connectedComponents(img)
    # remove small specks and grow the square a bit
    kernel = np.ones((5, 5), np.uint8)
    img = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)
    img = cv2.morphologyEx(img, cv2.MORPH_DILATE, kernel)
    # bounding rectangle of everything that remains
    nonzero = cv2.findNonZero(img)
    x, y, w, h = cv2.boundingRect(nonzero)
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("image", image)
And the results on the provided example images:
The idea behind this approach is based on the observation that most of the information is in the blue channel. If you separate the image into its channels, you will see that in the blue channel the dark square has the best contrast. It is also the darkest region in this channel, which is why thresholding works. The problem that remains is setting the threshold. Based on the above intuition, we are looking for the lowest threshold that will bring up something (and hope that it will be the square). What I did is simply increase the threshold gradually until something appears.
Then, I applied some morphological operations to eliminate other small points that may appear after thresholding and to make the square look a bit bigger (the edges of the square are lighter, so not the entire square is captured). Then it was a matter of drawing the rectangle.
The code can be made much nicer (and more efficient) by doing some statistical analysis on the histogram: simply compute the threshold such that 5% (or some percentage) of the pixels are darker. You may need to do a connected-component analysis to keep the biggest blob.
Also, my usage of connectedComponents is very poor and inefficient. Again, code written in a hurry to prove the concept.
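A hedged sketch of that histogram idea (the 5% figure and the file name are just the ones mentioned above, not tuned on your data):
import cv2
import numpy as np

image = cv2.imread("test.png")
blue = image[:, :, 0]
threshold = np.percentile(blue, 5)  # roughly 5% of the blue-channel pixels are darker

_, mask = cv2.threshold(blue, threshold, 255, cv2.THRESH_BINARY_INV)
n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
if n > 1:
    biggest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])  # skip background label 0
    x, y, w, h = stats[biggest, :4]
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)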

Splitting up digits in images

I've gotten access to a lot of reports which are filled out by hand. One of the columns in the report contains a timestamp, which I would like to attempt to identify without going through each report manually.
I am playing with the idea of splitting the times, e.g. 00:30, into four digits, and running these through a classifier trained on MNIST to identify the actual timestamps.
When I manually extract the four digits in Photoshop and run them through an MNIST classifier, it works perfectly. But so far I haven't been able to figure out how to programmatically split the number sequences into single digits. I tried different types of contour finding in OpenCV, but it didn't work very reliably.
Any suggestions?
I've added a screenshot of some of the relevant columns in the reports.
I would do something like this (no code for now, since it is just an idea; you could test it to see if it works):
Extract each area for each group of numbers, as Rick M. suggested above. You will then have many Kl [hour] rectangles in image form.
For each of these rectangles, extract each ROI using OpenCV's contour features. Delete the Kl if you don't need it; you know the dimensions of this ROI (you can calculate them with img.shape) and they all have more or less the same dimensions.
Extract all digits using the same script used above. You can take a look at my questions/answers to find some pieces of code which do this.
You will have a problem with the underline in some cases. Search for this on SO; there are a few solutions complete with code.
Now, about splitting up. We know the ROIs are in hour format, so hh:mm (4 digits). A simple (and very rudimentary) solution to split characters that are attached to each other would be to cut the ROI that contains 2 digits in half. It's a crude solution, but it should perform well in your case because at most 2 digits are attached.
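As a tiny sketch of that half-split (assuming roi is a crop that contains exactly the two touching digits, e.g. the hh pair):
h, w = roi.shape[:2]
left_digit = roi[:, : w // 2]   # first of the two attached digits
right_digit = roi[:, w // 2 :]  # second digit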
Some digits will output with "missing pieces". This can be avoided by using some erosion/dilation/skeletonization.
Here you don't have letters, only numbers, so MNIST should work well (not perfectly, keep this in mind).
In short, extracting the data is not the hard task, but recognizing the digits will make you sweat a bit.
I hope I can provide some code to show the steps above as soon as possible.
EDIT - code
This is some code I made. Final output is this:
The code works 100% with this image, so if something doesn't work for you, check your folders/paths/module installation.
Hope this helped.
import cv2
import numpy as np
# 1 - remove the vertical line on the left
img = cv2.imread('image.jpg', 0)
# gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(img, 100, 150, apertureSize=5)
lines = cv2.HoughLines(edges, 1, np.pi / 50, 50)
for rho, theta in lines[0]:
    a = np.cos(theta)
    b = np.sin(theta)
    x0 = a * rho
    y0 = b * rho
    x1 = int(x0 + 1000 * (-b))
    y1 = int(y0 + 1000 * (a))
    x2 = int(x0 - 1000 * (-b))
    y2 = int(y0 - 1000 * (a))
    cv2.line(img, (x1, y1), (x2, y2), (255, 255, 255), 10)
cv2.imshow('marked', img)
cv2.waitKey(0)
cv2.imwrite('image.png', img)
# 2 - remove horizontal lines
img = cv2.imread("image.png")
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
img_orig = cv2.imread("image.png")
img = cv2.bitwise_not(img)
th2 = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 15, -2)
cv2.imshow("th2", th2)
cv2.waitKey(0)
cv2.destroyAllWindows()
horizontal = th2
rows, cols = horizontal.shape
# inverse the image, so that lines are black for masking
horizontal_inv = cv2.bitwise_not(horizontal)
# perform bitwise_and to mask the lines with provided mask
masked_img = cv2.bitwise_and(img, img, mask=horizontal_inv)
# reverse the image back to normal
masked_img_inv = cv2.bitwise_not(masked_img)
cv2.imshow("masked img", masked_img_inv)
cv2.waitKey(0)
cv2.destroyAllWindows()
horizontalsize = int(cols / 30)
horizontalStructure = cv2.getStructuringElement(cv2.MORPH_RECT, (horizontalsize, 1))
horizontal = cv2.erode(horizontal, horizontalStructure, (-1, -1))
horizontal = cv2.dilate(horizontal, horizontalStructure, (-1, -1))
cv2.imshow("horizontal", horizontal)
cv2.waitKey(0)
cv2.destroyAllWindows()
# step1
edges = cv2.adaptiveThreshold(horizontal, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 3, -2)
cv2.imshow("edges", edges)
cv2.waitKey(0)
cv2.destroyAllWindows()
# step2
kernel = np.ones((1, 2), dtype="uint8")
dilated = cv2.dilate(edges, kernel)
cv2.imshow("dilated", dilated)
cv2.waitKey(0)
cv2.destroyAllWindows()
im2, ctrs, hier = cv2.findContours(dilated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# sort contours
sorted_ctrs = sorted(ctrs, key=lambda ctr: cv2.boundingRect(ctr)[0])
for i, ctr in enumerate(sorted_ctrs):
    # Get bounding box
    x, y, w, h = cv2.boundingRect(ctr)
    # Getting ROI
    roi = img[y:y + h, x:x + w]
    # show ROI
    rect = cv2.rectangle(img_orig, (x, y), (x + w, y + h), (255, 255, 255), -1)
cv2.imshow('areas', rect)
cv2.waitKey(0)
cv2.imwrite('no_lines.png', rect)
# 3 - detect and extract ROI's
image = cv2.imread('no_lines.png')
cv2.imshow('i', image)
cv2.waitKey(0)
# grayscale
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
cv2.imshow('gray', gray)
cv2.waitKey(0)
# binary
ret, thresh = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY_INV)
cv2.imshow('thresh', thresh)
cv2.waitKey(0)
# dilation
kernel = np.ones((8, 45), np.uint8) # values set for this image only - need to change for different images
img_dilation = cv2.dilate(thresh, kernel, iterations=1)
cv2.imshow('dilated', img_dilation)
cv2.waitKey(0)
# find contours
im2, ctrs, hier = cv2.findContours(img_dilation.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# sort contours
sorted_ctrs = sorted(ctrs, key=lambda ctr: cv2.boundingRect(ctr)[0])
for i, ctr in enumerate(sorted_ctrs):
    # Get bounding box
    x, y, w, h = cv2.boundingRect(ctr)
    # Getting ROI
    roi = image[y:y + h, x:x + w]
    # show ROI
    # cv2.imshow('segment no:'+str(i),roi)
    cv2.rectangle(image, (x, y), (x + w, y + h), (255, 255, 255), 1)
    # cv2.waitKey(0)
    # save only the ROI's which contain valid information
    if h > 20 and w > 75:
        cv2.imwrite('roi\\{}.png'.format(i), roi)
cv2.imshow('marked areas', image)
cv2.waitKey(0)
These are next steps:
Understand what I write ;). It's the most important step.
Using pieces of the code above (especially step 3), you can delete the remaining Kl in the extracted images.
Create a folder for each image and extract the digits.
Using MNIST, recognize each digit.
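For the last step, here is a hedged sketch of how one extracted digit crop could be prepared for an MNIST-trained classifier (28x28, white digit on black background); the file path and the model variable are placeholders for whatever you saved and trained:
import cv2
import numpy as np

digit = cv2.imread('roi/0.png', 0)              # one of the crops saved above
digit = cv2.bitwise_not(digit)                  # MNIST digits are white on black
digit = cv2.resize(digit, (28, 28)) / 255.0     # scale to [0, 1]
sample = digit.reshape(1, 28, 28, 1).astype(np.float32)
# prediction = model.predict(sample)            # `model` is your MNIST classifier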
Breaking up text into individual characters is not as easy as it sounds at first. You can try to find some rules and manipulate the image accordingly, but there will be just too many exceptions. For example, you can try to find disjoint marks, but the fourth one in your image, 0715, has its "5" broken up into three pieces, and the 9th one, 17.00, has the two zeros overlapping.
You are very lucky with the horizontal lines - at least it's easy to separate different entries. But you have to come up with a lot of ideas related to semi-fixed character width, a "soft" disjointness rule, etc.
I did a project like that two years ago and we ended up using an external open-source library called Tesseract. Here's an article on Roman numeral recognition with it, reaching about 90% accuracy. You might also want to look into the Lipi Toolkit, but I have no experience with that.
You might also want to consider just training a network to recognize the four digits at once. The input would be the whole field with the four handwritten digits, and the output would be the four numbers; let the network sort out where the characters are. If you have enough training data, that's probably the easiest approach.
EDIT:
Inspired by @Link's answer, I just came up with this idea; you can give it a try. Once you have extracted the area between the two lines, trim the image to get rid of the white space all around. Then make an educated guess about how big the characters are (maybe use the height of the area?). Create a sliding window over the image and run the recognition all the way across. There will most likely be four peaks, which would correspond to the four digits.
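A very rough sketch of that sliding-window idea (all names here are hypothetical: field is the trimmed grayscale strip with the four digits, char_w is the guessed character width, and classify is whatever MNIST-style model you already have, returning a confidence score):
h, w = field.shape[:2]
char_w = h                      # educated guess: characters are about as wide as the strip is tall
scores = []
for x in range(0, w - char_w, 2):
    window = field[:, x : x + char_w]
    scores.append(classify(window))  # hypothetical classifier call
# peaks in `scores` should roughly correspond to the four digit positions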

Find the coordinate of a specific text in an image

I am trying to segment the questions in the image below. The only clue I have is the numbering in bold, which is indented by a tab space. I am trying to find the bold numbers (4, 5, 6 in this case) so that I can get their x and y coordinates and segment the image into 3 separate questions. How can I get these, or how should I approach this problem?
I am using scikit image for image processing
Your image looks quite simple, so the text can be segmented quite easily with contour detection around the dilated components. Here are the detailed steps:
1) Binarize the image and invert it for easy morphological operations.
2) Dilate the image in the horizontal direction only, using a long horizontal kernel, say one of shape (20, 1).
3) Find contours of all the connected components and get their coordinates.
4) Use the dimensions and coordinates of these bounding boxes to segment the questions.
Here is the Python implementation of the same:
# Text segmentation
import cv2
import numpy as np
rgb = cv2.imread(r'D:\Image\st4.png')
small = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
#threshold the image
_, bw = cv2.threshold(small, 0.0, 255.0, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)
# get horizontal mask of large size since text are horizontal components
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (20, 1))
connected = cv2.morphologyEx(bw, cv2.MORPH_CLOSE, kernel)
# find all the contours
_, contours, hierarchy = cv2.findContours(connected.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
# segment the text lines
for idx in range(len(contours)):
    x, y, w, h = cv2.boundingRect(contours[idx])
    cv2.rectangle(rgb, (x, y), (x + w - 1, y + h - 1), (0, 255, 0), 2)
Output image:
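Step 4 (actually cutting out the questions) is not shown in the code above. Here is a rough sketch of it, assuming you have already collected the y-coordinates of the boxes that correspond to the bold, indented question numbers (question_ys is a hypothetical list built from the bounding boxes above):
question_ys = sorted(question_ys) + [rgb.shape[0]]
for i, (top, bottom) in enumerate(zip(question_ys, question_ys[1:])):
    segment = rgb[top:bottom, :]          # one question per vertical slice
    cv2.imwrite('question_{}.png'.format(i), segment)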

Find coins on image

I'm trying to find coins at different images and mark their location. Coins always are perfect circles (not ellipses), but they can touch or even overlap. Here are some example images, as well as results of my tries (a Python script using skimage and its outputs), but it doesn't seem to perform well.
The script:
def edges(img, t):
    # adapt_rgb(each_channel)
    def filter_rgb(image):
        sigma = 1
        return feature.canny(image, sigma=sigma, low_threshold=t/sigma/2, high_threshold=t/sigma)
    edges = color.rgb2hsv(filter_rgb(img))
    edges = edges[..., 2]
    return edges

images = io.ImageCollection('*.bmp', conserve_memory=True)
for i, im in enumerate(images):
    es = edges(im, t=220)
    output = im.copy()
    circles = cv2.HoughCircles((es*255).astype(np.uint8), cv2.cv.CV_HOUGH_GRADIENT, dp=1, minDist=50, param2=50, minRadius=0, maxRadius=0)
    if circles is not None:
        circles = np.round(circles[0, :]).astype("int")
        for (x, y, r) in circles:
            cv2.circle(output, (x, y), r, (0, 255, 0), 4)
            cv2.rectangle(output, (x - 5, y - 5), (x + 5, y + 5), (0, 128, 255), -1)
    # now es is edges
    # and output is image with marked circles
A couple of example images, with detected edges and circles:
I am using Canny edge detection and a Hough transform, which is the most common way to detect circles. However, with the same parameters it finds almost nothing in some photos and way too many circles in others.
Can you give me any pointers and suggestions on how to do this better?
I ended up using dlib's object detector and it performed very well. The detector can be easily applied for detecting any kind of objects. For some related discussion see the question topic on reddit.
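For reference, a rough sketch of what the dlib route can look like (the annotation file coins.xml and the image names are placeholders for your own data; this is not the exact setup I used):
import cv2
import dlib

# train a HOG-based detector on annotated coin images (imglab-style XML)
options = dlib.simple_object_detector_training_options()
options.C = 5
dlib.train_simple_object_detector("coins.xml", "coin_detector.svm", options)

# run the trained detector on a new image
detector = dlib.simple_object_detector("coin_detector.svm")
img = cv2.imread("coins_test.bmp")
for box in detector(cv2.cvtColor(img, cv2.COLOR_BGR2RGB)):
    print(box.left(), box.top(), box.right(), box.bottom())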
Hmmm, I would do some morphological operations on the Canny results, such as closing and opening operations:
http://en.wikipedia.org/wiki/Mathematical_morphology
I would also recommend you take a look at the watershed scheme, applied directly to the image gradient, and then a Hough transform on it.
http://en.wikipedia.org/wiki/Watershed_%28image_processing%29
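A small sketch of what the closing/opening step could look like before HoughCircles (kernel size and thresholds are guesses; img is assumed to be a grayscale coin image):
import cv2

# close small gaps in the edge map, then remove isolated specks, before the Hough transform
edges = cv2.Canny(img, 100, 200)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)
cleaned = cv2.morphologyEx(closed, cv2.MORPH_OPEN, kernel)
circles = cv2.HoughCircles(cleaned, cv2.HOUGH_GRADIENT, dp=1, minDist=50, param2=50)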
