I'm trying to segment the hand out of the image using OpenCV in Python. One of the images contains a ring on one of the fingers, as shown here:
After thresholding I get this result:
How can I reconnect the finger after thresholding?
I don't know how you thresholded your image, but if we take your result and apply morphological closing, we can close some of the gaps:
morph = cv2.morphologyEx(threshold_img, cv2.MORPH_CLOSE, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (31, 31)))
One problem you may notice is that it closes gaps in all directions. If you know the gaps are more likely to occur in the vertical direction, you can change the structuring element accordingly:
morph = cv2.morphologyEx(threshold_img, cv2.MORPH_CLOSE, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (11, 31)))
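For reference, here is a minimal end-to-end sketch of both variants; the file name hand_thresh.png is just a placeholder for your already-thresholded image:
import cv2

# load the already-thresholded hand mask (placeholder file name)
threshold_img = cv2.imread('hand_thresh.png', cv2.IMREAD_GRAYSCALE)

# isotropic 31x31 elliptical kernel: closes gaps in every direction
iso = cv2.morphologyEx(threshold_img, cv2.MORPH_CLOSE,
                       cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (31, 31)))

# taller-than-wide 11x31 kernel: favours closing vertical gaps, like the one left by the ring
vert = cv2.morphologyEx(threshold_img, cv2.MORPH_CLOSE,
                        cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (11, 31)))

cv2.imshow('isotropic closing', iso)
cv2.imshow('vertical-biased closing', vert)
cv2.waitKey()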
Related
When OCRing technical drawings, most (all?) OCR engines have problems with the surrounding geometry and sometimes falsely interpret a line as a letter.
To improve the quality of the OCR, I first want to remove certain elements, mainly circles and rectangles, from the image.
The drawings are all black & white and look very similar to the below example.
What is the best way to achieve this? I've played around with ImageMagick and OpenCV with little success...
Here's a partial solution. This problem can be broken up into two steps:
1) Remove rectangles by removing horizontal + vertical lines
We create vertical and horizontal kernels, then perform a morphological close to detect the lines. From there we use bitwise operations to remove the lines.
Detected vertical lines (left) and horizontal lines (right)
Removed lines
import cv2

image = cv2.imread('1.jpg')

# a tall, thin kernel so the closing responds to vertical lines
vertical_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1,15))
remove_vertical = 255 - cv2.morphologyEx(image, cv2.MORPH_CLOSE, vertical_kernel)

# a wide, flat kernel so the closing responds to horizontal lines
horizontal_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15,1))
remove_horizontal = 255 - cv2.morphologyEx(image, cv2.MORPH_CLOSE, horizontal_kernel)

# add the inverted line masks onto the image to white-out the detected lines
result = cv2.add(cv2.add(remove_vertical, remove_horizontal), image)

cv2.imshow('result', result)
cv2.waitKey()
2) Detect/remove circles
There are several approaches to removing the circles:
Use cv2.HoughCircles(). Here's a good tutorial on detecting circles in images using Hough Circles; a minimal sketch appears after this list.
Construct a cv2.MORPH_ELLIPSE kernel using cv2.getStructuringElement() then perform morphological operations to isolate the circle contours
Use simple shape detection with contour approximation and contour filtering to detect the circles. This method uses cv2.arcLength() and cv2.approxPolyDP() for contour approximation. One tradeoff with this method is that it only works with "perfect" shapes. Take a look at detect simple geometric shapes and opencv shape detection
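As a rough sketch of the first option (the file name is reused from the snippet above; the radius range and accumulator thresholds are guesses that will need tuning on the actual drawings):
import cv2
import numpy as np

# read the drawing and convert to grayscale for HoughCircles
image = cv2.imread('1.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# detect circles; dp, minDist, param1/2 and the radius range are guesses to tune
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=100, param2=30, minRadius=5, maxRadius=60)

if circles is not None:
    circles = np.uint16(np.around(circles))
    for x, y, r in circles[0, :]:
        # paint over each detected circle outline in white to erase it before OCR
        cv2.circle(image, (int(x), int(y)), int(r), (255, 255, 255), 5)

cv2.imshow('circles removed', image)
cv2.waitKey()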
I am trying to perform image segmentation on the following image of brain tissue:
The following is what the segmented result should look like:
I have the following result which I have obtained after applying thresholding, morphological transformations and contour area filtering (used to remove noise in the image) to the original image:
Result before contour filtering:
Result after contour filtering:
However, in my result, some of the black edges got separated/broken apart. Is there any simple method I can use to close the small gaps between some of the edges?
E.g. is it possible to fill the white spaces between the edges circled in red with black?
Any insights are appreciated.
The easiest method would be to use morphology: you simply perform a dilation operation followed by an erosion operation (a morphological closing).
The following script uses OpenCV's morphology function:
import numpy as np
import cv2

folder = 'C:/Users/Mark/Desktop/'
image = cv2.imread(folder + '6P7Lj.png')

# invert so the dark edges become the white foreground that morphology works on
image2 = cv2.bitwise_not(image)

# closing = dilation followed by erosion; an 8x8 kernel bridges small gaps
kernel = np.ones((8,8),np.uint8)
closing = cv2.morphologyEx(image2, cv2.MORPH_CLOSE, kernel)

# invert back to the original black-edges-on-white format
closing = cv2.bitwise_not(closing)

cv2.imshow('image', closing)
cv2.waitKey(0)
This is the result:
Most of the edges were connected. I'm sure you can play with the function's kernel further to get better results (or even use OpenCV's separate dilation and erosion functions for even more control).
Note: I had to invert the image before performing the operation because closing treats white pixels as foreground and black as background, unlike your image. At the end it is inverted again to return to your format.
I'm dealing with an image and I need your help. After a lot of image processing I get this from a microscopic image. This is my pre-final thresholded image:
As you can see there's a big C in the upper left corner. This should not be an open blob, it must be a closed one.
How can I achieve that without modifying the rest? I was thinking of applying a convex hull to that contour, but I don't know how to apply it to that contour and only that contour, without touching the others.
I mean, maybe there's a "measure" I can use to isolate this contour from the rest, maybe a way to tell how convex/concave it is or how big the "hole" it delimits is.
In future work other unclosed contours may appear that I'll need to close, so don't focus on this particular case; I'll need something I can use or adapt to other similar cases.
Thanks in advance!
While Jeru's answer is correct on the part where you close the contour once you have identified it, I think the OP also wants to know how to automatically identify the "C" blob without having to find out manually that it's the 29th contour.
Hence, I propose a method to identify it: compute the centroid of each shape and check whether that centroid lies inside the shape. It should for the round blobs, but not for the "C".
import sys
import cv2
import numpy as np

img = cv2.imread(your_image, 0)
if img is None:
    sys.exit("No input image")  # good practice

res = np.copy(img)  # just for visualisation purposes

# finding the connected components (each blob)
output = cv2.connectedComponentsWithStats(img, 8)

# the centroid is sort of the "center of mass" of the object
centroids = output[3]

# taking out the background
centroids = centroids[1:]

# for each centroid, check if it is inside the object
for centroid in centroids:
    if img[int(centroid[1]), int(centroid[0])] == 0:
        print(centroid)
        # save it somewhere, then do what Jeru Luke proposes
    # an image with the centroids, to visualize
    res[int(centroid[1]), int(centroid[0])] = 100
This works for your image (I tried it out), but one caveat: it may not work for every "C" shape, especially "fatter" ones, since their centroid could well lie inside them. I think there may be a better measure of convexity, as you say; at least, looking for such a measure seems like the right direction to me.
Maybe you can try something like computing the convex hull of each of your objects (without modifying your input image), then measuring the ratio between the object's area and the area of its convex hull; if that ratio is under a certain threshold, you classify it as a "C" shape and modify it accordingly.
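A rough sketch of that idea, assuming you already have contours from cv2.findContours on the threshold image (the 0.9 cut-off is just a guess to tune):
import cv2

# assuming `contours` came from cv2.findContours on the threshold image
for i, cnt in enumerate(contours):
    area = cv2.contourArea(cnt)
    hull = cv2.convexHull(cnt)
    hull_area = cv2.contourArea(hull)
    if hull_area == 0:
        continue
    solidity = area / hull_area   # close to 1.0 for convex blobs, much lower for a "C"
    if solidity < 0.9:            # the threshold is a guess, tune it on real data
        print("contour", i, "looks like an open 'C' shape, solidity =", solidity)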
I have a solution.
First, I found and drew the contours on the threshold image you provided.
In the image, I figured out that the 29th contour is the one with the C. Hence I colored every contour apart from the 29th black; the contour containing the C alone remained white.
Code:
#---- finding all contours
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

#---- image to draw on: a colour copy of the threshold image (assumed)
im1 = cv2.cvtColor(thresh, cv2.COLOR_GRAY2BGR)

#---- turning all contours black
cv2.drawContours(im1, contours, -1, (0, 0, 0), -1)

#---- turning the contour of interest alone white
cv2.drawContours(im1, contours, 29, (255, 255, 255), -1)
You are left with the blob of interest
Having isolated the required blob, I then performed morphological closing using the ellipse kernel for a certain number of iterations.
#---- morphological closing on the isolated blob, using an elliptical kernel
k = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (30, 30))
closing = cv2.morphologyEx(im1, cv2.MORPH_CLOSE, k)
cv2.imshow("closed_img", closing)
The ball is now in your court! I learnt something as well! Have fun.
I have to remove some lines from the sides of hundreds of grayscale images.
In this image lines appear in three sides.
The lines are not consistent, though; i.e., they may appear above, below, and/or on the left or right side of the image, and they are of unequal length and width.
If you could assume that the borders are free of important information, you may crop the photo like this:
C++ code:
cv::Mat img;
//load your image into img;
int padding = MAX_WIDTH_HEIGHT_OF_THE_LINES_AREA; // widest/tallest border strip the lines can occupy
img = img(cv::Rect(padding, padding, img.cols - 2*padding, img.rows - 2*padding)); // crop the strip from all sides
If not, you have to find a less dumb solution, like this one for example:
findContours
Delete contours that are far from the borders.
Draw the remaining contours on a blank image.
Apply the Hough line transform with suitable thresholds.
Delete contours that intersect with lines inside the image border.
Another solution, assuming the handwritten shape is connected:
findContours
Get the contour with the biggest area.
Draw it on a blank image with the -1 (fill) flag in the thickness argument.
bitwise_and between the original image and the one you made.
Another solution, assuming the handwritten shape could be discontinuous (a rough Python sketch follows these steps):
findContours
Delete any contour whose points are all very close to the border (using Euclidean distance with a threshold).
Draw all remaining contours on a blank image with the -1 (fill) flag in the thickness argument.
bitwise_and between the original image and the one you made.
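Here is a loose Python sketch of this last approach; the file name, border margin, and the assumption of dark ink on a light background are placeholders to adapt:
import cv2
import numpy as np

img = cv2.imread('scan.png', cv2.IMREAD_GRAYSCALE)   # placeholder file name
h, w = img.shape
border_margin = 20   # what counts as "very close to the border", tune this

# binarize, assuming dark ink on a light background
thresh = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]

# OpenCV 4 return signature; OpenCV 3 returns (image, contours, hierarchy)
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

mask = np.zeros_like(thresh)
for cnt in contours:
    pts = cnt.reshape(-1, 2)
    # distance of every contour point to the nearest image border
    dist = np.minimum(np.minimum(pts[:, 0], w - 1 - pts[:, 0]),
                      np.minimum(pts[:, 1], h - 1 - pts[:, 1]))
    # keep the contour only if NOT all of its points hug the border
    if not np.all(dist < border_margin):
        cv2.drawContours(mask, [cnt], -1, 255, -1)   # -1 thickness = fill

# keep only the handwriting, drop the border lines
cleaned = cv2.bitwise_and(thresh, mask)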
P.S. I did not rely on the Hough line transform for the shapes themselves since I do not know anything about them; I assume that some of them may contain very straight lines.
I'm trying to use OpenCV to "parse" screenshots from the iPhone game Blocked. The screenshots are cropped to look like this:
I suppose for right now I'm just trying to find the coordinates of each of the 4 points that make up each rectangle. I did see the sample file squares.c that comes with OpenCV, but when I run that algorithm on this picture, it comes up with 72 rectangles, including the rectangular areas of whitespace that I obviously don't want to count as one of my rectangles. What is a better way to approach this? I tried doing some Google research, but for all of the search results, there is very little relevant usable information.
A similar issue has already been discussed:
How to recognize rectangles in this image?
As for your data, the rectangles you are trying to find are the only black objects. So you can try a threshold binarization: black pixels are those which have ALL three RGB values below 40 (I found this empirically). This simple operation makes your picture look like this:
After that you could apply the Hough transform to find lines (discussed in the topic I referred to), or you can do it more simply. Compute integral projections of the black pixels onto the X and Y axes (the projection onto X is a vector whose element at x_i is the number of black pixels with x-coordinate equal to x_i). The possible x and y values of the rectangle edges show up as peaks of these projections. Then look through all the possible segments bounded by the found x and y values: if there are a lot of black pixels between (x_i, y_j) and (x_i, y_k), there is probably a line segment between them. Finally, compose the line segments into rectangles!
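A rough sketch of the binarization and the projections (the file name and the 0.5 peak fraction are placeholders):
import cv2
import numpy as np

image = cv2.imread('blocked.png')   # placeholder file name

# "black" pixels: ALL three channels below 40
black = np.all(image < 40, axis=2).astype(np.uint8)

# integral projections: count of black pixels in every column and every row
proj_x = black.sum(axis=0)
proj_y = black.sum(axis=1)

# candidate x and y coordinates are the peaks of the projections;
# a fixed fraction of the maximum is a crude peak detector, tune as needed
xs = np.where(proj_x > 0.5 * proj_x.max())[0]
ys = np.where(proj_y > 0.5 * proj_y.max())[0]
print("candidate x positions:", xs)
print("candidate y positions:", ys)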
Here's a complete Python solution. The main idea is:
Apply pyramid mean shift filtering to help threshold accuracy
Otsu's threshold to get a binary image
Find contours and filter using contour approximation
Here's a visualization of each detected rectangle contour
Results
import cv2

image = cv2.imread('1.png')

# mean shift filtering flattens colour regions, which helps Otsu's threshold
blur = cv2.pyrMeanShiftFiltering(image, 11, 21)
gray = cv2.cvtColor(blur, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]

# find external contours (handles both OpenCV 3 and 4 return signatures)
cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]

for c in cnts:
    # approximate each contour; four vertices means a rectangle
    peri = cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, 0.015 * peri, True)
    if len(approx) == 4:
        x,y,w,h = cv2.boundingRect(approx)
        cv2.rectangle(image, (x,y), (x+w,y+h), (36,255,12), 2)

cv2.imshow('thresh', thresh)
cv2.imshow('image', image)
cv2.waitKey()
I wound up building on my original method and doing as Robert suggested in his comment on my question. After I get my list of rectangles, I run through them and calculate the average color over each rectangle. I check whether the red, green, and blue components of the average color are each within 10% of the gray and blue rectangle colors; if they are, I keep the rectangle, and if they aren't, I discard it. This process gives me something like this:
From this, it's trivial to get the information I need (orientation, starting point, and length of each rectangle, considering the game window as a 6x6 grid).
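Roughly, the color check looks like this; the rects list, the image variable, and the two reference BGR colors are hypothetical names and values:
import numpy as np

# `rects`, `image`, and the two reference BGR colors below are hypothetical;
# rects is assumed to be the list of (x, y, w, h) boxes from the squares detection
GRAY_BLOCK = np.array([160, 160, 160], dtype=np.float32)
BLUE_BLOCK = np.array([180, 120, 40], dtype=np.float32)

def close_enough(avg, ref, tol=0.10):
    # every channel of the average must be within `tol` (10%) of the reference channel
    return np.all(np.abs(avg - ref) <= tol * ref)

kept = []
for (x, y, w, h) in rects:
    roi = image[y:y + h, x:x + w].astype(np.float32)
    avg = roi.reshape(-1, 3).mean(axis=0)          # average BGR color of the rectangle
    if close_enough(avg, GRAY_BLOCK) or close_enough(avg, BLUE_BLOCK):
        kept.append((x, y, w, h))                  # looks like a real block
    # otherwise it's whitespace or noise and gets discarded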
The blocks look like bitmaps - why don't you use simple template matching with different templates for each block size/color/orientation?
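If you go that route, a minimal sketch with one template might look like this (file names and the 0.9 match threshold are placeholders; you would repeat it for each size/color/orientation template):
import cv2
import numpy as np

image = cv2.imread('board.png')                 # placeholder screenshot
template = cv2.imread('block_blue_2x1.png')     # one template per size/color/orientation

res = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
th, tw = template.shape[:2]

# keep every location that matches well enough; 0.9 is a guess to tune
for y, x in zip(*np.where(res >= 0.9)):
    cv2.rectangle(image, (int(x), int(y)), (int(x) + tw, int(y) + th), (0, 0, 255), 2)

cv2.imshow('matches', image)
cv2.waitKey()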
Since your problem is the small rectangles, I would start by removing them.
Those lines are much thinner than the borders of the rectangles, so applying morphological operations to the image is a good first step.
Using a structuring element that looks like this:
element = [ 1 1
            1 1 ]
should remove lines that are less than two pixels wide. After the small lines are removed, OpenCV's rectangle-finding algorithm will most likely do the rest of the job for you.
The erosion can be done in OpenCV with the function cvErode.
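In the modern Python API the same idea would look like this (the file name is a placeholder); note that erosion shrinks the white foreground, so if the thin lines are black on a white background you would invert first or use dilation instead:
import cv2
import numpy as np

img = cv2.imread('board.png', cv2.IMREAD_GRAYSCALE)   # placeholder file name

# 2x2 structuring element of ones, as described above
element = np.ones((2, 2), np.uint8)

# removes foreground features thinner than two pixels (the thin grid lines),
# while the thicker rectangle borders survive
eroded = cv2.erode(img, element)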
Try one of the many corner detectors, like the Harris corner detector. Also, it is generally a good idea to try this at multiple resolutions, so do some preprocessing at varying magnifications.
It appears that you want squares dominated by a particular color. In that case you can suppress the other colors: first use something like cvSplit, then threshold on the color so only that region remains, and follow that with a cropping operation. I think that could work as well.
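A loose sketch of that idea in Python, assuming you are after the blue blocks (the file name and the margin of 40 are placeholders to tune):
import cv2
import numpy as np

image = cv2.imread('board.png')   # placeholder file name

# split the channels (the Python analogue of cvSplit)
b, g, r = cv2.split(image)

# keep pixels where blue clearly dominates the other two channels
blue_mask = ((b.astype(int) - np.maximum(g, r).astype(int)) > 40).astype(np.uint8) * 255

# crop to the bounding box of the remaining region
ys, xs = np.where(blue_mask > 0)
if len(xs) > 0:
    crop = image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]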