I am trying to perform image segmentation on the following image of brain tissue:
The following is what the segmented result should look like:
I have the following result which I have obtained after applying thresholding, morphological transformations and contour area filtering (used to remove noise in the image) to the original image:
Result before contour filtering:
Result after contour filtering:
However, in my result, some of the black edges got separated/broken apart. Is there any simple method I can use to close the small gaps between some of the edges?
E.g. is it possible to fill the white spaces between the edges circled in red with black?
Any insights are appreciated.
The easiest method would be morphological closing: you simply perform a dilation operation followed by an erosion operation.
The following script uses OpenCV's morphologyEx function:
import numpy as np
import cv2

folder = 'C:/Users/Mark/Desktop/'
image = cv2.imread(folder + '6P7Lj.png')

# Invert so the edges are white (foreground) for the morphology operation
image2 = cv2.bitwise_not(image)
kernel = np.ones((8,8), np.uint8)

# Closing (dilation followed by erosion) bridges the small gaps between edges
closing = cv2.morphologyEx(image2, cv2.MORPH_CLOSE, kernel)

# Invert back to the original black-edges-on-white format
closing = cv2.bitwise_not(closing)

cv2.imshow('image', closing)
cv2.waitKey(0)
This is the result:
Most of the edges were connected. I'm sure you can play with the function's kernel further to get better results (or even use OpenCV's separate dilation and erosion functions for even more control).
Note: I had to invert the image before performing the operation, because the function treats white pixels as positive and black as negative, unlike your image. In the end it was inverted again to return to your format.
I am trying to do segmentation of leaf images of tomato crops. I want to convert images like the following image
to the following image with a black background.
I have referenced this code from GitHub,
but it does not do well on this problem. It does something like this:
Can anyone suggest a way to do it?
The image is separable using the HSV color space. The background has little saturation, so thresholding the saturation removes the gray.
Result:
Code:
import numpy as np
import cv2
# load image
image = cv2.imread('leaf.jpg')
# create hsv
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
# set lower and upper color limits
low_val = (0, 60, 0)
high_val = (179, 255, 255)
# Threshold the HSV image
mask = cv2.inRange(hsv, low_val, high_val)
# remove noise
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel=np.ones((8,8), dtype=np.uint8))
# apply mask to original image
result = cv2.bitwise_and(image, image, mask=mask)
#show image
cv2.imshow("Result", result)
cv2.imshow("Mask", mask)
cv2.imshow("Image", image)
cv2.waitKey(0)
cv2.destroyAllWindows()
The problem with your image is the varying coloration of the leaf. If you convert the image to grayscale, you will see the problem for the binarization algorithm:
Do you notice the very different brightness of the bottom half and the top half of the leaf? This gives you three mostly uniformly bright areas in the image: the actual background, the top half of the leaf, and the bottom half of the leaf. That's not good for binarization.
However, your problem can be solved by separating your color image into its respective channels. After separation, you will notice that in the blue channel the leaf has a very uniform brightness:
This makes sense if we think about the colors we are talking about: both green and yellow contain very little blue, if any.
This makes it easy for us to binarize. For the sake of a clearer image, I first applied smoothing
and then used ImageJ's iso_data threshold (you can, however, use any of the available automatic thresholding methods) to create a binary mask:
Because the algorithm has set the leaf to background (black), we have to invert it:
This mask can be further improved by applying binary "fill holes" algorithms:
This mask can be used to crop the original image to extract the leaf:
The quality of the result image could be further improved by eroding the mask a little bit.
For the sake of completeness: you do not have to smooth the image to get a result. Here is the mask for the unsmoothed image:
To remove the noise, first apply binary fill holes, then binary closing, followed by binary erosion. This will give you:
as a mask.
This will lead to
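For reference, here is a minimal Python sketch of a comparable pipeline with OpenCV and SciPy, standing in for the ImageJ steps above (Otsu's threshold replaces iso_data, and the filename is taken from the earlier answer):
import cv2
import numpy as np
from scipy import ndimage as ndi

image = cv2.imread('leaf.jpg')
# The blue channel is index 0 in OpenCV's BGR ordering
blue = image[:, :, 0]

# Smooth to suppress noise before thresholding
blue = cv2.GaussianBlur(blue, (5,5), 0)

# Otsu's threshold stands in for ImageJ's iso_data method here; THRESH_BINARY_INV
# folds in the invert step (use THRESH_BINARY if the leaf comes out as background)
mask = cv2.threshold(blue, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]

# Fill the holes inside the leaf, then use the mask to crop the original
mask = ndi.binary_fill_holes(mask > 0).astype(np.uint8) * 255
result = cv2.bitwise_and(image, image, mask=mask)
cv2.imwrite('leaf_mask.png', mask)
cv2.imwrite('leaf_result.png', result)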
I've recently been working on a segmentation process for corneal endothelial cells, and I've found a pretty decent paper that describes ways to perform it with nice results. I have been trying to follow that paper and implement it all using scikit-image and OpenCV, but I've gotten stuck at the watershed segmentation.
I will briefly describe how the process is supposed to go:
First of all, you have the original endothelial cells image
original image
Then, they instruct you to perform a morphological grayscale reconstruction, in order to level out the grayscale of the image a bit (however, they do not explain how to get the markers for the reconstruction, so I've been fooling around and trying to get some my own way).
This is what the reconstructed image was supposed to look like:
desired reconstruction
This is what my reconstructed image (let's label it r) looks like:
my reconstruction
The purpose is to use the reconstructed image to get the markers for the watershed segmentation. How do we do that? We take the original image (let's label it f) and threshold (f - r) to extract the h-domes of the cells, i.e., our markers.
This is what the hdomes image was supposed to look like:
desired hdomes
This is what my hdomes image looks like:
my hdomes
I believe that the hdomes I've got are as good as theirs, so the final step is to perform the watershed segmentation on the original image, using the hdomes we've worked so hard to get!
As the input image, we will use the inverted original image; as the markers, the h-domes we just extracted.
This is the desired output:
desired output
However, I am only getting a black image (EVERY PIXEL IS BLACK) and I have no idea what's happening. I've also tried using their markers and inverted image, but I still get a black image. The paper I've been using is Luc M. Vincent, Barry R. Masters, "Morphological image processing and network analysis of cornea endothelial cell images", Proc. SPIE 1769.
I apologize for the long text, however I really wanted to explain in detail what my understanding is so far. By the way, I've tried the watershed segmentation from both scikit-image and OpenCV; both gave me the black image.
Here is the code that I have been using:
import cv2
import matplotlib.pyplot as plt
from skimage.morphology import reconstruction, watershed

img = cv2.imread('input.png', 0)
mask = img
# The eroded image serves as the marker for the grayscale reconstruction
marker = cv2.erode(mask, cv2.getStructuringElement(cv2.MORPH_ERODE, (3,3)), iterations=3)
reconstructedImage = reconstruction(marker, mask)
# Threshold (f - r) to extract the h-domes as markers
hdomes = img - reconstructedImage
cell_markers = cv2.threshold(hdomes, 0, 255, cv2.THRESH_BINARY)[1]
inverted = (255 - img)
labels = watershed(inverted, cell_markers)
cv2.imwrite('test.png', labels)
plt.figure()
plt.imshow(labels)
plt.show()
Thank you!
Here's a rough example for the watershed segmentation of your image with scikit-image.
What is missing in your script is calculating the Euclidean distance (see here and here) and extracting the local maxima from it.
Note that the watershed algorithm outputs a piece-wise constant image where pixels in the same regions are assigned the same value. What is shown in your 'desired output' panel (e) are the edges between the regions instead.
import numpy as np
import cv2
import matplotlib.pyplot as plt
from skimage.morphology import watershed
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.filters import threshold_local
img = cv2.imread('input.jpg',0)
'''Adaptive thresholding
calculates thresholds in regions of size block_size surrounding each pixel
to handle the non-uniform background'''
block_size = 41
adaptive_thresh = threshold_local(img, block_size)#, offset=10)
binary_adaptive = img > adaptive_thresh
# Calculate Euclidean distance
distance = ndi.distance_transform_edt(binary_adaptive)
# Find local maxima of the distance map
local_maxi = peak_local_max(distance, labels=binary_adaptive, footprint=np.ones((3, 3)), indices=False)
# Label the maxima
markers = ndi.label(local_maxi)[0]
''' Watershed algorithm
The option watershed_line=True leaves a one-pixel-wide line
with label 0 separating the regions obtained by the watershed algorithm '''
labels = watershed(-distance, markers, watershed_line=True)
# Plot the result
plt.imshow(img, cmap='gray')
plt.imshow(labels==0,alpha=.3, cmap='Reds')
plt.show()
I'm able to detect the main hard edges in an image quite well using a morphological gradient; see the image below. How can I process this image to extract just the hardest/whitest edges? Thresholding either results in a very noisy image or in hard edges lacking detail/too eroded.
My thresholding result:
My goal is something like this:
*Note: I'm attempting to use the morphological gradient operation as a lightweight way to detect the hard/main edges in an image. The OpenCV code will run on a Raspberry Pi robot, so I'm trying to be quite efficient with my resources; thus I'm using the morphological gradient as opposed to Canny etc.
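For reference, a minimal sketch of the morphological gradient itself (the filename and the 3x3 kernel are assumptions):
import cv2
import numpy as np

img = cv2.imread('input.png', 0)
kernel = np.ones((3,3), np.uint8)

# Morphological gradient = dilation minus erosion; a cheap edge detector
gradient = cv2.morphologyEx(img, cv2.MORPH_GRADIENT, kernel)
cv2.imwrite('gradient.png', gradient)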
Original image:
I initially said Difference of Gaussians but you already had a better threshold image.
So I took the first image as input and performed an Otsu threshold. I used the image obtained from cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU) to detect lines.
Lines were detected using cv2.HoughLinesP().
My (not so good) result:
You have the option of drawing the lines onto your original image.
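Here's a minimal sketch of that pipeline; the filename and the Hough parameters (threshold, minimum length, maximum gap) are assumptions to tune for your image:
import cv2
import numpy as np

# 'gradient.png' is a hypothetical filename for the morphological gradient image
img = cv2.imread('gradient.png', 0)

# Otsu's threshold, as in the answer above
thresh = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]

# Probabilistic Hough transform; these parameter values are guesses to tune
lines = cv2.HoughLinesP(thresh, 1, np.pi / 180, threshold=50,
                        minLineLength=30, maxLineGap=10)

# Draw the detected lines onto a color copy of the input
result = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(result, (x1, y1), (x2, y2), (0, 255, 0), 2)
cv2.imwrite('lines.png', result)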
Is it possible to obtain only 5 objects (one per sign) by applying find_contours (from the OpenCV module) to this image: https://docs.google.com/file/d/0ByS6Z5WRz-h2WHEzNnJucDlRR2s/edit ?
Right now I obtain 64 objects.
After that, I want to retrieve the Hu moments and make a comparison with other images.
For now I'm testing with just the same image translated a little bit; the test should report that they are the same.
My question is: how can I obtain only 5 objects to apply the Hu moments to, or are there other ways to calculate Hu moments for the image?
import cv2

im = cv2.imread('Sassatelli 1984 n. 165 mod1.jpg')
imgray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)

# Smooth before edge detection
blur = cv2.GaussianBlur(imgray, (0,0), 5)
cv2.imshow('Blur', blur)
cv2.waitKey()

# Canny edge detection
th = 20
edges = cv2.Canny(blur, th, th*3)
cv2.imshow('canny', edges)
cv2.waitKey()

contours, hierarchy = cv2.findContours(edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
print('objects found')
print(len(contours))

# Take the first contour for the moments
cnt = contours[0]
cv2.drawContours(blur, contours, -1, (0,255,0), 3)
cv2.imshow('draw contours', blur)
cv2.waitKey()

moments = cv2.moments(cnt)
Case 1: Problem with saving image in jpg format
When you save a black-and-white-only image (i.e. pixel values 0 and 255 only) in JPG format, there is lossy compression, which changes the pixel values. If you want to see it, create such an image, save it in JPG, open the saved image and zoom in on a black-white edge. You will see the pixel values change.
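A quick way to see this for yourself (a sketch in Python with OpenCV; the filename is arbitrary):
import cv2
import numpy as np

# Build a pure black/white image and save it with lossy JPG compression
img = np.zeros((100, 100), np.uint8)
img[:, 50:] = 255
cv2.imwrite('bw.jpg', img)

# Reload it: intermediate values appear near the black-white edge
reloaded = cv2.imread('bw.jpg', 0)
print(np.unique(reloaded))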
So when you find contours, you expect only white objects, but in reality there are some intermediate values as well, which are also considered as contours. This increases the number of contours.
So to avoid this problem:
Save images in PNG or another lossless format.
Apply a threshold (with a value of 127, or as you like) to make the image truly binary before finding contours.
This is explained in much more detail here: What does result of 'list(contour)' denote?
Case 2: Problem with white background
OpenCV's findContours() is designed to find white objects on a black background. So if your background is white, it is also treated as an object. So invert the image before finding contours.
Case 3: Problem with holes in objects
If you have holes in your object, they are also considered as objects. So if you want only the external boundary of the objects, use the cv2.RETR_EXTERNAL flag for the findContours() function.
Sample Code:
import cv2
import numpy as np
img = cv2.imread('sof.jpg')
gray = cv2.imread('sof.jpg',0)
ret,thresh = cv2.threshold(gray,127,255,cv2.THRESH_BINARY_INV)
Thresholded and inverted image:
Now find the contours, draw them, and check the number of contours:
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print(len(contours))
cv2.drawContours(img, contours, -1, (0,255,0), 2)
cv2.imshow('img', img), cv2.waitKey(0), cv2.destroyAllWindows()
Result:
NOTE:
Here, I have taken only external contours. If you want to remove the internal holes from these objects, you will need to use the cv2.RETR_TREE or cv2.RETR_CCOMP flags, check the hierarchy, and remove them. This is explained at this link: Contours 5 : Hierarchy
I'm trying to use OpenCV to "parse" screenshots from the iPhone game Blocked. The screenshots are cropped to look like this:
I suppose for right now I'm just trying to find the coordinates of each of the 4 points that make up each rectangle. I did see the sample file squares.c that comes with OpenCV, but when I run that algorithm on this picture, it comes up with 72 rectangles, including the rectangular areas of whitespace that I obviously don't want to count as one of my rectangles. What is a better way to approach this? I tried doing some Google research, but for all of the search results, there is very little relevant usable information.
A similar issue has already been discussed:
How to recognize rectangles in this image?
As for your data, the rectangles you are trying to find are the only black objects. So you can try a threshold binarization: black pixels are those with ALL three RGB values less than 40 (I found this empirically). This simple operation makes your picture look like this:
After that you could apply a Hough transform to find lines (discussed in the topic I referred to), or you can do it more simply. Compute the integral projections of the black pixels onto the X and Y axes (the projection onto X is a vector whose entry for x_i is the number of black pixels with first coordinate equal to x_i). The possible x and y values then appear as the peaks of the projections. Then look through all the possible segments delimited by the found x and y values (if there are a lot of black pixels between (x_i, y_j) and (x_i, y_k), there is probably a line there). Finally, compose the line segments into rectangles!
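A minimal sketch of the projection idea in Python (the filename and the peak cutoff are assumptions):
import cv2
import numpy as np

# 'board.png' is a hypothetical filename for the screenshot
img = cv2.imread('board.png')

# Black pixels: ALL three channel values below 40, as described above
black = np.all(img < 40, axis=2)

# Integral projections: count black pixels per column (X) and per row (Y)
proj_x = black.sum(axis=0)
proj_y = black.sum(axis=1)

# Candidate line coordinates are the peaks of the projections;
# the half-of-maximum cutoff is an assumption to tune
xs = np.where(proj_x > 0.5 * proj_x.max())[0]
ys = np.where(proj_y > 0.5 * proj_y.max())[0]
print(xs, ys)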
Here's a complete Python solution. The main idea is:
Apply pyramid mean shift filtering to help threshold accuracy
Otsu's threshold to get a binary image
Find contours and filter using contour approximation
Here's a visualization of each detected rectangle contour
Results
import cv2
image = cv2.imread('1.png')
blur = cv2.pyrMeanShiftFiltering(image, 11, 21)
gray = cv2.cvtColor(blur, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    peri = cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, 0.015 * peri, True)
    if len(approx) == 4:
        x,y,w,h = cv2.boundingRect(approx)
        cv2.rectangle(image,(x,y),(x+w,y+h),(36,255,12),2)
cv2.imshow('thresh', thresh)
cv2.imshow('image', image)
cv2.waitKey()
I wound up just building on my original method, doing as Robert suggested in his comment on my question. After I get my list of rectangles, I run through them and calculate the average color over each rectangle. I check whether the red, green, and blue components of the average color are each within 10% of the gray and blue rectangle colors; if they are, I keep the rectangle, and if they aren't, I discard it. This process gives me something like this:
From this, it's trivial to get the information I need (orientation, starting point, and length of each rectangle, considering the game window as a 6x6 grid).
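A sketch of that filtering step (the reference colors here are hypothetical placeholders, not the game's actual values):
import numpy as np

# Hypothetical BGR reference colors for the gray and blue rectangles
GRAY = np.array([160.0, 160.0, 160.0])
BLUE = np.array([200.0, 120.0, 40.0])

def matches(avg, ref, tol=0.10):
    # True if every channel of the average is within 10% of the reference
    return np.all(np.abs(avg - ref) <= tol * ref)

def filter_rects(image, rects):
    kept = []
    for x, y, w, h in rects:
        # Average color over the rectangle's pixels
        avg = image[y:y+h, x:x+w].reshape(-1, 3).mean(axis=0)
        if matches(avg, GRAY) or matches(avg, BLUE):
            kept.append((x, y, w, h))
    return kept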
The blocks look like bitmaps - why don't you use simple template matching with different templates for each block size/color/orientation?
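A sketch of what that could look like (the filenames and the 0.9 match cutoff are assumptions; one template would be needed per block size/color/orientation):
import cv2
import numpy as np

# Hypothetical filenames; one template per block size/color/orientation
img = cv2.imread('board.png')
template = cv2.imread('block_blue_horizontal.png')
h, w = template.shape[:2]

# Normalized cross-correlation; positions above the cutoff count as matches
res = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED)
for y, x in zip(*np.where(res >= 0.9)):
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite('matches.png', img)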
Since your problem is the small rectangles, I would start by removing them. Those lines are much thinner than the borders of the rectangles, so I would apply morphological operations to the image.
Using a structuring element that looks like this:
element = [ 1 1
            1 1 ]
should remove lines that are less than two pixels wide. After the small lines are removed, OpenCV's rectangle-finding algorithm will most likely do the rest of the job for you.
The erosion can be done in OpenCV with the function cvErode.
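In Python, a sketch of the same idea (assuming the image is binarized with the lines as white foreground; 'board.png' is a hypothetical filename, and you would invert first if your lines are black):
import cv2
import numpy as np

img = cv2.imread('board.png', 0)
element = np.ones((2, 2), np.uint8)

# Erosion removes white features thinner than the 2x2 element,
# i.e. the lines less than two pixels wide
eroded = cv2.erode(img, element)
cv2.imwrite('eroded.png', eroded)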
Try one of the many corner detectors, like the Harris corner detector. It is also generally a good idea to try it at multiple resolutions: so do some preprocessing at varying magnifications.
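A rough sketch of that idea (the filename, scales, and Harris parameters are assumptions):
import cv2
import numpy as np

gray = cv2.imread('board.png', 0).astype(np.float32)
for scale in (1.0, 0.5, 0.25):
    # Preprocess at varying magnification by resizing
    resized = cv2.resize(gray, None, fx=scale, fy=scale)
    corners = cv2.cornerHarris(resized, blockSize=2, ksize=3, k=0.04)
    # Count strong corner responses at this scale
    print(scale, int((corners > 0.01 * corners.max()).sum()))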
It appears that you want some sort of color-dominated square. You can suppress the other colors by first using something like cvSplit and then thresholding the color, so only that region remains; follow that with a cropping operation. I think that could work as well.