I've recently been working on a segmentation process for corneal endothelial cells, and I've found a pretty decent paper that describes how to perform it with nice results. I have been trying to follow that paper and implement it using scikit-image and OpenCV, but I've gotten stuck at the watershed segmentation.
I will briefly describe how the process is supposed to work:
First of all, you have the original endothelial cell image:
original image
Then, they instruct you to perform a morphological grayscale reconstruction, in order to level out the grayscale of the image a bit (however, they do not explain how to obtain the marker image for the reconstruction, so I've been experimenting and trying to get one in my own way).
This is what the reconstructed image was supposed to look like:
desired reconstruction
This is what my reconstructed image (let's label it r) looks like:
my reconstruction
The purpose is to use the reconstructed image to get the markers for the watershed segmentation. How do we do that? We take the original image (let's label it f) and threshold (f - r) to extract the h-domes of the cells, i.e., our markers.
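(For reference, a minimal sketch of that h-dome step with scikit-image's reconstruction; the dome height h and the marker threshold below are my own assumed values, not taken from the paper:)
import cv2
from skimage.morphology import reconstruction

img = cv2.imread('input.png', 0)
f = img.astype(float)                              # original grayscale image
h = 10                                             # assumed dome height -- tune for your images
r = reconstruction(f - h, f, method='dilation')    # grayscale reconstruction of (f - h) under f
hdomes = f - r                                     # h-domes of f
markers = hdomes > 0.5 * h                         # threshold to a binary marker image (level is a guess)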
This is what the hdomes image was supposed to look like:
desired hdomes
This is what my hdomes image looks like:
my hdomes
I believe the h-domes I've got are as good as theirs, so the final step is to perform the watershed segmentation on the original image, using the h-domes we've been working so hard to get!
As the input image we will use the inverted original image, and as markers, our h-domes.
This is the desired output:
desired output
However, I am only getting a black image: every pixel is black, and I have no idea what's happening. I've also tried using their markers and their inverted image, but I still get a black image. The paper I've been using is Luc M. Vincent, Barry R. Masters, "Morphological image processing and network analysis of cornea endothelial cell images", Proc. SPIE 1769.
I apologize for the long text, but I really wanted to explain my understanding so far in detail. By the way, I've tried the watershed segmentation from both scikit-image and OpenCV; both gave me a black image.
Here is the code I have been using:
import cv2
import matplotlib.pyplot as plt
from skimage.morphology import reconstruction, watershed

img = cv2.imread('input.png', 0)
mask = img
# marker for the grayscale reconstruction: an eroded copy of the image
marker = cv2.erode(mask, cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)), iterations=3)
reconstructedImage = reconstruction(marker, mask)
# h-domes: difference between the original and the reconstruction
hdomes = img - reconstructedImage
cell_markers = cv2.threshold(hdomes, 0, 255, cv2.THRESH_BINARY)[1]
inverted = (255 - img)
labels = watershed(inverted, cell_markers)
cv2.imwrite('test.png', labels)

plt.figure()
plt.imshow(labels)
plt.show()
Thank you!
Here's a rough example for the watershed segmentation of your image with scikit-image.
What is missing in your script is calculating the Euclidean distance (see here and here) and extracting the local maxima from it.
Note that the watershed algorithm outputs a piece-wise constant image where pixels in the same regions are assigned the same value. What is shown in your 'desired output' panel (e) are the edges between the regions instead.
import numpy as np
import cv2
import matplotlib.pyplot as plt
from skimage.morphology import watershed
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.filters import threshold_local
img = cv2.imread('input.jpg',0)
'''Adaptive thresholding:
calculates thresholds in regions of size block_size surrounding each pixel
to handle the non-uniform background'''
block_size = 41
adaptive_thresh = threshold_local(img, block_size)  # an offset (e.g. offset=10) could also be passed
binary_adaptive = img > adaptive_thresh
# Calculate Euclidean distance
distance = ndi.distance_transform_edt(binary_adaptive)
# Find local maxima of the distance map
local_maxi = peak_local_max(distance, labels=binary_adaptive, footprint=np.ones((3, 3)), indices=False)
# Label the maxima
markers = ndi.label(local_maxi)[0]
''' Watershed algorithm
The option watershed_line=True leaves a one-pixel wide line
with label 0 separating the regions obtained by the watershed algorithm '''
labels = watershed(-distance, markers, watershed_line=True)
# Plot the result
plt.imshow(img, cmap='gray')
plt.imshow(labels==0,alpha=.3, cmap='Reds')
plt.show()
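Continuing from the script above, the boundary pixels mentioned earlier can also be extracted directly from the label image, as an alternative to watershed_line=True (a small sketch; the overlay styling is just one choice):
from skimage.segmentation import find_boundaries

edges = find_boundaries(labels, mode='outer')   # True on pixels between neighbouring regions
plt.imshow(img, cmap='gray')
plt.imshow(edges, alpha=.3, cmap='Reds')
plt.show()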
I'm working on a project in which we need to extract the cans being transported on a conveyor belt. I developed an automatic threshold selection algorithm based on Kittler's approach, which uses the histogram of the grayscale image to determine the optimum threshold for separating the object from the background (similar to Otsu's algorithm implemented in OpenCV).
Now, for the algorithm to be successful it requires proper contrast between the object being analyzed and the background, so I have had some trouble making it work with the images below. To enhance the contrast of the image I have tried different contrast-stretching and adaptive-equalization approaches, with poor results.
So, I would like suggestions on how to improve the image contrast, or on whether there's a different segmentation method that could work better on these images instead of thresholding. An important detail to consider is that the camera is working with a blue LED light.
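For reference, this is roughly the kind of histogram-based selection I mean; a minimal sketch of the Kittler-Illingworth minimum-error criterion (the file name is hypothetical, and this is not my production code):
import numpy as np
import cv2

def kittler_threshold(gray):
    # histogram of the uint8 grayscale image, normalised to probabilities
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    p = hist / hist.sum()
    best_t, best_j = 0, np.inf
    for t in range(1, 255):
        p1, p2 = p[:t].sum(), p[t:].sum()
        if p1 == 0 or p2 == 0:
            continue
        m1 = (np.arange(t) * p[:t]).sum() / p1
        m2 = (np.arange(t, 256) * p[t:]).sum() / p2
        v1 = ((np.arange(t) - m1) ** 2 * p[:t]).sum() / p1
        v2 = ((np.arange(t, 256) - m2) ** 2 * p[t:]).sum() / p2
        if v1 <= 0 or v2 <= 0:
            continue
        # minimum-error criterion J(t) from Kittler & Illingworth (1986)
        j = 1 + 2 * (p1 * np.log(np.sqrt(v1)) + p2 * np.log(np.sqrt(v2))) \
              - 2 * (p1 * np.log(p1) + p2 * np.log(p2))
        if j < best_j:
            best_j, best_t = j, t
    return best_t

gray = cv2.imread("can_image.jpg", 0)      # hypothetical file name
t = kittler_threshold(gray)
_, binary = cv2.threshold(gray, t, 255, cv2.THRESH_BINARY)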
Half full conveyor belt:
Full conveyor belt:
I wrote some Python code to start you off.
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt
import cv2
image = Image.open("TDP4f.jpg")
arr = np.asarray(image)
grey = np.mean(arr.astype(float)/255.0,axis=2)
grey = cv2.equalizeHist((255*grey).astype(np.uint8)).astype(float)/255.0
binary = cv2.adaptiveThreshold((255*grey).astype(np.uint8),255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C,cv2.THRESH_BINARY,5,0.0)
#binary = cv2.morphologyEx(binary.astype(np.uint8),cv2.MORPH_ERODE,np.ones((3,3)),iterations=1)
# --- Adapted from https://docs.opencv.org/4.x/da/d53/tutorial_py_houghcircles.html
cimg=arr.copy()
R=20
err = 0.1
circles = cv2.HoughCircles(binary, cv2.HOUGH_GRADIENT, 2, R,
                           param1=50, param2=30,
                           minRadius=int(R - err * R), maxRadius=int(R + err * R))
circles = np.uint16(np.around(circles))
for i in circles[0, :]:
    # draw the outer circle
    cv2.circle(cimg, (i[0], i[1]), i[2], (0, 255, 0), 2)
    # draw the center of the circle
    cv2.circle(cimg, (i[0], i[1]), 2, (0, 0, 255), 3)
# ---
plt.subplot(1,2,1)
plt.imshow(binary,cmap='gray')
plt.subplot(1,2,2)
plt.imshow(cimg)
plt.show()
Result:
You can tweak the parameters (e.g. R, err, param1, param2) to get a better fit.
Resources:
https://docs.opencv.org/4.x/da/d53/tutorial_py_houghcircles.html
https://docs.opencv.org/3.4/d4/d1b/tutorial_histogram_equalization.html
https://docs.opencv.org/4.x/d7/d4d/tutorial_py_thresholding.html
I have an image for which I have the green border as the segmentation mask's outline. I'm looking to refine this outline based on contours found in the original image, to get a mask like the one in the 2nd image, where the edges of the hair are more refined.
I've tried combinations of dilation & erosion of the segmentation mask, but it doesn't feel like a generic solution, since it involves manually tuning the kernel size.
Are there better approaches?
There is no generic solution; parameter tuning is always required to get the desired output. For getting more refined edges on the fine hairs, you can apply adaptive thresholding as below:
import cv2
from matplotlib import pyplot as plt

im = cv2.imread("model.jpg", 0)
plt.imshow(im)
thresh = cv2.adaptiveThreshold(im, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                               cv2.THRESH_BINARY_INV, 31, 3)
plt.imshow(thresh)
input:
output:
Note: the color changes are due to matplotlib's default colormap.
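Continuing from the snippet above, if you want to fold this detail back into your coarse segmentation mask, a rough sketch of one possibility (assuming mask is your existing binary mask at the same resolution; the 25-pixel band width is just a guess to tune):
import numpy as np

# keep thresholded hair detail only in a band around the current mask, then merge it in
band = cv2.dilate(mask, np.ones((25, 25), np.uint8))   # widened copy of the coarse mask
detail = cv2.bitwise_and(thresh, band)                  # hair strands close to the mask
refined = cv2.bitwise_or(mask, detail)                  # refined mask
plt.imshow(refined)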
I have an image to which I apply a bilateral filter, followed by adaptive thresholding to get the image below.
original image (this is a screenshot off the depth image of the object)
thresholded image
I would like to fit lines to the vertical parts/lines and find the center point, with output like the image below:
I can't seem to understand the output of cv2.adaptiveThreshold(). How are the purple pixels (i.e. my edges) represented, and how can a line be fitted? MWE:
import cv2
import matplotlib.pyplot as plt

image = cv2.imread("depth_frame0009.jpg")
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
bilateral_filter = cv2.bilateralFilter(gray_image, 15, 50, 50)
plt.figure()
plt.imshow(bilateral_filter)
plt.title("bilateral filter")
#plt.imsave("2dimage_gaussianFilter.png",blurred)
plt.imsave("depthmap_image_bilateralFilter.png",bilateral_filter)
th3 = cv2.adaptiveThreshold(bilateral_filter,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C,cv2.THRESH_BINARY,11,2)
plt.figure()
plt.imshow(th3)
========
edit:
Canny edges
contours
They are represented as an image, a matrix of uint8.
The reason it looks purple and yellow is that matplotlib is applying a colormap to it.
I generally prefer to use some specific parameters when plotting image-processing output images, e.g.
plt.imshow(th3, cmap='gray', interpolation='nearest')
If you are specifically interested in finding and fitting lines, you may want to use a different representation, such as Hough lines. Once you have the lines in the image, you can take the best-fit lines and find your center point between them.
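For example, here is a rough sketch of that approach with probabilistic Hough lines (the Canny and Hough parameters are guesses you would need to tune for your depth frames):
import cv2
import numpy as np

gray = cv2.imread("depth_frame0009.jpg", 0)
edges = cv2.Canny(gray, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                        minLineLength=40, maxLineGap=10)

# keep roughly vertical segments (more change in y than in x)
vertical = [l[0] for l in lines if abs(l[0][2] - l[0][0]) < abs(l[0][3] - l[0][1])]

# x-position of each vertical segment, then the midpoint between the two extremes
xs = [(x1 + x2) / 2.0 for x1, y1, x2, y2 in vertical]
center_x = (min(xs) + max(xs)) / 2.0
print(center_x)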
I am trying to perform image segmentation on the following image of brain tissue:
The following is what the segmented result should look like:
The following is the result I obtained after applying thresholding, morphological transformations, and contour-area filtering (used to remove noise in the image) to the original image:
Result before contour filtering:
Result after contour filtering:
However, in my result, some of the black edges got separated/broken apart. Is there any simple method that I can use to close the small gaps between some of the edges?
E.g. is it possible to fill the white spaces between the edges circled in red with black?
Any insights are appreciated.
The easiest method would be to use morphology: you simply perform a dilation operation followed by an erosion operation (a closing).
The following script uses OpenCV's morphology function:
import numpy as np
import cv2

folder = 'C:/Users/Mark/Desktop/'
image = cv2.imread(folder + '6P7Lj.png')

# invert so the edges to be connected are treated as foreground (white)
image2 = cv2.bitwise_not(image)

# closing = dilation followed by erosion with the same kernel
kernel = np.ones((8,8),np.uint8)
closing = cv2.morphologyEx(image2, cv2.MORPH_CLOSE, kernel)

# invert back to the original black-edges-on-white format
closing = cv2.bitwise_not(closing)

cv2.imshow('image', closing)
cv2.waitKey(0)
These are the results:
Most of the edges were connected. I'm sure you can play with the function's kernel further to get better results (or even use OpenCV's separate dilation and erosion functions to get even more control).
Note: I had to invert the image before performing the operation, because morphology treats white pixels as foreground and black as background, unlike your image. At the end it was inverted again to return to your format.
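As a small sketch of that separate-control variant (the iteration counts here are placeholders to tune, not recommended values):
# continuing from the script above: same inverted image and kernel
dilated = cv2.dilate(image2, kernel, iterations=2)    # grow the edges to bridge the gaps
closed = cv2.erode(dilated, kernel, iterations=2)     # shrink them back towards original thickness
closed = cv2.bitwise_not(closed)                      # back to black edges on white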
I want to find the orientation of the bright object in the attached images. For this purpose, I used Principal Component Analysis (PCA).
In the case of image 1, PCA finds the correct orientation, as the first principal component is aligned in that direction. However, in the case of image 2, the principal components are misoriented.
Can anyone please explain why PCA is showing different results for the two images? Also, please suggest whether there is some other method to find the orientation of the object.
import os
import gdal
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import skimage
from skimage.filters import threshold_otsu
from skimage.filters import try_all_threshold
import cv2
import math
from skimage import img_as_ubyte
from skimage.morphology import convex_hull_image
import pandas as pd
file="path to image file"
(fileRoot, fileExt)= os.path.splitext(file)
ds = gdal.Open(file)
band = ds.GetRasterBand(1)
arr = band.ReadAsArray()
geotransform = ds.GetGeoTransform()
[cols, rows] = arr.shape
thresh = threshold_otsu(arr)
binary = arr > thresh
points = binary>0
y,x = np.nonzero(points)
x = x - np.mean(x)
y = y - np.mean(y)
coords = np.vstack([x, y])
cov = np.cov(coords)
evals, evecs = np.linalg.eig(cov)
sort_indices = np.argsort(evals)[::-1]
evec1, evec2 = evecs[:, sort_indices]
x_v1, y_v1 = evec1
x_v2, y_v2 = evec2
scale = 40
plt.plot([x_v1*-scale*2, x_v1*scale*2],
[y_v1*-scale*2, y_v1*scale*2], color='red')
plt.plot([x_v2*-scale, x_v2*scale],
[y_v2*-scale, y_v2*scale], color='blue')
plt.plot(x,y, 'k.')
plt.axis('equal')
plt.gca().invert_yaxis()
plt.show()
theta = np.arctan2(x_v1, y_v1) * 180 / math.pi  # angle of the first principal component
You say you are using just the white pixels. Did you check which pixels are actually selected, for example by rendering them as an overlay? In any case, I do not think that is enough, especially for your second image, as it does not contain any fully saturated white pixels. I would apply more processing before the PCA:
1. Enhance dynamic range
Your current images do not need this step, as they contain both black and almost fully saturated white. This step allows you to unify threshold values among more sample input images. For more info see:
Enhancing dynamic range and normalizing illumination
2. Smooth a bit
This step will significantly lower the intensity of noise points and smooth the edges of bigger objects (but shrink them a bit). This can be done by any FIR filter, a convolution, or Gaussian filtering. Some also use morphology operators for this.
3. Threshold by intensity
This will remove darker pixels (clear them to black) so the noise is fully removed.
4. Enlarge the remaining objects by morphology operators back to their former size
You can avoid this by enlarging the resulting OBB by a few pixels (the number is tied to the smoothing strength from step 2).
5. Now apply the OBB search
You are using PCA, so use it. I am using this instead:
How to Compute OBB of Multiple Curves?
When I tried your images with the above approach (without step 4), I got these results:
Another problem I noticed with your second image is that there are not many white pixels in it. That may bias the PCA significantly, especially without preprocessing. I would try enlarging the image with bicubic interpolation and using that as input. Maybe that is the only problem you have with it.
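A minimal sketch of steps 1-4 in Python (the kernel sizes, upscale factor, and the use of Otsu here are my own assumptions, not tuned values):
import cv2
import numpy as np

# assuming 'arr' is the array from band.ReadAsArray() in your script
# 1. stretch intensities to the full 0-255 range (dynamic range enhancement)
norm = cv2.normalize(arr.astype(np.float32), None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
# optionally enlarge with bicubic interpolation to get more foreground pixels
norm = cv2.resize(norm, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)
# 2. smooth to suppress isolated noise pixels
blurred = cv2.GaussianBlur(norm, (5, 5), 0)
# 3. threshold by intensity so only the bright object remains
_, binary = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# 4. grow the object back roughly to its former size
binary = cv2.dilate(binary, np.ones((3, 3), np.uint8), iterations=1)
# feed (binary > 0) into the PCA code instead of the raw Otsu mask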