Refine segmentation mask based on contours of image - opencv

I have an image for which I have the green border as the segmentation mask's outline. I'm looking to refine this outline based on contours found in the original image, to get a mask like the one in the 2nd image, where the edges of the hair are more refined.
I've tried combinations of dilation & erosion of the segmentation mask, but that didn't feel like a generic solution, since it involves manually tuning the kernel size.
Are there better approaches?

There is no generic solution. Parameter tuning is always required to get the desired output. To recover the fine edges of the hair, you can apply adaptive thresholding as below:
import cv2
from matplotlib import pyplot as plt

# Load the image as grayscale
im = cv2.imread("model.jpg", 0)
plt.imshow(im)

# Adaptive mean threshold: block size 31, constant 3 subtracted from the local mean
thresh = cv2.adaptiveThreshold(im, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                               cv2.THRESH_BINARY_INV, 31, 3)
plt.imshow(thresh)
input:
output:
Note: the color changes are due to matplotlib's default colormap.
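If you then want to fold the recovered hair detail back into your segmentation mask, a minimal sketch is to keep the thresholded pixels only within a band around the current mask border. This assumes a hypothetical binary mask file mask.png (white = foreground), and the band width is still a tunable parameter, in line with the point above that some tuning is unavoidable:
import cv2
import numpy as np

im = cv2.imread("model.jpg", 0)
mask = cv2.imread("mask.png", 0)  # hypothetical file: your binary segmentation mask

# Same adaptive threshold as above to recover fine hair detail
detail = cv2.adaptiveThreshold(im, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                               cv2.THRESH_BINARY_INV, 31, 3)

# Only accept detail in a band around the existing mask, then merge it in
band = cv2.dilate(mask, np.ones((15, 15), np.uint8))
refined = cv2.bitwise_or(mask, cv2.bitwise_and(detail, band))
cv2.imwrite("refined_mask.png", refined)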

Related

Separating overlaying colors in opencv

Let's say I have a picture with two colored objects, but their colors overlap.
Object A is a star, and object B is a rectangle with a star-like hole. They overlap each other.
Is there a way of separating the objects? Kinda like finding the intersection and summing up to their "pure" standards?
I see two ways of doing this: via shape recognition or via color. Don't know which way would be smarter.
First I tried to separate the colors via a grayscale histogram, as in BATspock's question:
import cv2
import numpy as np
from matplotlib import pyplot as plt

img = cv2.imread("origin.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
hist = cv2.calcHist([gray], [0], None, [256], [0, 256])
colors = np.where(hist > 25000)
img_number = 0
for color in colors[0]:
    print(color)
    # Keep only the pixels whose gray level matches this histogram peak
    split_image = img.copy()
    split_image[np.where(gray != color)] = 0
    cv2.imwrite(str(img_number) + ".jpg", split_image)
    img_number += 1
plt.hist(gray.ravel(), 256, [0, 256])
plt.savefig('plt')
plt.show()
But I had no success in getting the color of the intersection, due to its very low histogram values.
I also tried the example presented here; although it produces the effect I desire, it only covers splitting the image into its R, G, and B channels. Is there anything similar to this output, but choosing the colors instead? Maybe feature recognition? Watershed? I'm lost.
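For reference, here is a minimal sketch of the color-based idea from the question itself: mask each pure color, then OR the overlap region back into both objects to restore their "pure" shapes. It assumes the overlap renders as a distinct third color, and the BGR values and tolerance are placeholders you would need to sample from the actual image:
import cv2
import numpy as np

img = cv2.imread("origin.jpg")

# Placeholder BGR values -- sample these from your actual image
COLOR_A = np.array([40, 40, 200])    # the star's pure color
COLOR_B = np.array([200, 40, 40])    # the rectangle's pure color
COLOR_AB = np.array([120, 40, 120])  # the color of the overlap region

tol = 30  # per-channel tolerance around each color

def color_mask(bgr):
    return cv2.inRange(img, bgr - tol, bgr + tol)

mask_a = color_mask(COLOR_A)
mask_b = color_mask(COLOR_B)
mask_ab = color_mask(COLOR_AB)

# OR the intersection back into both objects to recover their full shapes
star = cv2.bitwise_or(mask_a, mask_ab)
rectangle = cv2.bitwise_or(mask_b, mask_ab)
cv2.imwrite("0.jpg", star)
cv2.imwrite("1.jpg", rectangle)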

How to extract the object of interest from a background when there's low contrast?

I'm working on a project in which we need to extract the cans being transported by a conveyor belt. I developed an automatic threshold selection algorithm based on Kittler's approach, which uses the histogram of the grayscale image to determine the optimum threshold to separate the object from the background (similar to Otsu's algorithm implemented in OpenCV).
Now, for the algorithm to be successful it requires proper contrast between the object being analyzed and the background, so I have had some trouble making it work with the images below. To enhance the contrast of the image I have tried different contrast stretching and adaptive equalization, with poor results.
So, I would like suggestions on how to improve the image contrast. Or is there a different segmentation method that could work better on these images instead of thresholding? An important detail to consider is that the camera is working with a blue LED light.
Half full conveyor belt:
Full conveyor belt:
I wrote some Python code to start you off.
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt
import cv2

image = Image.open("TDP4f.jpg")
arr = np.asarray(image)

# Convert to grayscale, equalize the histogram, then binarize adaptively
grey = np.mean(arr.astype(float) / 255.0, axis=2)
grey = cv2.equalizeHist((255 * grey).astype(np.uint8)).astype(float) / 255.0
binary = cv2.adaptiveThreshold((255 * grey).astype(np.uint8), 255,
                               cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 5, 0.0)
#binary = cv2.morphologyEx(binary.astype(np.uint8), cv2.MORPH_ERODE, np.ones((3,3)), iterations=1)

# --- Adapted from https://docs.opencv.org/4.x/da/d53/tutorial_py_houghcircles.html
cimg = arr.copy()
R = 20     # expected can radius in pixels
err = 0.1  # allowed relative deviation from R
circles = cv2.HoughCircles(binary, cv2.HOUGH_GRADIENT, 2, R,
                           param1=50, param2=30,
                           minRadius=int(R - err * R), maxRadius=int(R + err * R))
circles = np.uint16(np.around(circles))
for i in circles[0, :]:
    # draw the outer circle
    cv2.circle(cimg, (i[0], i[1]), i[2], (0, 255, 0), 2)
    # draw the center of the circle
    cv2.circle(cimg, (i[0], i[1]), 2, (0, 0, 255), 3)
# ---

plt.subplot(1, 2, 1)
plt.imshow(binary, cmap='gray')
plt.subplot(1, 2, 2)
plt.imshow(cimg)
plt.show()
Result:
You can tweak the parameters to get a better fit.
Resources:
https://docs.opencv.org/4.x/da/d53/tutorial_py_houghcircles.html
https://docs.opencv.org/3.4/d4/d1b/tutorial_histogram_equalization.html
https://docs.opencv.org/4.x/d7/d4d/tutorial_py_thresholding.html

OpenCV: How to close edges in a binary image

I am trying to perform image segmentation on the following image of brain tissue:
The following is what the segmented result should look like:
I have the following result which I have obtained after applying thresholding, morphological transformations and contour area filtering (used to remove noise in the image) to the original image:
Result before contour filtering:
Result after contour filtering:
However, in my result, some of the black edges got separated/broken apart. Is there any simple method that I can use to close the small gaps between some of the edges?
E.g. is it possible to fill the white spaces between the edges circled in red with black?
Any insights are appreciated.
The easiest method would be morphology: you perform a dilation operation followed by an erosion operation, together known as closing.
The following script uses OpenCV's morphologyEx function:
import numpy as np
import cv2

folder = 'C:/Users/Mark/Desktop/'
image = cv2.imread(folder + '6P7Lj.png')

# Invert so the edges are white (foreground) for the morphology operation
image2 = cv2.bitwise_not(image)
kernel = np.ones((8, 8), np.uint8)
closing = cv2.morphologyEx(image2, cv2.MORPH_CLOSE, kernel)

# Invert back to the original black-edges-on-white format
closing = cv2.bitwise_not(closing)
cv2.imshow('image', closing)
cv2.waitKey(0)
This is the result:
Most of the edges were connected. I'm sure you can play with the function's kernel further to get better results, or even use OpenCV's separate dilation and erosion functions for even more control, as sketched below.
Note: I had to invert the image before performing the operation because morphology treats white pixels as positive and black as negative, unlike your image. In the end it was inverted again to return to your format.
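For completeness, a minimal sketch of the separate dilate/erode variant, which lets the two steps use different iteration counts (the kernel size and iteration counts here are placeholder values to tune):
import cv2
import numpy as np

image = cv2.imread('6P7Lj.png')
inverted = cv2.bitwise_not(image)  # edges must be white for the morphology

kernel = np.ones((3, 3), np.uint8)
# More dilation iterations bridge wider gaps; one fewer erosion iteration
# leaves the reconnected edges slightly thicker than the originals
bridged = cv2.dilate(inverted, kernel, iterations=4)
bridged = cv2.erode(bridged, kernel, iterations=3)

result = cv2.bitwise_not(bridged)  # back to black edges on white
cv2.imshow('closed edges', result)
cv2.waitKey(0)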

Segmentation problem for tomato leaf images in PlantVillage Dataset

I am trying to do segmentation of leaf images of tomato crops. I want to convert images like the following image
to the following image with a black background.
I have referenced this code from GitHub,
but it does not do well on this problem; it does something like this:
Can anyone suggest a way to do it?
The image is separable using the HSV-colorspace. The background has little saturation, so thresholding the saturation removes the gray.
Result:
Code:
import numpy as np
import cv2
# load image
image = cv2.imread('leaf.jpg')
# create hsv
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
# set lower and upper color limits
low_val = (0,60,0)
high_val = (179,255,255)
# Threshold the HSV image
mask = cv2.inRange(hsv, low_val,high_val)
# remove noise
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel=np.ones((8,8),dtype=np.uint8))
# apply mask to original image
result = cv2.bitwise_and(image, image,mask=mask)
#show image
cv2.imshow("Result", result)
cv2.imshow("Mask", mask)
cv2.imshow("Image", image)
cv2.waitKey(0)
cv2.destroyAllWindows()
The problem with your image is the different coloration of the leaf. If you convert the image to grayscale, you will see the problem for the binarization algorithm:
Do you notice the very different brightness of the bottom half and the top half of the leaf? This gives you three mostly uniformly bright areas of the image: The actual background, the top-half leaf and the bottom-half leaf. That's not good for binarization.
However, your problem can be solved by separating your color image into its respective channels. After separation, you will notice that in the blue channel the leaf looks very uniform in brightness:
Which makes sense if we think about the colors we are talking about: both green and yellow contain very little blue, if any.
This makes it easy for us to binarize it. For the sake of a clearer image, I first applied smoothing
and then used the iso_data threshold of ImageJ (you can, however, use any of the available automatic thresholding methods) to create a binary mask:
Because the algorithm has set the leaf to background (black), we have to invert it:
This mask can be further improved by applying binary "fill holes" algorithms:
This mask can be used to crop the original image to extract the leaf:
The quality of the result image could be further improved by eroding the mask a little bit.
For the sake of completeness: you do not have to smooth the image to get a result. Here is the mask for the unsmoothed image:
To remove the noise, you first apply binary fill holes, then binary closing followed by binary erosion. This will give you:
as a mask.
This will lead to
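A minimal OpenCV/SciPy sketch of the pipeline described above, assuming the input file is named leaf.jpg and substituting Otsu's method for ImageJ's iso_data threshold (both are automatic, histogram-based thresholds):
import cv2
import numpy as np
from scipy import ndimage as ndi

image = cv2.imread('leaf.jpg')
blue = image[:, :, 0]  # OpenCV loads images as BGR, so index 0 is the blue channel

# Smooth, then threshold automatically; THRESH_BINARY_INV performs the
# inversion step directly, since the leaf is dark in the blue channel
blue = cv2.GaussianBlur(blue, (5, 5), 0)
_, mask = cv2.threshold(blue, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Fill holes, close small gaps, and erode a little, as suggested above
mask = ndi.binary_fill_holes(mask > 0).astype(np.uint8) * 255
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
mask = cv2.erode(mask, np.ones((3, 3), np.uint8))

# Use the mask to crop the leaf out of the original image
result = cv2.bitwise_and(image, image, mask=mask)
cv2.imwrite('leaf_result.png', result)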

watershed segmentation always returns a black image

I've recently been working on a segmentation process for corneal endothelial cells, and I've found a pretty decent paper that describes how to perform it with nice results. I have been trying to follow that paper and implement it using scikit-image and OpenCV, but I've gotten stuck at the watershed segmentation.
I will briefly describe how is the process supposed to be:
First of all, you have the original endothelial cells image
original image
Then, they instruct you to perform a morphological grayscale reconstruction, in order to level out the grayscale of the image a little (however, they do not explain how to get the markers for the reconstruction, so I've been fooling around and tried to get some on my own).
This is what the reconstructed image was supposed to look like:
desired reconstruction
This is what my reconstructed image (lets label it as r) looks like:
my reconstruction
The purpose is to use the reconstructed image to get the markers for the watershed segmentation, how do we do that?! We take the original image (let's label it as f) and threshold (f - r) to extract the h-domes of the cells, i.e., our markers.
This is what the hdomes image was supposed to look like:
desired hdomes
This is what my hdomes image looks like:
my hdomes
I believe that the h-domes I've got are as good as theirs, so the final step is to perform the watershed segmentation on the original image, using the h-domes we've worked so hard to get!
As input image, we will use the inverted original image, and as markers, our markers.
This is the desired output:
desired output
However, I am only getting a black image, EVERY PIXEL IS BLACK, and I have no idea what's happening... I've also tried using their markers and inverted image, but I also got a black image. The paper I've been using is Luc M. Vincent, Barry R. Masters, "Morphological image processing and network analysis of cornea endothelial cell images", Proc. SPIE 1769.
I apologize for the long text, but I really wanted to explain my understanding so far in detail; btw, I've tried the watershed segmentation from both scikit-image and OpenCV, and both gave me the black image.
Here is the code that I have been using:
import cv2
import matplotlib.pyplot as plt
from skimage.morphology import reconstruction, watershed

img = cv2.imread('input.png', 0)
mask = img
marker = cv2.erode(mask, cv2.getStructuringElement(cv2.MORPH_ERODE, (3, 3)), iterations=3)
reconstructedImage = reconstruction(marker, mask)
hdomes = img - reconstructedImage
cell_markers = cv2.threshold(hdomes, 0, 255, cv2.THRESH_BINARY)[1]
inverted = (255 - img)
labels = watershed(inverted, cell_markers)
cv2.imwrite('test.png', labels)
plt.figure()
plt.imshow(labels)
plt.show()
Thank you!
Here's a rough example for the watershed segmentation of your image with scikit-image.
What is missing in your script is calculating the Euclidean distance (see here and here) and extracting the local maxima from it.
Note that the watershed algorithm outputs a piece-wise constant image where pixels in the same regions are assigned the same value. What is shown in your 'desired output' panel (e) are the edges between the regions instead.
import numpy as np
import cv2
import matplotlib.pyplot as plt
from skimage.morphology import watershed
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.filters import threshold_local
img = cv2.imread('input.jpg',0)
'''Adaptive thresholding
calculates thresholds in regions of size block_size surrounding each pixel
to handle the non-uniform background'''
block_size = 41
adaptive_thresh = threshold_local(img, block_size)#, offset=10)
binary_adaptive = img > adaptive_thresh
# Calculate Euclidean distance
distance = ndi.distance_transform_edt(binary_adaptive)
# Find local maxima of the distance map
local_maxi = peak_local_max(distance, labels=binary_adaptive, footprint=np.ones((3, 3)), indices=False)
# Label the maxima
markers = ndi.label(local_maxi)[0]
''' Watershed algorithm
The option watershed_line=True leave a one-pixel wide line
with label 0 separating the regions obtained by the watershed algorithm '''
labels = watershed(-distance, markers, watershed_line=True)
# Plot the result
plt.imshow(img, cmap='gray')
plt.imshow(labels==0,alpha=.3, cmap='Reds')
plt.show()
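Note: in more recent scikit-image releases, watershed has moved to skimage.segmentation, and peak_local_max no longer accepts indices=False (it returns peak coordinates instead), so the imports and the marker construction above may need small adjustments.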
