I am trying to do segmentation of leaf images of tomato crops. I want to convert images like the following image
into the following image with a black background.
I have referenced this code from GitHub,
but it does not do well on this problem; it does something like this:
Can anyone suggest a way to do it?
The image is separable in the HSV colorspace: the background has little saturation, so thresholding on saturation removes the gray.
Result:
Code:
import numpy as np
import cv2

# load image
image = cv2.imread('leaf.jpg')

# convert to HSV
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

# set lower and upper color limits
low_val = (0, 60, 0)
high_val = (179, 255, 255)

# threshold the HSV image
mask = cv2.inRange(hsv, low_val, high_val)

# remove noise
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel=np.ones((8, 8), dtype=np.uint8))

# apply mask to original image
result = cv2.bitwise_and(image, image, mask=mask)

# show images
cv2.imshow("Result", result)
cv2.imshow("Mask", mask)
cv2.imshow("Image", image)
cv2.waitKey(0)
cv2.destroyAllWindows()
The problem with your image is the different coloration of the leaf. If you convert the image to grayscale, you will see the problem for the binarization algorithm:
Do you notice the very different brightness of the bottom half and the top half of the leaf? This gives you three areas of mostly uniform brightness: the actual background, the top half of the leaf, and the bottom half of the leaf. That's not good for binarization.
However, your problem can be solved by separating your color image into its respective channels. After separation, you will notice that in the blue channel the leaf looks very uniform in brightness:
Which makes sense if we think about the colors involved: both green and yellow contain very little blue, if any.
This makes it easy for us to binarize. For the sake of a clearer image, I first applied smoothing
and then used the IsoData threshold of ImageJ (you can, however, use any of the available automatic thresholding methods) to create a binary mask:
Because the algorithm has set the leaf to background (black), we have to invert it:
This mask can be further improved by applying binary "fill holes" algorithms:
This mask can be used to crop the original image to extract the leaf:
The quality of the result image could be further improved by eroding the mask a little bit.
For the sake of completeness: you do not have to smooth the image to get a result. Here is the mask for the unsmoothed image:
To remove the noise, you first apply binary fill holes, then binary closing followed by binary erosion. This will give you:
as a mask.
This will lead to
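For those who prefer to stay in Python, here is a minimal OpenCV/SciPy sketch of the same pipeline. It uses Otsu's threshold as a stand-in for ImageJ's IsoData (both are automatic methods), and the filename is a placeholder:
import numpy as np
import cv2
from scipy import ndimage

# 'leaf.jpg' is a placeholder filename
image = cv2.imread('leaf.jpg')

# take the blue channel: green and yellow contain little blue,
# so the leaf is uniform there (OpenCV loads images as BGR)
blue = image[:, :, 0]

# smooth before thresholding
blurred = cv2.GaussianBlur(blue, (5, 5), 0)

# automatic threshold; THRESH_BINARY_INV folds in the inversion step,
# so the leaf comes out white instead of black
_, mask = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# binary fill holes
mask = ndimage.binary_fill_holes(mask > 0).astype(np.uint8) * 255

# optionally erode a little to tighten the mask edge
mask = cv2.erode(mask, np.ones((3, 3), np.uint8))

# crop the leaf out of the original image
result = cv2.bitwise_and(image, image, mask=mask)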
Related
I have an image to which I apply a bilateral filter, followed by adaptive thresholding to get the image below.
original image (this is a screenshot of the depth image of the object)
thresholded image
I would like to fit lines to the vertical parts/lines and find the center point, with output like the image below:
I can't seem to understand the output of cv2.adaptiveThreshold(). How are the purple pixels (i.e. my edges) represented? And how can a line be fitted? MWE:
import cv2
import matplotlib.pyplot as plt
image = cv2.imread("depth_frame0009.jpg")
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
bilateral_filter = cv2.bilateralFilter(gray_image, 15, 50, 50)
plt.figure()
plt.imshow(bilateral_filter)
plt.title("bilateral filter")
#plt.imsave("2dimage_gaussianFilter.png",blurred)
plt.imsave("depthmap_image_bilateralFilter.png",bilateral_filter)
th3 = cv2.adaptiveThreshold(bilateral_filter, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 11, 2)
plt.figure()
plt.imshow(th3)
Edit:
Canny edges
contours
They are represented as an image: a matrix of uint8 values.
The reason it is purple and yellow is that matplotlib is applying a colormap to it.
I generally prefer to set some specific parameters when plotting image-processing output, e.g.
plt.imshow(th3, cmap='gray', interpolation='nearest')
If you are specifically interested in finding and fitting lines you may want to use a different representation, such as Hough lines. Once you have the lines in the image you can take the best fit lines and find your center point between them.
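As a rough sketch of that route, assuming the thresholded image has been saved as thresholded.png and that "vertical" means a segment whose x-coordinates barely change (all thresholds below are guesses to tune):
import numpy as np
import cv2

binary = cv2.imread('thresholded.png', cv2.IMREAD_GRAYSCALE)

# probabilistic Hough transform; the vote threshold, minimum line
# length and maximum gap all need tuning for your image
lines = cv2.HoughLinesP(binary, rho=1, theta=np.pi / 180,
                        threshold=50, minLineLength=50, maxLineGap=10)

if lines is not None:
    # keep roughly vertical segments (x barely changes along the line)
    vertical = [l[0] for l in lines if abs(l[0][0] - l[0][2]) < 10]
    # midpoint between the leftmost and rightmost vertical segments
    xs = [(x1 + x2) / 2 for x1, y1, x2, y2 in vertical]
    if xs:
        print('approximate center x:', (min(xs) + max(xs)) / 2)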
I came across this Kaggle kernel that has the following function.
import cv2

def subtract_gaussian_blur(img):
    gb_img = cv2.GaussianBlur(img, (0, 0), 5)
    return cv2.addWeighted(img, 4, gb_img, -4, 128)
That converts this RGB image.
Into the following image.
I can see that the effect somewhat sharpens the image and makes it look more grayscale (not actually grayscale, since the image is still RGB), but I'm not sure I fully understand what is happening in the function, even after reading the OpenCV docs on GaussianBlur and addWeighted.
Also, does this particular image transformation have a specific name that I can read up on?
The main step I can see is cv2.addWeighted(img, 4, gb_img, -4, 128). The underlying equation for addWeighted is dst(I) = saturate(src1(I)*alpha + src2(I)*beta + gamma). In the example here, alpha is 4, beta is -4, and gamma is 128.
My understanding of how it works: it first performs a Gaussian blur to make a denoised version of the image. However, as well as removing noise, Gaussian blurring also "smears" edges, which is important later. It then subtracts the blurred version from the original, scales the difference by 4, and adds 128 to each pixel's colour channels, centring the result on mid-grey. (This blur-and-subtract construction is essentially a high-pass filter, closely related to unsharp masking.)
In regions where the original pixel is identical to the filtered pixel, this results in a uniform grey region. In areas where the original and filtered pixels differ a lot, you end up with either a lighter or a darker region, depending on whether the intensity of the original or the filtered pixel is higher. The differences are most pronounced around edges in the original image, because those are strongly "smeared" by the Gaussian blur.
The result isn't fully greyscale because addWeighted() is applied to each colour channel separately. In areas where the RGB values of the pre- and post-blur images differ in an unbalanced way (i.e. the difference between the two red channels is much bigger than between the blue or green channels), there will be a degree of colour rather than just grey.
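To make the arithmetic concrete, the addWeighted call can be reproduced with plain NumPy (the filename here is a placeholder):
import numpy as np
import cv2

img = cv2.imread('input.png')  # placeholder filename
gb_img = cv2.GaussianBlur(img, (0, 0), 5)

# cv2.addWeighted(img, 4, gb_img, -4, 128) computes, per channel,
# 4*img - 4*blur + 128, saturated to the [0, 255] range
diff = img.astype(np.int32) * 4 - gb_img.astype(np.int32) * 4 + 128
manual = np.clip(diff, 0, 255).astype(np.uint8)

auto = cv2.addWeighted(img, 4, gb_img, -4, 128)  # should match `manual`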
I am trying to perform image segmentation on the following image of brain tissue:
The following is what the segmented result should look like:
I have the following result which I have obtained after applying thresholding, morphological transformations and contour area filtering (used to remove noise in the image) to the original image:
Result before contour filtering:
Result after contour filtering:
However, in my result, some of the black edges got separated/broken apart. Is there any simple method I can use to close the small gaps between some of the edges?
E.g. is it possible to fill the white spaces between the edges circled in red with black?
Any insights are appreciated.
The easiest method would be to use morphology: you simply perform a dilation operation followed by an erosion operation (a closing).
The following script uses OpenCV's morphology function:
import numpy as np
import cv2

folder = 'C:/Users/Mark/Desktop/'
image = cv2.imread(folder + '6P7Lj.png')

# invert: morphology treats white pixels as foreground
image2 = cv2.bitwise_not(image)

# closing = dilation followed by erosion
kernel = np.ones((8, 8), np.uint8)
closing = cv2.morphologyEx(image2, cv2.MORPH_CLOSE, kernel)

# invert back to the original polarity
closing = cv2.bitwise_not(closing)

cv2.imshow('image', closing)
cv2.waitKey(0)
This is the result:
Most of the edges were connected. I'm sure you can play further with the function's kernel to get better results (or even use OpenCV's separate dilation and erosion functions to get even more control, as sketched below).
Note: I had to invert the image before performing the operation because morphology treats white pixels as positive and black as negative, unlike your image. At the end it was inverted again to return to your format.
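Here is a sketch of that separate dilate/erode variant; the kernel size and iteration counts are starting points to tune, not fixed values:
import numpy as np
import cv2

image = cv2.imread('6P7Lj.png')
inverted = cv2.bitwise_not(image)  # make the edges white (foreground)

kernel = np.ones((5, 5), np.uint8)
# equal iteration counts reproduce a closing; dilating more than
# you erode bridges larger gaps at the cost of thicker edges
dilated = cv2.dilate(inverted, kernel, iterations=2)
eroded = cv2.erode(dilated, kernel, iterations=2)

result = cv2.bitwise_not(eroded)  # back to the original polarity
cv2.imshow('image', result)
cv2.waitKey(0)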
I have image data that comprises mostly roundish images surrounded by boring black background. I am handling this by grabbing the bounding box using PIL's getbbox(), and then cropping. This gives me some satisfaction, but tiny specks of grey within the sea of boring black cause getbbox() to return bounding boxes that are too large.
A deliberately generated problematic image is attached; note the single dark-grey pixel in the lower right. I have also included a more typical "real world" image.
Generated problematic image
Real-world image
I have done some faffing around with UnsharpMask and SHARP and BLUR filters in the PIL ImageFilter module with no success.
I want to throw out those stray gray pixels and get a nice bounding box, but without hosing my image data.
You want to run a median filter on a copy of your image to get the bounding box, then apply that bounding box to your original, unblurred image. So:
copy your original image
apply a median blur filter to the copy - probably 5x5 depending on the size of the speck
get bounding box
apply bounding box to your original image.
Here is some code to get you started:
#!/usr/local/bin/python3
import numpy as np
from PIL import Image, ImageFilter
# Load image
im = Image.open('eye.png').convert('L')
orig = im.copy() # Save original
# Threshold to make black and white
thr = im.point(lambda p: p > 128 and 255)
# Following line is just for debug
thr.save('result-1.png')
# Median filter to remove noise
fil = thr.filter(ImageFilter.MedianFilter(3))
# Following line is just for debug
fil.save('result-2.png')
# Get bounding box from filtered image
bbox = fil.getbbox()
# Apply bounding box to original image and save
result = orig.crop(bbox)
result.save('result.png')
I wanted to read characters/triangles from a bar.
Firstly I applied Otsu with different values to this bar but couldn't get all the characters properly. I also tried triangle detection but couldn't extract them that way either. The characters' colours vary. Could someone suggest another way/algorithm to extract them? Also, is there any way to do colour sweeping, i.e. try all colours and extract whatever is present (extract everything coloured from an image with a black-and-white background)?
ret, im1 = cv2.threshold(crop_img, 0, 255, cv2.THRESH_OTSU)
The challenges (the last one is the hardest):
The best result I got, which is still unsuccessful:
Your problem is best solved using color separation. You can use the inRange() function for that (docs). This is usually done best in the HSV colorspace. The code below shows how you can do this.
You can use this script to find the value ranges you need to do color separation. It also has a sample image that can help you understand how HSV works.
Result:
Purple only:
Code:
import numpy as np
import cv2
# load image
img = cv2.imread("image.png")
# Convert BGR to HSV
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# define range of HSV-color
lower_val = np.array([0,50,80])
upper_val = np.array([179,255,255])
# purple only
#lower_val = np.array([140,50,80])
#upper_val = np.array([170,255,255])
# Threshold the HSV image to get a mask that holds the markings
mask = cv2.inRange(hsv, lower_val, upper_val)
# create an image of the markings with background excluded
img_masked = cv2.bitwise_and(img, img, mask=mask)
# display image
cv2.imshow("result", img_masked)
cv2.waitKey(0)
cv2.destroyAllWindows()
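As for the "colour sweeping" idea from the question: you can build one mask per colour range and OR them together. The ranges below are illustrative guesses, not measured values; use the script linked above to find real ones:
import numpy as np
import cv2

img = cv2.imread("image.png")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# illustrative hue ranges, not measured values
ranges = [
    ((0, 50, 80), (10, 255, 255)),     # red-ish
    ((40, 50, 80), (80, 255, 255)),    # green-ish
    ((140, 50, 80), (170, 255, 255)),  # purple-ish
]

# OR the per-range masks into one combined mask
combined = np.zeros(img.shape[:2], dtype=np.uint8)
for lower, upper in ranges:
    mask = cv2.inRange(hsv, np.array(lower), np.array(upper))
    combined = cv2.bitwise_or(combined, mask)

result = cv2.bitwise_and(img, img, mask=combined)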