OpenCV watershed on Grayscale image

I am ready to pull my hair out; I have no idea what is going on.
I am performing watershed on an image I have. I have created markers for the watershed, and I can apply the watershed to my original, 3-channel color image. HOWEVER, I need to do some image analysis prior to the watershed (noise reduction, etc.).
Thus, the watershed applied to my original image does not turn out properly. Instead, I want to apply the watershed, with my markers, to an image with a distance transform applied.
The relevant code:
# Need to watershed this
filled_img = filled_img.astype(np.uint8)
dist = cv2.distanceTransform(filled_img, cv2.DIST_L2, 0)
dist *= (1/dist.max())
dist3d = cv2.cvtColor(dist, cv2.COLOR_GRAY2BGR)
watershed_markers = cv2.watershed(dist3d, markers)
#watershed_markers = watershed(-dist, markers, mask=filled_img)
fig = plt.figure(figsize = (15,15))
plt.imshow(watershed_markers)
watershed_img = crop_img
watershed_img[watershed_markers == -1] = [255, 0, 0]  # mark boundary pixels in red
plt.figure(figsize=(20,20))
plt.imshow(watershed_img, 'jet')
However, no matter what I try, I get this error:
error Traceback (most recent call last)
new_BSA.ipynb Cell 12 in <cell line: 10>()
5 dist *= (1/dist.max())
7 dist3d = cv2.cvtColor(dist, cv2.COLOR_GRAY2BGR)
---> 10 watershed_markers = cv2.watershed(dist3d, markers)
11 #watershed_markers = watershed(-dist, markers, mask=filled_img)
13 fig = plt.figure(figsize = (15,15))
error: OpenCV(4.5.5) D:\a\opencv-python\opencv-python\opencv\modules\imgproc\src\segmentation.cpp:161: error: (-215:Assertion failed) src.type() == CV_8UC3 && dst.type() == CV_32SC1 in function 'cv::watershed'
Does anyone have any idea how to resolve this?
It is frustrating because my original image and the 3D distance image are both 3-channel images, so I don't know why this error is showing up.
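One thing I notice: cv2.distanceTransform returns a float32 image, so dist3d may be CV_32FC3 rather than the CV_8UC3 the assertion asks for. If that is the culprit, something like this conversion (a sketch, untested) might be all that is needed:
# Scale the normalized float distances to 8-bit before building the
# 3-channel image that cv2.watershed expects (CV_8UC3); the markers
# from cv2.connectedComponents are already the required CV_32SC1.
dist_u8 = (dist * 255).astype(np.uint8)
dist3d = cv2.cvtColor(dist_u8, cv2.COLOR_GRAY2BGR)
watershed_markers = cv2.watershed(dist3d, markers)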
Any help is greatly appreciated
EDIT:
For a minimal reproducible example, I will start from my processed binary images, since I can't perform the watershed on my original image, which is shown here:
Original image
After processing, I get two binary images:
closing, on the left, which I use to obtain my markers, and filled_img on the right, which I want to apply the watershed to.
Processed image
From here, I extract the markers:
# Get difference between the two images, closing and filled_img
closing = closing.astype(np.uint8)
filled_img = filled_img.astype(np.uint8)
markers = cv2.subtract(filled_img, closing)
Then, I create a sure background from the image (areas close to the objects that I know are background) by using dilation. Then, I extract the unknown regions by taking the difference between the sure background and my markers (the sure foreground):
# sure background area
kernel = np.ones((3,3), np.uint8)
sure_bg = cv2.dilate(filled_img, kernel, iterations=3)
# sure fg area
sure_fg = markers
# unknown region
unknown = cv2.subtract(sure_bg, markers)
Then, following this example: https://docs.opencv.org/4.x/d3/db4/tutorial_py_watershed.html
I label my regions, ensuring that the unknown region is labeled 0, which is where the watershed will flood:
ret, markers = cv2.connectedComponents(sure_fg)
markers = markers+1
markers[unknown > 0] = 0  # unknown pixels get label 0
Here is an image of what the markers now look like:
markers
Finally, from my initial post, I apply the watershed, where the error is appearing:
# Need to watershed this
filled_img = filled_img.astype(np.uint8)
dist = cv2.distanceTransform(filled_img, cv2.DIST_L2, 0)
dist *= (1/dist.max())
dist3d = cv2.cvtColor(dist, cv2.COLOR_GRAY2BGR)
watershed_markers = cv2.watershed(dist3d, markers)
#watershed_markers = watershed(-dist, markers, mask=filled_img)
fig = plt.figure(figsize = (15,15))
plt.imshow(watershed_markers)
watershed_img = crop_img
watershed_img[watershed_markers == -1] = [255, 0, 0]  # mark boundary pixels in red

Related

Using map_coordinates to upscale an image

I am tasked with creating a low-resolution version of an image, with the same shape, by shrinking the image randomly (to lose data) and expanding it back. However, I cannot use any of the 'resize' methods such as those in scikit-image/OpenCV; I am only allowed to use scipy.ndimage.zoom and map_coordinates.
I've managed to do the following (im is a grayscale image):
factor = np.random.uniform(0.25, 1)
zoomed_im = ndimage.zoom(im, factor)
height_range = np.arange(0, im.shape[0])
width_range = np.arange(0, im.shape[1])
col, row = np.meshgrid(width_range, height_range)
zoom_out = map_coordinates(input=zoomed_im, coordinates=[row, col])
However, I get the same zoomed-in image with the rest of the pixels filled in as black. I understand this is due to the default parameters of map_coordinates:
mode='constant'
cval = 0.0
How can I enlarge the image back to its original shape using interpolation?
You can use a different step size in np.arange():
factor = np.random.uniform(0.25, 1)
zoomed_im = ndimage.zoom(im, factor)
height_range = np.arange(0, im.shape[0]*factor, step=factor)
width_range = np.arange(0, im.shape[1]*factor, step=factor)
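Putting the whole round trip together, a minimal sketch (assuming im is the grayscale input; the slices guard against np.arange overshooting by one sample due to float rounding, and mode='nearest' keeps the last row/column from sampling past the edge):
import numpy as np
from scipy import ndimage
from scipy.ndimage import map_coordinates

factor = np.random.uniform(0.25, 1)
zoomed_im = ndimage.zoom(im, factor)  # shrink to lose data
# Sample the zoomed image on a grid with the original number of points,
# spaced 'factor' apart, so map_coordinates interpolates back up.
height_range = np.arange(0, im.shape[0]*factor, step=factor)[:im.shape[0]]
width_range = np.arange(0, im.shape[1]*factor, step=factor)[:im.shape[1]]
col, row = np.meshgrid(width_range, height_range)
zoom_out = map_coordinates(input=zoomed_im, coordinates=[row, col], mode='nearest')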

How to rotate a non-squared image in frequency domain

I want to rotate an image in the frequency domain. Inspired by the answers in Image rotation and scaling the frequency domain? I managed to rotate square images. (See the following Python script using OpenCV.)
import cv2
import numpy as np
from numpy.fft import fftshift, ifftshift

angle = 30.0  # rotation in degrees, as cv2.getRotationMatrix2D expects
M = cv2.imread("lenna.png", cv2.IMREAD_GRAYSCALE)  # single channel: cv2.dft and the Hanning window are single-channel
M = np.float32(M)
hanning = cv2.createHanningWindow((M.shape[1], M.shape[0]), cv2.CV_32F)
M = hanning*M
sM = fftshift(M)
rotation_center = (M.shape[1]/2, M.shape[0]/2)
rot_matrix = cv2.getRotationMatrix2D(rotation_center, angle, 1.0)
FsM = fftshift(cv2.dft(sM, flags=cv2.DFT_COMPLEX_OUTPUT))
rFsM = cv2.warpAffine(FsM, rot_matrix, (FsM.shape[1], FsM.shape[0]), flags=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)
IrFsM = ifftshift(cv2.idft(ifftshift(rFsM), flags=cv2.DFT_REAL_OUTPUT))
This works fine with square images. (Better results could be achieved by padding the image.)
However, when using only a non-square portion of the image, the rotation in the frequency domain shows some kind of shearing effect.
Any idea how to achieve this? Obviously I could pad the image to make it square, but the final purpose of all this is to rotate FFTs as fast as possible for an iterative image registration algorithm, and padding would slightly slow down the algorithm.
Following the suggestion of @CrisLuengo, I found the affine transform needed to avoid padding the image. Obviously it will depend on the image size and the application, but for my case avoiding the padding is very interesting.
The modified script looks now like:
# rot_matrix = cv2.getRotationMatrix2D(rotation_center, angle, 1.0)
# getRotationMatrix2D took degrees; the trigonometry below wants radians
angle = np.radians(angle)
kx = 1.0
ky = 1.0
if M.shape[0] > M.shape[1]:
    kx = float(M.shape[0]) / M.shape[1]
else:
    ky = float(M.shape[1]) / M.shape[0]
affine_transform = np.zeros([2, 3])
affine_transform[0, 0] = np.cos(angle)
affine_transform[0, 1] = np.sin(angle)*ky/kx
affine_transform[0, 2] = (1-np.cos(angle))*rotation_center[0] - ky/kx*np.sin(angle)*rotation_center[1]
affine_transform[1, 0] = -np.sin(angle)*kx/ky
affine_transform[1, 1] = np.cos(angle)
affine_transform[1, 2] = kx/ky*np.sin(angle)*rotation_center[0] + (1-np.cos(angle))*rotation_center[1]
FsM = fftshift(cv2.dft(sM, flags=cv2.DFT_COMPLEX_OUTPUT))
rFsM = cv2.warpAffine(FsM, affine_transform, (FsM.shape[1], FsM.shape[0]), flags=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)
IrFsM = ifftshift(cv2.idft(ifftshift(rFsM), flags=cv2.DFT_REAL_OUTPUT))

I wanted to detect objects in an HSV image, but I keep getting an error: Expected Ptr&lt;cv::UMat&gt; for argument '%s'

I was trying to create a trackbar window and get the HSV values of the image by adjusting the trackbars. I created a mask and then adjusted the trackbars to detect an object in the HSV image.
    def nothing(x):
        pass
cv.namedWindow("Tracking")
cv.createTrackbar("LH","Tracking",0,255,nothing)
cv.createTrackbar("LS","Tracking",0,255,nothing)
cv.createTrackbar("LV","Tracking",0,255,nothing)
cv.createTrackbar("UH","Tracking",255,255,nothing)
cv.createTrackbar("US","Tracking",255,255,nothing)
cv.createTrackbar("UV","Tracking",255,255,nothing)
while True:
    frame = cv.imread("C:/Users/acer/Desktop/insects/New folder/ins.jpg")
    hsv = cv.cvtColor(frame,cv.COLOR_BGR2HSV)
    l_h = cv.getTrackbarPos("LH","Tracking")
    l_s = cv.getTrackbarPos("LS","Tracking")
    l_v = cv.getTrackbarPos("LV","Tracking")
    u_h = cv.getTrackbarPos("UH","Tracking")
    u_s = cv.getTrackbarPos("US","Tracking")
    u_v = cv.getTrackbarPos("UV","Tracking")
    l_b = np.array([l_h,l_s,l_v])
    u_b = np.array([u_h,u_s,u_v])
    mask = (hsv,l_b,u_b)
    res = cv.bitwise_and(frame,frame,mask=mask)
    cv.imshow("frame",frame)
    cv.imshow("mask",mask)
    cv.imshow("res",res)
    key = cv.waitKey(1)
    if key == 27:
        break
cv.destroyAllWindows()
There are a few issues with your code:
1) You have no import statements. You need at least:
import cv2 as cv
import numpy as np
2) Your indentation is incorrect. Your function nothing() should not be indented.
3) You omitted to call inRange(), you need:
mask = cv.inRange(hsv,l_b,u_b)
4) You have scaled the Hue into the range 0..255, when with uint8 images it actually has the range 0..180 (so that a full 360 degrees comes out as 180, below the 255 upper limit of uint8).
By the way, it is fairly poor practice to do "loop invariant" stuff inside a loop - I mean the part where you hit the disk every millisecond and re-read the image, re-decode the JPEG and convert it to HSV. All that can be done outside the loop, then inside it, just do a quick memory copy of the HSV image.
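Putting those fixes together, a minimal sketch (same file path as in the question; the Hue trackbars are capped at 180 per point 4, and the image is read and converted once, outside the loop):
import cv2 as cv
import numpy as np

def nothing(x):
    pass

cv.namedWindow("Tracking")
cv.createTrackbar("LH", "Tracking", 0, 180, nothing)   # Hue tops out at 180 for uint8
cv.createTrackbar("LS", "Tracking", 0, 255, nothing)
cv.createTrackbar("LV", "Tracking", 0, 255, nothing)
cv.createTrackbar("UH", "Tracking", 180, 180, nothing)
cv.createTrackbar("US", "Tracking", 255, 255, nothing)
cv.createTrackbar("UV", "Tracking", 255, 255, nothing)

# Loop-invariant work done once: read the file and convert outside the loop
frame = cv.imread("C:/Users/acer/Desktop/insects/New folder/ins.jpg")
hsv = cv.cvtColor(frame, cv.COLOR_BGR2HSV)

while True:
    l_b = np.array([cv.getTrackbarPos("LH", "Tracking"),
                    cv.getTrackbarPos("LS", "Tracking"),
                    cv.getTrackbarPos("LV", "Tracking")])
    u_b = np.array([cv.getTrackbarPos("UH", "Tracking"),
                    cv.getTrackbarPos("US", "Tracking"),
                    cv.getTrackbarPos("UV", "Tracking")])
    mask = cv.inRange(hsv, l_b, u_b)  # the call the question omitted
    res = cv.bitwise_and(frame, frame, mask=mask)
    cv.imshow("frame", frame)
    cv.imshow("mask", mask)
    cv.imshow("res", res)
    if cv.waitKey(1) == 27:  # Esc to quit
        break
cv.destroyAllWindows()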

error: (-215:Assertion failed) totalSampleCount > 0 in function 'GMM::endLearning'

I'm trying to use OpenCV to remove the background of my pictures.
When I run it on a single file, it works.
The code is as below:
def bgremove(name, count):
    import cv2
    import numpy as np
    # cv2.namedWindow('image',cv2.WINDOW_NORMAL)
    # Load the image
    imgo = cv2.imread(name)  # the place to input the picture path
    height, width = imgo.shape[:2]
    # Create a mask holder
    mask = np.zeros(imgo.shape[:2], np.uint8)
    # GrabCut the object
    bgdModel = np.zeros((1,65), np.float64)
    fgdModel = np.zeros((1,65), np.float64)
    # Hard-coding the rect… The object must lie within this rect.
    rect = (10, 10, width-30, height-30)
    cv2.grabCut(imgo, mask, rect, bgdModel, fgdModel, 5, cv2.GC_INIT_WITH_RECT)
    mask = np.where((mask==2)|(mask==0), 0, 1).astype('uint8')
    img1 = imgo*mask[:,:,np.newaxis]
    # Get the background
    background = imgo - img1
    # Change all pixels in the background that are not black to white
    background[np.where((background > [0,0,0]).all(axis=2))] = [255,255,255]
    # Add the background and the image
    final = background + img1
    DP1 = count
    # To be done – smoothing the edges…
    cv2.imwrite("A%s.JPG" % DP1, final)
However, when I use the function in a for loop to generate a group of pictures, it pops up:
error: (-215:Assertion failed) totalSampleCount > 0 in function
'GMM::endLearning'
I encountered this problem, and the issue was that the rectangle rect was too small. I don't know the dimensions of your image, but try a bigger rectangle and it may solve this.
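As a hedge against images where fixed offsets leave too little area, one sketch (the margin fraction here is purely illustrative) is to derive the rectangle from the image size instead:
import cv2

imgo = cv2.imread(name)
height, width = imgo.shape[:2]
# A proportional border keeps the rect from collapsing on small images
margin_x = max(1, width // 20)
margin_y = max(1, height // 20)
rect = (margin_x, margin_y, width - 2*margin_x, height - 2*margin_y)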

Finding largest blob in image

I am having some issues extracting a blob from an image using EmguCV. Everything I see online uses the Contours object, but I guess that was removed in EmguCV 3.0? I get an exception every time I try to use it, and I haven't found many relevant SO topics that aren't out of date.
Basically, I have a picture of a leaf. The background might be white, green, black, etc. I want to essentially remove the background so that I can perform operations on the leaf without interference from the background. I'm just not sure where I'm going wrong here:
Image<Bgr, Byte> Original = Core.CurrentLeaf.GetImageBGR;
Image<Gray, Byte> imgBinary = Original.Convert<Gray, Byte>();
imgBinary.PyrDown().PyrUp(); // Smoothen a little bit
imgBinary = imgBinary.ThresholdBinaryInv(new Gray(100), new Gray(255)); // Apply inverse suppression
// Now, copy pixels from original image that are black in the mask, to a new Mat. Then scan?
Image<Gray, Byte> imgMask;
imgMask = imgBinary.Copy(imgBinary);
CvInvoke.cvCopy(Original, imgMask, imgBinary);
VectorOfVectorOfPoint contoursDetected = new VectorOfVectorOfPoint();
CvInvoke.FindContours(imgBinary, contoursDetected, null, Emgu.CV.CvEnum.RetrType.List, Emgu.CV.CvEnum.ChainApproxMethod.ChainApproxSimple);
var contoursArray = new List<VectorOfPoint>();
int count = contoursDetected.Size;
for (int i = 0; i < count; i++)
{
    using (VectorOfPoint currContour = contoursDetected[i])
    {
        contoursArray.Add(currContour);
    }
}
With this, I get a black image with a tiny bit of white lines. I've racked my brain back and forth and haven't been able to come up with something. Any pointers would be much appreciated!
I think you need to find which contour has the largest area, using ContourArea on each of the contours.
After you find the largest contour you need to fill it (because the contour is just the outline of the blob, not all the pixels in it) using FillPoly, and create a mask that has the leaf pixels with value 1 and everything else with 0.
In the end, use the mask to extract the leaf pixels from the original image.
I am not so proficient in C#, so I attach code in Python with OpenCV to give you some help.
The resulting image:
Hope this will be helpful enough.
import cv2
import numpy as np
# Read image
Irgb = cv2.imread('leaf.jpg')
B, G, R = cv2.split(Irgb)  # OpenCV loads images as BGR
# Do some denoising on the red channel (the red channel gave a better
# result than the gray image because it has more contrast)
Rfilter = cv2.bilateralFilter(R, 25, 25, 10)
# Threshold image
ret, Ithres = cv2.threshold(Rfilter, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
# Find the largest contour and extract it
# ([-2:] keeps this working on both the OpenCV 3.x and 4.x return signatures)
contours, hierarchy = cv2.findContours(Ithres, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[-2:]
maxContour = 0
for contour in contours:
    contourSize = cv2.contourArea(contour)
    if contourSize > maxContour:
        maxContour = contourSize
        maxContourData = contour
# Create a mask from the largest contour
mask = np.zeros_like(Ithres)
cv2.fillPoly(mask, [maxContourData], 1)
# Use the mask to keep only the leaf pixels from the original image
finalImage = np.zeros_like(Irgb)
finalImage[:, :, 0] = np.multiply(B, mask)
finalImage[:, :, 1] = np.multiply(G, mask)
finalImage[:, :, 2] = np.multiply(R, mask)
cv2.imshow('final', finalImage)
cv2.waitKey(0)
I recommend you look into Otsu thresholding. It gives you a threshold which you can use to divide the image into two classes (background and foreground). Using OpenCV's threshold method, you can then create a mask if necessary.
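A minimal sketch of that idea (assuming the same leaf.jpg as above and a leaf darker than the background, hence THRESH_BINARY_INV):
import cv2

gray = cv2.imread('leaf.jpg', cv2.IMREAD_GRAYSCALE)
# Otsu picks the threshold automatically from the histogram;
# the binary result doubles as a mask for the leaf pixels
thresh_val, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)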
