Rounded corners dilation (Image Processing) - opencv

I want to perform a dilation operation while keeping rounded corners.
Something like this:
What I tried:
import numpy as np
import cv2
img = cv2.imread('test.jpg')
kernel = np.array([[0, 1, 0],
                   [1, 1, 1],
                   [0, 1, 0]], dtype=np.uint8)
img_d = cv2.dilate(img, kernel, iterations=45)
Image used: test.jpg
I tried multiple kernels (with different sizes: 3x3, 5x5, ...) but I didn't manage to get rounded corners.
My question is: can we get rounded corners just by changing the kernel, or should we add a further processing step to achieve this?
NOTE: My goal is not to create a rounded square... I used this example just to explain the idea of getting rounded corners with a dilation operation.

I figured out a way thanks to Christoph Rackwitz's comment.
The idea is pretty simple: use a bigger, disc-shaped kernel and reduce the number of dilation iterations. A single dilation with a large disc sweeps that disc along the shape's boundary, so corners come out rounded, whereas iterating a small cross-shaped kernel converges to a diamond shape instead.
import numpy as np
import cv2
import matplotlib.pyplot as plt

# build a disc-shaped kernel by drawing a filled circle
kernel = np.zeros((100, 100), np.uint8)
cv2.circle(kernel, (50, 50), 50, 255, -1)
plt.imshow(kernel, cmap="gray")
And then use this kernel with just one iteration:
img = cv2.imread('test.jpg')
img_d = cv2.dilate(img, kernel, iterations=1)
plt.imshow(img_d, cmap="gray")

OpenCV also lets you choose a kernel of a given shape and size with cv2.getStructuringElement. From this page you can choose rectangle-, ellipse- or cross-shaped kernels.
Since you needed rounded corners, I chose the ellipse kernel cv2.MORPH_ELLIPSE of size 25 x 25:
img = cv2.imread('test.jpg', 0)
th_img = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)[1]
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (25, 25))
img_dil = cv2.dilate(th_img, kernel, iterations=5)
Not exactly the result you were hoping for, but here is how it looks:

Related

Extract stripes from low contrast grayscale images

I want to extract stripes from this sample file, and the result should look like this similar result image. Then, I need to count the number of stripes on the right, and calculate the distance from the end of each left stripe to the end of each adjacent right stripe.
I tried the following code, but my result file is still a little bit away from my target. Here is what I do:
import numpy as np
import cv2
from matplotlib import pyplot as plt
# read the 16-bit image unchanged to preserve precision
gray = cv2.imread('input_file.png', cv2.IMREAD_UNCHANGED)
# second-order Sobel derivative in the Y direction to highlight the stripes
sobelY = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
sobelY2 = cv2.Sobel(sobelY, cv2.CV_32F, 0, 1, ksize=3)
sobelY2[sobelY2 < 0] = 0
# remember which pixels are background so they can be reset after CLAHE
mask = np.where(sobelY2 == 0, 0, 1)
sobelY2 = cv2.normalize(sobelY2, dst=None, alpha=0, beta=65535,
                        norm_type=cv2.NORM_MINMAX).astype(np.uint16)
# apply CLAHE twice to balance the contrast
clahe = cv2.createCLAHE(clipLimit=6, tileGridSize=(8, 8))
sobelY2_clahe = clahe.apply(sobelY2)
sobelY2_clahe = clahe.apply(sobelY2_clahe)
# set the background pixels back to 0
result = np.where(mask != 0, sobelY2_clahe, 0)
fig = plt.figure(figsize=(10, 10))
ax = plt.subplot(121)
plt.imshow(gray, cmap='gray')
ax = plt.subplot(122)
plt.imshow(result, cmap='gray')
plt.show()
The input file is in 16-bit format, so I read it unchanged for accuracy. I apply a second-order Sobel operation in the Y direction to highlight the stripes, and then I apply CLAHE twice to balance the contrast. To keep the background pixels at 0, I use a mask to set their values back after the CLAHE operations.
Any advice is appreciated!
For completeness, I am attaching another, more challenging input file for reference.
Edit:
The sobelY2 image pretty much reflects the stripes, but could we make it look better?
I just opened a new question about how to trim each of these stripes based on grayscale values: trim image based on grayscale values.

how to fit lines to edges and find the center point (opencv)

I have an image to which I apply a bilateral filter, followed by adaptive thresholding to get the image below.
original image (this is a screenshot off the depth image of the object)
thresholded image
I would like to fit lines to the vertical parts/lines and find the center point, with output like the image below:
I can't seem to understand the output of cv2.adaptiveThreshold(). How are the purple pixels (i.e. my edges) represented, and how can a line be fitted? MWE:
import cv2
import matplotlib.pyplot as plt

image = cv2.imread("depth_frame0009.jpg")
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
bilateral_filter = cv2.bilateralFilter(gray_image, 15, 50, 50)
plt.figure()
plt.imshow(bilateral_filter)
plt.title("bilateral filter")
#plt.imsave("2dimage_gaussianFilter.png", blurred)
plt.imsave("depthmap_image_bilateralFilter.png", bilateral_filter)
th3 = cv2.adaptiveThreshold(bilateral_filter, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                            cv2.THRESH_BINARY, 11, 2)
plt.figure()
plt.imshow(th3)
Edit:
Canny edges
contours
They are represented as an image, a matrix of uint8.
The reason it is purple and yellow is because matplotlib is applying a colormap to it.
I generally prefer to pass some specific parameters when plotting image-processing output images, e.g.
plt.imshow(th3, cmap='gray', interpolation='nearest')
If you are specifically interested in finding and fitting lines, you may want to use a different representation, such as Hough lines. Once you have the lines in the image, you can take the best-fit lines and find your center point between them.
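For example, here is a minimal sketch of that idea (not part of the original answer). It assumes th3 is the uint8 output of the adaptiveThreshold call above, runs Canny followed by cv2.HoughLinesP, keeps the near-vertical segments, and takes the midpoint between the leftmost and rightmost of them as a rough center; the parameter values and the verticality test are assumptions to tune for your data.
import cv2
import numpy as np
# th3 is assumed to be the thresholded image from the question
edges = cv2.Canny(th3, 50, 150)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                        minLineLength=50, maxLineGap=10)
vertical_x = []
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        # keep segments that are much taller than they are wide (near-vertical)
        if abs(x2 - x1) < 0.2 * abs(y2 - y1):
            vertical_x.append((x1 + x2) / 2)
if vertical_x:
    center_x = (min(vertical_x) + max(vertical_x)) / 2
    print("approximate center x:", center_x)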

Identifying imperfect shapes with noisy backgrounds with OpenCV

I am trying to identify a rectangle underwater in a noisy environment. I implemented Canny to find the edges, and drew the found edges using cv2.circle. From here, I am trying to identify the imperfect rectangle in the image (the black one below the long rectangle that covers the top of the frame).
I have attempted multiple solutions, including thresholds, blurs and resizing the image to detect the rectangle. Below is the barebones code that just draws the identified edges.
import numpy as np
import cv2
import imutils
img_text = 'img5.png'
img = cv2.imread(img_text)
original = img.copy()
min_value = 50
max_value = 100
# draw image and return coordinates of drawn pixels
image = cv2.Canny(img, min_value, max_value)
indices = np.where(image != 0)
coordinates = zip(indices[1], indices[0])
for point in coordinates:
    cv2.circle(original, point, 1, (0, 0, 255), -1)
cv2.imshow('original', original)
cv2.waitKey(0)
cv2.destroyAllWindows()
Where the output displays this:
output
From here I want to be able to separately detect just the rectangle and draw another rectangle on top of the output in green, but I haven't been able to find a way to detect the original rectangle on its own.
For your specific image, I obtained quite good results with a simple thresholding on the blue channel.
image = cv2.imread("test.png")
t, img = cv2.threshold(image[:,:,0], 80, 255, cv2.THRESH_BINARY)
In order to adapt the threshold, I propose a simple way of varying the threshold until you get one component. I have also implemented the rectangle drawing:
def find_square(image):
    markers = 0
    threshold = 10
    # increase the threshold until at least one connected component shows up
    while np.amax(markers) == 0:
        threshold += 5
        t, img = cv2.threshold(image[:,:,0], threshold, 255, cv2.THRESH_BINARY_INV)
        _, markers = cv2.connectedComponents(img)
    # clean up small specks and grow the square slightly
    kernel = np.ones((5,5), np.uint8)
    img = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)
    img = cv2.morphologyEx(img, cv2.MORPH_DILATE, kernel)
    # bounding rectangle of the remaining foreground pixels
    nonzero = cv2.findNonZero(img)
    x, y, w, h = cv2.boundingRect(nonzero)
    cv2.rectangle(image, (x, y), (x+w, y+h), (0, 255, 0), 2)
    cv2.imshow("image", image)
And the results on the provided example images:
The idea behind this approach is based on the observation that most of the information is in the blue channel. If you separate the image into its channels, you will see that the dark square has the best contrast in the blue channel. It is also the darkest region in that channel, which is why thresholding works. The problem remains the threshold setting. Based on the above intuition, we are looking for the lowest threshold that will bring up something (and hope that it will be the square). What I did is simply increase the threshold gradually until something appears.
Then, I applied some morphology operations to eliminate other small points that may appear after thresholding and to make the square look a bit bigger (the edges of the square are lighter, so the entire square is not captured). Then it was a matter of drawing the rectangle.
The code can be made much nicer (and more efficient) by doing some statistical analysis on the histogram: simply compute the threshold such that 5% (or some other percentage) of the pixels are darker. You may also need a connected component analysis to keep the biggest blob; a rough sketch of this is given below.
Also, my usage of connectedComponents is quite poor and inefficient; again, this is code written in a hurry to prove the concept.
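A minimal sketch of that percentile idea (not part of the original answer; the 5% figure and the use of connectedComponentsWithStats to keep the biggest blob are assumptions):
import cv2
import numpy as np

def find_square_percentile(image, percent=5):
    blue = image[:, :, 0]
    # choose the threshold so that `percent` % of the pixels are darker than it
    threshold = np.percentile(blue, percent)
    _, img = cv2.threshold(blue, threshold, 255, cv2.THRESH_BINARY_INV)
    # keep only the largest connected component (label 0 is the background)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(img)
    if n > 1:
        largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
        img = np.where(labels == largest, 255, 0).astype(np.uint8)
    # draw the bounding rectangle of the remaining blob
    x, y, w, h = cv2.boundingRect(cv2.findNonZero(img))
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
    return image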

watershed segmentation always returns black image

I've recently been working on a segmentation process for corneal endothelial cells, and I've found a pretty decent paper that describes how to perform it with nice results. I have been trying to follow that paper and implement it using scikit-image and OpenCV, but I've gotten stuck at the watershed segmentation.
I will briefly describe how the process is supposed to go:
First of all, you have the original endothelial cells image
original image
Then, they instruct you to perform a morphological grayscale reconstruction, in order to level out the grayscale of the image a little (however, they do not explain how to get the markers for the reconstruction, so I've been fooling around and trying to get some in my own way).
This is what the reconstructed image was supposed to look like:
desired reconstruction
This is what my reconstructed image (let's label it r) looks like:
my reconstruction
The purpose is to use the reconstructed image to get the markers for the watershed segmentation. How do we do that? We take the original image (let's label it f) and threshold (f - r) to extract the h-domes of the cells, i.e., our markers.
This is what the hdomes image was supposed to look like:
desired hdomes
This is what my hdomes image looks like:
my hdomes
I believe that the hdomes I've got are as good as theirs, so the final step is to perform the watershed segmentation on the original image, using the hdomes we've worked so hard to get!
As input image, we will use the inverted original image, and as markers, our markers.
This is the desired output:
desired output
However, I am only getting a black image, EVERY PIXEL IS BLACK, and I have no idea what's happening... I've also tried using their markers and inverted image, but I also get a black image. The paper I've been using is Luc M. Vincent, Barry R. Masters, "Morphological image processing and network analysis of cornea endothelial cell images", Proc. SPIE 1769.
I apologize for the long text, but I really wanted to explain in detail my understanding so far. By the way, I've tried the watershed segmentation from both scikit-image and OpenCV, and both gave me the black image.
Here is the code that I have been using:
# imports assumed from the description (scikit-image reconstruction and watershed)
import cv2
import matplotlib.pyplot as plt
from skimage.morphology import reconstruction, watershed

img = cv2.imread('input.png', 0)
# morphological grayscale reconstruction: eroded image as marker, original as mask
mask = img
marker = cv2.erode(mask, cv2.getStructuringElement(cv2.MORPH_ERODE, (3, 3)), iterations=3)
reconstructedImage = reconstruction(marker, mask)
# h-domes and the markers for the watershed
hdomes = img - reconstructedImage
cell_markers = cv2.threshold(hdomes, 0, 255, cv2.THRESH_BINARY)[1]
# watershed on the inverted original image
inverted = (255 - img)
labels = watershed(inverted, cell_markers)
cv2.imwrite('test.png', labels)
plt.figure()
plt.imshow(labels)
plt.show()
Thank you!
Here's a rough example for the watershed segmentation of your image with scikit-image.
What is missing in your script is calculating the Euclidean distance (see here and here) and extracting the local maxima from it.
Note that the watershed algorithm outputs a piece-wise constant image where pixels in the same regions are assigned the same value. What is shown in your 'desired output' panel (e) are the edges between the regions instead.
import numpy as np
import cv2
import matplotlib.pyplot as plt
from skimage.morphology import watershed
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.filters import threshold_local
img = cv2.imread('input.jpg',0)
'''Adaptive thresholding:
calculates thresholds in regions of size block_size surrounding each pixel
to handle the non-uniform background'''
block_size = 41
adaptive_thresh = threshold_local(img, block_size)#, offset=10)
binary_adaptive = img > adaptive_thresh
# Calculate Euclidean distance
distance = ndi.distance_transform_edt(binary_adaptive)
# Find local maxima of the distance map
local_maxi = peak_local_max(distance, labels=binary_adaptive, footprint=np.ones((3, 3)), indices=False)
# Label the maxima
markers = ndi.label(local_maxi)[0]
''' Watershed algorithm
The option watershed_line=True leave a one-pixel wide line
with label 0 separating the regions obtained by the watershed algorithm '''
labels = watershed(-distance, markers, watershed_line=True)
# Plot the result
plt.imshow(img, cmap='gray')
plt.imshow(labels==0,alpha=.3, cmap='Reds')
plt.show()

grayscale gradient with skimage or numpy

I have to create a linear grayscale gradient, with black shade on top and white shade at the bottom. I have to use skimage and numpy.
I've found code on the scikit-image site for a color linear gradient that goes horizontally instead of vertically: http://scikit-image.org/docs/dev/auto_examples/plot_tinting_grayscale_images.html.
I would like an explanation of this code and some hints on how to put everything in grayscale and vertical instead of colored and horizontal. Thanks!
A grey-scale image can be represented as a two-dimensional matrix. Let's say we wanted to create a 100x100 gradient image. First, we use np.linspace to construct the gradient values, 100 values between 0 and 1. We then repeat this vector 100 times in the vertical direction (using np.tile) to form the gradient image. At this stage, the image is a gradient from left to right, so we use the transpose to flip it to be oriented up-down.
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(0, 1, 100)
image = np.tile(x, (100, 1)).T
plt.imshow(image, cmap='gray')
plt.show()
