I am totally new to image analysis and have tried a lot with ImageJ and QuPath, but unfortunately I can't find a proper way into it. Here is an example image I would like to quantify:
Does anyone have a recommendation for which software I should use, or how I can quantify those little "dots" and also find out their positions?
I tried it with ImageJ, but the image quality is so bad that it does not allow thresholding…
With thresholding it seems impossible to quantify just the “little dots”…
Kind regards
You can solve it in a simple way using OpenCV, or you can go further with a more sophisticated approach like this one.
import cv2
import numpy as np

# Read the image and convert it to grayscale
img = cv2.imread('img.png')
mask = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Simple global threshold: everything below th becomes background, everything else white
th = 35
mask[mask < th] = 0
mask[mask > 0] = 255

# Stack the mask to three channels so it can be shown next to the original
mask = np.stack([mask, mask, mask], axis=2)
result = np.hstack((img, mask))

cv2.namedWindow("peaks", cv2.WINDOW_NORMAL)
cv2.imshow("peaks", result)
cv2.waitKey(0)
cv2.destroyAllWindows()
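If you also need the count and the positions of the dots, one option is connected-component labelling on the binary mask. A minimal sketch building on the code above (it takes one channel back out, since the mask was stacked to three channels only for display):

# Count the dots and report their centroids with connected-component labelling
mask2d = mask[:, :, 0]  # single-channel binary mask
num, labels, stats, centroids = cv2.connectedComponentsWithStats(mask2d, connectivity=8)
print("number of dots:", num - 1)  # label 0 is the background
for i in range(1, num):
    cx, cy = centroids[i]
    area = stats[i, cv2.CC_STAT_AREA]
    print("dot %d: position=(%.1f, %.1f), area=%d" % (i, cx, cy, area))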
I find cv2 complicated and slow. I would do something like this:
from osgeo import gdal
thresh = 100
raster = gdal.Open(r'image.jpg')
# Extract a raster band (each band is a primary colour)
img = raster.GetRasterBand(1).ReadAsArray(x_off, y_off, x_size, y_size) # reads a specific window (offsets and sizes, not end coordinates); leave the arguments empty to read the whole image
binary_img = img > thresh
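To also get a count and the positions of the dots from binary_img, scipy's connected-component labelling works directly on the boolean array; a minimal sketch (assuming scipy is installed):

from scipy import ndimage as ndi

# Label connected regions of the thresholded array
labeled, num_dots = ndi.label(binary_img)
print("number of dots:", num_dots)

# Centroid (row, column) of every labelled dot
centers = ndi.center_of_mass(binary_img, labeled, range(1, num_dots + 1))
print(centers)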
GDAL is a great package, but it has many dependencies and is a bit harder to install, especially on Windows.
Related
There is an electron microscope photo of some surface:
What I want to do is to use OpenCV to detect the edges of the pyramids in this image and measure their length. Currently I have to label them manually (the red lines in the image).
I tried to use Canny with a Hough transform, but the result did not turn out very well.
import cv2
from matplotlib import pyplot as plt
import numpy as np

img = cv2.imread('../data/sample.png')
# Convert to a single-channel grayscale image for Canny
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Blur to suppress noise before edge detection
kernel_size = 9
blur_gray = cv2.GaussianBlur(gray, (kernel_size, kernel_size), 0)

lo, hi = 15, 27
edges = cv2.Canny(blur_gray, lo, hi)

plt.figure()
plt.imshow(blur_gray)
plt.figure()
plt.imshow(edges)

# Probabilistic Hough transform to extract line segments from the edge map
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 50, None, 50, 10)
It looks to me like there is too much noise in the original image. What I want to detect are the lateral edges of the pyramids, but the base edges are also picked up by my code, and I don't know how to remove them before applying Canny and the Hough transform. There are also some glitches in the edge detection result that I don't know how to eliminate. I thought the Gaussian blur would be enough, but it turns out not to work very well.
I have to confess that I have little knowledge of computer vision, so I am not sure what the right tool for this is. It would be really appreciated if someone could shed some light on it. There is no need for a 100% accurate method, since there are some abnormal structures in the image that have to be adjusted manually; what I want is to reduce the human effort as much as possible with some computer vision technique.
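One rough idea, assuming the base edges run roughly horizontally and the lateral edges are slanted, is to filter the HoughLinesP segments by angle and keep only the steeper ones; a sketch under that assumption (the 20 degree cut-off is arbitrary and would need tuning):

# Keep only the steeper Hough segments, assuming base edges are near-horizontal
if lines is not None:
    vis = img.copy()
    for x1, y1, x2, y2 in lines[:, 0]:
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
        if 20 < angle < 160:  # discard near-horizontal segments
            length = np.hypot(x2 - x1, y2 - y1)
            cv2.line(vis, (int(x1), int(y1)), (int(x2), int(y2)), (0, 0, 255), 2)
            print("lateral edge length (pixels):", length)
    plt.figure()
    plt.imshow(cv2.cvtColor(vis, cv2.COLOR_BGR2RGB))
    plt.show()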
I want to extract stripes from this sample file, and the result should look like this similar result image. Then I need to count the number of stripes on the right, and calculate the distance from the end of each left stripe to the end of each adjacent right stripe.
I tried the following code, but my result file is still a little bit away from my target. Here is what I do:
import numpy as np
import cv2
from matplotlib import pyplot as plt

# Read the 16-bit image unchanged to keep the full dynamic range
gray = cv2.imread('input_file.png', cv2.IMREAD_UNCHANGED)

# Second-order Sobel in the Y direction to highlight the stripes
sobelY = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
sobelY2 = cv2.Sobel(sobelY, cv2.CV_32F, 0, 1, ksize=3)
sobelY2[sobelY2 < 0] = 0

# Remember which pixels are background so they can be reset after CLAHE
mask = np.where(sobelY2 == 0, 0, 1)

sobelY2 = cv2.normalize(sobelY2, dst=None, alpha=0, beta=65535, norm_type=cv2.NORM_MINMAX).astype(np.uint16)

# Two passes of CLAHE to balance the contrast
clahe = cv2.createCLAHE(clipLimit=6, tileGridSize=(8, 8))
sobelY2_clahe = clahe.apply(sobelY2)
sobelY2_clahe = clahe.apply(sobelY2_clahe)

# Set the background pixels back to 0
result = np.where(mask != 0, sobelY2_clahe, 0)

fig = plt.figure(figsize=(10, 10))
ax = plt.subplot(121)
plt.imshow(gray, cmap='gray')
ax = plt.subplot(122)
plt.imshow(result, cmap='gray')
plt.show()
The input file is in 16-bit format, so I read it unchanged for accuracy. I apply a second-order Sobel operation in the Y direction to highlight the stripes, and then run CLAHE twice to balance the contrast. To keep the background pixels at 0, I use a mask to set those values back after the CLAHE operations.
Any advice is appreciated!
For completeness, I am attaching another more challenging input file for reference.
Edit:
The sobelY2 image pretty much reflects the stripes, but could we make it look better?
I just opened a new question about how to trim each of these stripes based on grayscale values: trim image based on grayscale values.
I have an image with spikes/small triangles on the outline border, like this:
I would like to remove the unwanted spikes/small triangles:
And output the image like this:
I have searched many posts on the web using OpenCV/Emgu CV, but no luck.
The problem is that the contour points are not equally spaced, so I cannot use any peak-finding functions to find and remove the spikes.
I have also tried a cubic spline to smooth the outline, but it either destroyed the original shape (too smooth) or had only a small effect on the spikes.
Could anyone who has ideas help me with this issue?
As suggested by Cris, a morphological closing is a good starting point.
In the picture below, I performed a closing with an octagonal 49x49 kernel (a circular one would be better) and took the difference with the original.
If you filter the blobs by size (and possibly by shape), you will get the true spikes that you can subtract. The rest of the shape remains unchanged.
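A rough sketch of that idea in Python with OpenCV, assuming a white shape on a black background so that the closing fills the spike notches and they show up as small blobs in the difference image (the file name, kernel size and area cut-off are placeholders to tune):

import cv2
import numpy as np

# Hypothetical input: white shape on black background
img = cv2.imread('outline.png', cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Closing with a large elliptical kernel smooths the spikes away
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (49, 49))
closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)

# The difference between the closed image and the original contains the spike candidates
diff = cv2.subtract(closed, binary)

# Keep only small difference blobs as true spikes; large blobs are genuine shape features
num, labels, stats, _ = cv2.connectedComponentsWithStats(diff, connectivity=8)
spikes = np.zeros_like(diff)
for i in range(1, num):
    if stats[i, cv2.CC_STAT_AREA] < 500:  # area cut-off is a guess
        spikes[labels == i] = 255

# Fill in the spike regions, leaving the rest of the shape unchanged
result = cv2.bitwise_or(binary, spikes)
cv2.imwrite('despiked.png', result)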
Something like this will also help, where:
// contours is your list of contours after findContours()
// idx is the index of your contour
// eps regulates how much the contour is approximated
cv::Mat approx;
double eps = cv::arcLength(contours[idx], true) * 0.05;
cv::approxPolyDP(contours[idx], approx, eps, true);
approx.copyTo(contours[idx]);
Maybe this is what you want (it's not accurate at all):
OpenCV + Python
# Import preprocessors
import os
import cv2
import numpy as np
# Read image
dir = os.path.abspath(os.path.dirname(__file__))
im = cv2.imread(dir+'/im.png')
# Remove triangles
kernel = np.ones((5,5), np.uint8)
factor=11
im = cv2.dilate(im, kernel, iterations=factor)
im = cv2.erode(im, kernel, iterations=factor)
# Save the processed image
cv2.imwrite(dir+'/spike_res.png', im)
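As far as I know, dilating and then eroding with the same kernel is just a morphological closing, and cv2.morphologyEx applies each of those two steps `iterations` times, so the dilate/erode pair above could likely be written in one call (worth verifying against your OpenCV version):

# Same effect as the dilate/erode pair above, expressed as a single closing
im = cv2.morphologyEx(im, cv2.MORPH_CLOSE, kernel, iterations=factor)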
Update:
Maybe not related to the OpenCV tag, but with .NET you can also use the Erosion and Dilation filters from AForge.
I've recently been working on a segmentation process for corneal endothelial cells, and I've found a pretty decent paper that describes how to perform it with nice results. I have been trying to follow that paper and implement it using scikit-image and OpenCV, but I've gotten stuck at the watershed segmentation.
I will briefly describe how the process is supposed to go:
First of all, you have the original endothelial cells image
original image
Then, they instruct you to perform a morphological grayscale reconstruction, in order to level out the grayscale of the image a little (however, they do not explain how to get the markers for the reconstruction, so I've been fooling around and tried to get some on my own).
This is what the reconstructed image was supposed to look like:
desired reconstruction
This is what my reconstructed image (let's label it as r) looks like:
my reconstruction
The purpose is to use the reconstructed image to get the markers for the watershed segmentation. How do we do that? We take the original image (let's label it as f) and threshold (f - r) to extract the h-domes of the cells, i.e., our markers.
This is what the hdomes image was supposed to look like:
desired hdomes
This is what my hdomes image looks like:
my hdomes
I believe that the h-domes I've got are as good as theirs, so the final step is to perform the watershed segmentation on the original image, using the h-domes we've worked so hard to get!
As input image, we will use the inverted original image, and as markers, our markers.
This is the desired output:
desired output
However, I am only getting a black image (every pixel is black), and I have no idea what's happening... I've also tried using their markers and inverted image, but I still get a black image. The paper I've been using is Luc M. Vincent, Barry R. Masters, "Morphological image processing and network analysis of cornea endothelial cell images", Proc. SPIE 1769.
I apologize for the long text, but I really wanted to explain my understanding so far in detail. By the way, I've tried the watershed segmentation from both scikit-image and OpenCV; both gave me the black image.
Here is the code that I have been using:
import cv2
import matplotlib.pyplot as plt
from skimage.morphology import reconstruction, watershed

img = cv2.imread('input.png', 0)

# Grayscale reconstruction of the eroded image under the original
mask = img
marker = cv2.erode(mask, cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)), iterations=3)
reconstructedImage = reconstruction(marker, mask)

# h-domes: difference between the original and the reconstruction, then threshold
hdomes = img - reconstructedImage
cell_markers = cv2.threshold(hdomes, 0, 255, cv2.THRESH_BINARY)[1]

# Watershed on the inverted original, using the h-domes as markers
inverted = (255 - img)
labels = watershed(inverted, cell_markers)
cv2.imwrite('test.png', labels)
plt.figure()
plt.imshow(labels)
plt.show()
Thank you!
Here's a rough example for the watershed segmentation of your image with scikit-image.
What is missing in your script is calculating the Euclidean distance (see here and here) and extracting the local maxima from it.
Note that the watershed algorithm outputs a piece-wise constant image where pixels in the same regions are assigned the same value. What is shown in your 'desired output' panel (e) are the edges between the regions instead.
import numpy as np
import cv2
import matplotlib.pyplot as plt
from skimage.morphology import watershed
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.filters import threshold_local
img = cv2.imread('input.jpg',0)
'''Adaptive thresholding
calculates thresholds in regions of size block_size surrounding each pixel
to handle the non-uniform background'''
block_size = 41
adaptive_thresh = threshold_local(img, block_size)#, offset=10)
binary_adaptive = img > adaptive_thresh
# Calculate Euclidean distance
distance = ndi.distance_transform_edt(binary_adaptive)
# Find local maxima of the distance map
local_maxi = peak_local_max(distance, labels=binary_adaptive, footprint=np.ones((3, 3)), indices=False)
# Label the maxima
markers = ndi.label(local_maxi)[0]
''' Watershed algorithm
The option watershed_line=True leave a one-pixel wide line
with label 0 separating the regions obtained by the watershed algorithm '''
labels = watershed(-distance, markers, watershed_line=True)
# Plot the result
plt.imshow(img, cmap='gray')
plt.imshow(labels==0,alpha=.3, cmap='Reds')
plt.show()
I'm implementing an OMR system for test papers, but I've run into problems when determining filled circles. I've succeeded in getting these grayscale regions of interest.
The problems are:
- Binary thresholding (adaptive and fixed) and counting non-zero pixels gives a lot of errors, because of the letters inside the circles and the varying brightness of photos taken with mobile cameras.
- I also tried the technique described in this survey, which uses the average grayscale value of a circle to mark it as filled or not, but the brightness of an image is not uniform because of the different light sources when people take photos with their cameras, and I got a lot of wrong results.
- People also don't follow the rules, such as filling the whole circle; the algorithm needs to be robust in such cases as well.
Sample images
I already have about 10 GB of samples, so maybe machine learning or other statistical methods will be useful.
Does anybody know other methods to classify a circle as filled?
Since it is not a straightforward problem, it needs a lot of tweaking to make it robust, but I would like to suggest a good starting point. You can play with it and try to make it work.
import numpy as np
import cv2

image_ori = cv2.imread("circle_opt.png")

# Keep every pixel that is not close to white
lower_bound = np.array([0, 0, 0])
upper_bound = np.array([255, 255, 195])
image = image_ori
mask = cv2.inRange(image_ori, lower_bound, upper_bound)
masked = cv2.bitwise_and(image, image, mask=mask)

# Morphological opening to clean small noise from the mask
kernel = np.ones((3, 3), np.uint8)
opening = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

# Find the outer contours and sort them left to right
contours = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL,
                            cv2.CHAIN_APPROX_SIMPLE)[0]
contours = sorted(contours, key=lambda x: cv2.boundingRect(x)[0])
print(len(contours))

# Draw the contours whose enclosing circle has a plausible bubble radius
for c in contours:
    (x, y), r = cv2.minEnclosingCircle(c)
    center = (int(x), int(y))
    r = int(r)
    if 10 <= r <= 15:
        cv2.circle(image, center, r, (0, 255, 0), 2)

# cv2.imwrite('omr_processed.png', image_ori)
cv2.imshow("original", image_ori)
cv2.waitKey(0)
The result I got from my code on the image you shared was this
You can apply thresholds inside these green-circled patches and then count the non-zero pixels to decide whether a circle is marked or not. You can play with lower_bound and upper_bound to try to make the solution robust.
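As a rough sketch of that last step (the Otsu inversion and the 0.5 fill-ratio cut-off are assumptions to tune for your images):

# Decide filled/empty by the fraction of dark pixels inside each detected circle
gray = cv2.cvtColor(image_ori, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
for c in contours:
    (x, y), r = cv2.minEnclosingCircle(c)
    if not (10 <= int(r) <= 15):
        continue
    # Mask out just this circle and measure how much of it is dark (pencil marks)
    circle_mask = np.zeros(binary.shape, np.uint8)
    cv2.circle(circle_mask, (int(x), int(y)), int(r), 255, -1)
    filled_pixels = cv2.countNonZero(cv2.bitwise_and(binary, circle_mask))
    fill_ratio = filled_pixels / float(cv2.countNonZero(circle_mask))
    print("circle at (%d, %d): %s" % (int(x), int(y), "filled" if fill_ratio > 0.5 else "empty"))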
Hope this helps! Good luck on your problem solving :)