Count number of objects using watershed algorithm - scikit-image - OpenCV

I am trying to find the number of objects in a given image using watershed segmentation. Consider for example the coins image. Here I would like to know the number of coins in the image. I implemented the code available in the scikit-image documentation, tweaked it a little, and got results similar to those displayed on the documentation page.
After looking at the functions used in the code in detail, I found out that ndimage.label() also returns the number of unique objects found in the image (mentioned in its documentation), but when I print that value I get 53, which is far higher than the number of coins actually in the image.
Can somebody suggest a method to find the number of objects in an image?

Here is a version of your code that counts the coins in one of two ways: a) by directly segmenting the distance image and b) by doing watershed first and rejecting tiny intersecting regions.
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
from skimage import io, color, filter as filters
from scipy import ndimage
from skimage.morphology import watershed
from skimage.feature import peak_local_max
from skimage.measure import regionprops, label
image = color.rgb2gray(io.imread('water_coins.jpg', plugin='freeimage'))
image = image < filters.threshold_otsu(image)
distance = ndimage.distance_transform_edt(image)
# Here's one way to measure the number of coins directly
# from the distance map
coin_centres = (distance > 0.8 * distance.max())
print('Number of coins (method 1):', np.max(label(coin_centres)))
# Or you can proceed with the watershed labeling
local_maxi = peak_local_max(distance, indices=False, footprint=np.ones((3, 3)),
                            labels=image)
markers, num_features = ndimage.label(local_maxi)
labels = watershed(-distance, markers, mask=image)
# ...but then you have to clean up the tiny intersections between coins
regions = regionprops(labels)
regions = [r for r in regions if r.area > 50]
print('Number of coins (method 2):', len(regions) - 1)
fig, axes = plt.subplots(ncols=3, figsize=(8, 2.7))
ax0, ax1, ax2 = axes
ax0.imshow(image, cmap=plt.cm.gray, interpolation='nearest')
ax0.set_title('Overlapping objects')
ax1.imshow(-distance, cmap=plt.cm.jet, interpolation='nearest')
ax1.set_title('Distances')
ax2.imshow(labels, cmap=plt.cm.spectral, interpolation='nearest')
ax2.set_title('Separated objects')
for ax in axes:
    ax.axis('off')
fig.subplots_adjust(hspace=0.01, wspace=0.01, top=1, bottom=0, left=0,
                    right=1)
plt.show()

Related

Is there an equivalent function or an implementation of skimage.feature.peak_local_max in OpenCV?

I have been trying to segment biological cells in an image using the watershed algorithm. I found an excellent article on pyimagesearch which clearly gives an overview of the algorithm and its implementation in Python. The code uses both OpenCV and scikit-image for processing the image.
My goal is to convert the whole code into pure OpenCV. But the issue is that scikit-image provides a function, skimage.feature.peak_local_max, which does the job of finding local peaks in an image very efficiently. I couldn't find or devise such a function in OpenCV.
Original code (I have documented this snippet according to my understanding, please correct me if I am wrong):
# import the necessary packages
from skimage.feature import peak_local_max
from skimage.morphology import watershed
from scipy import ndimage
import numpy as np
import argparse
import imutils
import cv2
from matplotlib import pyplot as plt
# load the image and perform pyramid mean shift filtering
# to aid the thresholding step
image = cv2.imread("test2.png")
shifted = cv2.pyrMeanShiftFiltering(image, 21, 51)
# Apply grayscale
gray = cv2.cvtColor(shifted, cv2.COLOR_BGR2GRAY)
# Convert to binary
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
# Watershed starts from here
# compute the exact Euclidean distance from every binary
# pixel to the nearest zero pixel, then find peaks in this
# distance map
D = ndimage.distance_transform_edt(thresh)
localMax = peak_local_max(D, indices=False, min_distance=10, labels=thresh)
# perform a connected component analysis on the local peaks,
# using 8-connectivity, then apply the Watershed algorithm
markers = ndimage.label(localMax, structure=np.ones((3, 3)))[0]
# Apply segmentation
labels = watershed(-D, markers, mask=thresh)
print("[INFO] {} unique segments found".format(len(np.unique(labels)) - 1))
cv2.imwrite("labels.png",labels)
# Contouring
for label in np.unique(labels):
    # if the label is zero, we are examining the 'background'
    # so simply ignore it
    if label == 0:
        continue
    # otherwise, allocate memory for the label region and draw
    # it on the mask
    mask = np.zeros(gray.shape, dtype="uint8")
    mask[labels == label] = 255
    cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cnts = imutils.grab_contours(cnts)
    c = max(cnts, key=cv2.contourArea)
    # approximate the largest contour and draw it around the object
    peri = cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, 0.018 * peri, True)
    cv2.drawContours(image, [approx], -1, (0, 0, 255), 2)
cv2.imwrite("output.jpg", image)
Pure OpenCV code up to finding the distance map:
# import the necessary packages
import numpy as np
import cv2
# load the image and perform pyramid mean shift filtering
# to aid the thresholding step
image = cv2.imread("1.png")
shifted = cv2.pyrMeanShiftFiltering(image, 21, 51)
# Apply grayscale
gray = cv2.cvtColor(shifted, cv2.COLOR_BGR2GRAY)
# Convert to binary
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
# Watershed starts from here
# compute the exact Euclidean distance from every binary
# pixel to the nearest zero pixel, then find peaks in this
# distance map
D = cv2.distanceTransform(thresh, cv2.DIST_L2, 0)
Up to the point where D is computed, the original code and my pure OpenCV code produce exactly the same output. The issue is that I don't have a clear idea of how to implement peak_local_max in OpenCV so that it gives results identical to scikit-image's function.
It would be really helpful if someone with relevant knowledge could explain how this function manages to find the peaks in such a fine-grained manner.
Input Image:
peak_local_max output in scikit-image (BGR format image):
Required output:
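For reference, a common way to approximate peak_local_max with plain OpenCV and NumPy is a dilation-based local-maximum test. The sketch below is an assumption, not scikit-image's actual implementation, and it is not guaranteed to reproduce its plateau and tie handling exactly:
import cv2
import numpy as np

# Hypothetical sketch: approximate peak_local_max(D, indices=False, min_distance=10,
# labels=thresh) with a dilation-based local-maximum test.
def local_max_mask(D, min_distance=10, mask=None):
    ksize = 2 * min_distance + 1
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (ksize, ksize))
    dilated = cv2.dilate(D, kernel)        # neighbourhood maximum at every pixel
    peaks = (D == dilated) & (D > 0)       # keep pixels equal to their local maximum
    if mask is not None:
        peaks &= mask.astype(bool)         # restrict to the foreground, like labels=thresh
    return peaks.astype(np.uint8)

# usage (assumed): localMax = local_max_mask(D, min_distance=10, mask=thresh)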

Geographic points extend beyond expected boundary

I have a point geometry of US locations contained in a GeoDataFrame.
I want to plot this as a scatterplot over the US map.
My code is:
import numpy as np
import geopandas as gpd
import libpysal
import contextily as ctx
import matplotlib.pyplot as plt
from shapely.ops import cascaded_union
gdf = gpd.GeoDataFrame(point_geometry, geometry='geometry')
boundary = gpd.read_file(libpysal.examples.get_path('us48.shp'))
fig, ax = plt.subplots(figsize=(50, 50))
boundary.plot(ax=ax, color="gray")
gdf.plot(ax=ax, markersize=3.5, color="black")
ax.axis("off")
plt.axis("equal")
plt.show()
Upon inspecting the graph, the dots fall outside my expected bounds.
Is there something I am missing?
Do I need to create a boundary to limit the scatter of the dots?
The plot looks good. I guess you want to exclude the points outside the conterminous USA; those points are clearly in Hawaii, Alaska, and Canada.
From your GeoDataFrame with point geometry, gdf, and the one with polygon geometry, boundary, you can create a proper boundary polygon that can be used to limit the scatter of the points.
# need this module
from shapely.ops import cascaded_union
# create the conterminous USA polygon
poly_union = cascaded_union([poly for poly in boundary.geometry])
# get a selection from `gdf`, taking points within `poly_union`
points_within = gdf[gdf.geometry.within(poly_union)]
Now points_within is a GeoDataFrame that you can plot instead of gdf.
points_within.plot(ax=ax, markersize=3.5, color="black")
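Putting it together, here is a minimal sketch of the adjusted plot, assuming the snippet above has been run and that gdf and boundary share the same CRS:
import matplotlib.pyplot as plt

# plot the state boundaries in gray and only the points inside the
# conterminous-USA polygon on top of them
fig, ax = plt.subplots(figsize=(50, 50))
boundary.plot(ax=ax, color="gray")
points_within.plot(ax=ax, markersize=3.5, color="black")
ax.axis("off")
plt.show()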

Why does FFT not have an effect on my smoothed signal?

I'm playing with the FFT at the moment and trying to extract periods from noisy signals by recreating this example. While experimenting, I noticed that after smoothing a fairly noisy signal, the result of fft() is actually the same signal again, which is what I don't understand.
Here is a full example which can be run in an IPython Notebook (You can create a notebook here and run the code if you want).
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
figsize = (16,8)
n = 500
ls = np.linspace(0,2*np.pi, n)
x_target = np.sin(12*ls) + np.sin(52*ls)
x = np.sin(12*ls) + np.sin(52*ls) + np.random.rand(n) * 3.5
x = x - np.mean(x)
x_smooth = pd.rolling_mean(pd.DataFrame(x), 14).replace(np.nan, 0.0).as_matrix()
x_smooth = x_smooth - np.mean(x_smooth)
x_smooth = np.roll(x_smooth, -7)
# Getting shwifty and showing what we've got
plt.figure(figsize=(16,8))
plt.scatter(ls, x, s=3, c=[1.0,0.0,0.0,1.0])
plt.plot(ls, x_target, color=[1.0,0.0,0.0, 0.3])
plt.plot(ls, x_smooth)
plt.legend(["Target", "Smooth", "Noisy Data"])
# Target
x_fft = np.abs(np.fft.fft(x_target))
pd.DataFrame(x_fft).plot(figsize=figsize)
# Looks like it should
x_fft = np.abs(np.fft.fft(x))
pd.DataFrame(x_fft).plot(figsize=figsize)
# Plots the same signal?
x_fft = np.abs(np.fft.fft(x_smooth))
pd.DataFrame(x_fft).plot(figsize=figsize)
Below you find the resulting plots of this script.
Noisy data with smoothed signal:
FFT of the target function
FFT of the noisy data
FFT of the smoothed data
I don't really get why this is the case here. Can somebody explain this to me or am I doing something wrong here?
The critical difference is between:
x_fft = np.abs(np.fft.fft(x_smooth))
and
x_fft = np.abs(np.fft.fft(x_smooth.flatten()))
because it seems that x_smooth has become 2-dimensional somewhere along the way. Its shape is (500, 1), and because np.fft.fft works by default along axis=-1 (i.e. the last dimension) it is taking 500 separate FFTs of 500 different 1-sample signals. (Unsurprisingly enough, that returns only the DC component for each, so put them all together and you end up with the same signal you started with.)
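A quick way to see this effect with synthetic data (a small sketch, independent of the code above):
import numpy as np

# a column vector: 500 "signals" that are each only 1 sample long
sig = np.random.rand(500, 1)

# fft along the default last axis: 500 one-point FFTs, i.e. just the input back
print(np.allclose(np.fft.fft(sig), sig))    # True -- each 1-point FFT is its own DC term

# flattening (or passing axis=0) gives the intended single 500-point FFT
spectrum = np.abs(np.fft.fft(sig.flatten()))
print(spectrum.shape)                       # (500,)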
The FFT from the smoothed signal really looks like this:

Scikit-learn PCA .fit_transform shape is inconsistent (n_samples << m_attributes)

I am getting different shapes for my PCA using sklearn. Why isn't my transformation resulting in an array of the same dimensions as the docs say?
fit_transform(X, y=None)
    Fit the model with X and apply the dimensionality reduction on X.
    Parameters:
        X : array-like, shape (n_samples, n_features)
            Training data, where n_samples is the number of samples and n_features is the number of features.
    Returns:
        X_new : array-like, shape (n_samples, n_components)
Check this out with the iris dataset which is (150, 4) where I'm making 4 PCs:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
from sklearn import decomposition
import seaborn as sns; sns.set_style("whitegrid", {'axes.grid' : False})
%matplotlib inline
np.random.seed(0)
# Iris dataset
DF_data = pd.DataFrame(load_iris().data,
                       index=["iris_%d" % i for i in range(load_iris().data.shape[0])],
                       columns=load_iris().feature_names)
Se_targets = pd.Series(load_iris().target,
                       index=["iris_%d" % i for i in range(load_iris().data.shape[0])],
                       name="Species")
# Scaling mean = 0, var = 1
DF_standard = pd.DataFrame(StandardScaler().fit_transform(DF_data),
                           index=DF_data.index,
                           columns=DF_data.columns)
# Sklearn for Principal Component Analysis
# Dims
m = DF_standard.shape[1]
K = m
# PCA (How I tend to set it up)
M_PCA = decomposition.PCA()
A_components = M_PCA.fit_transform(DF_standard)
#DF_standard.shape, A_components.shape
#((150, 4), (150, 4))
But when I use the exact same approach on my actual dataset, which is (76, 1989), i.e. 76 samples and 1989 attributes/dimensions, I get a (76, 76) array instead of (76, 1989):
DF_centered = normalize(DF_mydata, method="center", axis=0)
m = DF_centered.shape[1]
# print(m)
# 1989
M_PCA = decomposition.PCA(n_components=m)
A_components = M_PCA.fit_transform(DF_centered)
DF_centered.shape, A_components.shape
# ((76, 1989), (76, 76))
normalize is just a wrapper I made that subtracts the mean from each dimension.
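A minimal sketch of such a wrapper, matching the call signature used above (the exact original may differ):
import numpy as np

# illustrative sketch of the centering wrapper described above:
# subtract the per-column (axis=0) mean so every feature has zero mean
def normalize(X, method="center", axis=0):
    X = np.asarray(X, dtype=float)
    if method == "center":
        return X - X.mean(axis=axis, keepdims=True)
    raise ValueError("unsupported method: %r" % method)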
(Note: this answer is adapted from my answer on Cross Validated here: Why are there only n−1 principal components for n data points if the number of dimensions is larger or equal than n?)
PCA (as most typically run) creates a new coordinate system by:
shifting the origin to the centroid of your data,
squeezing and/or stretching the axes to make them equal in length, and
rotating your axes into a new orientation.
(For more details, see this excellent CV thread: Making sense of principal component analysis, eigenvectors & eigenvalues.) However, step 3 rotates your axes in a very specific way. Your new X1 (now called "PC1", i.e., the first principal component) is oriented in your data's direction of maximal variation. The second principal component is oriented in the direction of the next greatest amount of variation that is orthogonal to the first principal component. The remaining principal components are formed likewise.
With this in mind, let's examine a simple example (suggested by @amoeba in a comment). Here is a data matrix with two points in a three-dimensional space:
X = [ 1 1 1
      2 2 2 ]
Let's view these points in a (pseudo) three dimensional scatterplot:
So let's follow the steps listed above. (1) The origin of the new coordinate system will be located at (1.5,1.5,1.5). (2) The axes are already equal. (3) The first principal component will go diagonally from what used to be (0,0,0) to what was originally (3,3,3), which is the direction of greatest variation for these data. Now, the second principal component must be orthogonal to the first, and should go in the direction of the greatest remaining variation. But what direction is that? Is it from (0,0,3) to (3,3,0), or from (0,3,0) to (3,0,3), or something else? There is no remaining variation, so there cannot be any more principal components.
With N=2 data points, we can fit at most N−1=1 principal component.
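You can also see the shape behaviour directly in sklearn: with the default n_components, PCA keeps min(n_samples, n_features) components, so fit_transform comes back with n_samples columns when n_samples < n_features. A small sketch with random data standing in for the real (76, 1989) dataset:
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.rand(76, 1989)              # 76 samples, 1989 features
X = X - X.mean(axis=0)              # center each feature, like the normalize wrapper

scores = PCA().fit_transform(X)     # default n_components = min(n_samples, n_features)
print(scores.shape)                 # (76, 76)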

Horizontal Histogram in OpenCV

I am a newbie to OpenCV; I am now working on a senior project related to image processing. I have a question: can I make a horizontal or vertical histogram with some functions of OpenCV?
Thanks,
Truong
The most efficient way to do this is by using the cvReduce function. There's a parameter that lets you select whether you want a horizontal or vertical projection.
You can also do it by hand with the functions cvGetCol and cvGetRow combined with cvSum.
Based on the link you provided in a comment, this is what I believe you're trying to do.
You want to create an array with n elements, where n is the number of columns in the input image. The value of the i-th element of the array is the sum of all the pixels in the i-th column.
You can calculate this array by looping over the columns of the input image, using cvGetSubRect to access the pixels in that column, and cvSum to sum those pixels.
Here is some Python code that does that, assuming a grayscale image:
import cv
def verticalProjection(img):
    "Return a list containing the sum of the pixels in each column"
    (w, h) = cv.GetSize(img)
    sumCols = []
    for j in range(w):
        col = cv.GetSubRect(img, (j, 0, 1, h))
        sumCols.append(cv.Sum(col)[0])
    return sumCols
Updating carnieri's answer (some of the old cv functions no longer work today):
import numpy as np
import cv2
def verticalProjection(img):
    "Return a list containing the sum of the pixels in each column"
    (h, w) = img.shape[:2]
    sumCols = []
    for j in range(w):
        col = img[0:h, j:j+1]  # y1:y2, x1:x2
        sumCols.append(np.sum(col))
    return sumCols
Regards.
An example of using cv2.reduce with OpenCV 3 in Python:
import numpy as np
import cv2
img = cv2.imread("test_1.png")
x_sum = cv2.reduce(img, 0, cv2.REDUCE_SUM, dtype=cv2.CV_32S)
y_sum = cv2.reduce(img, 1, cv2.REDUCE_SUM, dtype=cv2.CV_32S)
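Note that on a colour (3-channel) image these sums are computed per channel. If you want a single projection value per column/row, converting to grayscale first is the usual approach; a small assumed extension of the snippet above:
# assumed extension: reduce a grayscale version to get one sum per column/row
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
col_sums = cv2.reduce(gray, 0, cv2.REDUCE_SUM, dtype=cv2.CV_32S).flatten()
row_sums = cv2.reduce(gray, 1, cv2.REDUCE_SUM, dtype=cv2.CV_32S).flatten()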
