Is it possible to store data on specific images in OpenCV?

I just wanted to know if this is possible. For example, if I were to find contours in a specific image (http://docs.opencv.org/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html), could I store the data that represents those contours? Then could I take another image, detect and store its contours, and compare the contour data of the two images to see if there are objects with related geometric features?

Your question is not clear enough, so I apologize in advance for my poor answer. Anyway, let me try to answer both parts:
could I store the data that represents the contours in the specific image?
If you take a look at those docs, you might notice that findContours() uses one argument as input and another as output, so you can't pass the input image to this method and also use it to store the output contours; the method will throw an exception (I've tried this in the past).
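For illustration, here is a minimal sketch (with 'shapes.png' as a hypothetical file name) of detecting contours and keeping them in a variable for later use:
import cv2
# Load a grayscale image and binarize it
img = cv2.imread('shapes.png', 0)
ret, thresh = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
# findContours() takes the binary image as input and returns the contours
# separately (OpenCV 2.x/4.x signature; 3.x returns an extra value first)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# The contours are now an ordinary Python list of point arrays, so you can
# store them and compare them against contours from another image later
print(len(contours))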
could I have another image and detect the contours and store them and then compare the contour data of each image to each other to see if there are objects with related geometric features?
It is possible to analyse two contours and compare them to each other. In fact, section 3, Match Shapes, of this tutorial shares Python code that uses Hu moments to demonstrate how this can be achieved (invariant to translation, rotation and scale):
import cv2
# Load both images as grayscale
img1 = cv2.imread('star.jpg', 0)
img2 = cv2.imread('star2.jpg', 0)
# Binarize them
ret, thresh = cv2.threshold(img1, 127, 255, 0)
ret, thresh2 = cv2.threshold(img2, 127, 255, 0)
# Find the contours (flags 2 and 1 are cv2.RETR_CCOMP and cv2.CHAIN_APPROX_NONE)
contours, hierarchy = cv2.findContours(thresh, 2, 1)
cnt1 = contours[0]
contours, hierarchy = cv2.findContours(thresh2, 2, 1)
cnt2 = contours[0]
# Compare the two contours through their Hu moments
ret = cv2.matchShapes(cnt1, cnt2, 1, 0.0)
print(ret)
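Note that matchShapes() returns a distance-like measure computed from the Hu moments, so the lower the value, the more similar the two contours (0.0 for identical shapes).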

Related

Remove the spikes/triangles on an image

I have an image with spikes/small triangles on its outline border, like this:
I would like to remove the unwanted spikes/small triangles:
And output the image like this:
I have searched many posts on the web about OpenCV/Emgu CV but had no luck.
The problem is that the contour points are not equally spaced, so I cannot use any peak-finding functions to find and remove the spikes.
I have also used a cubic spline to smooth the image, but it either destroyed the original shape (too smooth) or had only a small effect on the spikes.
Could anyone who has ideas help me with this issue?
As suggested by Cris, a morphological closing is a good starting point.
In the picture below, I performed a closing with a 49x49 octagonal kernel (circular would be better) and took the difference with the original.
If you filter the blobs by size (and possibly by shape), you will get the true spikes, which you can then subtract. The rest of the shape remains unchanged.
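A rough Python sketch of this idea (the kernel shape, the blob-area limit and the file name are assumptions to tune; it assumes a dark shape on a light background, so closing removes the dark spikes):
import cv2
import numpy as np
img = cv2.imread('shape.png', 0)
# Closing with a large elliptical kernel, approximating the octagonal/circular one above
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (49, 49))
closed = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)
# Difference with the original: candidate spike regions
ret, diff = cv2.threshold(cv2.subtract(closed, img), 0, 255, cv2.THRESH_BINARY)
# Keep only the small blobs as true spikes; the area limit is a guess
n, labels, stats, centroids = cv2.connectedComponentsWithStats(diff)
spikes = np.zeros_like(diff)
for i in range(1, n):  # label 0 is the background
    if stats[i, cv2.CC_STAT_AREA] < 500:
        spikes[labels == i] = 255
# Replace the spike pixels with the closed image; the rest stays unchanged
result = np.where(spikes > 0, closed, img)
cv2.imwrite('despiked.png', result)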
Something like this will also help.
Where:
// contours is your list of contours after findContours()
// idx is the index of your contour
// eps regulates how strongly the contour is approximated
cv::Mat approx;
double eps = cv::arcLength(contours[idx], true) * 0.05;
cv::approxPolyDP(contours[idx], approx, eps, true);
approx.copyTo(contours[idx]);
Maybe this is what you want (it's not accurate at all):
OpenCV + Python
# Import preprocessors
import os
import cv2
import numpy as np
# Read image
dir = os.path.abspath(os.path.dirname(__file__))
im = cv2.imread(dir+'/im.png')
# Remove the triangles: heavy dilation followed by the same amount
# of erosion acts as a large morphological closing
kernel = np.ones((5, 5), np.uint8)
factor = 11
im = cv2.dilate(im, kernel, iterations=factor)
im = cv2.erode(im, kernel, iterations=factor)
# Save the processed image
cv2.imwrite(dir+'/spike_res.png', im)
Update:
Maybe not related to the OpenCV tag, but with .NET you can also use the Erosion and Dilation filters of AForge.

Count lines in image

I am planning to use OpenCV on a Raspberry Pi 3 with a camera to count the lines in the following image.
It will be used in a machine which produces threads. If one (or more) is lost, it will stop the machine.
Now I am wondering how to do that:
I will run a loop to capture the images
I will crop the images to see only the part with the lines
I will convert it to black & white
How do I count them? In a loop, checking for pixel value changes? Or is there a better/faster idea?
Thank you for the advice!
EDIT
P.S.
I used cv2.findContours (answer from Jeru Luke).
I put an A4 sheet with black lines in front of the camera. It works OK in a while loop BUT... I have 43 lines on the sheet. When the camera detects some differences I write the results to a file. Sometimes I get 710, 800, 67 etc.
Please look at the file with the line-count values I get: https://www.dropbox.com/s/jnn4w8mq3rrtppo/bledy.txt?dl=0
The error persists for a few seconds. There is nothing wrong when I get 43,43,43,43,44,43,43,43 (only one value is wrong), because I watch a few values before I raise an error. But when there are hundreds of bad values I have no idea...
I have something relatively simple. It does not involve any for loops, hence it requires less time. I used the concept of counting the contours in the image after finding an appropriate threshold, which I found through trial and error.
Here is the approach in Python:
import cv2
path = 'C:/Users/Desktop/stack/contour/'
img = cv2.imread(path + 'lines.png', 0)
cv2.imshow('original Image', img)
ret, thresh = cv2.threshold(img, 80, 255, cv2.THRESH_BINARY_INV)
cv2.imshow('thresh1', thresh)
# OpenCV 3.x returns three values here; in 2.x/4.x drop the leading underscore
_, contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print('Number of lines:', len(contours))
cv2.waitKey(0)
cv2.destroyAllWindows()
Note:
As you can see, there are no for loops involved, and there is no need to count pixel changes either. Each presumed line becomes a contour, so `len(contours)` gives the number of lines present.
Using the Hough line transform would work well only if the lines are straight. Since the lines in the provided image are slanted, it won't find perfect lines. This point is emphasized in the comments by @MarkSetchell.
Use the Hough Lines Transform to detect the lines and just count the number of lines you find.
Here is a tutorial for your problem (since you didn't specify the language, it is in Python):
OpenCV Tutorial Hough Lines
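As a rough sketch of that approach (the Canny thresholds, Hough parameters and file name are assumptions to tune):
import cv2
import numpy as np
img = cv2.imread('lines.png', 0)
edges = cv2.Canny(img, 50, 150)
# Probabilistic Hough transform; minLineLength/maxLineGap need tuning
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80, minLineLength=50, maxLineGap=10)
print('Line segments found:', 0 if lines is None else len(lines))
Keep in mind that one physical line can yield several collinear segments, so the raw count usually needs grouping by angle and position before it matches the number of threads.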

Watershed segmentation always returns a black image

I've recently been working on a segmentation process for corneal endothelial cells, and I've found a pretty decent paper that describes how to perform it with nice results. I have been trying to follow that paper and implement it with scikit-image and OpenCV, but I've gotten stuck at the watershed segmentation.
I will briefly describe how the process is supposed to go:
First of all, you have the original endothelial cells image
original image
Then, they instruct you to perform a morphological grayscale reconstruction in order to even out the gray levels of the image a little (however, they do not explain how to get the markers for the reconstruction, so I've been experimenting and tried to get some in my own way)
This is what the reconstructed image was supposed to look like:
desired reconstruction
This is what my reconstructed image (let's label it as r) looks like:
my reconstruction
The purpose is to use the reconstructed image to get the markers for the watershed segmentation. How do we do that? We take the original image (let's label it as f) and threshold (f - r) to extract the h-domes of the cells, i.e., our markers.
This is what the h-domes image was supposed to look like:
desired hdomes
This is what my h-domes image looks like:
my hdomes
I believe that the h-domes I've got are as good as theirs, so the final step is to perform the watershed segmentation on the original image, using the h-domes we've worked so hard to get!
As the input image we will use the inverted original image, and as markers, our markers.
This is the desired output:
desired output
However, I am only getting a black image; EVERY PIXEL IS BLACK and I have no idea what's happening... I've also tried using their markers and inverted image, but I still get a black image. The paper I've been using is Luc M. Vincent, Barry R. Masters, "Morphological image processing and network analysis of cornea endothelial cell images", Proc. SPIE 1769.
I apologize for the long text, but I really wanted to explain my understanding so far in detail. By the way, I've tried the watershed segmentation from both scikit-image and OpenCV; both gave me the black image.
Here is the code that I have been using:
# Imports assumed by this snippet
import cv2
import matplotlib.pyplot as plt
from skimage.morphology import reconstruction, watershed  # watershed moved to skimage.segmentation in newer versions
img = cv2.imread('input.png',0)
mask = img
marker = cv2.erode(mask, cv2.getStructuringElement(cv2.MORPH_ERODE,(3,3)), iterations = 3)
reconstructedImage = reconstruction(marker, mask)
hdomes = img - reconstructedImage
cell_markers = cv2.threshold(hdomes, 0, 255, cv2.THRESH_BINARY)[1]
inverted = (255 - img)
labels = watershed(inverted, cell_markers)
cv2.imwrite('test.png', labels)
plt.figure()
plt.imshow(labels)
plt.show()
Thank you!
Here's a rough example for the watershed segmentation of your image with scikit-image.
What is missing in your script is calculating the Euclidean distance (see here and here) and extracting the local maxima from it.
Note that the watershed algorithm outputs a piece-wise constant image where pixels in the same regions are assigned the same value. What is shown in your 'desired output' panel (e) are the edges between the regions instead.
import numpy as np
import cv2
import matplotlib.pyplot as plt
from skimage.morphology import watershed
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.filters import threshold_local
img = cv2.imread('input.jpg',0)
'''Adaptive thresholding
calculates thresholds in regions of size block_size surrounding each pixel
to handle the non-uniform background'''
block_size = 41
adaptive_thresh = threshold_local(img, block_size)#, offset=10)
binary_adaptive = img > adaptive_thresh
# Calculate Euclidean distance
distance = ndi.distance_transform_edt(binary_adaptive)
# Find local maxima of the distance map
local_maxi = peak_local_max(distance, labels=binary_adaptive, footprint=np.ones((3, 3)), indices=False)
# Label the maxima
markers = ndi.label(local_maxi)[0]
''' Watershed algorithm
The option watershed_line=True leave a one-pixel wide line
with label 0 separating the regions obtained by the watershed algorithm '''
labels = watershed(-distance, markers, watershed_line=True)
# Plot the result
plt.imshow(img, cmap='gray')
plt.imshow(labels==0,alpha=.3, cmap='Reds')
plt.show()

Having difficulties detecting small objects in a noisy background. Any way to fix this?

I am trying to make a computer vision program that detects litter and random trash in a noisy background such as a beach (noisy due to the sand).
Original Image:
Canny Edge detection without any image processing:
I realize that a certain combination of image processing techniques will help me accomplish my goal of ignoring the noisy sandy background and detecting all the trash and objects on the ground.
I tried to perform median blurring, playing around with and tuning the parameters, and it gave me this:
It performs well in terms of ignoring the sandy background, but it fails to detect some of the many other objects on the ground, possibly because they are blurred out (not too sure).
Is there any way of improving my algorithm or image processing techniques so that they ignore the noisy sandy background while letting Canny edge detection find all the objects, so that the program can detect and draw contours on all of them?
Code:
from pyimagesearch.transform import four_point_transform
from matplotlib import pyplot as plt
import numpy as np
import cv2
import imutils
im = cv2.imread('images/beach_trash_3.jpg')
#cv2.imshow('Original', im)
# Histogram equalization to improve contrast
###
#im = np.fliplr(im)
im = imutils.resize(im, height = 500)
imgray = cv2.cvtColor(im,cv2.COLOR_BGR2GRAY)
# Contour detection
#ret,thresh = cv2.threshold(imgray,127,255,0)
#imgray = cv2.GaussianBlur(imgray, (5, 5), 200)
imgray = cv2.medianBlur(imgray, 11)
cv2.imshow('Blurred', imgray)
'''
hist,bins = np.histogram(imgray.flatten(),256,[0,256])
plt_one = plt.figure(1)
cdf = hist.cumsum()
cdf_normalized = cdf * hist.max()/ cdf.max()
cdf_m = np.ma.masked_equal(cdf,0)
cdf_m = (cdf_m - cdf_m.min())*255/(cdf_m.max()-cdf_m.min())
cdf = np.ma.filled(cdf_m,0).astype('uint8')
imgray = cdf[imgray]
cv2.imshow('Histogram Normalization', imgray)
'''
'''
imgray = cv2.adaptiveThreshold(imgray,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C,\
cv2.THRESH_BINARY,11,2)
'''
thresh = imgray
#imgray = cv2.medianBlur(imgray,5)
#imgray = cv2.Canny(imgray,10,500)
thresh = cv2.Canny(imgray,75,200)
#thresh = imgray
cv2.imshow('Canny', thresh)
contours, hierarchy = cv2.findContours(thresh.copy(),cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
cnts = sorted(contours, key = cv2.contourArea, reverse = True)[:5]
test = im.copy()
cv2.drawContours(test, cnts, -1,(0,255,0),2)
cv2.imshow('All contours', test)
print('---------------------------------------------')
##### Code to show each contour #####
main = np.array([[]])
for c in cnts:
    epsilon = 0.02 * cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, epsilon, True)
    test = im.copy()
    cv2.drawContours(test, [approx], -1, (0, 255, 0), 2)
    #print('Contours: ', contours)
    if len(approx) == 4:
        print('Found rectangle')
        print('Approx.shape: ', approx.shape)
        print('Test.shape: ', test.shape)
        # frame_f = frame_f[y: y+h, x: x+w]
        frame_f = test[approx[0,0,1]:approx[2,0,1], approx[0,0,0]:approx[2,0,0]]
        print('frame_f.shape: ', frame_f.shape)
        main = np.append(main, approx[None,:][None,:])
        print('main: ', main)
        # Uncomment in order to show all rectangles in image
        #cv2.imshow('Show Ya', test)
        #print('Approx: ', approx.shape)
        #cv2.imshow('Show Ya', frame_f)
        cv2.waitKey()
print('---------------------------------------------')
cv2.drawContours(im, cnts, -1, (0, 255, 0), 2)
print(main.shape)
print(main)
cv2.imshow('contour-test', im)
cv2.waitKey()
What I understand from your problem is: you want to segment the foreground objects out of a background which is variable in nature (the sand gray level depends on many other conditions).
There are various ways to approach this kind of problem:
Approach 1:
From your image one thing is clear: background pixels will always greatly outnumber foreground pixels. The simplest method to start an initial segmentation is:
Convert the image to gray.
Create its histogram.
Find the peak index of the histogram, i.e. the index which has the most pixels.
The above three steps give you an idea of the background, BUT the game does not end here. Now you can put this index value in the center and take a range of values around it, like 25 above and below; for example, if your peak index is 207 (as in your case), choose a range of gray levels from 75 to 225 and threshold the image. Depending on the nature of your background, the above method can be used for foreground object detection. After segmentation you have to perform some post-processing steps like morphological analysis to separate the different objects, and after extracting the objects you can apply some classification for a finer level of segmentation to remove false positives.
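A minimal sketch of this initial segmentation, assuming a band of 25 gray levels around the histogram peak (the band width is a parameter to tune):
import cv2
import numpy as np
img = cv2.imread('beach_trash_3.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Histogram of gray levels and its peak index (the dominant sand tone)
hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
peak = int(np.argmax(hist))
# Pixels inside the band around the peak are background; the rest is candidate foreground
band = 25
lo, hi = max(peak - band, 0), min(peak + band, 255)
background = cv2.inRange(gray, lo, hi)
foreground = cv2.bitwise_not(background)
# Morphological opening as a simple post-processing step
kernel = np.ones((5, 5), np.uint8)
foreground = cv2.morphologyEx(foreground, cv2.MORPH_OPEN, kernel)
cv2.imwrite('foreground_mask.png', foreground)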
Approach 2:
Play with some statistics of the image pixels: make a small data set of gray values and
Label them class 1 and class 2, for example 1 for sand and 2 for foreground.
Find the mean and variance (std deviation) of the pixels of both classes, and also calculate the prior probability for each class (num_pix_per_class / total_num_pix); store these stats for later use.
Now come back to the image, take every pixel one by one, and apply the Gaussian pdf: 1/(sqrt(2*pi)*sigma) * exp(-(pix - mean)^2 / (2*sigma^2)); in place of mean put the mean calculated earlier, and in place of sigma put the std deviation calculated earlier.
After applying stage 3 you will get two probability values for each pixel, one per class; just choose the class with the higher probability.
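A sketch of this two-class Gaussian decision, assuming you have already collected 1-D numpy arrays of hand-labelled sample pixel values for each class:
import numpy as np
def gaussian_pdf(x, mean, std):
    # Gaussian probability density, as in the formula in stage 3
    return np.exp(-(x - mean) ** 2 / (2 * std ** 2)) / (np.sqrt(2 * np.pi) * std)
def classify(gray, sand_samples, object_samples):
    # Label each pixel 1 (sand) or 2 (foreground) by the larger prior-weighted likelihood
    total = len(sand_samples) + len(object_samples)
    stats = [(s.mean(), s.std(), len(s) / total) for s in (sand_samples, object_samples)]
    p1 = gaussian_pdf(gray.astype(float), stats[0][0], stats[0][1]) * stats[0][2]
    p2 = gaussian_pdf(gray.astype(float), stats[1][0], stats[1][1]) * stats[1][2]
    return np.where(p2 > p1, 2, 1)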
Approach 3:
Approach 3 is more complex than the above two: you can use texture-based operations to segment out the sand-type texture, but for texture-based methods I recommend supervised classification rather than unsupervised (like k-means); a small GLCM example follows the list below.
Different texture features which you can use are:
Basic:
Range of gray levels in a defined neighborhood.
Local mean and variance or entropy.
Gray Level Co-occurrence Matrices (GLCM).
Advanced:
Local Binary Patterns.
Wavelet Transform.
Gabor Transform, etc.
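As referenced above, a small GLCM example with scikit-image (spelled greycomatrix in older versions); the distances, angles and patch size are choices to tune:
import numpy as np
from skimage.feature import graycomatrix, graycoprops
def glcm_features(patch):
    # Contrast and homogeneity of a grayscale uint8 patch;
    # such per-patch features can feed a supervised classifier
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2], levels=256, symmetric=True, normed=True)
    return (graycoprops(glcm, 'contrast').mean(), graycoprops(glcm, 'homogeneity').mean())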
PS: In my opinion you should give approaches 1 and 2 a try; they can solve a lot of the work. :)
For better results you should combine several algorithms. The OpenCV tutorials always focus on one feature of OpenCV, but real CV applications should use as many techniques and algorithms as possible.
I've used this to detect biological cells in noisy pictures, and I obtained very good results by applying some contextual information:
Expected size of cells
The fact that all cells have similar size
Expected number of cells
So I varied many parameters and tried to detect what I was looking for.
If you use edge detection, the sand will give rather random shapes. Try changing the Canny parameters and detecting lines, rectangles, circles, etc. - any shapes more probable for litter. Remember the positions of the detected objects for each parameter set, and at the end give priority to those positions (areas) where shapes were detected most often.
Use color separation. The peaks in the color histogram could be hints to the litter, as the distribution of sand colors should be more even.
For some frequently appearing small objects like cigarette stubs you can apply object matching.
P.S.:
Cool application! Just out of curiosity, are you going to scan the beach with a quadcopter?
If you want to detect objects on such a uniform background, you should start by detecting the main color in the image. That way you will detect all the sand, and the objects will be in the remaining parts. You can take a look at papers published by Arnaud LeTrotter and Ludovic Llucia, who both used this type of "main color detection".

Get 1 contour per sign through find_contour and retrieve its Hu moments in cv2

Is it possible to obtain only 5 objects (one per sign) by applying find_contour (OpenCV module) to this image: https://docs.google.com/file/d/0ByS6Z5WRz-h2WHEzNnJucDlRR2s/edit ?
Right now I obtain 64 objects.
After that I want to retrieve the Hu moments and make a comparison with other images.
For now I've tried only with the same image translated a little bit, to test that it reports they are the same.
My question is: how can I obtain only 5 objects to apply the Hu moments to, or are there other solutions to calculate the Hu moments for the image?
import cv2
im = cv2.imread('Sassatelli 1984 n. 165 mod1.jpg')
imgray = cv2.cvtColor(im,cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(imgray, (0,0), 5)
cv2.imshow('Blur', blur)
cv2.waitKey()
th = 20
edges = cv2.Canny(blur, th, th*3)
cv2.imshow('canny',edges)
cv2.waitKey()
contours, hierarchy = cv2.findContours(edges, cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
print('objects found')
print(len(contours))
cnt = contours[0]
cv2.drawContours(blur,contours,-1,(0,255,0),3)
cv2.imshow('draw contours',blur)
cv2.waitKey()
moments = cv2.moments(cnt)
Case 1: Problem with saving image in jpg format
When you save a black-and-white-only image (i.e. pixel values 0 and 255 only) in JPG format, there is lossy compression, which changes the pixel values. If you want to see it, create such an image, save it in JPG, open the saved image and zoom in on a black-white edge. You can see the pixel values change.
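A quick way to verify this yourself, as a small sketch:
import cv2
import numpy as np
# Create a pure black-and-white image and save it as JPG
img = np.zeros((100, 100), np.uint8)
img[:, 50:] = 255
cv2.imwrite('bw.jpg', img)
# Reading it back shows intermediate gray values near the edge
reloaded = cv2.imread('bw.jpg', 0)
print(np.unique(reloaded))  # more values than just [0 255]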
So when you find contours, you expect only white objects, but in reality there are some mid-values as well, which are also considered contours. This increases the number of contours.
So to avoid this problem:
Better to save images in PNG or any other lossless format.
Apply a threshold (with a value of 127, or as you like) to make the image truly binary before finding contours.
This is explained in much more detail here: What does result of 'list(contour)' denote?
Case 2: Problem with white background
OpenCV findContours() is designed to find white objects on a black background. So if your background is white, it is also treated as one object. Invert the image before finding contours.
Case 3 : Problem with holes in objects
If you have holes in your objects, they are also considered objects. So if you want only the external boundaries of the objects, use the cv2.RETR_EXTERNAL flag for the findContours() function.
Sample Code:
import cv2
import numpy as np
img = cv2.imread('sof.jpg')
gray = cv2.imread('sof.jpg',0)
ret,thresh = cv2.threshold(gray,127,255,cv2.THRESH_BINARY_INV)
Thresholded and inverted image:
Now find the contours, draw them, and check the number of contours:
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print(len(contours))
cv2.drawContours(img, contours, -1, (0, 255, 0), 2)
cv2.imshow('img',img),cv2.waitKey(0),cv2.destroyAllWindows()
Result :
NOTE:
Here I have taken only the external contours. If you want to remove the internal holes from these objects, you will need to use the cv2.RETR_TREE or cv2.RETR_CCOMP flag, check the hierarchy, and remove them. It is explained at this link: Contours 5: Hierarchy
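As a rough sketch of that hierarchy check (OpenCV 4.x return signature assumed):
import cv2
img = cv2.imread('sof.jpg', 0)
ret, thresh = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY_INV)
# RETR_CCOMP arranges contours in two levels: external boundaries and the holes inside them
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
# hierarchy[0][i] = [next, previous, first_child, parent];
# parent == -1 means the contour is an external boundary, not a hole
external = [c for c, h in zip(contours, hierarchy[0]) if h[3] == -1]
print(len(contours), 'contours total,', len(external), 'external')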
