HoughLinesP combining very similar lines into one - opencv

I have an image that is a simple picture, and I want to extract the end points of the lines. However, some of the lines overlap, so my lines have 'gaps' in them.
I am trying to use HoughLinesP to extract the parameterization of these ten lines, and though the visual result is reasonable, it still gives me 43 individual lines.
I have tried smoothing the lines, taking a skeleton representation of them, and redrawing the lines after each of those; I'm working with contours right now. I have adjusted my parameters (line length, max gap, threshold, etc.) and I cannot get this to reduce to ten lines. In my current code, I subtract the first image from itself to make a new black canvas to draw my Hough lines on; maybe not the most efficient, but it's effective. Here is my code:
import numpy as np
import cv2

img = cv2.imread('masked.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150, apertureSize=3)
minLineLength = 100
# Subtract the image from itself to get a black canvas of the same size
img2 = cv2.subtract(img, img)
# Run the detector on the Canny edge map; note that minLineLength must be
# passed by keyword, since the fifth positional argument of HoughLinesP is
# `lines`, not minLineLength
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 10,
                        minLineLength=minLineLength, maxLineGap=100)
print(len(lines))
for n in range(len(lines)):
    for x1, y1, x2, y2 in lines[n]:
        cv2.line(img2, (x1, y1), (x2, y2), (0, 255, 0), 1, 4)
cv2.imshow('result.png', img2)
cv2.waitKey(0)
cv2.destroyAllWindows()
Is there another approach that would allow me to fill in these gaps and pull out ten equations of lines? I'm using Python, OpenCV and Numpy right now.
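One approach worth trying (a rough sketch, not a guaranteed fix; the kernel size and thresholds below are assumptions to tune, and it assumes the lines are bright on a dark background): bridge the gaps with a morphological closing before the Hough transform, so each broken line becomes one connected stroke, then raise the voting threshold and minLineLength so short fragments are rejected.

import cv2
import numpy as np

img = cv2.imread('masked.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Close small gaps; the kernel must be larger than the widest gap
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (7, 7))
closed = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, kernel)

edges = cv2.Canny(closed, 50, 150, apertureSize=3)
# A stricter voting threshold and minimum length suppress fragments
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                        minLineLength=100, maxLineGap=100)
print(0 if lines is None else len(lines))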


Extract stripes from low contrast grayscale images

I want to extract stripes from this sample file, and the result should look like this similar result image. Then, I need to count the number of stripes on the right, and calculate the distance from the end of each left stripe to the end of each adjacent right stripe.
I tried the following code, but my result file is still a little bit away from my target. Here is what I do:
import numpy as np
import cv2
from matplotlib import pyplot as plt

# Read the 16-bit input unchanged to preserve precision
gray = cv2.imread('input_file.png', cv2.IMREAD_UNCHANGED)
# Second-order Sobel in the Y direction highlights the stripes
sobelY = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
sobelY2 = cv2.Sobel(sobelY, cv2.CV_32F, 0, 1, ksize=3)
sobelY2[sobelY2 < 0] = 0
# Remember which pixels are background so they can be zeroed later
mask = np.where(sobelY2 == 0, 0, 1)
sobelY2 = cv2.normalize(sobelY2, dst=None, alpha=0, beta=65535,
                        norm_type=cv2.NORM_MINMAX).astype(np.uint16)
# Two CLAHE passes to balance the contrast
clahe = cv2.createCLAHE(clipLimit=6, tileGridSize=(8, 8))
sobelY2_clahe = clahe.apply(sobelY2)
sobelY2_clahe = clahe.apply(sobelY2_clahe)
# Restore the background pixels to 0
result = np.where(mask != 0, sobelY2_clahe, 0)
fig = plt.figure(figsize=(10, 10))
ax = plt.subplot(121)
plt.imshow(gray, cmap='gray')
ax = plt.subplot(122)
plt.imshow(result, cmap='gray')
plt.show()
The input file is in 16-bit format, so I read it unchanged for accuracy. I apply a second-order Sobel operator in the Y direction to highlight those stripes, and then run CLAHE twice to balance the contrast. To keep the background pixels at 0, I use a mask to set those values back after the CLAHE operations.
Any advice is appreciated!
For completeness, I am attaching another, more challenging input file for reference: more challenging input file.
Edit:
The sobelY2 image pretty much reflects the stripes, but could we make it look better?
I just opened a new question about how to trim each of these stripes based on grayscale values: trim image based on grayscale values.

Remove the spikes/triangles on an image

I have an image with spikes/small triangles on the outline border, like this:
I would like to remove the unwanted spikes/small triangles:
And output the image like this:
I have searched many posts on the web about OpenCV/Emgu CV, but no luck.
The problem is that the contour points are not equally spaced, so I cannot use any peak-finding functions to find and remove the spikes.
I have also tried a cubic spline to smooth the image, but it either destroyed the original image shape (too smooth) or had only a small effect on the spikes.
Could anyone who has ideas help me with this issue?
As suggested by Cris, a morphological closing is a good starting point.
In the picture below, I performed a closing with an octagonal 49x49 kernel (circular would be better) and took the difference with the original.
If you filter out the blobs by size (and possibly by shape), you will get the true spikes that you can subtract. The rest of the shape remains unchanged.
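Here is a rough Python sketch of that idea. It assumes a white shape on a black background, so the outward spikes are removed by an opening (the closing described above is the same operation applied to the inverted image); the file name, kernel size, and area threshold are placeholders to tune.

import cv2
import numpy as np

img = cv2.imread('shape.png', 0)             # hypothetical file name
_, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

# A large elliptical kernel smooths away protrusions narrower than it
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (49, 49))
smoothed = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)

# The difference with the original contains the spikes plus residue
diff = cv2.subtract(binary, smoothed)

# Filter the blobs by size: keep only components big enough to be spikes
n, labels, stats, _ = cv2.connectedComponentsWithStats(diff)
spikes = np.zeros_like(diff)
for i in range(1, n):                        # label 0 is the background
    if stats[i, cv2.CC_STAT_AREA] > 50:      # assumed area threshold
        spikes[labels == i] = 255

# Subtract the true spikes; the rest of the shape stays unchanged
result = cv2.subtract(binary, spikes)
cv2.imwrite('despiked.png', result)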
Something like this will also help.
Where:
// contours is your list of contours after findContours()
// idx is the index of your contour
// eps regulates how much the contour is approximated
cv::Mat approx;
double eps = cv::arcLength(contours[idx], true) * 0.05;
cv::approxPolyDP(contours[idx], approx, eps, true);
approx.copyTo(contours[idx]);
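For reference, a rough Python equivalent of the same idea (the file name is a placeholder, and the 0.05 factor is the eps knob to tune):

import cv2

img = cv2.imread('shape.png', 0)             # hypothetical file name
_, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
# findContours returns 3 values in OpenCV 3.x and 2 in 4.x;
# indexing from the end works for both
contours = list(cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                 cv2.CHAIN_APPROX_SIMPLE)[-2])
simplified = []
for c in contours:
    eps = cv2.arcLength(c, True) * 0.05      # eps regulates the approximation
    simplified.append(cv2.approxPolyDP(c, eps, True))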
Maybe this is what you want (it's not accurate at all):
OpenCV + Python
# Import preprocessors
import os
import cv2
import numpy as np
# Read image
dir = os.path.abspath(os.path.dirname(__file__))
im = cv2.imread(dir+'/im.png')
# Remove triangles
kernel = np.ones((5,5), np.uint8)
factor=11
im = cv2.dilate(im, kernel, iterations=factor)
im = cv2.erode(im, kernel, iterations=factor)
# Save the processed image
cv2.imwrite(dir+'/spike_res.png', im)
Update:
Maybe not related to the OpenCV tag, but with .NET you can also use the Erosion and Dilation filters of AForge.

Count lines in image

I am planning to use OpenCV on a Raspberry Pi 3 with a camera to count the lines in the following image.
It will be used in a machine which produces threads. If one (or more) is lost, it will stop the machine.
Now I am wondering how to do that...
I will run a loop to capture the images,
I will crop the images to see only the part with the lines,
I will convert it to black & white,
but how do I count them? In a loop, checking for pixel value changes? Or is there a better/faster idea?
Thanks for any advice!
EDIT
P.S.
I used cv2.findContours (answer from Jeru Luke).
I put an A4 sheet with black lines in front of the camera. It works OK in a while loop, BUT... I have 43 lines on the sheet. When the camera detects some differences, I write the results to a file. Sometimes I get 710, 800, 67, etc.
Please look at the file with the line counts I get: https://www.dropbox.com/s/jnn4w8mq3rrtppo/bledy.txt?dl=0
The error persists for a few seconds at a time. There is nothing wrong when I get 43,43,43,43,44,43,43,43 (only one value is wrong), because I watch a few values before raising an error. But when there are hundreds of bad values, I have no idea...
I have something relatively simpler. It does not involve any for loops, and hence requires less time. I used the concept of counting the contours in the image after finding an appropriate threshold, which I found through trial and error.
Here is the approach in Python:
import cv2

path = 'C:/Users/Desktop/stack/contour/'
img = cv2.imread(path + 'lines.png', 0)
cv2.imshow('original Image', img)
# Inverse binary threshold (value found by trial and error)
ret, thresh = cv2.threshold(img, 80, 255, cv2.THRESH_BINARY_INV)
cv2.imshow('thresh1', thresh)
# findContours returns 3 values in OpenCV 3.x and 2 in 4.x;
# taking the last two works for both
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)[-2:]
print('Number of lines:', len(contours))
cv2.waitKey(0)
cv2.destroyAllWindows()
Note:
As you can see, there are no for loops involved. There is no need to count the number of pixel changes either. Each presumed line becomes a contour, and `len(contours)` gives you the number of lines present.
Using the Hough line transform would work well only if the lines are straight. Since the lines in the provided image are slanted, it won't find perfect lines. This point is emphasized in the comments by @MarkSetchell.
Use the Hough line transform to detect the lines and just count the number of lines you find.
Here is a tutorial for your problem (since you didn't specify the language, it is in Python).
OpenCV Tutorial Hough Lines
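A minimal sketch of that suggestion (the file name and thresholds are assumptions; as the other answer notes, this suits straight lines best, and Canny yields two edges per thread, so near-duplicate segments usually need merging before the count is meaningful):

import cv2
import numpy as np

img = cv2.imread('lines.png', 0)             # hypothetical file name
edges = cv2.Canny(img, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                        minLineLength=50, maxLineGap=10)
print('Number of segments:', 0 if lines is None else len(lines))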

Watershed segmentation always returns a black image

I've recently been working on a segmentation process for corneal endothelial cells, and I've found a pretty decent paper that describes ways to perform it with nice results. I have been trying to follow that paper and implement it using scikit-image and OpenCV, but I've gotten stuck at the watershed segmentation.
I will briefly describe how the process is supposed to go:
First of all, you have the original endothelial cells image
original image
Then, they instruct you to perform a morphological grayscale reconstruction, in order to even out the grayscale of the image a little (however, they do not explain how to get the markers for the reconstruction, so I've been experimenting and trying to get some in my own way).
This is what the reconstructed image was supposed to look like:
desired reconstruction
This is what my reconstructed image (let's label it r) looks like:
my reconstruction
The purpose is to use the reconstructed image to get the markers for the watershed segmentation. How do we do that? We take the original image (let's label it f) and threshold (f - r) to extract the h-domes of the cells, i.e., our markers.
This is what the hdomes image was supposed to look like:
desired hdomes
This is what my hdomes image looks like:
my hdomes
I believe that the hdomes I've got are as good as theirs, so, the final step is to finally perform the watershed segmentation on the original image, using the hdomes we've been working so hard to get!
As the input image, we will use the inverted original image, and as markers, the h-domes we just extracted.
This is the desired output:
desired output
However, I am only getting a black image (EVERY PIXEL IS BLACK) and I have no idea what's happening... I've also tried using their markers and inverted image, but I still get a black image. The paper I've been using is Luc M. Vincent, Barry R. Masters, "Morphological image processing and network analysis of cornea endothelial cell images", Proc. SPIE 1769.
I apologize for the long text, but I really wanted to explain my understanding so far in detail. By the way, I've tried the watershed segmentation from both scikit-image and OpenCV; both gave me the black image.
Here is the code that I have been using:
import cv2
import matplotlib.pyplot as plt
from skimage.morphology import reconstruction, watershed

img = cv2.imread('input.png', 0)
mask = img
marker = cv2.erode(mask, cv2.getStructuringElement(cv2.MORPH_ERODE, (3, 3)), iterations=3)
reconstructedImage = reconstruction(marker, mask)
hdomes = img - reconstructedImage
cell_markers = cv2.threshold(hdomes, 0, 255, cv2.THRESH_BINARY)[1]
inverted = (255 - img)
labels = watershed(inverted, cell_markers)
cv2.imwrite('test.png', labels)
plt.figure()
plt.imshow(labels)
plt.show()
Thank you!
Here's a rough example for the watershed segmentation of your image with scikit-image.
What is missing in your script is calculating the Euclidean distance (see here and here) and extracting the local maxima from it.
Note that the watershed algorithm outputs a piecewise-constant image where pixels in the same region are assigned the same value. What is shown in your 'desired output' panel (e) is the edges between the regions instead.
import numpy as np
import cv2
import matplotlib.pyplot as plt
from skimage.morphology import watershed
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.filters import threshold_local
img = cv2.imread('input.jpg',0)
'''Adaptive thresholding
calculates thresholds in regions of size block_size surrounding each pixel
to handle the non-uniform background'''
block_size = 41
adaptive_thresh = threshold_local(img, block_size)#, offset=10)
binary_adaptive = img > adaptive_thresh
# Calculate Euclidean distance
distance = ndi.distance_transform_edt(binary_adaptive)
# Find local maxima of the distance map
local_maxi = peak_local_max(distance, labels=binary_adaptive, footprint=np.ones((3, 3)), indices=False)
# Label the maxima
markers = ndi.label(local_maxi)[0]
''' Watershed algorithm
The option watershed_line=True leave a one-pixel wide line
with label 0 separating the regions obtained by the watershed algorithm '''
labels = watershed(-distance, markers, watershed_line=True)
# Plot the result
plt.imshow(img, cmap='gray')
plt.imshow(labels==0,alpha=.3, cmap='Reds')
plt.show()

Finding location of rectangles in an image with OpenCV

I'm trying to use OpenCV to "parse" screenshots from the iPhone game Blocked. The screenshots are cropped to look like this:
I suppose for right now I'm just trying to find the coordinates of each of the 4 points that make up each rectangle. I did see the sample file squares.c that comes with OpenCV, but when I run that algorithm on this picture, it comes up with 72 rectangles, including the rectangular areas of whitespace that I obviously don't want to count as one of my rectangles. What is a better way to approach this? I tried doing some Google research, but found very little relevant, usable information.
A similar issue has already been discussed here:
How to recognize rectangles in this image?
As for your data, the rectangles you are trying to find are the only black objects. So you can try a threshold binarization: black pixels are the ones which have ALL three RGB values less than 40 (I found that empirically). This simple operation makes your picture look like this:
After that you could apply a Hough transform to find lines (discussed in the topic I referred to), or you can do it more easily. Compute integral projections of the black pixels onto the X and Y axes. (The projection onto X is a vector whose entry for x_i is the number of black pixels with x-coordinate equal to x_i.) The peaks of the projections give you the candidate x and y values. Then look through all the possible segments bounded by the found x and y values (if there are a lot of black pixels between (x_i, y_j) and (x_i, y_k), there is probably a line segment there). Finally, compose the line segments into rectangles!
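A rough sketch of those projections (the file name is a placeholder, and the 0.5 peak cutoff is a simplification of real peak finding):

import cv2
import numpy as np

img = cv2.imread('screenshot.png')           # hypothetical file name
# Black pixels: all three BGR channels below 40 (empirical threshold)
black = np.all(img < 40, axis=2).astype(np.uint8)

# Integral projections: black-pixel counts per column and per row
proj_x = black.sum(axis=0)                   # projection onto the X axis
proj_y = black.sum(axis=1)                   # projection onto the Y axis

# Candidate edges are the peaks; runs of adjacent indices belong to
# the same grid line and should be merged
xs = np.where(proj_x > 0.5 * proj_x.max())[0]
ys = np.where(proj_y > 0.5 * proj_y.max())[0]
print(xs, ys)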
Here's a complete Python solution. The main idea is:
Apply pyramid mean shift filtering to help threshold accuracy
Otsu's threshold to get a binary image
Find contours and filter using contour approximation
Here's a visualization of each detected rectangle contour
Results
import cv2

image = cv2.imread('1.png')
blur = cv2.pyrMeanShiftFiltering(image, 11, 21)
gray = cv2.cvtColor(blur, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    peri = cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, 0.015 * peri, True)
    if len(approx) == 4:
        x, y, w, h = cv2.boundingRect(approx)
        cv2.rectangle(image, (x, y), (x + w, y + h), (36, 255, 12), 2)
cv2.imshow('thresh', thresh)
cv2.imshow('image', image)
cv2.waitKey()
I wound up just building on my original method and doing as Robert suggested in his comment on my question. After I get my list of rectangles, I run through it and calculate the average color over each rectangle. I check whether the red, green, and blue components of the average color are each within 10% of the gray and blue rectangle colors; if they are, I save the rectangle, and if they aren't, I discard it. This process gives me something like this:
From this, it's trivial to get the information I need (orientation, starting point, and length of each rectangle, considering the game window as a 6x6 grid).
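A sketch of that color check (the two reference colors and the exact basis of the 10% tolerance are stand-ins for the actual values used):

import cv2
import numpy as np

GRAY_REF = np.array([120, 120, 120])         # assumed BGR reference colors
BLUE_REF = np.array([180, 100, 40])

def keep_rect(image, x, y, w, h):
    # Mean BGR color over the rectangle's pixels
    mean = cv2.mean(image[y:y + h, x:x + w])[:3]
    # Keep if every channel is within 10% of one reference color
    for ref in (GRAY_REF, BLUE_REF):
        if all(abs(m - r) <= 0.1 * r for m, r in zip(mean, ref)):
            return True
    return False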
The blocks look like bitmaps - why don't you use simple template matching with different templates for each block size/color/orientation?
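A minimal template-matching sketch of that suggestion (the file names and the 0.8 score threshold are assumptions; in practice you would loop over one template per block size, color, and orientation):

import cv2
import numpy as np

scene = cv2.imread('board.png', 0)           # hypothetical file names
template = cv2.imread('block.png', 0)        # one block size/orientation

res = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
ys, xs = np.where(res >= 0.8)                # assumed match threshold
for x, y in zip(xs, ys):
    print('block match at', (x, y))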
Since your problem is the small rectangles, I would start by removing them.
Those lines are much thinner than the borders of the rectangles, so I would begin by applying morphological operations to the image.
Using a structuring element that looks like this:
element = [ 1 1
            1 1 ]
should remove lines that are less than two pixels wide. After the small lines are removed, the rectangle-finding algorithm of OpenCV will most likely do the rest of the job for you.
The erosion can be done in OpenCV with the function cvErode.
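In the Python API that is a one-liner with cv2.erode (a sketch, assuming the thin lines are white foreground on a black background; threshold and invert first if the polarity is the opposite):

import cv2
import numpy as np

img = cv2.imread('screenshot.png', 0)        # hypothetical file name
kernel = np.ones((2, 2), np.uint8)           # the 2x2 element above
eroded = cv2.erode(img, kernel)              # removes lines under 2 px wide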
Try one of the many corner detectors, like the Harris corner detector. It is also in general a good idea to try that at multiple resolutions, so do some preprocessing at varying magnifications.
It appears that you want some sort of color-dominated square. You can suppress the other colors by first using something like cvSplit and then thresholding the color, so only that region remains. Follow that with a cropping operation; I think that could work as well.
