Why isn't Python OpenCV's HoughLinesP transform able to identify all the spaced lines?

When an image contains spaced (dashed) 1 px lines, OpenCV's probabilistic Hough transform (HoughLinesP) in Python doesn't mark all of the segments.
I used:
cv2.HoughLinesP(img,1,np.pi/180,400)
In theory it should work whether the lines are dashed or not, but in this case it doesn't mark all of the lines that lie at the same height.
HoughLinesP transform sample output
The green lines indicate the white lines that were identified.
I changed the parameters to this:
cv2.HoughLinesP(img,1,np.pi/180,10,10,10)
And got this output. As you can see, the detection is still missing some parts. It's unclear why, on the same straight line, a shorter segment is marked but a longer one is not.
Output after the method suggested by Robert:
Input image:
Here is the code:
import numpy as np
import cv2

img = cv2.imread("in.PNG")
img2 = img.copy()  # colour copy to draw the detected lines on
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh1 = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
lines = cv2.HoughLinesP(img, rho=1, theta=np.pi/180, threshold=10,
                        minLineLength=10, maxLineGap=10)
N = lines.shape[0]
print(lines)
for i in range(N):
    x1, y1, x2, y2 = lines[i][0]
    cv2.line(img2, (x1, y1), (x2, y2), (0, 255, 0), 1)
#cv2.imshow("Window", thresh1)
cv2.imwrite("out.PNG", img2)
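One workaround that often helps with dashed input (a sketch of a general approach, not necessarily the method Robert suggested, which isn't shown here; the kernel size and maxLineGap value are illustrative assumptions) is to close the 1 px gaps with a dilation before running the transform, and to feed HoughLinesP the thresholded image rather than the raw grayscale:

import numpy as np
import cv2

img = cv2.imread("in.PNG")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

# Dilate with a small, flat kernel so 1 px gaps between dashes merge into solid runs
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 1))
closed = cv2.dilate(thresh, kernel, iterations=1)

# A larger maxLineGap also lets HoughLinesP bridge the remaining gaps itself
lines = cv2.HoughLinesP(closed, rho=1, theta=np.pi/180, threshold=10,
                        minLineLength=10, maxLineGap=20)

out = img.copy()
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(out, (x1, y1), (x2, y2), (0, 255, 0), 1)
cv2.imwrite("out_dilated.PNG", out)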

Related

I wanted to detect objects in an HSV image, but I keep getting this error: Expected Ptr<cv::UMat> for argument '%s'

I was trying to create a trackbar window and get the HSV values of the image by adjusting the trackbars. I created a mask and then adjusted the trackbars to detect an object in the HSV image.
def nothing(x):
    pass

cv.namedWindow("Tracking")
cv.createTrackbar("LH","Tracking",0,255,nothing)
cv.createTrackbar("LS","Tracking",0,255,nothing)
cv.createTrackbar("LV","Tracking",0,255,nothing)
cv.createTrackbar("UH","Tracking",255,255,nothing)
cv.createTrackbar("US","Tracking",255,255,nothing)
cv.createTrackbar("UV","Tracking",255,255,nothing)

while True:
    frame = cv.imread("C:/Users/acer/Desktop/insects/New folder/ins.jpg")
    hsv = cv.cvtColor(frame,cv.COLOR_BGR2HSV)
    l_h = cv.getTrackbarPos("LH","Tracking")
    l_s = cv.getTrackbarPos("LS","Tracking")
    l_v = cv.getTrackbarPos("LV","Tracking")
    u_h = cv.getTrackbarPos("UH","Tracking")
    u_s = cv.getTrackbarPos("US","Tracking")
    u_v = cv.getTrackbarPos("UV","Tracking")
    l_b = np.array([l_h,l_s,l_v])
    u_b = np.array([u_h,u_s,u_v])
    mask = (hsv,l_b,u_b)
    res = cv.bitwise_and(frame,frame,mask=mask)
    cv.imshow("frame",frame)
    cv.imshow("mask",mask)
    cv.imshow("res",res)
    key = cv.waitKey(1)
    if key == 27:
        break
cv.destroyAllWindows()
There are a few issues with your code:
1) You have no import statements. You need at least:
import cv2 as cv
import numpy as np
2) Your indentation is incorrect. Your function nothing() should not be indented.
3) You omitted to call inRange(), you need:
mask = cv.inRange(hsv,l_b,u_b)
4) You have scaled Hue into the range 0..255, but for uint8 images OpenCV stores Hue in the range 0..180, so that 360 degrees maps to 180 and stays below the 255 upper limit of uint8.
By the way, it is fairly poor practice to do "loop invariant" work inside a loop - here, hitting the disk every millisecond to re-read the image, re-decode the JPEG and convert it to HSV. All of that can be done once outside the loop; inside it you only need the already converted HSV image. A corrected sketch follows below.
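Putting those four fixes together, a corrected sketch could look like this (the file path is the one from the question; the Hue trackbars now run 0..180 to match OpenCV's uint8 convention):

import cv2 as cv
import numpy as np

def nothing(x):
    pass

cv.namedWindow("Tracking")
cv.createTrackbar("LH", "Tracking", 0, 180, nothing)   # Hue runs 0..180 for uint8 images
cv.createTrackbar("LS", "Tracking", 0, 255, nothing)
cv.createTrackbar("LV", "Tracking", 0, 255, nothing)
cv.createTrackbar("UH", "Tracking", 180, 180, nothing)
cv.createTrackbar("US", "Tracking", 255, 255, nothing)
cv.createTrackbar("UV", "Tracking", 255, 255, nothing)

# Loop-invariant work: read and convert the image once, outside the loop
frame = cv.imread("C:/Users/acer/Desktop/insects/New folder/ins.jpg")
hsv = cv.cvtColor(frame, cv.COLOR_BGR2HSV)

while True:
    l_b = np.array([cv.getTrackbarPos("LH", "Tracking"),
                    cv.getTrackbarPos("LS", "Tracking"),
                    cv.getTrackbarPos("LV", "Tracking")])
    u_b = np.array([cv.getTrackbarPos("UH", "Tracking"),
                    cv.getTrackbarPos("US", "Tracking"),
                    cv.getTrackbarPos("UV", "Tracking")])
    mask = cv.inRange(hsv, l_b, u_b)            # the call that was missing
    res = cv.bitwise_and(frame, frame, mask=mask)
    cv.imshow("frame", frame)
    cv.imshow("mask", mask)
    cv.imshow("res", res)
    if cv.waitKey(1) == 27:                     # Esc to quit
        break

cv.destroyAllWindows()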

Is it possible to vectorize this calculation in numpy?

Can the following expression of numpy arrays be vectorized for speed-up?
k_lin1x = [2*k_lin[i]*k_lin[i+1]/(k_lin[i]+k_lin[i+1]) for i in range(len(k_lin)-1)]
x1 = k_lin
x2 = np.roll(k_lin, -1)          # shift left by one so that x2[i] == k_lin[i+1]
s = len(k_lin) - 1
result1 = x2[:s] + x1[:s]        # your divisor: k_lin[i] + k_lin[i+1]
result2 = x2[:s] * x1[:s]        # your numerator: k_lin[i] * k_lin[i+1]
# in one line
result = 2 * x2[:s] * x1[:s] / (x2[:s] + x1[:s])
The last element is not taken into the calculation, and np.roll shifts the array so that x2[0] == x1[1], x2[1] == x1[2], and so on. This is just a demo of the approach; see the numpy.roll documentation for details. Instead of slicing x2 with s you could also simply drop its last element, since it is not needed for the calculation. A quick check against the original list comprehension is sketched below.
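A quick way to check the vectorized form against the original list comprehension (k_lin here is a made-up example array, purely for illustration; slicing with k_lin[1:] is equivalent to the roll-then-truncate step above):

import numpy as np

k_lin = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

# Original list comprehension
k_lin1x = [2 * k_lin[i] * k_lin[i + 1] / (k_lin[i] + k_lin[i + 1])
           for i in range(len(k_lin) - 1)]

# Vectorized version: pair each element with its right-hand neighbour
result = 2 * k_lin[:-1] * k_lin[1:] / (k_lin[:-1] + k_lin[1:])

print(np.allclose(k_lin1x, result))  # True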

Resize selected area of image

I want to increase/decrease the height of the image for the selected area only (the area between the white lines), as depicted in the image, and not outside that area.
This is the same functionality as in the app Manly - Body Muscle Editor Pro.
How can I achieve that? Any help is appreciated.
I've never written code for iOS, but I know OpenCV also works on iOS. Here I use cv2.resize:
import cv2
import numpy as np

img = cv2.imread("1.jpg")
print(img.shape)
h = img.shape[0]
w = img.shape[1]

old_height = 120   # 240 - 120, height of the selected band
new_height = 200
part_to_resize = img[120:240, :]   # the area between the white lines

final_result = np.zeros((h - old_height + new_height, w, 3), dtype='uint8')
final_result[0:120, :] = img[0:120, :]                                               # rows above the band, unchanged
final_result[120:120 + new_height, :] = cv2.resize(part_to_resize, (w, new_height))  # the stretched band
final_result[120 + new_height:, :] = img[240:h, :]                                   # rows below the band, unchanged

cv2.imshow("final_result", final_result)
cv2.imshow("img", img)
cv2.waitKey()
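If the band boundaries aren't always rows 120 and 240, the same idea can be wrapped in a small helper; this is just an illustrative sketch, and resize_band, y1, y2 and new_height are hypothetical names, not part of the original answer:

import cv2
import numpy as np

def resize_band(img, y1, y2, new_height):
    """Stretch or shrink only the horizontal band img[y1:y2] to new_height rows."""
    h, w = img.shape[:2]
    band = cv2.resize(img[y1:y2, :], (w, new_height))
    return np.vstack([img[:y1, :], band, img[y2:, :]])

# Equivalent to the hard-coded example above:
# out = resize_band(cv2.imread("1.jpg"), 120, 240, 200)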

Get blue colored contours using scikit-image/opencv

I'm trying to get blue-colored contours using scikit-image. I'm sure there are functions in OpenCV that are also available in scikit-image.
I am aware of the find_contours method, which works well; however, it gets contours of ALL colors. I just want the blue contours.
http://scikit-image.org/docs/dev/api/skimage.measure.find_contours.html
Any ideas of how to do this? My guess is to preprocess the image somehow to remove every color other than blue.
Your suggestion of first suppressing all other colors is a good one. Here's some code for doing that:
from skimage import io, color, exposure, img_as_float
import matplotlib.pyplot as plt
# http://www.publicdomainpictures.net/view-image.php?image=26890&picture=color-wheel
image = img_as_float(io.imread('color-wheel.jpg'))
blue_lab = color.rgb2lab([[[0, 0, 1.]]])
light_blue_lab = color.rgb2lab([[[0, 1, 1.]]])
red_lab = color.rgb2lab([[[1, 0, 0.]]])
image_lab = color.rgb2lab(image)
distance_blue = color.deltaE_cmc(blue_lab, image_lab, kL=0.5, kC=0.5)
distance_light_blue = color.deltaE_cmc(light_blue_lab, image_lab, kL=0.5, kC=0.5)
distance_red = color.deltaE_cmc(red_lab, image_lab, kL=0.5, kC=0.5)
distance = distance_blue + distance_light_blue - distance_red
distance = exposure.rescale_intensity(distance)
image_blue = image.copy()
image_blue[distance > 0.3] = 0
f, (ax0, ax1, ax2) = plt.subplots(1, 3, figsize=(20, 10))
ax0.imshow(image)
ax1.imshow(distance, cmap='gray')
ax2.imshow(image_blue)
plt.show()
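From there you can run find_contours on the suppressed result, which is what the question ultimately asks for. A minimal sketch, continuing from the variables above (image, distance, plt) and reusing the 0.3 cutoff, which is an arbitrary threshold:

from skimage import measure

# Binary mask of "blue enough" pixels, from the distance map computed above
blue_mask = distance < 0.3

# find_contours expects a 2D array; contour the mask at the 0.5 level
contours = measure.find_contours(blue_mask.astype(float), 0.5)

plt.imshow(image)
for contour in contours:
    plt.plot(contour[:, 1], contour[:, 0], linewidth=2)
plt.show()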

How to import and use scipy.spatial.distance functions correctly?

from scipy.spatial.distance import seuclidean #imports abridged
import scipy

img = np.asarray(Image.open("testtwo.tif").convert('L'))
img = 1 * (img < 127)
area = (img == 0).sum() # computing white pixel area
print(area)
areasplit = np.split(img, 24) # splitting image array
print(areasplit)
for i in areasplit:
    result = (i == 0).sum()
    print(result) # computing white pixel area for every single array
minimal = result.min()
maximal = result.max()
dist = seuclidian(minimal, maximal)
print(dist)
I want to compute distances between array elements produced by splitting an image. Python can't recognize the name of the distance functions (I have tried several of them and various approaches to importing the modules). How do I import and call these functions correctly? Thank you.
You haven't stated what the error is, but you are using numpy as well and I can't see an import for that.
Try
import numpy as np
import scipy
Then try
dist = scipy.spatial.distance.euclidean(minimal, maximal)
dists = scipy.spatial.distance.seuclidean(minimal, maximal, variances)
Note - the standardised Euclidean distance (seuclidean) takes a third parameter, the variances.
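For reference, a minimal self-contained example of calling both functions with their correct names (the input vectors and variances here are made up purely for illustration):

import numpy as np
from scipy.spatial.distance import euclidean, seuclidean

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 6.0, 8.0])
variances = np.array([1.0, 2.0, 4.0])  # per-component variances for seuclidean

print(euclidean(u, v))                # plain Euclidean distance
print(seuclidean(u, v, variances))    # standardised Euclidean distance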
