OpenCV HoughLinesP threshold parameter

C++: void HoughLinesP(InputArray image, OutputArray lines, double rho, double theta, int threshold, double minLineLength=0, double maxLineGap=0 )
I have difficulty understanding the parameter below. Could someone explain it in plain, for-dummies terms?
threshold – Accumulator threshold parameter. Only those lines are returned that get enough votes ( > threshold ).

It seems you have not yet understood the Hough algorithm. I hope the short introduction to the Hough line detection algorithm below will help you.
Ref: Wiki, Tutorial.
Consider an edge image (the output of Sobel or Canny, for example).
For each edge point (x, y) in the image, calculate: ρ = x·cos(θ) + y·sin(θ)
Increment the accumulator at A(ρ, θ): A(ρ, θ) = A(ρ, θ) + 1,
where ρ is the distance from the origin to the line and θ (theta) is its angle; θ ranges from 0 to 360°.
A(ρ, θ) is referred to as the Hough space. By finding the accumulator bins with the highest values, typically by looking for local maxima in the accumulator space, the most likely lines can be extracted. Finding the highest values is usually controlled by a threshold parameter.
Try changing the threshold value; you will see a significant change in the detected lines.
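For example, here is a minimal Python/OpenCV sketch (the file name and parameter values are purely illustrative) showing how raising the threshold keeps only lines that collected more votes:
import cv2
import numpy as np

img = cv2.imread("input.jpg")  # hypothetical input image
edges = cv2.Canny(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 50, 150)
# Same edge map, two accumulator thresholds: the higher one returns
# only lines that received more votes in Hough space.
weak = cv2.HoughLinesP(edges, rho=1, theta=np.pi/180, threshold=30,
                       minLineLength=30, maxLineGap=10)
strong = cv2.HoughLinesP(edges, rho=1, theta=np.pi/180, threshold=120,
                         minLineLength=30, maxLineGap=10)
print(0 if weak is None else len(weak), 0 if strong is None else len(strong))
With threshold=120 you should see far fewer (but more reliable) segments than with threshold=30.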

Related

Explanation of rho and theta parameters in HoughLines

Can you give me a quick definition of the rho and theta parameters in OpenCV's HoughLines function?
void cv::HoughLines ( InputArray image,
OutputArray lines,
double rho,
double theta,
int threshold,
double srn = 0,
double stn = 0,
double min_theta = 0,
double max_theta = CV_PI
)
The only thing I found in the doc is:
rho: Distance resolution of the accumulator in pixels.
theta: Angle resolution of the accumulator in radians.
Does this mean that if I set rho=2 then half of my image's pixels will be ignored ... a kind of stride=2?
I have searched for this for hours and still haven't found a place where it is neatly explained. But picking up the pieces, I think I got it.
The algorithm goes over every edge pixel (result of Canny, for example) and calculates ρ using the equation ρ = x * cosθ + y * sinθ, for many values of θ.
The actual step of θ is defined by the function parameter, so if you use the usual math.pi / 180.0 value for theta, the algorithm will compute ρ 180 times for a single edge pixel. If you used a larger theta, there would be fewer calculations and fewer accumulator columns/buckets, and therefore fewer lines found.
The other parameter, rho, defines how "fat" a row of the accumulator is. With a value of 1, you are saying that you want the number of accumulator rows to equal the largest possible ρ, which is the length of the diagonal of the image you're processing. So if for two values of θ you get close values of ρ, they will still go into separate accumulator buckets, because you are going for precision. With a larger value of rho, those two values might end up in the same bucket, which will ultimately give you more lines, because more buckets will accumulate a large vote count and exceed the threshold.
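To make the discretization concrete, here is a rough do-it-yourself accumulator in Python/numpy (my own illustration, not OpenCV's internal code); rho_res and theta_res play the same role as the rho and theta arguments:
import numpy as np

def hough_accumulator(edges, rho_res=1.0, theta_res=np.pi/180):
    # edges is a binary edge image (e.g. the output of Canny)
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))          # largest possible |rho|
    thetas = np.arange(0, np.pi, theta_res)      # one column per theta bucket
    n_rho = int(2 * diag / rho_res) + 1          # one row per rho bucket
    acc = np.zeros((n_rho, len(thetas)), dtype=np.int64)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        rows = np.round((rhos + diag) / rho_res).astype(int)
        acc[rows, np.arange(len(thetas))] += 1   # one vote per (rho, theta) bucket
    return acc, thetas
Doubling rho_res or theta_res halves the number of rows or columns, so nearby (ρ, θ) pairs merge into the same bucket, exactly as described above.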
Some helpful resources:
http://docs.opencv.org/3.1.0/d6/d10/tutorial_py_houghlines.html
https://www.mathworks.com/help/vision/ref/houghtransform.html
https://www.youtube.com/watch?v=2oGYGXJfjzw
To detect lines with the Hough transform, the best way is to represent each line with an equation in two parameters, ρ and θ. The equation is the following:
x·cos(θ) + y·sin(θ) = ρ
where (x, y) is a point on the line and (θ, ρ) are the line parameters.
Writing lines in (θ, ρ) form makes the detection less position-dependent than the y = a·x + b form.
The rho and theta arguments give the discretization step for these two parameters.
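Since HoughLines returns its results in this (ρ, θ) form, converting each pair back to two endpoints for drawing is a common next step; here is a typical sketch (the 1000-pixel extent is an arbitrary choice):
import numpy as np

def rho_theta_to_segment(rho, theta, extent=1000):
    # (x0, y0) is the point on the line closest to the origin
    a, b = np.cos(theta), np.sin(theta)
    x0, y0 = a * rho, b * rho
    # walk along the line's direction vector (-sin(theta), cos(theta))
    p1 = (int(x0 - extent * b), int(y0 + extent * a))
    p2 = (int(x0 + extent * b), int(y0 - extent * a))
    return p1, p2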

Rotating a 1D FFT to get a 2D FFT?

I have a blurry image with a sharp edge and I want to use the profile of that sharp edge to estimate the point spread function (PSF) of the imaging system (assuming that it is symmetric). The profile of the edge gives me the "edge spread function" (ESF) and the derivative of that gives me the "line spread function" (LSF). I am trying to follow these directions that I found in an old paper on how to convert from the LSF to the PSF:
"If we form the one-dimensional Fourier transform of the LSF and rotate the resulting curve about its vertical axis, the surface thus generated proves to be the two-dimensional fourier transform of the PSF. Hence it is merely necessary to take a two-dimensional inverse Fourier transform to obtain the PSF"
I can't seem to get this to work. The 2D FFT of a PSF-like function (for example a 2D Gaussian) has lots of alternating positive and negative values, but if I rotate a 1D FFT, I get concentric rings of positive or negative values, and the inverse transform looks nothing like a point-spread function. Am I missing a step or misunderstanding something? Any help would be appreciated! Thanks!
Edit: Here is some code showing my attempt to follow the procedure described
;generate x array
x=findgen(1000)/999*50-25
;generate gaussian test function in 1D
;P[0] = peak value
;P[1] = centroid
;P[2] = sigma
;P[3] = base level
P=[1.0,0.0,4.0,0.0]
test1d=gaussian_1d(x,P)
;Take the FFT of the test function
fft1d=fft(test1d)
;create an array with the frequency values for the FFT array, following the conventions used by IDL
;This piece of code to find freq is straight from IDL documentation: http://www.exelisvis.com/docs/FFT.html
N=n_elements(fft1d)
T=x[1]-x[0] ;T = sampling interval
fftx=(findgen((N-1)/2)+1)
is_N_even=(N MOD 2) EQ 0
if (is_N_even) then $
freq=[0.0,fftx,N/2,-N/2+fftx]/(N*T) $
else $
freq=[0.0,fftx,-(N/2+1)+fftx]/(N*T)
;Create a 1000x1000 array where each element holds the distance from the center
dim=1000
center=[(dim-1)/2.0,(dim-1)/2.0]
xarray=cmreplicate(findgen(dim),dim)
yarray=transpose(cmreplicate(findgen(dim),dim))
rarray=sqrt((xarray-center[0])^2+(yarray-center[1])^2)
rarray=rarray/max(rarray)*max(freq) ;scale rarray so max value is equal to highest freq in 1D FFT
;rotate the 1d FFT about zero to get a 2d array by interpolating the 1D function to the frequency values in the 2d array
fft2d=rarray*0.0
fft2d(findgen(n_elements(rarray)))=interpol(fft1d,freq,rarray(findgen(n_elements(rarray))))
;Take the inverse fourier transform of the 2d array
psf=fft(fft2d,/inverse)
;shift the PSF to be centered in the image
psf=shift(psf,500,500)
window,0,xsize=1000,ysize=1000
tvscl,abs(psf) ;visualize the absolute value of the result from the inverse 2d FFT
I don't know IDL, but I think your problem here is that you're taking the FFT of signals that are centered, while by default the function expects the zero-frequency component at the beginning of the array.
A quick search for the proper way to do this in IDL indicates the CENTER keyword is what you're looking for.
CENTER
Set this keyword to shift the zero-frequency component to the center of the spectrum. In the forward direction, the resulting Fourier transform has the zero-frequency component shifted to the center of the array. In the reverse direction, the input is assumed to be a centered Fourier transform, and the coefficients are shifted back before performing the inverse transform.
Without letting the FFT routine know where the center of your signal is, the result will appear shifted by N/2. In the transform domain this is a strong phase shift that makes values appear to alternate between positive and negative.
Ok, looks like I have solved the problem. The main issue seems to be that I needed to use the absolute value of the FFT results, rather than the complex array that is returned by default. Using the /CENTER keyword also helped make the indexing of the FFT result much simpler than IDL's default. Here is the working version of the code:
;generate x array
x=findgen(1000)/999*50-25
;generate lorentzian test function in 1D
;P[0] = peak value
;P[1] = centroid
;P[2] = fwhm
;P[3] = base level
P=[1.0,0.0,2,0.0]
test1d=lorentzian_1d(x,P)
;Take the FFT of the test function
fft1d=abs(fft(test1d,/center))
;Create an array of frequencies corresponding to the FFT result
N=n_elements(fft1d)
T=x[1]-x[0] ;T = sampling interval
freq=findgen(N)/(N*T)-N/(2*N*T)
;Create an array where each element holds the distance from the center
dim=1000
center=[(dim-1)/2.0,(dim-1)/2.0]
xarray=cmreplicate(findgen(dim),dim)
yarray=transpose(cmreplicate(findgen(dim),dim))
rarray=sqrt((xarray-center[0])^2+(yarray-center[1])^2)
rarray=rarray/max(rarray)*max(freq) ;scale rarray so max value is equal to highest freq in 1D FFT
;rotate the 1d FFT about zero to get a 2d array by interpolating the 1D function to the frequency values in the 2d array
fft2d=rarray*0.0
fft2d(findgen(n_elements(rarray)))=interpol(fft1d,freq,rarray(findgen(n_elements(rarray))))
;Take the inverse fourier transform of the 2d array
psf=abs(fft(fft2d,/inverse,/center))
;shift the PSF to be centered in the image
psf=shift(psf,dim/2.0,dim/2.0)
psf=psf/max(psf)
window,0,xsize=1000,ysize=1000
tvscl,real_part(psf) ;visualize the resulting PSF
;Test the performance by integrating the PSF in one dimension to recover the LSF
psftotal=total(psf,1)
plot,x*sqrt(2),psftotal/max(psftotal),thick=2,linestyle=2
oplot,x,test1d
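For reference, here is roughly the same procedure translated to Python/numpy (my own sketch, using a Gaussian LSF instead of the Lorentzian):
import numpy as np

x = np.linspace(-25, 25, 1000)
lsf = np.exp(-x**2 / (2 * 4.0**2))                # 1D Gaussian standing in for the LSF
fft1d = np.abs(np.fft.fftshift(np.fft.fft(lsf)))  # magnitude of the centered 1D FFT
freq = np.fft.fftshift(np.fft.fftfreq(x.size, d=x[1] - x[0]))
# build a 2D grid of radial frequencies and rotate the 1D profile about zero
fy, fx = np.meshgrid(freq, freq, indexing="ij")
r = np.hypot(fx, fy)
fft2d = np.interp(r, freq[freq >= 0], fft1d[freq >= 0])
# inverse 2D FFT (undoing the centering first) gives the PSF
psf = np.abs(np.fft.ifft2(np.fft.ifftshift(fft2d)))
psf = np.fft.fftshift(psf) / psf.max()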

Hough transformation for circles and rectangle

I am looking to identify the circles in an image. The circles are the tyres of a vehicle present in the image. However, using the Hough transformation, many circles appear in the image, but not the ones around the tyres. I am not sure whether there is a better approach.
Also, is there a way to identify the biggest rectangle in the image, i.e. the vehicle's storage container?
Any pointers would be of great help.
Regards
Vijay
I think you need to play with the parameters to filter out the unwanted circles; see the sketch after the parameter descriptions below.
void HoughCircles(InputArray image, OutputArray circles, int method, double dp, double minDist, double param1=100, double param2=100, int minRadius=0, int maxRadius=0 )
minDist – Minimum distance between the centers of the detected circles. If the parameter is too small, multiple neighbor circles may be falsely detected in addition to a true one. If it is too large, some circles may be missed.
param1 – First method-specific parameter. In case of CV_HOUGH_GRADIENT, it is the higher threshold of the two passed to the Canny() edge detector (the lower one is half of it).
param2 – Second method-specific parameter. In case of CV_HOUGH_GRADIENT , it is the accumulator threshold for the circle centers at the detection stage. The smaller it is, the more false circles may be detected. Circles, corresponding to the larger accumulator values, will be returned first.
minRadius – Minimum circle radius.
maxRadius – Maximum circle radius.
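A minimal Python sketch of tuning those parameters (the file name and radius range are guesses you would adapt to your tyre size):
import cv2
import numpy as np

img = cv2.imread("vehicle.jpg")                  # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)                   # smoothing suppresses false circles
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1,
                           minDist=gray.shape[0] // 8,
                           param1=100, param2=50,
                           minRadius=20, maxRadius=80)
if circles is not None:
    for cx, cy, r in np.round(circles[0]).astype(int):
        cv2.circle(img, (int(cx), int(cy)), int(r), (0, 255, 0), 2)
Raising param2 and constraining minRadius/maxRadius are usually the quickest ways to get rid of spurious circles.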
For the first question, you can try the 'Fast Circle Detection' algorithm from the paper below:
Fast Circle Detection Using Gradient Pair Vectors
I got very good results with it on a previous project, which involved real-time detection of the border between the iris and sclera of the human eye.
Hope this helps.

Find straight line segments in image using OpenCV

Using OpenCV's findContours() I have a list of contours in an image. I'm interested only in the straight lines, so if they are too 'squiggly' they should be rejected. The question is how to evaluate how straight each contour is?
I looked at fitLine(), but there doesn't appear to be a goodness-of-fit measure returned. I could evaluate this myself using the returned line.
I looked at arcLength() with the aim to compare this to the bounding rectangle dimensions, but even for somewhat straight lines, the arc length can be relatively long if the contour points are dense.
I could find the convex hull and compare to the bounding rectangle dimensions, but I'd have to analyze the convexity defects.
Is there a moment that would be useful here?
1. Find the contours as you are doing now.
2. Find the straight lines in the image using HoughLines().
3. Compute the overlap between the contours and the straight lines.
Take two points on your contour (with, for instance, cv::approxPolyDP) and compute the straight-line distance between them. Then go through the contour points between the two and add up the distances between consecutive points. If the difference between the distance along the contour and the straight-line distance is bigger than a certain threshold, you can reject the contour.
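A rough sketch of that idea in Python (the tolerance ratio is a made-up value to tune):
import numpy as np

def is_straight(contour, tol=1.05):
    # contour: (N, 1, 2) array as returned by cv2.findContours
    pts = contour.reshape(-1, 2).astype(float)
    chord = np.linalg.norm(pts[-1] - pts[0])                  # straight-line distance between the endpoints
    arc = np.linalg.norm(np.diff(pts, axis=0), axis=1).sum()  # distance along the contour
    return chord > 0 and arc / chord < tol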
The function findContours() already approximates each contour with line segments; each contour is represented by a list of points along it. For your purpose, simply computing the distance between each pair of consecutive points in the contour gives you all the line segment lengths.
Here is an example:
import numpy as np

c = cnts[0]  # one contour from cv2.findContours, shape (N, 1, 2)
# d is the points in contour c shifted by one with wraparound (numpy.roll)
d = np.roll(c, 1, axis=0)
# distances between consecutive contour points, i.e. the segment lengths
lengths = np.linalg.norm(c - d, axis=-1)

Corner detection using Chris Harris & Mike Stephens

I am not able to understand the formula below. What do W (the window) and the intensity in the formula mean?
E(u,v) = Σ_{x,y} w(x,y) · [I(x+u, y+v) − I(x,y)]²
I found this formula in the OpenCV docs:
http://docs.opencv.org/trunk/doc/py_tutorials/py_feature2d/py_features_harris/py_features_harris.html
For a grayscale image, the intensity level (0-255) tells you how bright a pixel is; I hope you already know that.
So, the explanation of your formula is below:
Aim: We want to find the points with maximum variation in intensity in all directions, i.e. the points that are most distinctive in a given image.
I(x,y): This is the intensity value of the current pixel which you are processing at the moment.
I(x+u,y+v): This is the intensity of another pixel that lies at an offset of (u,v) from the current pixel, which is located at (x,y) with intensity I(x,y).
I(x+u,y+v) - I(x,y): This expression gives you the difference between the intensity levels of the two pixels.
W(u,v): You don't compare the current pixel with a pixel at some random position; you compare it with its neighbors, so you choose small values for "u" and "v", as you do when applying a Gaussian mask, a mean filter, etc. So, basically, w(u,v) represents the window within which you compare the intensity of the current pixel with that of its neighbors.
This link explains all your doubts.
For visualizing the algorithm, consider the window function as a BoxFilter, Ix as a Sobel derivative along x-axis and Iy as a Sobel derivative along y-axis.
http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/sobel_derivatives/sobel_derivatives.html will be useful to understand the final equations in the above pdf.
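Putting the pieces together, the standard minimal cv2.cornerHarris call looks like this (the file name is illustrative); blockSize corresponds to the window w, and ksize to the aperture of the Sobel derivatives Ix and Iy:
import cv2
import numpy as np

img = cv2.imread("input.jpg")                    # hypothetical input image
gray = np.float32(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))
dst = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)
img[dst > 0.01 * dst.max()] = [0, 0, 255]        # mark strong corner responses in red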
