Hough Circle Transform Implementation in Python

I am implementing the Hough circle transform and trying my code on a binary image that contains only one circle's circumference. However, for any radius I try, I get the same number of accumulated points. Here is the code:
y0array, x0array = np.nonzero(image1)
r = 8
acc_cells = np.zeros((100,100), dtype=np.uint64)
for i in range(len(x0array)):
    y0 = y0array[i]
    x0 = x0array[i]
    for angle in range(0,360):
        b = int(y0 - (r * s[angle]))  # s is an array of sines of angles from 0 to 360
        a = int(x0 - (r * c[angle]))  # c is an array of cosines of angles from 0 to 360
        if a >= 0 and a < 100 and b >= 0 and b < 100:
            acc_cells[a, b] += 1
acc_cell_max = np.amax(acc_cells)
print(r, acc_cell_max)
Why is this behaviour happening?

You have to find the centers of the circles; as you did, you first find the coordinates of each edge pixel.
You can check a Python implementation of Hough circles in the detectCircles function here:
https://github.com/PavanGJ/Circle-Hough-Transform/blob/master/main.py
Also, take a look at this MATLAB implementation of the Hough circle transform (a NumPy sketch of the same voting scheme follows after the MATLAB code):
http://www.mathworks.com/matlabcentral/fileexchange/4985-circle-detection-via-standard-hough-transform
function [y0detect,x0detect,Accumulator] = houghcircle(Imbinary,r,thresh)
%HOUGHCIRCLE - detects circles with a specific radius in a binary image. This
%is just a standard implementation of the Hough transform for circles, in order
%to show how this method works.
%
%Comments:
% Function uses Standard Hough Transform to detect circles in a binary image.
% According to the Hough Transform for circles, each pixel in image space
% corresponds to a circle in Hough space and vice versa.
% The upper left corner of the image is the origin of the coordinate system.
%
%Usage: [y0detect,x0detect,Accumulator] = houghcircle(Imbinary,r,thresh)
%
%Arguments:
% Imbinary - A binary image. Image pixels with value equal to 1 are
% candidate pixels for HOUGHCIRCLE function.
% r - Radius of the circles.
% thresh - A threshold value that determines the minimum number of
% pixels that belong to a circle in image space. The threshold must be
% greater than or equal to 4 (default).
%
%Returns:
% y0detect - Row coordinates of detected circles.
% x0detect - Column coordinates of detected circles.
% Accumulator - The accumulator array in Hough space.
%
%Written by :
% Amin Sarafraz
% Computer Vision Online
% http://www.computervisiononline.com
% amin#computervisiononline.com
%
% Acknowledgement: Thanks to CJ Taylor and Peter Bone for their constructive comments
%
%May 5,2004 - Original version
%November 24, 2004 - Modified version, faster and better performance (suggested by CJ Taylor)
%Aug 31,2012 - Implemented suggestion by Peter Bone/ Better documentation
if nargin == 2
    thresh = 4; % set threshold to default value
end
if thresh < 4
    error('HOUGHCIRCLE:: Threshold value must be greater than or equal to 4');
end
%Voting
Accumulator = zeros(size(Imbinary)); % initialize the accumulator
[yIndex xIndex] = find(Imbinary); % find x,y of edge pixels
numRow = size(Imbinary,1); % number of rows in the binary image
numCol = size(Imbinary,2); % number of columns in the binary image
r2 = r^2; % square of radius, to prevent its calculation in the loop
for cnt = 1:numel(xIndex)
    low = xIndex(cnt) - r;
    high = xIndex(cnt) + r;
    if (low < 1)
        low = 1;
    end
    if (high > numCol)
        high = numCol;
    end
    for x0 = low:high
        yOffset = sqrt(r2 - (xIndex(cnt) - x0)^2);
        y01 = round(yIndex(cnt) - yOffset);
        y02 = round(yIndex(cnt) + yOffset);
        if y01 <= numRow && y01 >= 1 % <= so that the last row can also vote
            Accumulator(y01,x0) = Accumulator(y01,x0) + 1;
        end
        if y02 <= numRow && y02 >= 1
            Accumulator(y02,x0) = Accumulator(y02,x0) + 1;
        end
    end
end
% Finding local maxima in Accumulator
y0detect = []; x0detect = [];
AccumulatorbinaryMax = imregionalmax(Accumulator);
[Potential_y0 Potential_x0] = find(AccumulatorbinaryMax == 1);
Accumulatortemp = Accumulator - thresh;
for cnt = 1:numel(Potential_y0)
    if Accumulatortemp(Potential_y0(cnt),Potential_x0(cnt)) >= 0
        y0detect = [y0detect; Potential_y0(cnt)];
        x0detect = [x0detect; Potential_x0(cnt)];
    end
end
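Back in Python terms, here is a minimal NumPy sketch of the same single-radius voting scheme. One detail differs from your code, and it is an assumption about your symptom rather than a confirmed diagnosis: it votes each distinct cell only once per edge point. With 360 angle steps and a small radius, int() maps many angles to the same accumulator cell, so every edge point stacks several self-votes on each cell of its voting circle, which can make the accumulator maximum look alike across radii.
import numpy as np

def hough_circle_votes(edge_y, edge_x, r, shape):
    # Accumulate center votes for a single candidate radius r.
    # Sketch only: edge_y, edge_x are assumed to come from np.nonzero(edges).
    h, w = shape
    acc = np.zeros((h, w), dtype=np.uint64)
    angles = np.deg2rad(np.arange(360))
    dy = r * np.sin(angles)
    dx = r * np.cos(angles)
    for y0, x0 in zip(edge_y, edge_x):
        b = np.rint(y0 - dy).astype(int)
        a = np.rint(x0 - dx).astype(int)
        ok = (a >= 0) & (a < w) & (b >= 0) & (b < h)
        # Assumption: de-duplicate so each edge point votes once per cell
        cells = np.unique(np.stack([b[ok], a[ok]], axis=1), axis=0)
        acc[cells[:, 0], cells[:, 1]] += 1
    return acc
With per-point de-duplication, a cell's count is the number of edge points that agree on that center, so the maximum should vary with r and peak at the true radius.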

Related

Simulating a simple optical flow

I am currently trying to simulate optical flow using the following equation, where the image I is advected by the velocity field v = (vx, vy):
∂I/∂t + v·∇I + I(∇·v) = 0, i.e. I(t+dt) = I - (v·∇I + I(∇·v))·dt
Below is a basic example with a 7x7 image in which the central pixel is illuminated. The velocity I am applying is a uniform x-velocity of 2.
using Interpolations
using PrettyTables
# Generate grid
nx = 7 # Image will be 7x7 pixels
x = zeros(nx, nx)
yy = repeat(1:nx, 1, nx) # grid of y-values
xx = yy' # grid of x-values
# In this example x is the image I in the above equation
x[(nx-1)÷2 + 1, (nx-1)÷2 + 1] = 1.0 # set central pixel equal to 1
# Initialize velocity
velocity = 2;
vx = velocity .* ones(nx, nx); # vx=2
vy = 0.0 .* ones(nx, nx); # vy=0
for t in 1:1
    # create 2d grid interpolator of the image
    itp = interpolate((collect(1:nx), collect(1:nx)), x, Gridded(Linear()));
    # create 2d grid interpolators of vx and vy
    itpvx = interpolate((collect(1:nx), collect(1:nx)), vx, Gridded(Linear()));
    itpvy = interpolate((collect(1:nx), collect(1:nx)), vy, Gridded(Linear()));
    ∇I_x = Array{Float64}(undef, nx, nx); # Initialize array for ∇I_x
    ∇I_y = Array{Float64}(undef, nx, nx); # Initialize array for ∇I_y
    ∇vx_x = Array{Float64}(undef, nx, nx); # Initialize array for ∇vx_x
    ∇vy_y = Array{Float64}(undef, nx, nx); # Initialize array for ∇vy_y
    for i = 1:nx
        for j = 1:nx
            # gradient of image in x and y directions
            Gx = Interpolations.gradient(itp, i, j);
            ∇I_x[i, j] = Gx[2];
            ∇I_y[i, j] = Gx[1];
            Gvx = Interpolations.gradient(itpvx, i, j) # gradient of vx in both directions
            Gvy = Interpolations.gradient(itpvy, i, j) # gradient of vy in both directions
            ∇vx_x[i, j] = Gvx[2];
            ∇vy_y[i, j] = Gvy[1];
        end
    end
    v∇I = (vx .* ∇I_x) .+ (vy .* ∇I_y) # v dot ∇I
    I∇v = x .* (∇vx_x .+ ∇vy_y) # I times ∇·v
    x = x .- (v∇I .+ I∇v) # I(x, y, t+dt); in a non-interactive script this assignment may need "global x"
    pretty_table(x)
end
What I expect is that the illuminated pixel in x will shift two pixels to the right in x_predicted. What I am seeing instead is that the original illuminated pixel's value is moved to the neighboring pixel twice rather than being shifted two pixels to the right, i.e. the neighboring pixel goes from 0 to 2 and the original pixel goes from 1 to -1. I'm not sure if I'm messing up the equation or if I'm thinking of velocity in the wrong way here. Any ideas?
Without looking into it too deeply, I think there are a couple of potential issues here:
Violation of the Courant Condition
The code you originally posted (I've edited it now) simulates a single timestep. I would not expect a cell 2 units away from your source cell to be activated in a single timestep; doing so would violate the Courant condition. From Wikipedia:
The principle behind the condition is that, for example, if a wave is moving across a discrete spatial grid and we want to compute its amplitude at discrete time steps of equal duration, then this duration must be less than the time for the wave to travel to adjacent grid points.
The Courant condition requires that uΔt/Δx <= 1 (for an explicit time-marching solver such as the one you've implemented). Plugging in u=2, Δt=1, Δx=1 gives 2, which is greater than 1, so you have a mathematical problem. The general way of fixing this problem is to make Δt smaller. You probably want something like:
x = x .- Δt*(v∇I .+ I∇v) # I(x, y, t+dt)
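To illustrate outside of your Julia code, here is a small 1-D NumPy sketch of the same explicit update, sub-stepped so that the Courant condition holds. It uses a first-order upwind difference instead of your centered interpolation gradient, and the parameter names are illustrative:
import numpy as np

def advect_1d(I, u, dx=1.0, t_final=1.0, cfl=0.5):
    # Explicit upwind advection, sub-stepped so that u*dt/dx <= cfl <= 1
    dt = cfl * dx / abs(u)             # CFL-limited timestep
    n_steps = int(np.ceil(t_final / dt))
    dt = t_final / n_steps             # land exactly on t_final
    for _ in range(n_steps):
        dI = (I - np.roll(I, 1)) / dx  # first-order upwind gradient (u > 0)
        I = I - dt * u * dI
    return I

I0 = np.zeros(7)
I0[3] = 1.0
print(advect_1d(I0, u=2.0))  # the pulse drifts ~2 cells right, smeared
With u=2 and cfl=0.5 this takes four sub-steps of dt=0.25, and the pulse ends up centered two cells to the right, smeared out because the scheme is only first-order accurate.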
Missing gradients?
I'm a little concerned about what's going on here:
Gvx = Interpolations.gradient(itpvx, i, j) # gradient of vx in both directions
Gvy = Interpolations.gradient(itpvy, i, j) # gradient of vy in both directions
∇vx_x[i, j] = Gvx[2];
∇vy_y[i, j] = Gvy[1];
You're able to pull two gradients out of both Gvx and Gvy, but you're only using one from each of them. Does that mean you're throwing information away?
https://scicomp.stackexchange.com/ is likely to provide better help with this.

Generating a Histogram by Harmonic Number

I am trying to create a program in GNU Octave to draw a histogram showing the fundamental and harmonics of a modified sine wave (the output of an SCR dimmer, which is a sine wave that stays at zero until partway through each half-cycle).
I've been able to generate the waveform and perform an FFT to get a set of frequency vs. amplitude points. However, I am not sure how to convert this data into bins suitable for generating a histogram.
Sample code and an image of what I'm after below - thanks for the help!
clear();
vrms = 120;
freq = 60;
nCycles = 2;
level = 25;
vpeak = sqrt(2) * vrms;
sampleinterval = 0.00001;
num_harmonics = 10
disp("Start");
% Draw the waveform
x = 0 : sampleinterval : nCycles * 1 / freq; % time in sampleinterval increments
dimmed_wave = [];
undimmed_wave = [];
for i = 1 : columns(x)
    rad_value = x(i) * 2 * pi * freq;
    off_time = mod(rad_value, pi);
    on_time = pi*(100-level)/100;
    if (off_time < on_time)
        dimmed_wave = [dimmed_wave, 0]; % in the dimmed period, value is zero
    else
        dimmed_wave = [dimmed_wave, sin(rad_value)]; % when not dimmed, value = sine
    endif
    undimmed_wave = [undimmed_wave, sin(rad_value)];
endfor
y = dimmed_wave * vpeak; % calculate instantaneous voltage
undimmed = undimmed_wave * vpeak;
subplot(2,1,1)
plot(x*1000, y, '-', x*1000, undimmed, '--');
xlabel ("Time (ms)");
ylabel ("Voltage");
% Fourier Transform to determine harmonics
subplot(2,1,2)
N = length(dimmed_wave); % number of points
fft_vals = abs(fftshift(fft(dimmed_wave))); % perform fft
frequency = [ -(ceil((N-1)/2):-1:1) ,0 ,(1:floor((N-1)/2)) ] * 1 / (N *sampleinterval);
plot(frequency, fft_vals);
axis([0,400]);
xlabel ("Frequency");
ylabel ("Amplitude");
You know your base frequency (fundamental tone), let's call it F. 2*F is the second harmonic, 3*F the third, etc. You want to set histogram bin edges halfway between these: 1.5*F, 2.5*F, etc.
You have two periods in your input signal, therefore your (integer) base frequency is k=2 (the value at fft_vals[k+1], the first peak in your plot). The second harmonic is at k=4, the third one at k=6, etc.
So you would set your bin edges at k = 1:2:end.
In general, this would be k = nCycles/2:nCycles:end.
You can compute your bar graph according to the computed bin edges as follows:
fft_vals = abs(fft(dimmed_wave));
nHarmonics = 9;
edges = nCycles/2 + (0:nHarmonics)*nCycles; % bin edges halfway between harmonics
H = cumsum(fft_vals);
H = diff(H(edges)); % each difference sums fft_vals between consecutive edges
bar(1:nHarmonics, H);
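If you ever port this to Python, the same binning trick fits in a short NumPy sketch (np.add.reduceat sums the spectrum between consecutive edges; n_cycles and n_harmonics mirror the Octave variables, converted to 0-based indexing):
import numpy as np

def harmonic_bins(fft_vals, n_cycles, n_harmonics):
    # Bin edges halfway between harmonics, the 0-based equivalent of
    # edges = nCycles/2 + (0:nHarmonics)*nCycles
    edges = n_cycles // 2 + np.arange(n_harmonics + 1) * n_cycles
    # reduceat sums fft_vals[edges[i]:edges[i+1]] for each consecutive pair;
    # the last output runs to the end of the array, so drop it
    return np.add.reduceat(fft_vals, edges)[:-1]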

Line detection in noisy image (and no detection when it is not present)

I have tried to extract the dark line inside very noisy images, without success. Any tips?
My current steps for the first example:
1) Clahe: with clip_limit = 10 and grid_size = (8,8)
2) Box Filter: with size = (5,5)
3) Inverted Image: 255 - image
4) Threshold: when inverted_image < 64
UPDATE
I have performed some preprocessing steps to improve the quality of the tested images. I adjusted my ROI mask to crop the top and bottom (because they are low intensity) and added an illumination correction to see the line better. The current images follow below:
Even though the images are noisy, you are only looking for straight lines oriented towards the north of the image. So, why not use some kind of matched filter with morphological operations?
EDIT: I have modified it.
1) Use median filter along the x and y axis, and normalize the images.
2) Matched filter with all possible orientations of lines.
% im=imread('JwXON.png');
% im=imread('Fiy72.png');
% im=imread('Ya9AN.png');
im=imread('OcgaIt8.png');
imOrig=im;
matchesx = fl(im, 1);
matchesy = fl(im, 0);
matches = matchesx + matchesy;
[x, y] = find(matches);
figure(1);
imagesc(imOrig), axis image
hold on, plot(y, x, 'r.', 'MarkerSize',5)
colormap gray
%----------
function matches = fl(im, direc)
    if size(im,3)~=1
        im = double(rgb2gray(im));
    else
        im = double(im);
    end
    [n, m] = size(im);
    mask = bwmorph(imfill(im>0,'holes'),'thin',10);
    indNaN = find(im==0); im = 255-im; im(indNaN) = 0;
    N = n - numel(find(im(:,ceil(m/2))==0));
    N = ceil(N*0.8); % possible line length
    % Normalize the image with a median filter
    if direc
        background = medfilt2(im,[1,30],'symmetric');
        thetas = 31:149;
    else
        background = medfilt2(im,[30,1],'symmetric');
        thetas = [1:30 150:179];
    end
    normIm = im - background;
    normIm(normIm<0) = 0;
    % initialize matched filter result
    matches = im*0;
    % search for different angles of lines
    for theta = thetas
        normIm2 = imclose(normIm>0,strel('line',5,theta));
        normIm3 = imopen(normIm2>0,strel('line',N,theta));
        matches = matches + normIm3;
    end
    % eliminate false alarms
    matches = imclose(matches,strel('disk',2));
    matches = matches>3 & mask;
    matches = bwareaopen(matches,100);
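In case Python is more convenient, here is a rough OpenCV/NumPy sketch of the same idea: normalize with a median filter, then open with long line-shaped structuring elements over a range of angles and keep pixels that survive several orientations. The kernel builder, angle range, and thresholds are illustrative assumptions, and the input is assumed to be uint8 with the dark line already inverted to bright (255 - image):
import cv2
import numpy as np

def line_kernel(length, theta_deg):
    # Illustrative helper: a binary line-shaped structuring element at theta_deg
    k = np.zeros((length, length), np.uint8)
    c = length // 2
    dx, dy = np.cos(np.deg2rad(theta_deg)), np.sin(np.deg2rad(theta_deg))
    cv2.line(k, (int(c - dx*c), int(c - dy*c)), (int(c + dx*c), int(c + dy*c)), 1, 1)
    return k

def match_lines(im, line_len=50, thetas=range(31, 150, 4)):
    # Normalize out the slowly varying background (cf. medfilt2 above)
    background = cv2.medianBlur(im, 31)
    binary = (cv2.subtract(im, background) > 0).astype(np.uint8)
    votes = np.zeros(im.shape, np.int32)
    for theta in thetas:
        # Opening with a long line kernel keeps only long structures near theta
        votes += cv2.morphologyEx(binary, cv2.MORPH_OPEN, line_kernel(line_len, theta))
    return votes > 3  # require agreement from several orientations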

Choosing Lines From Hough Lines

I'm using Hough Lines to do corner detection on this image. I plan to find the intersections of the lines as the corners.
This is the image.
Unfortunately, Hough returns lots of lines for each line I expect.
How do I tune Hough Lines so that there are only four lines, each corresponding to an actual line in the image?
OpenCV's Hough transform could really use some better non-maximum suppression. Without that, you get this phenomenon of duplicate lines. Unfortunately I know of no easy way to tune that, besides reimplementing your own Hough transform. (Which is a valid option: the Hough transform is fairly simple.)
Fortunately it is easy to fix in post-processing:
For the non-probabilistic Hough transform, OpenCV will return the lines in order of their confidence, with the strongest line first. So simply take the first four lines that differ strongly in either rho or theta.
so, add the first line found by HoughLines into a new List: strong_lines
for each line found by HoughLines:
test whether its rho and theta are close to any strong_line (e.g. rho is within 50 pixels and theta is within 10° of the other line)
if not, put it into the list of strong_lines
if you have found 4 strong_lines, break
I implemented the approach described by HugoRune and thought I would share my code as an example of how I implemented this. I used a tolerance of 5 degrees and 10 pixels.
strong_lines = np.zeros([4,1,2])
minLineLength = 2
maxLineGap = 10
# note: minLineLength and maxLineGap are parameters of cv2.HoughLinesP;
# cv2.HoughLines does not use them
lines = cv2.HoughLines(edged,1,np.pi/180,10, minLineLength, maxLineGap)
n2 = 0
for n1 in range(0,len(lines)):
    for rho,theta in lines[n1]:
        if n1 == 0:
            strong_lines[n2] = lines[n1]
            n2 = n2 + 1
        else:
            if rho < 0:
                rho *= -1
                theta -= np.pi
            closeness_rho = np.isclose(rho, strong_lines[0:n2,0,0], atol=10)
            closeness_theta = np.isclose(theta, strong_lines[0:n2,0,1], atol=np.pi/36)
            closeness = np.all([closeness_rho, closeness_theta], axis=0)
            if not any(closeness) and n2 < 4:
                strong_lines[n2] = lines[n1]
                n2 = n2 + 1
EDIT: The code was updated to reflect the comment regarding a negative rho value
Collect the intersections of all lines:
for (int i = 0; i < lines.size(); i++)
{
    for (int j = i + 1; j < lines.size(); j++)
    {
        cv::Point2f pt = computeIntersectionOfTwoLine(lines[i], lines[j]);
        if (pt.x >= 0 && pt.y >= 0 && pt.x < image.cols && pt.y < image.rows)
        {
            corners.push_back(pt);
        }
    }
}
You can google the algorithm to find the intersection of two lines.
Once you collect all the intersection points, you can easily determine the min and max, which will give you the top-left and bottom-right points. From these two points you can easily get the rectangle.
Refer to these two links: Sorting 2d point array to find out four corners, and http://opencv-code.com/tutorials/automatic-perspective-correction-for-quadrilateral-objects/
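For reference, the intersection of two lines in Hough normal form (x*cos(theta) + y*sin(theta) = rho) is the solution of a 2x2 linear system, so a small NumPy sketch could be:
import numpy as np

def intersect_polar_lines(rho1, theta1, rho2, theta2):
    # Solve [cos t1 sin t1; cos t2 sin t2] * [x; y] = [rho1; rho2]
    A = np.array([[np.cos(theta1), np.sin(theta1)],
                  [np.cos(theta2), np.sin(theta2)]])
    if abs(np.linalg.det(A)) < 1e-9:
        return None  # (near-)parallel lines: no useful intersection
    x, y = np.linalg.solve(A, np.array([rho1, rho2]))
    return x, y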
Here is a complete solution written in Python 2.7.x using OpenCV 2.4.
It is based on ideas from this thread.
Method: Detect all lines. Assume that the Hough function returns highest ranked lines first. Filter the lines to keep those that are separated by some minimum distance and/or angle.
Image of all Hough lines:
https://i.ibb.co/t3JFncJ/all-lines.jpg
Filtered lines:
https://i.ibb.co/yQLNxXT/filtered-lines.jpg
Code:
http://codepad.org/J57oVIzs
"""
Detect the best 4 lines for a rounded rectangle.
"""
import numpy as np
import cv2
input_image = cv2.imread("image.jpg")
def drawLines(img, lines):
    """
    Draw lines on an image
    """
    for line in lines:
        for rho,theta in line:
            a = np.cos(theta)
            b = np.sin(theta)
            x0 = a*rho
            y0 = b*rho
            x1 = int(x0 + 1000*(-b))
            y1 = int(y0 + 1000*(a))
            x2 = int(x0 - 1000*(-b))
            y2 = int(y0 - 1000*(a))
            cv2.line(img, (x1,y1), (x2,y2), (0,0,255), 1)
input_image_grey = cv2.cvtColor(input_image, cv2.COLOR_BGR2GRAY)
edged = input_image_grey
rho = 1 # 1 pixel
theta = 1.0*0.017 # 1 degree
threshold = 100
lines = cv2.HoughLines(edged, rho, theta, threshold)
# Fix negative angles
num_lines = lines.shape[1]
for i in range(0, num_lines):
    line = lines[0,i,:]
    rho = line[0]
    theta = line[1]
    if rho < 0:
        rho *= -1.0
        theta -= np.pi
        line[0] = rho
        line[1] = theta
# Draw all Hough lines in red
img_with_all_lines = np.copy(input_image)
drawLines(img_with_all_lines, lines)
cv2.imshow("Hough lines", img_with_all_lines)
cv2.waitKey()
cv2.imwrite("all_lines.jpg", img_with_all_lines)
# Find 4 lines with unique rho & theta:
num_lines_to_find = 4
filtered_lines = np.zeros([1, num_lines_to_find, 2])
if lines.shape[1] < num_lines_to_find:
    print("ERROR: Not enough lines detected!")
# Save the first line
filtered_lines[0,0,:] = lines[0,0,:]
print("Line 1: rho = %.1f theta = %.3f" % (filtered_lines[0,0,0], filtered_lines[0,0,1]))
idx = 1 # Index to store the next unique line
# Initialize all rows the same
for i in range(1,num_lines_to_find):
    filtered_lines[0,i,:] = filtered_lines[0,0,:]
# Filter the lines
num_lines = lines.shape[1]
for i in range(0, num_lines):
    line = lines[0,i,:]
    rho = line[0]
    theta = line[1]
    # For this line, check which of the existing 4 it is similar to.
    closeness_rho = np.isclose(rho, filtered_lines[0,:,0], atol=10.0)           # 10 pixels
    closeness_theta = np.isclose(theta, filtered_lines[0,:,1], atol=np.pi/36.0) # 10 degrees
    # Similar only if rho AND theta are close for the SAME stored line;
    # and-ing two separate any() results could mix two different lines.
    similar = np.any(closeness_rho & closeness_theta)
    if not similar:
        print("Found a unique line: %d rho = %.1f theta = %.3f" % (i, rho, theta))
        filtered_lines[0,idx,:] = lines[0,i,:]
        idx += 1
        if idx >= num_lines_to_find:
            print("Found %d unique lines!" % (num_lines_to_find))
            break
# Draw filtered lines
img_with_filtered_lines = np.copy(input_image)
drawLines(img_with_filtered_lines, filtered_lines)
cv2.imshow("Filtered lines", img_with_filtered_lines)
cv2.waitKey()
cv2.imwrite("filtered_lines.jpg", img_with_filtered_lines)
The above approach (proposed by @HugoRune and implemented by @Onamission21) is correct but has a little bug. cv2.HoughLines may return a negative rho with a theta up to pi. Notice for example that the line (r0, 0) is very close to the line (-r0, pi-epsilon), but they would not be matched by the above closeness test.
I simply treated negative rhos by applying rho *= -1, theta -= pi before the closeness calculations.
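A quick numerical check of why that normalization is valid: since cos(theta - pi) = -cos(theta) and sin(theta - pi) = -sin(theta), any point satisfying x*cos(theta) + y*sin(theta) = rho also satisfies the (-rho, theta - pi) form:
import numpy as np

rho, theta = -5.0, 3.0  # a line as cv2.HoughLines might report it
x = 2.0
y = (rho - x * np.cos(theta)) / np.sin(theta)  # pick a point on the line
rho2, theta2 = -rho, theta - np.pi             # normalized representation
print(np.isclose(x * np.cos(theta2) + y * np.sin(theta2), rho2))  # True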

Square of each element of imageData in OpenCV's IplImage

I'm using the OpenCV C APIs. Now I need to implement the following Matlab code in C using OpenCV.
Matlab Code:
function [cx, cy] = foe(Vx, Vy)
    ofs = 10;
    % get sub image (using offsets at border) --> STEP 1
    subVx = Vx(ofs:end-ofs, ofs:end-ofs);
    subVy = Vy(ofs:end-ofs, ofs:end-ofs);
    % compute vertical and horizontal magnitudes --> STEP 2
    subVx = subVx.^2;
    subVy = subVy.^2;
    % find index of minimum sums for vertical and horizontal magnitudes --> STEP 3
    [v, cy] = min(sum(subVx'));
    [v, cx] = min(sum(subVy));
    % Calculate the Focus Of Expansion --> STEP 4
    cy = cy + ofs;
    cx = cx + ofs;
Step 1 is very easily done. I just set the ROI of the image.
Now for Step 2, I need to square each element of the IplImage's imageData array as follows:
for(i = 0; i < subvx->width * subvx->height; i++) {
    ?????
}
What should I write in place of ????? to square each element of imageData? imageData is a char*, so the maximum value of each element is 255, and the square of an element would most likely exceed that value.
How do I implement the above Matlab code in C in this case?
Also, for Step 3, how do I create the transpose of imageData (considered as a 2-dimensional matrix)?
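For orientation, here is what the four MATLAB steps compute, written as a hedged NumPy sketch. The key point carries over to the C version: the data must be widened (e.g. to 32-bit float) before squaring, which addresses the 255 overflow concern, and the transpose only serves to turn column sums into row sums:
import numpy as np

def foe(Vx, Vy, ofs=10):
    # NumPy sketch of the MATLAB foe(); cast to float64 before squaring so
    # values are not clipped the way an 8-bit imageData buffer would be
    sub_vx = Vx[ofs:-ofs, ofs:-ofs].astype(np.float64) ** 2  # STEP 1 + STEP 2
    sub_vy = Vy[ofs:-ofs, ofs:-ofs].astype(np.float64) ** 2
    cy = np.argmin(sub_vx.sum(axis=1))  # STEP 3: sum(subVx') = row sums
    cx = np.argmin(sub_vy.sum(axis=0))  #         sum(subVy)  = column sums
    return cx + ofs, cy + ofs           # STEP 4 (0-based here, 1-based in MATLAB)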
