A time series (x, y, t) in 3D space (X, Y, T) satisfies:
x(t) = f1(t), y(t) = f2(t),
where t = 1, 2, 3, ....
In other words, the coordinates (x, y) vary with the timestamp t. It is easy to compute the FFT of x(t) or y(t), but how do you calculate the FFT of (x, y)? I assume it should NOT be computed as a 2-D FFT, because that is for an image, whereas (x, y) is just a series. Any suggestions? Thank you.
Use fftn. For example, Y = fftn(X) returns the multidimensional Fourier transform of an N-D array using a fast Fourier transform algorithm. The N-D transform is equivalent to computing the 1-D transform along each dimension of X. The output Y is the same size as X.
For a 3-D transform:
Create a 3-D signal X. The size of X is 20-by-20-by-20.
x = (1:20)';
y = 1:20;
z = reshape(1:20,[1 1 20]);
X = cos(2*pi*0.01*x) + sin(2*pi*0.02*y) + cos(2*pi*0.03*z);
Compute the 3-D Fourier transform of the signal, which is also a 20-by-20-by-20 array.
Y = fftn(X)
Pad X with zeros to compute a 32-by-32-by-32 transform.
m = nextpow2(20);
Y = fftn(X,[2^m 2^m 2^m]);
size(Y)
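If you are working in Python rather than MATLAB, here is a rough NumPy equivalent of the example above; a minimal sketch, assuming NumPy is available:

import numpy as np

# Build the same 20-by-20-by-20 signal via broadcasting
x = np.arange(1, 21).reshape(-1, 1, 1)
y = np.arange(1, 21).reshape(1, -1, 1)
z = np.arange(1, 21).reshape(1, 1, -1)
X = np.cos(2*np.pi*0.01*x) + np.sin(2*np.pi*0.02*y) + np.cos(2*np.pi*0.03*z)

Y = np.fft.fftn(X)                       # 20-by-20-by-20 transform
Yp = np.fft.fftn(X, s=(32, 32, 32))      # zero-padded 32-by-32-by-32 transform
print(Yp.shape)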
You can also use this code. First, you might use single instead of double:
psi = single(psi);
fftpsi = fft(psi,[],3);
Next, you might work slice by slice:
psi = rand(10,10,10);
% costly way
fftpsi = fftn(psi);
% This might save you some RAM, to be tested
[m,n,p] = size(psi);
for k = 1:p
    psi(:,:,k) = fftn(psi(:,:,k));
end
psi = reshape(psi,[m*n p]);
for i = 1:m*n % you might work on bigger row-blocks to increase speed
    psi(i,:) = fft(psi(i,:));
end
psi = reshape(psi,[m n p]);
% Check
norm(psi(:)-fftpsi(:))
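For what it's worth, the same slice-by-slice decomposition is easy to verify in NumPy as well; a minimal sketch, assuming NumPy:

import numpy as np

psi = np.random.rand(10, 10, 10)
# 2-D FFT of each slice, then a 1-D FFT along the third dimension,
# matches the full 3-D FFT up to rounding error
step = np.fft.fft(np.fft.fft2(psi, axes=(0, 1)), axis=2)
print(np.linalg.norm(step - np.fft.fftn(psi)))  # ~0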
I hope it will be useful for you
Hello to everyone. The image above is the combination of two images on which I did feature matching and drew all the matching points. I also found the contours of the PCB parts in the first image (the left half of the image; 3 contours). The question is: how could I draw only the matching points that are inside those contours in the first image, instead of this blue mess? I'm using Python 2.7 and OpenCV 2.4.12.
I wrote a function to draw the matches because OpenCV 2.4.12 doesn't have a built-in method for that. If I left something out, please tell me. Thank you in advance!
import numpy as np
import cv2

def drawMatches(img1, kp1, img2, kp2, matches):
    # Create a new output image that concatenates the two images
    # (a.k.a. a montage)
    rows1 = img1.shape[0]
    cols1 = img1.shape[1]
    rows2 = img2.shape[0]
    cols2 = img2.shape[1]

    # Create the output image
    # The rows of the output are the largest between the two images
    # and the columns are simply the sum of the two together
    # The intent is to make this a colour image, so make this 3 channels
    out = np.zeros((max([rows1, rows2]), cols1 + cols2, 3), dtype='uint8')

    # Place the first image to the left
    out[:rows1, :cols1] = np.dstack([img1, img1, img1])

    # Place the next image to the right of it
    out[:rows2, cols1:] = np.dstack([img2, img2, img2])

    # For each pair of points we have between both images
    # draw circles, then connect a line between them
    for mat in matches:
        # Get the matching keypoints for each of the images
        img1_idx = mat.queryIdx
        img2_idx = mat.trainIdx

        # x - columns
        # y - rows
        (x1, y1) = kp1[img1_idx].pt
        (x2, y2) = kp2[img2_idx].pt

        # Draw a small circle at both co-ordinates
        # radius 4, colour blue, thickness = 1
        cv2.circle(out, (int(x1), int(y1)), 4, (255, 0, 0), 1)
        cv2.circle(out, (int(x2) + cols1, int(y2)), 4, (255, 0, 0), 1)

        # Draw a line in between the two points
        # thickness = 1, colour blue
        cv2.line(out, (int(x1), int(y1)), (int(x2) + cols1, int(y2)), (255, 0, 0), 1)

    # Show the image
    cv2.imshow('Matched Features', out)
    cv2.imwrite("shift_points.png", out)
    cv2.waitKey(0)
    cv2.destroyWindow('Matched Features')

    # Also return the image if you'd like a copy
    return out

img1 = cv2.imread('pic3.png', 0)  # Original image - ensure grayscale
img2 = cv2.imread('pic1.png', 0)  # Rotated image - ensure grayscale

sift = cv2.SIFT()

# find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Create matcher
bf = cv2.BFMatcher()

# Perform KNN matching
matches = bf.knnMatch(des1, des2, k=2)

# Apply ratio test
good = []
for m, n in matches:
    if m.distance < 0.75 * n.distance:
        # Add first matched keypoint to list if ratio test passes
        good.append(m)

# Draw the good matches - also save a copy for use later
out = drawMatches(img1, kp1, img2, kp2, good)
Based on what you are asking, I am assuming you have some sort of closed contour outlining the areas you want to bound your point pairs to.
This is fairly simple for polygonal contours; more math is required for complex curved contours, but the solution is the same.
You cast a ray from the point in question to infinity. Most people cast it toward +x infinity, but any direction works. If the ray crosses the contour an odd number of times, the point is inside the contour.
See this article:
http://www.geeksforgeeks.org/how-to-check-if-a-given-point-lies-inside-a-polygon/
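In Python, a minimal sketch of that even-odd test might look like this (my own illustration, not tested against your data; poly is a list of (x, y) vertices):

def point_in_polygon(pt, poly):
    # Cast a ray from pt toward +x infinity and count edge crossings
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Does this edge straddle the horizontal line through pt,
        # and does the crossing lie to the right of pt?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside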
For point pairs, only pairs where both points are inside the contour are fully inside the contour. For complex contour shapes with concave sections, if you also want to check that the straight path between the points does not cross the contour, perform a similar test with just the line segment between the two points: if there are any intersections, the direct path between the points crosses outside the contour.
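Since you are already in OpenCV, note that cv2.pointPolygonTest performs exactly this inside/outside test against a contour. A rough sketch of filtering your matches with it (kp1 and good are the variables from your code above; contours is assumed to be the list you got from cv2.findContours):

import cv2

def filter_matches_by_contours(kp1, matches, contours):
    kept = []
    for mat in matches:
        x, y = kp1[mat.queryIdx].pt
        for cnt in contours:
            # pointPolygonTest returns +1 inside, 0 on the edge, -1 outside
            if cv2.pointPolygonTest(cnt, (x, y), False) >= 0:
                kept.append(mat)
                break
    return kept

# inside = filter_matches_by_contours(kp1, good, contours)
# out = drawMatches(img1, kp1, img2, kp2, inside)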
Edit:
Since your contours are rectangles, a simpler approach will suffice for determining if your points are inside the rectangle.
If your rectangles are axis aligned (they are straight and not rotated), then you can use your values for top,left and bottom,right to check.
Let point A = Top,Left, point B = Bottom,Right, and point C = your test point.
I am assuming an image based coordinate system where 0,0 is the left,top of the image, and width,height is the bottom right. (I'm writing in C#)
bool PointIsInside(Point A, Point B, Point C)
{
    return A.X <= C.X && B.X >= C.X && A.Y <= C.Y && B.Y >= C.Y;
}
If your rectangle is NOT axis aligned, then you can perform four half-space tests to determine if your point is inside the rectangle.
Let Point A = Top,Left, Point B = Bottom,Right, double W = Width, double H = Height, double R = rotation angle (in degrees), and Point C = test point.
For an axis-aligned rectangle, Top,Right can be calculated by taking the vector (1,0), multiplying it by Width, and adding the result to Top,Left. For Bottom,Right we take the vector (0,1), multiply it by Height, and add the result to Top,Right.
(1,0) is the equivalent of a Unit Vector (length of 1) at Angle 0. Similarly, (0,1) is a unit vector at angle 90 degrees. These vectors can also be considered the direction the line is pointing. This also means these same vectors can be used to go from Bottom,Left to Bottom,Right, and from Top,Left to Bottom,Left as well.
We need to use different unit vectors, at the angle provided. To do this we simply take the cosine and sine of that angle.
Let Vector X = direction from Top,Left to Top,Right, Vector Y = direction from Top,Right to Bottom,Right.
I am using angles in degrees for this example.
Vector X = new Vector();
Vector Y = new Vector();
double rad = R * Math.PI / 180.0; // Math.Cos/Math.Sin expect radians
X.X = Math.Cos(rad);
X.Y = Math.Sin(rad);
Y.X = Math.Cos(rad + Math.PI / 2.0);
Y.Y = Math.Sin(rad + Math.PI / 2.0);
Since we started with Top,Left, we can find Bottom,Right by adding the two vectors, scaled by the rectangle's width and height, to Top,Left:
Point B = new Point();
B = A + X * W + Y * H;
We now want to do a half-space test using the dot product for our test point. The first two tests will use the test point and Top,Left; the other two will use the test point and Bottom,Right.
The half-space test is inherently based on directionality: is the point in front of, behind, or perpendicular to a given direction? We have the two directions we need, but they are based at the top,left point of the rectangle, not the full space of the image, so we need a vector from top,left to the point in question, and another from bottom,right, since those are the two points we test against.
This is simple to calculate, as it is just Destination - Origin.
Let Vector D = Top,Left to test point C, and Vector E = Bottom,Right to test point.
Vector D = C - A;
Vector E = C - B;
The dot product of two vectors is x1 * x2 + y1 * y2. If the result is positive, the two directions have an absolute angle of less than 90 degrees, i.e. they are roughly going in the same direction. A result of zero means they are perpendicular; in our case it means the test point is directly on a side of the rectangle we are testing against. Less than zero means an absolute angle of greater than 90 degrees, i.e. they are roughly going in opposite directions.
If a point is inside the rectangle, then the dot products from top left will be >= 0, while the dot products from bottom right will be <= 0. In essence the test point is closer to bottom right when testing from top left, but when taking the same directions when we are already at bottom right, it will be going away, back toward top,left.
double DotProd(Vector V1, Vector V2)
{
    return V1.X * V2.X + V1.Y * V2.Y;
}
and so our test ends up as:
if( DotProd(X, D) >= 0 && DotProd(Y, D) >= 0 && DotProd(X, E) <= 0 && DotProd(Y, E) <= 0)
then the point is inside the rectangle. Do this for both points; if both are true, then the line is inside the rectangle.
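Putting the rotated-rectangle test together, here is a short Python sketch of the procedure above (the function and variable names are mine):

import math

def point_in_rotated_rect(a, w, h, angle_deg, c):
    t = math.radians(angle_deg)
    X = (math.cos(t), math.sin(t))     # direction along the top edge
    Y = (-math.sin(t), math.cos(t))    # direction along the left edge (t + 90 degrees)
    b = (a[0] + X[0]*w + Y[0]*h,       # bottom-right corner
         a[1] + X[1]*w + Y[1]*h)
    d = (c[0] - a[0], c[1] - a[1])     # top-left -> test point
    e = (c[0] - b[0], c[1] - b[1])     # bottom-right -> test point
    dot = lambda u, v: u[0]*v[0] + u[1]*v[1]
    return (dot(X, d) >= 0 and dot(Y, d) >= 0 and
            dot(X, e) <= 0 and dot(Y, e) <= 0)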
Suppose I have an image A. I applied a Gaussian blur with sigma = 3 on it, and got another image B. Is there a way to determine the applied sigma if A and B are given?
Further clarification: images A (the original) and B (the blurred result) were attached here.
I want to write a function that takes A and B and returns sigma:
double get_sigma(cv::Mat const& A,cv::Mat const& B);
Any suggestions?
EDIT1: The suggested approach doesn't work in practice in its original form (i.e. using only 9 equations for a 3 x 3 kernel); I realized this later. See EDIT1 below for an explanation and EDIT2 for a method that works.
EDIT2: As suggested by Humam, I used the Least Squares Estimate (LSE) to find the coefficients.
I think you can estimate the filter kernel by solving a linear system of equations in this case. A linear filter weights the pixels in a window by its coefficients, takes their sum, and assigns this value to the center pixel of the window in the result image. So, for a 3 x 3 filter with coefficients h11, h12, ..., h33,
the resulting pixel value in the filtered image is
result_pix_value = h11 * a(y, x) + h12 * a(y, x+1) + h13 * a(y, x+2) +
h21 * a(y+1, x) + h22 * a(y+1, x+1) + h23 * a(y+1, x+2) +
h31 * a(y+2, x) + h32 * a(y+2, x+1) + h33 * a(y+2, x+2)
where a's are the pixel values within the window in the original image. Here, for the 3 x 3 filter you have 9 unknowns, so you need 9 equations. You can obtain those 9 equations using 9 pixels in the resulting image. Then you can form an Ax = b system and solve for x to obtain the filter coefficients. With the coefficients available, I think you can find the sigma.
In the following example I'm using non-overlapping windows to obtain the equations.
You don't have to know the size of the filter. If you use a larger size, the coefficients that are not relevant will be close to zero.
Your result image's size is different from the input image's, so I didn't use it for the following calculation. Instead, I used your input image and applied my own filter.
I tested this in Octave. You can quickly run it if you have Octave/Matlab. For Octave, you need to load the image package.
I'm using the following kernel to blur the image:
h =
0.10963 0.11184 0.10963
0.11184 0.11410 0.11184
0.10963 0.11184 0.10963
When I estimate it using a window size 5, I get the following. As I said, the coefficients that are not relevant are close to zero.
g =
9.5787e-015 -3.1508e-014 1.2974e-015 -3.4897e-015 1.2739e-014
-3.7248e-014 1.0963e-001 1.1184e-001 1.0963e-001 1.8418e-015
4.1825e-014 1.1184e-001 1.1410e-001 1.1184e-001 -7.3554e-014
-2.4861e-014 1.0963e-001 1.1184e-001 1.0963e-001 9.7664e-014
1.3692e-014 4.6182e-016 -2.9215e-014 3.1305e-014 -4.4875e-014
EDIT1:
First of all, my apologies.
This approach doesn't really work in practice. I used filt = conv2(a, h, 'same'); in the code. The resulting image data type in this case is double, whereas an actual image's data type is usually uint8, so there's a loss of information, which we can think of as noise. I simulated this with the minor modification filt = floor(conv2(a, h, 'same'));, and then I don't get the expected results.
The sampling approach is also not ideal, because it may result in a degenerate system. A better approach is random sampling, avoiding the borders and making sure the entries in the b vector are unique; that is how, in the ideal case as in my code, we make sure the system Ax = b has a unique solution.
One approach would be to reformulate this as an Mv = 0 system and try to minimize the squared norm of Mv under the constraint ||v||^2 = 1, which we can solve using the SVD. I could be wrong here, and I haven't tried this.
Another approach is to use the symmetry of the Gaussian kernel. Then a 3x3 kernel has only 3 unknowns instead of 9. I think this way we impose additional constraints on the v of the above paragraph.
I'll try these out and post the results, even if I don't get the expected results.
EDIT2:
Using the LSE, we can find the filter coefficients as pinv(A'*A)*A'*b. For completeness, I'm adding a simple (and slow) LSE code.
Initial Octave Code:
clear all
im = double(imread('I2vxD.png'));
k = 5;
r = floor(k/2);
a = im(:, :, 1); % take the red channel
h = fspecial('gaussian', [3 3], 5); % filter with a 3x3 gaussian
filt = conv2(a, h, 'same');
% use non-overlapping windows for the Ax = b system
% NOTE: boundary error checking isn't performed in the code below
s = floor(size(a)/2);
y = s(1);
x = s(2);
w = k*k;
y1 = s(1)-floor(w/2) + r;
y2 = s(1)+floor(w/2);
x1 = s(2)-floor(w/2) + r;
x2 = s(2)+floor(w/2);
b = [];
A = [];
for y = y1:k:y2
    for x = x1:k:x2
        b = [b; filt(y, x)];
        f = a(y-r:y+r, x-r:x+r);
        A = [A; f(:)'];
    end
end
% estimated filter kernel
g = reshape(A\b, k, k)
LSE method:
clear all
im = double(imread('I2vxD.png'));
k = 5;
r = floor(k/2);
a = im(:, :, 1); % take the red channel
h = fspecial('gaussian', [3 3], 5); % filter with a 3x3 gaussian
filt = floor(conv2(a, h, 'same'));
s = size(a);
y1 = r+2; y2 = s(1)-r-2;
x1 = r+2; x2 = s(2)-r-2;
b = [];
A = [];
for y = y1:2:y2
    for x = x1:2:x2
        b = [b; filt(y, x)];
        f = a(y-r:y+r, x-r:x+r);
        f = f(:)';
        A = [A; f];
    end
end
g = reshape(A\b, k, k) % A\b returns the least squares solution
%g = reshape(pinv(A'*A)*A'*b, k, k)
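If you would rather do this in Python, here is a rough NumPy sketch of the same least-squares estimation. The sigma_from_kernel step is my own addition: for a true Gaussian, the ratio of the center coefficient to its horizontal neighbour is exp(1/(2*sigma^2)), so sigma can be read off the estimated kernel.

import numpy as np

def estimate_kernel(a, filt, k, step=2):
    # Build the A matrix and b vector from k-by-k windows, as in the Octave code
    r = k // 2
    A, b = [], []
    for y in range(r + 2, a.shape[0] - r - 2, step):
        for x in range(r + 2, a.shape[1] - r - 2, step):
            A.append(a[y - r:y + r + 1, x - r:x + r + 1].ravel())
            b.append(filt[y, x])
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    g = np.linalg.lstsq(A, b, rcond=None)[0]  # least-squares solution
    return g.reshape(k, k)

def sigma_from_kernel(g):
    c = g.shape[0] // 2
    ratio = g[c, c] / g[c, c + 1]      # equals exp(1/(2*sigma^2)) for a Gaussian
    return np.sqrt(1.0 / (2.0 * np.log(ratio)))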
I have an OpenGL program (written in Delphi) that lets the user draw a polygon. I want to automatically revolve (lathe) it around an axis (say, the Y axis) and get a 3D shape.
How can I do this?
For simplicity, you could force at least one point to lie on the axis of rotation. You can do this easily by adding/subtracting the same value to all the x values, and the same value to all the y values, of the points in the polygon. It will retain the original shape.
The rest isn't really that hard. Pick an angle that is fairly small, say one or two degrees, and work out the coordinates of the polygon vertices as it spins around the axis. Then just join up the points with triangle fans and triangle strips.
Rotating a point around an axis is just basic trigonometry. At 0 degrees of rotation you have the points at their 2-D coordinates, with a value of 0 in the third dimension.
Let's assume the points are in X and Y and we are rotating around Y. The original X coordinate represents the hypotenuse. At 1 degree of rotation, we have:
sin(1) = z/hypotenuse
cos(1) = x/hypotenuse
(assuming degree-based trig functions)
To rotate a point (x, y) by angle T around the Y axis to produce a 3d point (x', y', z'):
y' = y
x' = x * cos(T)
z' = x * sin(T)
So for each point on the edge of your polygon you produce a circle of 360 points centered on the axis of rotation.
Now make a 3d shape like so:
create a GL 'triangle fan' by using your center point and the first array of rotated points
for each successive array, create a triangle strip using the points in the array and the points in the previous array
finish by creating another triangle fan centered on the center point and using the points in the last array
One thing to note is that the trig functions I've used measure angles in radians, while OpenGL uses degrees. To convert degrees to radians, the formula is:
radians = degrees / 180 * pi
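As a concrete sketch of generating those rings of rotated points (Python-style pseudocode rather than Delphi; the names are mine):

import math

def make_rings(profile, slices=360):
    # profile is a list of (x, y) points with x >= 0;
    # returns rings[i][j] = (x, y, z) for slice i, profile point j
    rings = []
    for i in range(slices):
        t = math.radians(i * 360.0 / slices)
        rings.append([(x * math.cos(t), y, x * math.sin(t)) for (x, y) in profile])
    return rings

Adjacent rings are then joined with triangle strips, and the profile points that lie on the axis become the centers of the two triangle fans.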
Essentially the strategy is to sweep the profile given by the user around the given axis and generate a series of triangle strips connecting adjacent slices.
Assume that the user has drawn the polygon in the XZ plane. Further, assume that the user intends to sweep around the Z axis (i.e. the line X = 0) to generate the solid of revolution, and that one edge of the polygon lies on that axis (you can generalize later once you have this simplified case working).
For simple enough geometry, you can treat the perimeter of the polygon as a function x = f(z), that is, assume there is a unique X value for every Z value. When we go to 3D, this function becomes r = f(z), that is, the radius is unique over the length of the object.
Now, suppose we want to approximate the solid with M "slices" each spanning 2 * Pi / M radians. We'll use N "stacks" (samples in the Z dimension) as well. For each such slice, we can build a triangle strip connecting the points on one slice (i) with the points on slice (i+1). Here's some pseudo-ish code describing the process:
double dTheta = 2.0 * pi / M;
double dZ = (zMax - zMin) / N;

// Iterate over "slices"
for (int i = 0; i < M; ++i) {
    double theta = i * dTheta;
    double theta_next = (i+1) * dTheta;

    // Iterate over "stacks":
    for (int j = 0; j <= N; ++j) {
        double z = zMin + j * dZ; // note: j, not i -- we step through the stacks here

        // Get cross-sectional radius at this Z location from your 2D model (was the
        // X coordinate in the 2D polygon):
        double r = f(z); // See above definition

        // Convert 2D to 3D by sweeping by angle represented by this slice:
        double x = r * cos(theta);
        double y = r * sin(theta);

        // Get coordinates of next slice over so we can join them with a triangle strip:
        double xNext = r * cos(theta_next);
        double yNext = r * sin(theta_next);

        // Add these two points to your triangle strip (heavy pseudocode):
        strip.AddPoint(x, y, z);
        strip.AddPoint(xNext, yNext, z);
    }
}
That's the basic idea. As sje697 said, you'll possibly need to add end caps to keep the geometry closed (i.e. a solid object, rather than a shell). But this should give you enough to get you going. This can easily be generalized to toroidal shapes as well (though you won't have a one-to-one r = f(z) function in that case).
If you just want it to rotate, then:
glRotatef(angle,0,1,0);
will rotate it around the Y-axis. If you want a lathe, then this is far more complex.
This is the formula for LoG filtering:
LoG(x, y) = -(1 / (pi * sigma^4)) * (1 - (x^2 + y^2) / (2 * sigma^2)) * exp(-(x^2 + y^2) / (2 * sigma^2))
(source: ed.ac.uk)
Also, in applications with LoG filtering, I see that the function is called with only one parameter: sigma (σ).
I want to try LoG filtering using that formula (my previous attempt was a Gaussian filter followed by a Laplacian filter, with some filter-window size).
But looking at that formula, I can't understand how the size of the filter is connected to it. Does this mean that the filter size is fixed?
Can you explain how to use it?
As you've probably figured out by now from the other answers and links, LoG filter detects edges and lines in the image. What is still missing is an explanation of what σ is.
σ is the scale of the filter. Is a one-pixel-wide line a line or noise? Is a line 6 pixels wide a line or an object with two distinct parallel edges? Is a gradient that changes from black to white across 6 or 8 pixels an edge or just a gradient? It's something you have to decide, and the value of σ reflects your decision: the larger σ is, the wider the lines, the smoother the edges, and the more noise is ignored.
Do not get confused between the scale of the filter (σ) and the size of the discrete approximation (usually called a stencil). In Paul's link σ = 1.4 and the stencil size is 9. While it is usually reasonable to use a stencil size of 4σ to 6σ, the two quantities are quite independent. A larger stencil provides a better approximation of the filter, but in most cases you don't need a very good approximation.
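As a rough rule-of-thumb sketch (the choice of 6σ total width is a common convention, not a hard rule):

import math

def stencil_size(sigma, k=6.0):
    # Total width of roughly k*sigma, rounded up to the next odd integer
    size = int(math.ceil(k * sigma))
    return size if size % 2 == 1 else size + 1

print(stencil_size(1.4))  # 9, matching the stencil in Paul's link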
This was something that confused me too, and it wasn't until I had to do the same as you for a uni project that I understood what you were supposed to do with the formula!
You can use this formula to generate a discrete LoG filter. If you write a bit of code to implement it, you can then generate a filter for use in image convolution. To generate, say, a 5x5 template, simply call the code with x and y ranging from -2 to +2.
This will generate the values to use in a LoG template. If you graph the values this produces you should see the "mexican hat" shape typical of this filter, like so:
(source: ed.ac.uk)
You can fine-tune the template by changing how wide it is (the size) and the sigma value (how broad the peak is). The wider and broader the template, the less the result will be affected by noise, because it operates over a wider area.
Once you have the filter, you can apply it to the image by convolving the template with the image. If you've not done this before, check out these few tutorials.
There are java applet tutorials, and some more mathsy ones.
Essentially, at each pixel location, you "place" your convolution template, centred at that pixel. You then multiply the surrounding pixel values by the corresponding "pixel" in the template and add up the result. This is then the new pixel value at that location (typically you also have to normalise (scale) the output to bring it back into the correct value range).
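In code, that per-pixel step might look like this; a minimal sketch assuming a NumPy image and template, with normalisation omitted (for a symmetric template such as LoG, correlation and convolution coincide):

import numpy as np

def convolve_at(img, template, y, x):
    # Place the template centred at (y, x), multiply element-wise and sum
    r = template.shape[0] // 2
    window = img[y - r:y + r + 1, x - r:x + r + 1]
    return float(np.sum(window * template))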
The code below gives a rough idea of how you might implement this. Please forgive any mistakes / typos etc. as it hasn't been tested.
I hope this helps.
private float LoG(float x, float y, float sigma)
{
    // implement the formula here
    float r2 = x * x + y * y;
    float s2 = sigma * sigma;
    return (float)((-1.0 / (Math.PI * s2 * s2))
                   * (1.0 - r2 / (2.0 * s2))
                   * Math.exp(-r2 / (2.0 * s2)));
}
private float[][] GenerateTemplate(int templateSize, float sigma)
{
    // Make sure it's an odd number for convenience
    if (templateSize % 2 == 1)
    {
        // Create the data array
        float[][] template = new float[templateSize][templateSize];
        // Work out the min and max values. LoG is centred around (0, 0),
        // so for a size 5 template (say) we want to feed the values
        // -2, -1, 0, +1, +2 into the formula.
        int min = -(templateSize / 2);
        int max = templateSize / 2;
        for (int x = min; x <= max; ++x)
        {
            for (int y = min; y <= max; ++y)
            {
                // Get the LoG value for this (x, y) pair;
                // shift by -min to index into the array
                template[x - min][y - min] = LoG(x, y, sigma);
            }
        }
        return template;
    }
    return null;
}
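If you are in Python rather than Java, here is a rough equivalent sketch using NumPy/SciPy (my own translation of the same formula):

import numpy as np
from scipy.signal import convolve2d

def log_kernel(size, sigma):
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    r2 = x**2 + y**2
    s2 = sigma**2
    return -1.0 / (np.pi * s2**2) * (1 - r2 / (2 * s2)) * np.exp(-r2 / (2 * s2))

# filtered = convolve2d(image.astype(float), log_kernel(9, 1.4), mode='same')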
Just for visualization purposes, here is a simple Matlab 3D colored plot of the Laplacian of Gaussian (Mexican Hat) wavelet. You can change the sigma(σ) parameter and see its effect on the shape of the graph:
sigmaSq = 0.5 % Square of σ parameter
[x y] = meshgrid(linspace(-3,3), linspace(-3,3));
z = (-1/(pi*(sigmaSq^2))) .* (1-((x.^2+y.^2)/(2*sigmaSq))) .*exp(-(x.^2+y.^2)/(2*sigmaSq));
surf(x,y,z)
You could also compare the effects of the sigma parameter on the Mexican Hat doing the following:
t = -5:0.01:5;
sigma = 0.5;
mexhat05 = exp(-t.*t/(2*sigma*sigma)) * 2 .*(t.*t/(sigma*sigma) - 1) / (pi^(1/4)*sqrt(3*sigma));
sigma = 1;
mexhat1 = exp(-t.*t/(2*sigma*sigma)) * 2 .*(t.*t/(sigma*sigma) - 1) / (pi^(1/4)*sqrt(3*sigma));
sigma = 2;
mexhat2 = exp(-t.*t/(2*sigma*sigma)) * 2 .*(t.*t/(sigma*sigma) - 1) / (pi^(1/4)*sqrt(3*sigma));
plot(t, mexhat05, 'r', ...
t, mexhat1, 'b', ...
t, mexhat2, 'g');
Or simply use the Wavelet toolbox provided by Matlab as follows:
lb = -5; ub = 5; n = 1000;
[psi,x] = mexihat(lb,ub,n);
plot(x,psi), title('Mexican hat wavelet')
I found this useful when implementing edge detection for computer vision. Although it's not an exact answer, I hope it helps.
It appears to be a continuous circular filter whose radius is sqrt(2) * sigma. If you want to implement this for image processing you'll need to approximate it.
There's an example for sigma = 1.4 here: http://homepages.inf.ed.ac.uk/rbf/HIPR2/log.htm