I am trying to recover the original equation from a DFT.
The DFT of (1,0,0,0) gives (1,1,1,1).
So what is the equation of the wave representing the dataset (1,0,0,0)? I mean something like the following.
f(t) = sin(t) + 0.13 sin(3t)
The DFT of an impulse is just a flat spectrum in the frequency domain, so you have amplitude 1 at each frequency bin. Applying the inverse DFT, which carries a 1/N = 1/4 factor under this convention, gives:
f(t) = (1/4)(1 + cos(2πt/4) + cos(4πt/4) + cos(6πt/4))
     = (1/4)(1 + cos(πt/2) + cos(πt) + cos(3πt/2))
This evaluates to 1 at t = 0 and to 0 at t = 1, 2, 3, reproducing the dataset (1,0,0,0).
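You can check this with numpy (a quick sketch):

import numpy as np

t = np.arange(4)
f = (1 + np.cos(np.pi*t/2) + np.cos(np.pi*t) + np.cos(3*np.pi*t/2)) / 4
print(f)              # [1. 0. 0. 0.]
print(np.fft.fft(f))  # [1.+0.j 1.+0.j 1.+0.j 1.+0.j] up to rounding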
I am new to machine learning. I am trying to find the MSE for the given linear regression model: y = 7.93 + 1.12x. The data values for (X, Y) are (23, 41), (34, 45), (45, 49), (56, 67), (67, 84), (78, 100).
That one is simple!
MSE means mean squared error.
Assume your regression function is f(x), where x is a feature vector of dimension d.
The output of f(x) is a scalar.
The squared error for one data sample (x1, y1), where x1 is a vector in d-dimensional space and y1 is a scalar, is (f(x1) - y1)^2.
To calculate the MSE, compute the squared error of each data point, add all the squared errors together, and divide the sum by the number of data samples.
In your case, the feature vector x has dimension 1,
and f(x) = 7.93 + 1.12*x.
----CODE----
X = (23, 34, 45, 56, 67, 78)
Y = (41, 45, 49, 67, 84, 100)
SE = 0.0
for x, y in zip(X, Y):                  # pair up the samples
    SE = SE + (7.93 + 1.12*x - y)**2    # squared error of the prediction
MSE = SE / len(X)
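Equivalently, with numpy (a small sketch of the same computation):

import numpy as np

X = np.array([23, 34, 45, 56, 67, 78])
Y = np.array([41, 45, 49, 67, 84, 100])
MSE = np.mean((7.93 + 1.12*X - Y)**2)
print(MSE)  # about 29.68 for this data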
I would like to generate a polynomial 'fit' to the cluster of colored pixels in the image here
(The point being that I would like to measure how much that cluster approximates a horizontal line).
I thought of using grabit or something similar and then treating this as a cloud of points in a graph. But is there a quicker function to do so directly on the image file?
thanks!
Here is a Python implementation. Basically we find all (xi, yi) coordinates of the colored regions, then set up a regularized least squares system where we want to find the vector of weights (w0, ..., wd) such that yi = w0 + w1 xi + w2 xi^2 + ... + wd xi^d "as closely as possible" in the least squares sense.
import numpy as np
import matplotlib.pyplot as plt
def rgb2gray(rgb):
    return np.dot(rgb[..., :3], [0.299, 0.587, 0.114])
def feature(x, order=3):
    """Generate polynomial features of the form
    [1, x, x^2, ..., x^order], where x is the column of x-coordinates
    and 1 is the column of ones for the intercept.
    """
    x = x.reshape(-1, 1)
    return np.power(x, np.arange(order + 1).reshape(1, -1))
I_orig = plt.imread("2Md7v.jpg")
# Convert to grayscale
I = rgb2gray(I_orig)
# Mask out region
mask = I > 20
# Get coordinates of pixels corresponding to marked region
X = np.argwhere(mask)
# Use the value as weights later
weights = I[mask] / float(I.max())
# Convert to diagonal matrix
W = np.diag(weights)
# Column indices
x = X[:, 1].reshape(-1, 1)
# Row indices to predict. Note origin is at top left corner
y = X[:, 0]
We want to find the vector w that minimizes || Aw - y ||^2 plus an l2 penalty on w,
so that we can use it to predict y = w . x.
Here are two versions: one is vanilla least squares with l2 regularization, and the other is weighted least squares with l2 regularization.
# Ridge regression, i.e., least squares with l2 regularization.
# Should probably use a more numerically stable implementation,
# e.g., that in Scikit-Learn
# alpha is regularization parameter. Larger alpha => less flexible curve
alpha = 0.01
# Construct data matrix, A
order = 3
A = feature(x, order)
# w = inv(A^T A + alpha * I) A^T y
w_unweighted = np.linalg.pinv(A.T.dot(A) + alpha * np.eye(A.shape[1])).dot(A.T).dot(y)
# w = inv(A^T W A + alpha * I) A^T W y
w_weighted = np.linalg.pinv(A.T.dot(W).dot(A) + alpha * np.eye(A.shape[1])).dot(A.T).dot(W).dot(y)
The result:
# Generate test points
n_samples = 50
x_test = np.linspace(0, I_orig.shape[1], n_samples)
X_test = feature(x_test, order)
# Predict y coordinates at test points
y_test_unweighted = X_test.dot(w_unweighted)
y_test_weighted = X_test.dot(w_weighted)
# Display
fig, ax = plt.subplots(1, 1, figsize=(10, 5))
ax.imshow(I_orig)
ax.plot(x_test, y_test_unweighted, color="green", marker='o', label="Unweighted")
ax.plot(x_test, y_test_weighted, color="blue", marker='x', label="Weighted")
fig.legend()
fig.savefig("curve.png")
For a simple straight-line fit, set the order argument of feature to 1. You can then use the gradient of the line to get a sense of how close it is to a horizontal line (e.g., by checking the angle of its slope).
It is also possible to set this to any degree of polynomial you want. I find that degree 3 looks pretty good. In that case, 6 times the absolute value of the coefficient corresponding to x^3 (w_unweighted[3] or w_weighted[3]) is one measure of the curvature of the line.
See A measure for the curvature of a quadratic polynomial in Matlab for additional details.
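For example, a short sketch of the straight-line check described above (my own addition; it reuses feature, x, y and alpha from the code above):

# Fit a straight line (order=1) and report the angle of its slope.
A1 = feature(x, order=1)
w_line = np.linalg.pinv(A1.T.dot(A1) + alpha * np.eye(2)).dot(A1.T).dot(y)
angle_deg = np.degrees(np.arctan(w_line[1]))  # 0 degrees = perfectly horizontal
print("slope angle: %.2f degrees" % angle_deg)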
I have been studying the algorithms of card.io and I have some difficulties when reading the Hough transform script.
For Hough transform, it is required to setup the accumulator, a grid that stores the vote in (rho, theta) space.
In card.io-dmz/cv/hough.cpp (https://github.com/card-io/card.io-dmz/blob/master/cv/hough.cpp#L99) line 99, it says the number of rho values, numrho, is given by
numrho = cvRound(((width + height) * 2 + 1) / rho);
Here width and height are the dimension of the ROI and rho is the distance resolution.
Question: I do not understand why the numerator is (width + height) * 2 + 1.
My guess is that the + 1 counts the zero value, and the * 2 counts both positive and negative rho.
But I still don't understand why width + height appears. I think it would be more intuitive to replace it with sqrt(width*width + height*height), which is the largest possible value of rho.
This setting is also used in OpenCV (see this link: https://github.com/opencv/opencv/blob/master/modules/imgproc/src/hough.cpp#L128)
Any help would be appreciated. Thanks
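To make my reasoning concrete, here is a quick numeric sketch of the two bounds (the ROI dimensions are just an example; the extreme rho values occur at the corners of the ROI):

import numpy as np

width, height = 640, 480
theta = np.linspace(0, np.pi, 1801)
corners = np.array([[0, 0], [width, 0], [0, height], [width, height]])
rho = corners[:, 0, None] * np.cos(theta) + corners[:, 1, None] * np.sin(theta)
print(np.abs(rho).max())        # ~800, the tight bound
print(np.hypot(width, height))  # 800.0 = sqrt(width^2 + height^2)
print(width + height)           # 1120, the looser bound used in the code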
I have found this way to calculate elongation based on image moments:
# ELONGATION
def elongation(m):
    x = m['mu20'] + m['mu02']
    y = 4 * m['mu11']**2 + (m['mu20'] - m['mu02'])**2
    return (x + y**0.5) / (x - y**0.5)

mom = cv2.moments(unicocnt, 1)
elong = elongation(mom)  # renamed so the result does not shadow the function
How can I calculate the elongation of a convex hull?
hull = cv2.convexHull(unicocnt)
where 'unicocnt' is a contour obtained with findContours.
convexHull can return either the actual hull points or the indices of those points in the input contour. Make sure returnPoints is set to True (1) so that it outputs a vector of points, which you can then pass to cv2.moments.
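A minimal sketch of the full pipeline (assuming unicocnt comes from cv2.findContours and the elongation() function defined above):

import cv2

hull = cv2.convexHull(unicocnt, returnPoints=True)  # hull as actual points
mom_hull = cv2.moments(hull)                        # moments of the hull polygon
print(elongation(mom_hull))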
A one-to-one point matching has already been established between the blue dots on the two images.
Image 2 is the distorted version of image 1. The distortion model seems to be fisheye lens distortion. The question is:
Is there any way to compute a transformation matrix that describes this transition, i.e., a matrix that transforms the blue dots on the first image to their corresponding blue dots on the second image?
The problem here is that we don't know the focal length (meaning the images are uncalibrated); however, we do have a perfect matching between around 200 points on the two images.
the distorted image:
I think what you're trying to do can be treated as a distortion correction problem, without the need of the rest of a classic camera calibration.
A matrix transformation is a linear one, and linear transformations always map straight lines to straight lines (http://en.wikipedia.org/wiki/Linear_map). It is apparent from the picture that the transformation is nonlinear, so you cannot describe it with a matrix operation.
That said, you can use a lens distortion model like the one used by OpenCV (http://docs.opencv.org/doc/tutorials/calib3d/camera_calibration/camera_calibration.html) and obtaining the coefficients shouldn't be very difficult. Here is what you can do in Matlab:
Call (x, y) the coordinates of an original point (top picture) and (xp, yp) the coordinates of the corresponding distorted point (bottom picture), both shifted to the center of the image and divided by a scaling factor (the same for x and y) so they lie more or less in the [-1, 1] interval. The distortion model is:
x = ( xp*(1 + k1*r^2 + k2*r^4 + k3*r^6) + 2*p1*xp*yp + p2*(r^2 + 2*xp^2));
y = ( yp*(1 + k1*r^2 + k2*r^4 + k3*r^6) + 2*p2*xp*yp + p1*(r^2 + 2*yp^2));
Where
r = sqrt(xp^2 + yp^2);
(r is computed from the distorted coordinates, consistent with the code below.)
You have 5 parameters: k1, k2, k3, p1, p2 for radial and tangential distortion and 200 pairs of points, so you can solve the nonlinear system.
Be sure the x, y, xp and yp arrays exist in the workspace and declare them global:
global x y xp yp
Write a function that evaluates the residuals of the model for an arbitrary set of distortion coefficients; say it's called 'dist':
function val = dist(var)
    global x y xp yp
    k1 = var(1);
    k2 = var(2);
    k3 = var(3);
    p1 = var(4);
    p2 = var(5);
    r = sqrt(xp.*xp + yp.*yp);
    temp1 = x - (xp.*(1 + k1*r.^2 + k2*r.^4 + k3*r.^6) + 2*p1*xp.*yp + p2*(r.^2 + 2*xp.^2));
    temp2 = y - (yp.*(1 + k1*r.^2 + k2*r.^4 + k3*r.^6) + 2*p2*xp.*yp + p1*(r.^2 + 2*yp.^2));
    val = sqrt(temp1.*temp1 + temp2.*temp2);
end
Solve the system with 'fsolve':
[coef, fval] = fsolve(@dist, zeros(5,1));
The values in 'coef' are the distortion coefficients you're looking for. To correct the distortion of new points (xp, yp) not present in the original set, use the equations:
r = sqrt(xp.*xp + yp.*yp);
x_corr = xp.*(1 + k1*r.^2 + k2*r.^4 + k3*r.^6) + 2*p1*xp.*yp + p2*(r.^2 + 2*xp.^2);
y_corr = yp.*(1 + k1*r.^2 + k2*r.^4 + k3*r.^6) + 2*p2*xp.*yp + p1*(r.^2 + 2*yp.^2);
Results will be in the shifted and scaled coordinate system, so undo the centering and scaling you applied above to get back to pixel coordinates.
Notes:
Coordinates must be shifted to the center of the image because the distortion is symmetric with respect to it.
It shouldn't be necessary to normalize to the interval [-1, 1], but it is common to do so, so that the distortion coefficients obtained are more or less of the same order of magnitude (working with powers 2, 4 and 6 of raw pixel coordinates would need very small coefficients).
This method doesn't require the points in the image to be on a uniform grid.
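If you prefer Python, here is an equivalent sketch using scipy.optimize.least_squares (my own translation of the Matlab code above, not part of the original answer):

import numpy as np
from scipy.optimize import least_squares

def residuals(var, x, y, xp, yp):
    # Same distortion model as the Matlab 'dist' function above
    k1, k2, k3, p1, p2 = var
    r2 = xp**2 + yp**2
    radial = 1 + k1*r2 + k2*r2**2 + k3*r2**3
    rx = x - (xp*radial + 2*p1*xp*yp + p2*(r2 + 2*xp**2))
    ry = y - (yp*radial + 2*p2*xp*yp + p1*(r2 + 2*yp**2))
    return np.concatenate([rx, ry])

# x, y, xp, yp: 1-D arrays of matched, centered and scaled coordinates
# coef = least_squares(residuals, np.zeros(5), args=(x, y, xp, yp)).x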