I am trying to plot a direction field; that is, I want to plot a normalized vector field.
% initialise vector field
f=@(x,y)(2.*x-9).*(cos(y)).^2;
A=@(x,y)x-x+1;
B=@(x,y)f(x,y);
% plot the slope field
hold on
[x, y]=meshgrid(a:(b-a)/m:b , c:(d-c)/n:d);
quiver(x,y,A(x,y),B(x,y),1,'color','blue','linewidth',1.2)
[x,y] = ode45(f,[x0 b],y0);
plot(x,y,'linewidth',1,'color','black')
I have searched the forums, and they suggest the following code:
dy = f(x,y)
dx = ones(size(dy))
dyu = dy./sqrt(dx.^2+dy.^2)
dxu = dx./sqrt(dx.^2+dy.^2)
I have tried it, but it does not work due to a size error. Can anyone suggest a modification to my code so that I can plot a normalized vector field?
Many thanks in advance!
Hough transformation using the normal equation of a line.
To calculate the value of r while keeping the values of x and y the same for different θ, the following formula is used:
r = sin(θ)y + cos(θ)x
But the results are not the same as those shown in the slides. Am I missing something? I am a newbie, so please be gentle.
As @Ash explained in the comment section above, the angles are expected to be expressed in radians, not degrees.
So r = sin(θ*pi/180)y + cos(θ*pi/180)x will give the correct answer.
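A quick NumPy sketch of the difference (the point and angle values here are made up, purely to illustrate the conversion):

import numpy as np

# Made-up point and angle, just to show the degree/radian issue
x, y = 3.0, 4.0
theta_deg = 30.0

# Wrong: sin/cos interpret their argument as radians, not degrees
r_wrong = np.sin(theta_deg) * y + np.cos(theta_deg) * x

# Right: convert degrees to radians first
theta_rad = np.deg2rad(theta_deg)          # same as theta_deg * np.pi / 180
r_right = np.sin(theta_rad) * y + np.cos(theta_rad) * x

print(r_wrong, r_right)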
I have three points: (1,1), (2,3), (3, 3.123). I assume the hypothesis is h(x) = θ1*x + θ2, and I want to do linear regression on the three points. I have two methods to calculate θ:
Method-1: Least Squares
import numpy as np
# Get an approximate solution using least squares
X = np.array([[1,1],[2,1],[3,1]])
y = np.array([1,3,3.123])
theta = np.linalg.lstsq(X, y, rcond=None)[0]
print(theta)
Method-2: Matrix multiplication
Here θ is computed directly as θ = (X'X)^(-1)X'y:
# rank(X)=2, rank(X|y)=3, so there is no exact solution.
print(np.linalg.matrix_rank(X))
print(np.linalg.matrix_rank(np.c_[X, y]))
# Compute theta = (X'X)^(-1) X'y directly
theta = np.linalg.inv(X.T.dot(X)).dot(X.T.dot(y))
print(theta)
Both Method-1 and Method-2 give the result [1.0615, 0.25133333], so it seems that Method-2 is equivalent to least squares. But I don't know why; can anyone reveal the underlying principle behind their equivalence?
Both approaches are equivalent, because the least squares method is

θ = argmin (Xθ-Y)'(Xθ-Y) = argmin ||Xθ-Y||^2 = argmin ||Xθ-Y||,

which means you try to minimize the length of the vector Xθ-Y, i.e. the distance between Xθ and Y. X is a constant matrix, so Xθ is a vector from the column space of X. The distance is therefore smallest when Xθ is the projection of Y onto the column space of X (this is easy to see geometrically). That gives Ŷ = Xθ = X(X'X)^(-1)X'Y, where X(X'X)^(-1)X' is the projection matrix onto the column space of X. Multiplying both sides by X' and simplifying shows that this is equivalent to the normal equations (X'X)θ = X'Y, i.e. θ = (X'X)^(-1)X'Y, which is exactly what Method-2 computes. You can find an exact proof in any linear algebra book.
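As a quick numerical check of the equivalence on the three points from the question (just a sketch; in practice np.linalg.solve on the normal equations is preferable to explicitly inverting X'X):

import numpy as np

X = np.array([[1, 1], [2, 1], [3, 1]], dtype=float)
y = np.array([1, 3, 3.123])

# Method-1: least-squares solver
theta_lstsq = np.linalg.lstsq(X, y, rcond=None)[0]

# Method-2: normal equations (X'X) theta = X'y
theta_normal = np.linalg.solve(X.T @ X, X.T @ y)

# Projection view: project y onto the column space of X, then solve X theta = y_hat
P = X @ np.linalg.inv(X.T @ X) @ X.T
theta_proj = np.linalg.lstsq(X, P @ y, rcond=None)[0]

print(theta_lstsq, theta_normal, theta_proj)
print(np.allclose(theta_lstsq, theta_normal), np.allclose(theta_normal, theta_proj))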
I have an input device that gives me the pressures from a 2D array of sensors.
If I treat these pressures as the Z dimension, and the column and row of the sensor as X and Y, what is the OpenCV way to find the center of mass, assuming uniform density?
This is just a thought experiment for a method, so don't be too harsh. You have a set of points, each with a location (X, Y) and a weight Z.
Take any two points and find the center of mass for them. Replace these two points with a single point at the new X,Y and a new Z.
Keep doing this until you have only one point left, which is the answer you seek.
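A small Python sketch of that pairwise-merging idea (the example points and weights are made up):

# Iteratively merge points: combining two weighted points at their joint
# center of mass, with their combined weight, preserves the overall center of mass.
def merge(p, q):
    (x1, y1, w1), (x2, y2, w2) = p, q
    w = w1 + w2
    return ((x1 * w1 + x2 * w2) / w, (y1 * w1 + y2 * w2) / w, w)

points = [(0, 0, 5), (1, 0, 5), (0, 1, 5), (1, 1, 10)]   # (x, y, weight)
while len(points) > 1:
    points.append(merge(points.pop(), points.pop()))

print(points[0])   # (x, y, total weight) of the center of mass

The end result is just the weighted average of all the points, computed incrementally.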
This seems to be the OpenCV way of calculating the mass center. For a 2x2 array set to 5, 5, 5, 10:

#include <opencv2/opencv.hpp>
using namespace cv;

// 2x2 pressure map: three sensors read 5, one reads 10
Mat m(2, 2, CV_8UC1);
m.at<unsigned char>(0,0) = 5;
m.at<unsigned char>(0,1) = 5;
m.at<unsigned char>(1,0) = 5;
m.at<unsigned char>(1,1) = 10;

// m00 is the total mass; m10/m00 and m01/m00 give the centroid
Moments mm = moments(m, false);
Point3f mass_center(mm.m10/mm.m00, mm.m01/mm.m00, mm.m00/m.total()/2);

The mass center comes out at (0.6, 0.6, 3.125).
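For anyone using the Python bindings, the same thing looks roughly like this (a sketch; cv2.moments returns the moments as a dict):

import numpy as np
import cv2

# Same 2x2 pressure map as above
m = np.array([[5, 5],
              [5, 10]], dtype=np.uint8)

mm = cv2.moments(m)
print(mm['m10'] / mm['m00'], mm['m01'] / mm['m00'])   # 0.6 0.6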
I'd like to compute a sort of direction field on a 2D image, as (poorly) illustrated in this Photoshop mockup. NOTE: this is NOT a vector field as you learn about in differential equations. Instead, it is something that draws along the lines one would see when computing level sets of the image.
Are there known methods of obtaining this type of direction field (red lines) of an image? It almost behaves like the normal to the gradient, but that isn't exactly it either, since there are places where the gradient is zero and I'd like a direction at those locations as well.
I was able to find a paper on how to do this for fingerprint processing that went into enough detail that their results were repeatable. It's unfortunately behind a paywall, but here it is for anyone interested and able to access the full text:
Systematic methods for the computation of the directional fields and singular points of fingerprints
EDIT: As requested, here is a quick and dirty summary (in Python) of how this is achieved in the above paper.
A naive approach would be to average the gradient in a small square neighborhood around the target pixel, much like the superimposed grid on the image in the question, and then compute the normal. However, if you simply average the gradient, opposite gradients in the region can cancel each other out (e.g. when computing the orientation along a ridge). Thus, it is common to work with squared gradients instead, since gradients pointing in opposite directions then become aligned. There is a clever formula for the squared gradient in terms of the original gradient components Gx and Gy. I won't give the derivation, but here is the formula:

Gs,x = Gx^2 - Gy^2
Gs,y = 2*Gx*Gy
Now, take the sum of squared gradients over the region (modulo some piece-wise defined compensations for the way angles work). Finally, through some arctangent magic, you'll get the orientation field.
If you run the following code on a smooth grayscale bitmap image with the grid-size chosen appropriately and then plot the orientation field O alongside your original image, you'll see how the orientation field more or less gives the angles I asked about in my original question.
from scipy import misc   # (misc.imread is removed in newer SciPy; imageio.imread can be used instead)
import numpy as np
import math

# Import the grayscale image
bmp = misc.imread('path/filename.bmp')

# Compute the gradient - VERY important to convert to floats!
grad = np.gradient(bmp.astype(float))

# Set the block radius (each block spans about 2*blockRadius pixels,
# like the grid superimposed on the sample image in the question)
blockRadius = 5

# Compute the orientation field. Result will be a matrix of angles in [0, pi),
# one for each pixel in the original (grayscale) image.
O = np.zeros(bmp.shape)

for x in range(0, bmp.shape[0]):
    for y in range(0, bmp.shape[1]):
        numerator = 0.
        denominator = 0.
        # Sum the squared gradients over the block centred on (x, y)
        for i in range(max(0, x - blockRadius), min(bmp.shape[0], x + blockRadius)):
            for j in range(max(0, y - blockRadius), min(bmp.shape[1], y + blockRadius)):
                numerator = numerator + 2. * grad[0][i, j] * grad[1][i, j]
                denominator = denominator + (math.pow(grad[0][i, j], 2.) - math.pow(grad[1][i, j], 2.))
        # Piece-wise definition of the arctangent so the angle lands in the right half-plane
        if denominator == 0:
            O[x, y] = 0
        elif denominator > 0:
            O[x, y] = (1. / 2.) * math.atan(numerator / denominator)
        elif numerator >= 0:
            O[x, y] = (1. / 2.) * (math.atan(numerator / denominator) + math.pi)
        else:
            O[x, y] = (1. / 2.) * (math.atan(numerator / denominator) - math.pi)

# Shift the angles into [0, pi)
for x in range(0, bmp.shape[0]):
    for y in range(0, bmp.shape[1]):
        if O[x, y] <= 0:
            O[x, y] = O[x, y] + math.pi
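For example, one way to visualize the result (a sketch assuming matplotlib is available) is to draw a short segment per sampled pixel at the angle stored in O:

import matplotlib.pyplot as plt

# Subsample the field so the plot stays readable
step = 10
ys, xs = np.mgrid[0:bmp.shape[0]:step, 0:bmp.shape[1]:step]
angles = O[::step, ::step]

plt.imshow(bmp, cmap='gray')
# Draw a segment per sampled pixel along (cos O, sin O); the sign of the
# direction is arbitrary, since an orientation has no sense.
# (imshow's y-axis points down; negate np.sin(angles) if the segments look mirrored)
plt.quiver(xs, ys, np.cos(angles), np.sin(angles), color='red', pivot='mid')
plt.show()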
Cheers!
I created a simple test application to perform translation (T) and rotation (R) estimation from the essential matrix.
Generate 50 random points.
Calculate their projection, pointSet1.
Transform the points via the matrix (R|T).
Calculate the new projection, pointSet2.
Then calculate the fundamental matrix F.
Extract the essential matrix as E = K2^T F K1 (K1, K2 are the internal camera matrices).
Use SVD to get E = UDV^T.
Then calculate restoredR1 = UWV^T and restoredR2 = UW^TV^T, and see that one of them is equal to the initial R.
But when I calculate the translation vector, restoredT = UZU^T, I get a normalized T.
restoredT*max(T.x, T.y, T.z) = T
How can I restore the correct translation vector?
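(For reference, W and Z above are the usual matrices from the essential-matrix decomposition: W = [0 -1 0; 1 0 0; 0 0 1] and Z = [0 1 0; -1 0 0; 0 0 0].)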
I understand now! I don't need a real length estimate at this step.
When I get the first image, I must set a metric transformation (scale factor), or estimate it from calibration against a known object. Then, when I receive the second frame, I calculate the normalized T and use the known 3D coordinates from the first frame to solve the equation (s*x2, s*y2, 1) = K(R|lambda*T)(X, Y, Z, 1) for lambda; lambda*T is then the correct metric translation.
I checked it, and it works. So... maybe someone knows a simpler solution?
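For concreteness, a rough sketch of the lambda step I described (hypothetical names and made-up numbers; with real data you would use several known 3D points and a least-squares fit):

import numpy as np

# Inputs: K, R and the unit-norm T recovered from the essential matrix, one known
# 3D point X (in the first camera's frame) and its pixel coordinates (x2, y2)
# in the second image.
def solve_lambda(K, R, T_unit, X, x2, y2):
    a = K @ (R @ X)      # contribution of the rotated point
    b = K @ T_unit       # contribution of the (unit-length) translation
    # s*(x2, y2, 1) = a + lambda*b gives two linear equations in lambda:
    #   x2*(a[2] + lambda*b[2]) = a[0] + lambda*b[0]
    #   y2*(a[2] + lambda*b[2]) = a[1] + lambda*b[1]
    A = np.array([x2 * b[2] - b[0], y2 * b[2] - b[1]])
    c = np.array([a[0] - x2 * a[2], a[1] - y2 * a[2]])
    return float(A @ c / (A @ A))   # least-squares solution of A*lambda = c

# Example with made-up values: recover lambda from one synthetic point
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
R = np.eye(3)
T_unit = np.array([1., 0., 0.])
X = np.array([0.5, -0.2, 4.0])
lam_true = 2.5
proj = K @ (R @ X + lam_true * T_unit)
x2, y2 = proj[0] / proj[2], proj[1] / proj[2]
print(solve_lambda(K, R, T_unit, X, x2, y2))   # ~2.5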