I am using cv::solvePnP to return a camera pose. I run PnP, and it returns the rvec and tvec values (the rotation vector and the translation vector).
I then run this function to convert the values to the camera pose:
void GetCameraPoseEigen(cv::Vec3d tvecV, cv::Vec3d rvecV, Eigen::Vector3d &Translate, Eigen::Quaterniond &quats)
{
    Mat R;
    Mat tvec, rvec;
    tvec = DoubleMatFromVec3b(tvecV);
    rvec = DoubleMatFromVec3b(rvecV);

    cv::Rodrigues(rvec, R); // R is 3x3
    R = R.t();              // rotation of inverse
    tvec = -R * tvec;       // translation of inverse

    Eigen::Matrix3d mat;
    cv2eigen(R, mat);

    Eigen::Quaterniond EigenQuat(mat);
    quats = EigenQuat;

    double x_t = tvec.at<double>(0, 0);
    double y_t = tvec.at<double>(1, 0);
    double z_t = tvec.at<double>(2, 0);

    Translate.x() = x_t * 10;
    Translate.y() = y_t * 10;
    Translate.z() = z_t * 10;
}
This works, yet at some rotation angles the converted rotation values flip randomly between positive and negative values, while the source rvecV value does not. I assume this means I am going wrong somewhere in my conversion. How can I get a stable quaternion from the PnP-returned cv::Vec3d?
EDIT: This seems to be Quaternion flipping, as mentioned here:
Quaternion is flipping sign for very similar rotations?
Based on that, I have tried adding:
if (quat.w() < 0)
{
    quat = quat.inverse();
}
But I see the same flipping.
Both quat and -quat represent the same rotation. You can check that by taking a unit quaternion, converting it to a rotation matrix, then doing
quat.coeffs() = -quat.coeffs();
and converting that to a rotation matrix as well.
If for some reason you always want a positive w value, negate all coefficients if w is negative.
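A minimal sketch of that sign fix, assuming the quats output of the conversion function in the question (negating all four coefficients leaves the represented rotation unchanged):
#include <Eigen/Geometry>

// Enforce a consistent sign convention: keep w >= 0 by negating all four
// coefficients. q and -q encode the same rotation, so this only removes
// the double-cover ambiguity; it does not change the pose.
void stabilizeQuaternionSign(Eigen::Quaterniond &q)
{
    if (q.w() < 0.0)
        q.coeffs() *= -1.0;   // negates x, y, z and w together
}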
The sign should not matter rotation-wise, as long as all four fields of the 4D quaternion are flipped. There's more to it explained here:
Quaternion to EulerXYZ, how to differentiate the negative and positive quaternion
Think of it this way: flipping both the angle and the axis means the same rotation, and mind the clockwise-to-counterclockwise transition, much like in a mirror image.
There may be a convention to keep the quat.w() or quat[0] component positive and change the other components to their opposites accordingly. Assume w = cos(angle/2); then requiring w > 0 just means: I want the angle to be within the (-pi, pi) range, so that a -270 degree rotation becomes a +90 degree rotation.
Doing quat.inverse() is probably not what you want, because this creates a rotation in the opposite direction. That is, -quat != quat.inverse().
Also: check that both systems have the same handedness (chirality)! Test if your rotation matrix determinant is +1 or -1.
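As a hedged sketch of that determinant check, using the R produced by cv::Rodrigues in the conversion function above:
#include <opencv2/core.hpp>
#include <cmath>

// A proper rotation matrix has determinant +1. A value of -1 indicates a
// reflection, i.e. the two coordinate systems differ in handedness and one
// axis must be flipped before converting to a quaternion.
bool isProperRotation(const cv::Mat& R)
{
    const double det = cv::determinant(R);
    return std::abs(det - 1.0) < 1e-6;
}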
Related
In OpenCV how do you calculate the average gradient strength in a Mat and the average gradient direction?
I have sourced the methods below by googling, but I want to confirm I am actually doing this correctly before moving on to the next step.
Is this correct?
Mat src = imread("foo.png", IMREAD_GRAYSCALE); // read image as grayscale single channel
// Calculate the mean intensity and the std deviation
// Any errors here or am I doing this correctly?
Scalar sMean, sStdDev;
meanStdDev(src, sMean, sStdDev);
double mean = sMean[0];
double stddev = sStdDev[0];
// Calculate the average gradient magnitude/strength across the image
// Any errors here or am I doing this correctly?
Mat dX, dY, magnitude;
Sobel(src, dX, CV_32F, 1, 0, 1);
Sobel(src, dY, CV_32F, 0, 1, 1);
cv::magnitude(dX, dY, magnitude);
Scalar sMMean, sMStdDev;
meanStdDev(magnitude, sMMean, sMStdDev);
double magnitudeMean = sMMean[0];
double magnitudeStdDev = sMStdDev[0];
// Calculate the average gradient direction across the image
// Any errors here or am I doing this correctly?
Scalar avgHorizDir = mean(dX);
Scalar avgVertDir = mean(dY);
double avgDir = atan2(-avgVertDir[0], avgHorizDir[0]);
float blurriness = cv::videostab::calcBlurriness(src); // low values = sharper. High values = blurry
Technically those are the correct ways of obtaining the two averages.
The way you compute mean direction uses weighted directional statistics, meaning that pixels without a strong gradient have less influence on the average.
However, for most images this average direction is not very meaningful, as edges exist in all directions and cancel out.
If your image is of a single edge, then this will work great.
If your image has lines in it, containing edges in opposite directions, this will not work. In this case, you want to average the double angle (i.e. average orientations). The obvious way of doing this is to compute the direction per pixel as an angle, double it, then use directional statistics to average (i.e. convert back to vectors and average those). Doubling the angle causes opposite directions to be mapped to the same value, so averaging does not cancel them out (see the sketch below).
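A hedged sketch of that double-angle averaging, reusing the dX and dY Sobel responses from the question (this is the unweighted version; one could additionally weight each pixel by its gradient magnitude):
#include <opencv2/opencv.hpp>
#include <cmath>

// Average gradient orientation via the double-angle trick: doubling each
// pixel's angle maps opposite directions onto the same value, so the two
// sides of a line reinforce each other instead of cancelling out.
double averageOrientation(const cv::Mat& dX, const cv::Mat& dY)
{
    cv::Mat angle;
    cv::phase(dX, dY, angle);                     // per-pixel angle in radians

    cv::Mat angle2 = 2.0f * angle;                // double the angles
    cv::Mat c, s;
    cv::polarToCart(cv::noArray(), angle2, c, s); // unit vectors (cos 2a, sin 2a)

    const cv::Scalar mc = cv::mean(c);            // directional (vector) average
    const cv::Scalar ms = cv::mean(s);
    return 0.5 * std::atan2(ms[0], mc[0]);        // halve to get the orientation back
}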
Another simple way to average orientations is to take the average of the tensor field obtained by the outer product of the gradient field with itself, and determine the direction of the eigenvector corresponding to the largest eigenvalue. The tensor field is obtained as follows:
Mat Sxx = dX.mul(dX);
Mat Syy = dY.mul(dY);
Mat Sxy = dX.mul(dY);
This should then be averaged:
Scalar mSxx = mean(Sxx);
Scalar mSyy = mean(Syy);
Scalar mSxy = mean(Sxy);
These values form a 2x2 real-valued symmetric matrix:
| mSxx mSxy |
| mSxy mSyy |
It is relatively straightforward to determine its eigendecomposition, and this can be done analytically. I don't have the equations on hand right now, so I'll leave it as an exercise to the reader. :)
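For completeness, a hedged sketch of that closed form for the 2x2 symmetric case (a standard result, stated here from memory rather than taken from the answer above; the inputs would be mSxx[0], mSxy[0] and mSyy[0]):
#include <cmath>

// Analytic eigendecomposition of the averaged 2x2 structure tensor
//   | a  b |
//   | b  c |
// theta is the orientation of the eigenvector belonging to the largest
// eigenvalue, i.e. the dominant gradient orientation of the image.
struct TensorOrientation { double lambda1, lambda2, theta; };

TensorOrientation dominantOrientation(double a, double b, double c)
{
    const double m = 0.5 * (a + c);
    const double d = 0.5 * (a - c);
    const double r = std::sqrt(d * d + b * b);
    TensorOrientation out;
    out.lambda1 = m + r;                            // largest eigenvalue
    out.lambda2 = m - r;                            // smallest eigenvalue
    out.theta   = 0.5 * std::atan2(2.0 * b, a - c); // dominant orientation
    return out;
}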
I have two images captured from two cameras of the same make, placed some distance apart and capturing the same scene. I want to calculate the real-world rotation and translation between the two cameras. In order to achieve this, I first extracted the SIFT features of both images and matched them.
I now have the fundamental matrix as well as the homography matrix. However, I am unable to proceed further; there is a lot of confusion. Can anybody help me estimate the rotation and translation between the two cameras?
I'm using OpenCV for feature extraction, matching, and homography calculation.
If you have the homography then you also have the rotation. Once you have the homography, it is easy to obtain the rotation and translation.
For example, if you are using OpenCV c++:
// param[in]  H     homography
// param[out] pose  camera pose
void cameraPoseFromHomography(const Mat& H, Mat& pose)
{
    pose = Mat::eye(3, 4, CV_32FC1);       // 3x4 matrix, the camera pose

    float norm1 = (float)norm(H.col(0));
    float norm2 = (float)norm(H.col(1));
    float tnorm = (norm1 + norm2) / 2.0f;  // Normalization value

    Mat p1 = H.col(0);                     // Pointer to first column of H
    Mat p2 = pose.col(0);                  // Pointer to first column of pose (empty)
    cv::normalize(p1, p2);                 // Normalize the rotation, and copy the column to pose

    p1 = H.col(1);                         // Pointer to second column of H
    p2 = pose.col(1);                      // Pointer to second column of pose (empty)
    cv::normalize(p1, p2);                 // Normalize the rotation and copy the column to pose

    p1 = pose.col(0);
    p2 = pose.col(1);

    Mat p3 = p1.cross(p2);                 // Compute the cross product of p1 and p2
    Mat c2 = pose.col(2);                  // Pointer to third column of pose
    p3.copyTo(c2);                         // Third column is the cross product of columns one and two

    pose.col(3) = H.col(2) / tnorm;        // Vector t of [R|t] is the last column of pose
}
This function calculates the camera pose from the homography, in which the rotation is contained. For further theoretical info, follow this thread.
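A minimal usage sketch, assuming the homography has been estimated from the matched SIFT points with cv::findHomography (the conversion to CV_32FC1 matches the float pose type used above):
#include <opencv2/opencv.hpp>
#include <vector>

// Hypothetical usage of cameraPoseFromHomography() above. pts1 and pts2
// are assumed to be the matched keypoint locations from the two images.
cv::Mat poseFromMatches(const std::vector<cv::Point2f>& pts1,
                        const std::vector<cv::Point2f>& pts2)
{
    cv::Mat H = cv::findHomography(pts1, pts2, cv::RANSAC);
    H.convertTo(H, CV_32FC1);        // match the float type used in the pose

    cv::Mat pose;                    // will receive the 3x4 [R|t]
    cameraPoseFromHomography(H, pose);
    return pose;
}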
I've studied the pARk example project (http://developer.apple.com/library/IOS/#samplecode/pARk/Introduction/Intro.html#//apple_ref/doc/uid/DTS40011083) so I can apply some of its fundamentals in an app I'm working on. I understand nearly everything, except:
The way it calculates whether a point of interest must appear or not. It gets the attitude, multiplies it with the projection matrix (to get the rotation in GL coordinates?), then multiplies that matrix with the coordinates of the point of interest and, finally, looks at the last coordinate of that vector to decide whether the point of interest must be shown. What are the mathematical fundamentals of this?
Thanks a lot!!
I assume you are referring to the following method:
- (void)drawRect:(CGRect)rect
{
    if (placesOfInterestCoordinates == nil) {
        return;
    }

    mat4f_t projectionCameraTransform;
    multiplyMatrixAndMatrix(projectionCameraTransform, projectionTransform, cameraTransform);

    int i = 0;
    for (PlaceOfInterest *poi in [placesOfInterest objectEnumerator]) {
        vec4f_t v;
        multiplyMatrixAndVector(v, projectionCameraTransform, placesOfInterestCoordinates[i]);

        float x = (v[0] / v[3] + 1.0f) * 0.5f;
        float y = (v[1] / v[3] + 1.0f) * 0.5f;

        if (v[2] < 0.0f) {
            poi.view.center = CGPointMake(x*self.bounds.size.width, self.bounds.size.height-y*self.bounds.size.height);
            poi.view.hidden = NO;
        } else {
            poi.view.hidden = YES;
        }

        i++;
    }
}
This is performing an OpenGL-like vertex transformation on the places of interest to check whether they lie in a viewable frustum. The frustum is created in the following line:
createProjectionMatrix(projectionTransform, 60.0f*DEGREES_TO_RADIANS, self.bounds.size.width*1.0f / self.bounds.size.height, 0.25f, 1000.0f);
This sets up a frustum with a 60 degree field of view, a near clipping plane of 0.25 and a far clipping plane of 1000. Any point of interest that is further away than 1000 units will then not be visible.
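For reference, a matrix with those parameters could be built as follows in plain C++, following the standard gluPerspective formulation (this is an illustrative stand-in, not the pARk createProjectionMatrix helper itself):
#include <cmath>

// Column-major 4x4 perspective projection, equivalent to gluPerspective.
// fovyRadians: vertical field of view, aspect: width / height.
void makePerspective(float out[16], float fovyRadians, float aspect,
                     float zNear, float zFar)
{
    const float f = 1.0f / std::tan(fovyRadians / 2.0f);
    for (int i = 0; i < 16; ++i) out[i] = 0.0f;
    out[0]  = f / aspect;                              // x scale
    out[5]  = f;                                       // y scale
    out[10] = (zFar + zNear) / (zNear - zFar);         // z remapping
    out[11] = -1.0f;                                   // w = -z for the perspective divide
    out[14] = (2.0f * zFar * zNear) / (zNear - zFar);  // z translation
}

// e.g. makePerspective(proj, 60.0f * DEGREES_TO_RADIANS, width / height, 0.25f, 1000.0f);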
So, to step through the code, first the projection matrix that sets up the frustum, and the camera view matrix, which simply rotates the object so it is the right way up relative to the camera, are multiplied together. Then, for each place of interest, its location is multiplied by the viewProjection matrix. This will project the location of the place of interest into the view frustum, applying rotation and perspective.
The next two lines then convert the transformed location of the place into what's known as normalized device coordinates. The 4-component vector needs to be collapsed into 3-dimensional space; this is achieved by projecting it onto the plane w == 1, by dividing the vector by its w component, v[3]. It is then possible to determine whether the point lies within the projection frustum by checking whether its coordinates lie in the cube with side length 2 centred on the origin [0, 0, 0]. In this case, the x and y coordinates are biased from the range [-1, 1] to [0, 1] to match up with the UIKit coordinate system, by adding 1 and dividing by 2.
Next, the v[2] component, z, is checked against 0 (the code shows the place when v[2] < 0). This is not quite correct, as the value has not been biased; after dividing by w it should be checked to see whether it is greater than -1, the near edge of the normalized cube. This detects whether the place of interest is in the first half of the projection frustum; if it is, the object is deemed visible and displayed.
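For reference, a hedged plain-C++ sketch of a full visibility test (the names are illustrative, not from the pARk sample):
// Illustrative frustum visibility test for a point already transformed by
// the combined projection * camera matrix (clip-space x, y, z, w). Testing
// in clip space avoids dividing by a non-positive w and is equivalent to
// requiring all normalized device coordinates to lie in [-1, 1].
struct ClipPoint { float x, y, z, w; };

bool insideFrustum(const ClipPoint& v)
{
    return v.w > 0.0f &&
           v.x >= -v.w && v.x <= v.w &&
           v.y >= -v.w && v.y <= v.w &&
           v.z >= -v.w && v.z <= v.w;
}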
If you are unfamiliar with vertex projection and coordinate systems, this is a huge topic with a fairly steep learning curve. There is however a lot of material online covering it, here are a couple of links to get you started:
http://www.falloutsoftware.com/tutorials/gl/gl0.htm
http://www.opengl.org/wiki/Vertex_Transformation
Good luck!
The iOS 5 documentation reveals that GLKMatrix4MakeLookAt operates the same as gluLookAt.
The definition is provided here:
static __inline__ GLKMatrix4 GLKMatrix4MakeLookAt(float eyeX, float eyeY, float eyeZ,
float centerX, float centerY, float centerZ,
float upX, float upY, float upZ)
{
GLKVector3 ev = { eyeX, eyeY, eyeZ };
GLKVector3 cv = { centerX, centerY, centerZ };
GLKVector3 uv = { upX, upY, upZ };
GLKVector3 n = GLKVector3Normalize(GLKVector3Add(ev, GLKVector3Negate(cv)));
GLKVector3 u = GLKVector3Normalize(GLKVector3CrossProduct(uv, n));
GLKVector3 v = GLKVector3CrossProduct(n, u);
GLKMatrix4 m = { u.v[0], v.v[0], n.v[0], 0.0f,
u.v[1], v.v[1], n.v[1], 0.0f,
u.v[2], v.v[2], n.v[2], 0.0f,
GLKVector3DotProduct(GLKVector3Negate(u), ev),
GLKVector3DotProduct(GLKVector3Negate(v), ev),
GLKVector3DotProduct(GLKVector3Negate(n), ev),
1.0f };
return m;
}
I'm trying to extract camera information from this:
1. Read the camera position
GLKVector3 cPos = GLKVector3Make(mx.m30, mx.m31, mx.m32);
2. Read the camera right vector as `u` in the above
GLKVector3 cRight = GLKVector3Make(mx.m00, mx.m10, mx.m20);
3. Read the camera up vector as `v` in the above
GLKVector3 cUp = GLKVector3Make(mx.m01, mx.m11, mx.m21);
4. Read the camera look-at vector as `n` in the above
GLKVector3 cLookAt = GLKVector3Make(mx.m02, mx.m12, mx.m22);
There are two questions:
The look-at vector seems negated relative to how they define it, since they perform (eye - center) rather than (center - eye). Indeed, when I call GLKMatrix4MakeLookAt with a camera position of (0,0,-10) and a center of (0,0,1), my extracted look-at is (0,0,-1), i.e. the negative of what I expect. So should I negate what I extract?
The camera position I extract is the result of the view transformation matrix premultiplying the view rotation matrix, hence the dot products in their definition. I believe this is incorrect - can anyone suggest how else I should calculate the position?
Many thanks for your time.
Per its documentation, gluLookAt calculates centre - eye, uses that for some intermediate steps, then negates it for placement into the resulting matrix. So if you want centre - eye back, taking the negative is explicitly correct.
You'll also notice that the result returned is equivalent to a multMatrix with the rotational part of the result followed by a glTranslate by -eye. Since the classic OpenGL matrix operations post-multiply, that means gluLookAt is defined to post-multiply the rotational part by the translational part. So Apple's implementation is correct, and the same as first moving the camera, then rotating it, which is correct.
So if you define R = (the matrix defining the rotational part of your instruction) and T = (the translational analogue), you get R.T. If you want to extract T, you could premultiply by the inverse of R and then pull the results out of the final column, since matrix multiplication is associative.
As a bonus, because R is orthonormal, the inverse is just the transpose.
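As a hedged illustration of that extraction in plain C++ (assuming a column-major 4x4 view matrix laid out the way GLKMatrix4 stores its elements):
#include <array>

// Recover the eye position from a look-at/view matrix V = R * T(-eye).
// The last column stores t = -R * eye, so eye = -R^T * t; because R is
// orthonormal, its inverse is its transpose. m is column-major, i.e.
// m[0..3] is the first column and m[12..14] holds t.
std::array<float, 3> eyeFromViewMatrix(const float m[16])
{
    const float tx = m[12], ty = m[13], tz = m[14];
    std::array<float, 3> eye;
    // Each stored column m[0..2], m[4..6], m[8..10] is a column of R;
    // dotting t with column i yields (R^T * t)[i].
    eye[0] = -(m[0] * tx + m[1] * ty + m[2]  * tz);
    eye[1] = -(m[4] * tx + m[5] * ty + m[6]  * tz);
    eye[2] = -(m[8] * tx + m[9] * ty + m[10] * tz);
    return eye;
}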
I'm currently working on this [opencv sample]
The interesting part is the warpPerspectiveRand method at line 89. I want to set the rotation angle, translation, scaling and other transformation values manually instead of using randomly generated values. But I don't know how to calculate the matrix elements.
A simple calculation example would be helpful.
Thanks
double ang = 0.1;             // rotation angle in radians
double xscale = 1.2;
double yscale = 1.5;
double xTranslation = 100;
double yTranslation = 200;

cv::Mat t(3, 3, CV_64F);
t = 0;                        // start from an all-zero matrix
t.at<double>(0,0) = xscale * cos(ang);
t.at<double>(1,1) = yscale * cos(ang);
t.at<double>(0,1) = -sin(ang);
t.at<double>(1,0) = sin(ang);
t.at<double>(0,2) = xTranslation;
t.at<double>(1,2) = yTranslation;
t.at<double>(2,2) = 1;
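Presumably the matrix would then be passed to cv::warpPerspective in place of the randomly generated one, e.g. (assuming src is the input image):
// Apply the hand-built 3x3 transform t to the image.
cv::Mat warped;
cv::warpPerspective(src, warped, t, src.size());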
EDIT:
Rotation is always around (0,0). If you would like to rotate around a different point, you need to translate (move), rotate, and move back. This can be done by creating two matrices, one for rotation (A) and one for translation (T), and building a new matrix M as:
M = inv(T) * A * T
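A hedged sketch of that composition with cv::Mat (cx and cy denote the assumed rotation centre; for the pure 2D affine case, cv::getRotationMatrix2D performs the same move-rotate-move-back composition in one call):
#include <opencv2/opencv.hpp>
#include <cmath>

// Build a 3x3 homogeneous transform that rotates by `ang` (radians)
// around the point (cx, cy): move the centre to the origin, rotate,
// then move it back, i.e. M = inv(T) * A * T.
cv::Mat rotateAboutPoint(double ang, double cx, double cy)
{
    cv::Mat T = (cv::Mat_<double>(3, 3) << 1, 0, -cx,
                                           0, 1, -cy,
                                           0, 0,  1);
    cv::Mat A = (cv::Mat_<double>(3, 3) << cos(ang), -sin(ang), 0,
                                           sin(ang),  cos(ang), 0,
                                           0,         0,        1);
    return T.inv() * A * T;   // inv(T) simply translates back by (+cx, +cy)
}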
What you're looking for is a projection matrix
http://en.wikipedia.org/wiki/3D_projection
There are different matrix styles; some of them are 4x4 (the complete theoretical projection matrix), and some are 3x3 (as in OpenCV), because they consider the projection as a transform from one planar surface to another planar surface, and this constraint allows one to express the transform with a 3x3 matrix.