Please bear with me, I'm really awful at matrix math. I have a layer that I want to remain "stationary" with gravity at the referenceAttitude while the phone rotates to other attitudes. I have a motionManager working nicely, am using multiplyByInverseOfAttitude on the current attitude, and applying the resulting delta as a rotation to my layer using a CMRotationMatrix (doing separate CATransform3DRotates for the pitch, roll, and yaw caused considerable wackiness near the axes). It's basically inspired by code like this example.
I concat this with another transform to apply the m34 perspective trick before I apply the rotation to my layer.
[attitude multiplyByInverseOfAttitude:referenceAttitude];
CATransform3D t = CATransform3DIdentity;
CMRotationMatrix r = attitude.rotationMatrix;
t.m11=r.m11; t.m12=r.m12; t.m13=r.m13; t.m14=0;
t.m21=r.m21; t.m22=r.m22; t.m23=r.m23; t.m24=0;
t.m31=r.m31; t.m32=r.m32; t.m33=r.m33; t.m34=0;
t.m41=0; t.m42=0; t.m43=0; t.m44=1;
CATransform3D perspectiveTransform = CATransform3DIdentity;
perspectiveTransform.m34 = 1.0 / -650;
t = CATransform3DConcat(t, perspectiveTransform);
myUIImageView.layer.transform = t;
The result is pretty and works like you'd expect, the layer staying stationary with gravity as I move my phone around, except for a single axis, the y-axis: holding the phone flat and rolling it, the layer rolls double-time in the same direction as the phone, instead of remaining stationary.
I don't know why this one axis moves the wrong way while the others move correctly after applying multiplyByInverseOfAttitude. When I was using separate CATransform3DRotates for the pitch, yaw, and roll, I could easily correct the problem by multiplying the roll by -1, but I have no idea how to apply that to a rotation matrix. The problem only becomes visible once you introduce perspective into the equation, so perhaps I'm doing that wrong. Inverting my m34 value fixes the roll but creates the same problem on the pitch. I either need to figure out why the rotation on this axis is backwards, invert the rotation on that axis via my matrix, or correct the perspective somehow.
You have to take into account the following:
In your case, CMRotationMatrix needs to be transposed (http://en.wikipedia.org/wiki/Transpose) which means swapping columns and rows.
You don't need to initialize the transform with CATransform3DIdentity, because you overwrite every value anyway, so you can start with an uninitialized matrix. If you do want to use CATransform3DIdentity, you can omit setting the 0s and 1s, since they are already defined (CATransform3DIdentity is an identity matrix, see http://en.wikipedia.org/wiki/Identity_matrix).
To also correct the rotation around the Y axis, you need to multiply the transform by [1 0 0 0; 0 -1 0 0; 0 0 1 0; 0 0 0 1], i.e. a scale of -1 along Y.
Make the following changes to your code:
CMRotationMatrix r = attitude.rotationMatrix;
CATransform3D t;
t.m11=r.m11; t.m12=r.m21; t.m13=r.m31; t.m14=0;
t.m21=r.m12; t.m22=r.m22; t.m23=r.m32; t.m24=0;
t.m31=r.m13; t.m32=r.m23; t.m33=r.m33; t.m34=0;
t.m41=0; t.m42=0; t.m43=0; t.m44=1;
CATransform3D perspectiveTransform = CATransform3DIdentity;
perspectiveTransform.m34 = 1.0 / -650;
t = CATransform3DConcat(t, perspectiveTransform);
t = CATransform3DConcat(t, CATransform3DMakeScale(1.0, -1.0, 1.0));
Or, if you set t to CATransform3DIdentity, just leave the 0s and 1s out:
...
CATransform3D t = CATransform3DIdentity;
t.m11=r.m11; t.m12=r.m21; t.m13=r.m31;
t.m21=r.m12; t.m22=r.m22; t.m23=r.m32;
t.m31=r.m13; t.m32=r.m23; t.m33=r.m33;
....
Related
I am using cv::solvePnP to return a camera pose. I run PnP, and it returns the rvec and tvec values (rotation vector and position).
I then run this function to convert the values to the camera pose:
void GetCameraPoseEigen(cv::Vec3d tvecV, cv::Vec3d rvecV, Eigen::Vector3d &Translate, Eigen::Quaterniond &quats)
{
    Mat R;
    Mat tvec, rvec;
    tvec = DoubleMatFromVec3b(tvecV);
    rvec = DoubleMatFromVec3b(rvecV);

    cv::Rodrigues(rvec, R);   // R is 3x3
    R = R.t();                // rotation of inverse
    tvec = -R * tvec;         // translation of inverse

    Eigen::Matrix3d mat;
    cv2eigen(R, mat);

    Eigen::Quaterniond EigenQuat(mat);
    quats = EigenQuat;

    double x_t = tvec.at<double>(0, 0);
    double y_t = tvec.at<double>(1, 0);
    double z_t = tvec.at<double>(2, 0);

    Translate.x() = x_t * 10;
    Translate.y() = y_t * 10;
    Translate.z() = z_t * 10;
}
This works, yet at some rotation angles the converted rotation values flip randomly between positive and negative, while the source rvecV value does not. I assume this means something is going wrong in my conversion. How can I get a stable quaternion from the cv::Vec3d returned by PnP?
EDIT: This seems to be Quaternion flipping, as mentioned here:
Quaternion is flipping sign for very similar rotations?
Based on that, I have tried adding:
if (quat.w() < 0)
{
    quat = quat.Inverse();
}
But I see the same flipping.
Both quat and -quat represent the same rotation. You can check that by taking a unit quaternion, converting it to a rotation matrix, then doing
quat.coeffs() = -quat.coeffs();
and converting that to a rotation matrix as well.
If for some reason you always want a positive w value, negate all coefficients if w is negative.
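A minimal sketch of that, assuming Eigen's Quaterniond (the function name is just for illustration):

#include <Eigen/Geometry>

// q and -q encode the same rotation, so flipping the sign of all four
// coefficients when w is negative only picks a canonical representative.
Eigen::Quaterniond canonicalized(Eigen::Quaterniond q)
{
    if (q.w() < 0.0)
        q.coeffs() = -q.coeffs();
    return q;
}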
The sign should not matter...
... rotation-wise, as long as all four fields of the 4D quaternion are getting flipped. There's more to it explained here:
Quaternion to EulerXYZ, how to differentiate the negative and positive quaternion
Think of it this way:
An angle and axis that are both flipped mean the same rotation,
and mind the clockwise-to-counterclockwise transition, much like in a mirror image.
There may be a convention to keep the quat.w() (or quat[0]) component positive and flip the other components accordingly. Since w = cos(angle/2), requiring w > 0 just means: I want the angle to be within the (-pi, pi) range, so that a -270 degree rotation becomes a +90 degree rotation.
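As a quick check of that (a sketch using Eigen; the axis choice is arbitrary):

#include <Eigen/Geometry>
#include <cmath>

// -270 degrees and +90 degrees about the same axis are the same rotation;
// their quaternions differ only in overall sign (w = cos(-135 deg) < 0 vs. cos(45 deg) > 0).
Eigen::Quaterniond qNeg(Eigen::AngleAxisd(-270.0 * M_PI / 180.0, Eigen::Vector3d::UnitZ()));
Eigen::Quaterniond qPos(Eigen::AngleAxisd(  90.0 * M_PI / 180.0, Eigen::Vector3d::UnitZ()));
// qNeg.coeffs() is (up to floating point error) -qPos.coeffs(), yet both rotate
// any vector to the same result.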
Doing the quat.Inverse() is probably not what you want, because this creates a rotation in the opposite direction. That is -quat != quat.Inverse().
Also: check that both systems have the same handedness (chirality)! Test if your rotation matrix determinant is +1 or -1.
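A small sketch of that check, assuming the cv::Mat R produced by the Rodrigues call above:

// det(R) should be +1 for a proper (right-handed) rotation; a value near -1
// means the matrix contains a reflection, i.e. the two systems disagree in handedness.
double det = cv::determinant(R);
if (std::abs(det + 1.0) < 1e-6) {
    // handedness mismatch - fix the conversion before building the quaternion
}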
I have an SCNNode which I keep rotating with a pan gesture in 90-degree increments. The first few rotations work as intended, but once the node's axes end up rotated in the opposite direction, the rotation is executed the wrong way. How can I determine the orientation of the axes after each rotation?
Scenario (using a cube for simplicity):
I rotate the cube 90 degrees around the Y axis. Y still points up, X now points toward the camera, Z points right.
I rotate the cube another 90 degrees around Y. X now points left, Z toward the camera.
PROBLEM: I now try to rotate 90 degrees around the X axis. Because X has been rotated 180 degrees, the rotation is now reversed.
How can I tell when to rotate around (-1, 0, 0) and when around (1, 0, 0)?
I'm quite new to the world of 3D math; I hope I explained my issue correctly.
After further research I realised I had chosen a completely wrong approach. The way to achieve what I want is to rotate the node using the axes of the rootNode - this way I don't need to worry about the local axes of my node.
EDIT: updated code based on Xartec's suggestions
let rotation = SCNMatrix4MakeRotation(angle, x, y, Float(0))
let newTransform = SCNMatrix4Mult(bricksNode.transform, rotation)
let animation = CABasicAnimation(keyPath: "transform")
animation.fromValue = bricksNode.transform
animation.toValue = scnScene.rootNode.convertTransform(newTransform,from: nil)
animation.duration = 0.5
bricksNode.addAnimation(animation, forKey: nil)
I'm working on a 3D pose estimation system. I used OpenCV's cvPOSIT function to calculate the rotation matrix and the translation vector.
I also need the angles of the rotation matrix, but none of the algorithms I have tried seem to work.
The function cv::RQDecomp3x3(), which was the answer to the topic "in opencv : how to get yaw, roll, pitch from POSIT rotation matrix", doesn't work for me, because that function expects the 3x3 part of a projection matrix.
Furthermore I tried to use algorithms from the links below, but nothing worked.
visionopen.com/cv/vosm/doc/html/recognitionalgs_8cpp_source.html
stackoverflow.com/questions/16266740/in-opencv-how-to-get-yaw-roll-pitch-from-posit-rotation-matrix
quad08pyro.groups.et.byu.net/vision.htm
stackoverflow.com/questions/13565625/opencv-c-posit-why-are-my-values-always-nan-with-small-focal-lenght
www.c-plusplus.de/forum/308773-full
I used the most common POSIT tutorial and my own example with Blender, so I could render an image to retrieve the image points and know the exact angles. The object's Z axis in Blender was rotated by 10 degrees - and I checked the angles of all 3 axes because of the axis differences between Blender and OpenCV.
double focalLength = 700.0;
CvPOSITObject* positObject;
std::vector<CvPoint3D32f> modelPoints;
modelPoints.push_back(cvPoint3D32f(0.0f, 0.0f, 0.0f));
modelPoints.push_back(cvPoint3D32f(CUBE_SIZE, 0.0f, 0.0f));
modelPoints.push_back(cvPoint3D32f(0.0f, CUBE_SIZE, 0.0f));
modelPoints.push_back(cvPoint3D32f(0.0f, 0.0f, CUBE_SIZE));
std::vector<CvPoint2D32f> imagePoints;
imagePoints.push_back( cvPoint2D32f( 157,372) );
imagePoints.push_back( cvPoint2D32f(423,386 ));
imagePoints.push_back( cvPoint2D32f( 157,108 ));
imagePoints.push_back( cvPoint2D32f(250,337));
// Moving the points to the image center as described in the tutorial
for (int i = 0; i < imagePoints.size(); i++) {
    imagePoints[i] = cvPoint2D32f(imagePoints[i].x - 320, 240 - imagePoints[i].y);
}
CvVect32f translation_vector = new float[3];
CvTermCriteria criteria = cvTermCriteria(CV_TERMCRIT_EPS | CV_TERMCRIT_ITER,iterations, 0.1f);
positObject = cvCreatePOSITObject( &modelPoints[0], static_cast<int>(modelPoints.size()));
CvMatr32f rotation_matrix = new float[9];
cvPOSIT( positObject, &imagePoints[0], focalLength, criteria, rotation_matrix, translation_vector );
algorithms to get angles...
I already tried converting the results from radians to degrees and flipping the direction to clockwise, but I still get bad results using the rotation matrix from OpenCV's cvPOSIT. I also changed the matrix layout to rule out a wrong ordering...
With simple rotation matrices - e.g. a rotation around only the X, Y or Z axis - some of the algorithms did work. The rotation matrix from cvPOSIT did not work with those algorithms.
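For reference, a common way to read Euler angles out of a plain 3x3 rotation matrix looks like the sketch below. It assumes the row-major float[9] layout that cvPOSIT writes, the convention R = Rz(yaw) * Ry(pitch) * Rx(roll), and no gimbal lock; other conventions need different formulas.

#include <cmath>

// rotation_matrix is the 9-element array filled by cvPOSIT, read row by row.
double pitch = std::asin(-rotation_matrix[6]);                      // -r31
double roll  = std::atan2(rotation_matrix[7], rotation_matrix[8]);  //  r32 / r33
double yaw   = std::atan2(rotation_matrix[3], rotation_matrix[0]);  //  r21 / r11
// Multiply by 180.0 / M_PI to compare against the degrees used in Blender.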
I appreciate any support.
I've studied the pARK example project (http://developer.apple.com/library/IOS/#samplecode/pARk/Introduction/Intro.html#//apple_ref/doc/uid/DTS40011083) so I can apply some of its fundamentals in an app I'm working on. I understand nearly everything, except:
The way it calculates whether a point of interest should appear or not. It gets the attitude, multiplies it with the projection matrix (to get the rotation in GL coordinates?), then multiplies that matrix with the coordinates of the point of interest and, finally, looks at the last coordinate of that vector to find out whether the point of interest should be shown. What are the mathematical fundamentals of this?
Thanks a lot!!
I assume you are referring to the following method:
- (void)drawRect:(CGRect)rect
{
    if (placesOfInterestCoordinates == nil) {
        return;
    }

    mat4f_t projectionCameraTransform;
    multiplyMatrixAndMatrix(projectionCameraTransform, projectionTransform, cameraTransform);

    int i = 0;
    for (PlaceOfInterest *poi in [placesOfInterest objectEnumerator]) {
        vec4f_t v;
        multiplyMatrixAndVector(v, projectionCameraTransform, placesOfInterestCoordinates[i]);

        float x = (v[0] / v[3] + 1.0f) * 0.5f;
        float y = (v[1] / v[3] + 1.0f) * 0.5f;

        if (v[2] < 0.0f) {
            poi.view.center = CGPointMake(x*self.bounds.size.width, self.bounds.size.height-y*self.bounds.size.height);
            poi.view.hidden = NO;
        } else {
            poi.view.hidden = YES;
        }

        i++;
    }
}
This performs an OpenGL-like vertex transformation on the places of interest to check whether they fall within the viewing frustum. The frustum is created in the following line:
createProjectionMatrix(projectionTransform, 60.0f*DEGREES_TO_RADIANS, self.bounds.size.width*1.0f / self.bounds.size.height, 0.25f, 1000.0f);
This sets up a frustum with a 60 degree field of view, a near clipping plane of 0.25 and a far clipping plane of 1000. Any point of interest that is further away than 1000 units will then not be visible.
So, to step through the code, first the projection matrix that sets up the frustum, and the camera view matrix, which simply rotates the object so it is the right way up relative to the camera, are multiplied together. Then, for each place of interest, its location is multiplied by the viewProjection matrix. This will project the location of the place of interest into the view frustum, applying rotation and perspective.
The next two lines convert the transformed location of the place into what's known as normalized device coordinates. The 4-component vector needs to be collapsed into 3-dimensional space; this is achieved by projecting it onto the plane w == 1, i.e. dividing the vector by its w component, v[3]. It is then possible to determine whether the point lies within the projection frustum by checking whether its coordinates lie in the cube of side length 2 centered on the origin [0, 0, 0]. In this case, the x and y coordinates are biased from the range [-1, 1] to [0, 1] to match up with the UIKit coordinate system, by adding 1 and dividing by 2.
Next, the v[2] component, z, is checked to see if it is greater than 0. This is actually incorrect, as it has not been biased; it should be checked to see if it is greater than -1. This will detect whether the place of interest is in the first half of the projection frustum; if it is, the object is deemed visible and displayed.
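For comparison, a full frustum test would divide all three components by w and check each against [-1, 1]; a sketch (assuming v[3] > 0, i.e. the point is in front of the camera):

float ndcX = v[0] / v[3];
float ndcY = v[1] / v[3];
float ndcZ = v[2] / v[3];
// Inside the frustum only if every normalized device coordinate lies in [-1, 1].
bool visible = ndcX >= -1.0f && ndcX <= 1.0f &&
               ndcY >= -1.0f && ndcY <= 1.0f &&
               ndcZ >= -1.0f && ndcZ <= 1.0f;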
If you are unfamiliar with vertex projection and coordinate systems, this is a huge topic with a fairly steep learning curve. There is, however, a lot of material online covering it; here are a couple of links to get you started:
http://www.falloutsoftware.com/tutorials/gl/gl0.htm
http://www.opengl.org/wiki/Vertex_Transformation
Good luck!
I'm currently working on this [opencv sample]
The interesting part is the warpPerspectiveRand method at line 89. I want to set the rotation angle, translation, scaling and other transformation values manually instead of using randomly generated values. But I don't know how to calculate the matrix elements.
A simple calculation example would be helpful.
Thanks
double ang = 0.1;
double xscale = 1.2;
double yscale = 1.5;
double xTranslation = 100;
double yTranslation = 200;
cv::Mat t(3,3,CV_64F);
t=0;
t.at<double>(0,0) = xscale*cos(ang);
t.at<double>(1,1) = yscale*cos(ang);
t.at<double>(0,1) = -sin(ang);
t.at<double>(1,0) = sin(ang);
t.at<double>(0,2) = xTranslation ;
t.at<double>(1,2) = yTranslation;
t.at<double>(2,2) = 1;
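To apply it, you would pass the matrix to warpPerspective, roughly like this (src being whatever input image you use):

cv::Mat dst;
cv::warpPerspective(src, dst, t, src.size());   // warps src with the hand-built matrix t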
EDIT:
Rotation is always around (0,0). If you would like to rotate around a different point, you need to translate (move), rotate, and move back. This can be done by creating two matrices, one for rotation (A) and one for translation (T), and building a new matrix M as:
M = inv(T) * A * T
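A sketch of that composition in OpenCV terms (cx and cy are the desired rotation center, ang the angle as above; the names are just illustrative):

double cx = 100.0, cy = 50.0;                        // example rotation center
cv::Mat T = (cv::Mat_<double>(3,3) << 1, 0, -cx,
                                      0, 1, -cy,
                                      0, 0,   1);    // move the center to the origin
cv::Mat A = (cv::Mat_<double>(3,3) << cos(ang), -sin(ang), 0,
                                      sin(ang),  cos(ang), 0,
                                      0,         0,        1);   // rotate about the origin
cv::Mat M = T.inv() * A * T;                         // M = inv(T) * A * T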
What you're looking for is a projection matrix
http://en.wikipedia.org/wiki/3D_projection
There are different matrix conventions: some are 4x4 (the complete theoretical projection matrix), some are 3x3 (as in OpenCV), because they treat the projection as a transform from one planar surface to another, and this constraint allows the transform to be expressed as a 3x3 matrix.
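For intuition, applying such a 3x3 matrix H to a point (x, y) works in homogeneous coordinates, and the divide by the third component is what produces the perspective effect (a sketch, assuming H is a 3x3 CV_64F cv::Mat):

double w  = H.at<double>(2,0)*x + H.at<double>(2,1)*y + H.at<double>(2,2);
double xp = (H.at<double>(0,0)*x + H.at<double>(0,1)*y + H.at<double>(0,2)) / w;
double yp = (H.at<double>(1,0)*x + H.at<double>(1,1)*y + H.at<double>(1,2)) / w;
// (xp, yp) is where (x, y) lands after the projective transform.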