How to dump XMMATRIX member values? - directx

Is there any good way to dump an XMMATRIX or XMVECTOR?
XMMATRIX mx = XMMatrixScaling(...) * XMMatrixTranslation(...);
DumpMatrix(mx); // printf the 4x4 matrix members
I want a DumpMatrix-like function.
I checked DirectXMath.h and found:
typedef __m128 XMVECTOR;
How do I extract (x, y, z, w) from an __m128?

The DirectXMath library's design doesn't allow direct access to XMMATRIX and XMVECTOR members, likely because they store their values in special SIMD data types.
To read the components of an XMVECTOR you can use the XMVectorGet* accessor functions, for example:
XMVECTOR V;
float x = XMVectorGetX(V);
float w;
XMVectorGetWPtr(&w, V);
or use the XMStore* functions to store it into an XMFLOAT4, which has scalar members with direct access:
XMVECTOR vPosition;
XMFLOAT4 fPosition;
XMStoreFloat4(&fPosition, vPosition);
float x = fPosition.x;
An XMMATRIX can be stored into an XMFLOAT4X4:
XMMATRIX mtxView;
XMFLOAT4X4 fView;
XMStoreFloat4x4(&fView, mtxView);
float fView_11 = fView._11;
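Putting the pieces together, here is a minimal sketch of the DumpMatrix function the question asks for (the name DumpMatrix comes from the question; the printf formatting and the FXMMATRIX/XM_CALLCONV calling convention are my own choices):
#include <cstdio>
#include <DirectXMath.h>
using namespace DirectX;

// Copy the SIMD registers into an XMFLOAT4X4, then print the 4x4 members
// row by row via the m[4][4] alias of _11.._44.
void XM_CALLCONV DumpMatrix(FXMMATRIX mx)
{
    XMFLOAT4X4 f;
    XMStoreFloat4x4(&f, mx);
    for (int r = 0; r < 4; ++r)
        printf("%8.3f %8.3f %8.3f %8.3f\n",
               f.m[r][0], f.m[r][1], f.m[r][2], f.m[r][3]);
}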
There are also XMLoad* functions for the opposite direction: writing into an XMVECTOR or XMMATRIX.
For further information, refer to the DirectXMath Programming Reference.
Happy coding!

I can't comment on the answer, but I believe the correct code for XMMATRIX is:
XMMATRIX mtxView;
XMFLOAT4X4 fView;
XMStoreFloat4x4(&fView, mtxView);
float fView_11 = fView._11;
so that you can access the matrix member values using the newly created XMFLOAT4X4.

Related

Is SceneKit's SCNMatrix4 stored as column or row-major?

The source of my confusion is the documentation on SCNMatrix4 from Apple:
SceneKit uses matrices to represent coordinate space transformations, which in turn can represent the combined position, rotation or orientation, and scale of an object in three-dimensional space. SceneKit matrix structures are in row-major order, so they are suitable for passing to shader programs or OpenGL APIs that accept matrix parameters.
This seems contradictory since OpenGL uses column-major transformation matrices!
In general, Apple's documentation on this topic is poor:
The GLKMatrix4 documentation does not talk about column- vs. row-major order at all, but from all the discussions online I am certain that GLKMatrix4 is plain and simple column-major.
The simd_float4x4 documentation does not talk about the topic either, but at least the member names make it clear that it is stored as columns:
From simd/float.h:
typedef struct { simd_float4 columns[4]; } simd_float4x4;
Some discussions online talk as if SCNMatrix4 were different from GLKMatrix4, but looking at the code for SCNMatrix4 gives some hints that it might just be column-major as expected:
/* Returns a transform that translates by '(tx, ty, tz)':
* m' = [1 0 0 0; 0 1 0 0; 0 0 1 0; tx ty tz 1]. */
NS_INLINE SCNMatrix4 SCNMatrix4MakeTranslation(float tx, float ty, float tz) {
    SCNMatrix4 m = SCNMatrix4Identity;
    m.m41 = tx;
    m.m42 = ty;
    m.m43 = tz;
    return m;
}
So is SCNMatrix4 row-major (storing translations in m14, m24, m34) or column-major (storing translations in m41, m42, m43)?
SCNMatrix4 stores the translation in m41, m42 and m43, as does GLKMatrix4. This little Playground confirms it, as does its definition.
import SceneKit
let translation = SCNMatrix4MakeTranslation(1, 2, 3)
translation.m41
// 1
translation.m42
// 2
translation.m43
// 3
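The claim about GLKMatrix4 can be checked the same way in C (a quick sketch of mine, assuming you're building against GLKit; GLKMatrix4MakeTranslation likewise puts the translation in m30, m31, m32, i.e. elements 12..14 of the flat array):
#include <GLKit/GLKMath.h>
#include <stdio.h>

int main(void)
{
    GLKMatrix4 t = GLKMatrix4MakeTranslation(1, 2, 3);
    // Prints "1.0 2.0 3.0": the translation sits in the last column,
    // exactly as in SCNMatrix4 and in column-major OpenGL matrices.
    printf("%.1f %.1f %.1f\n", t.m30, t.m31, t.m32);
    return 0;
}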
I have no idea why the documentation gets this wrong, though; probably it's just a mistake.

Adding a scalar to a Mat object

So I'm trying to add a scalar value to all elements of a Mat object in OpenCV; however, for raw_t_ubit8 and raw_t_ubit16 types I get wrong results. Here's the code:
Mat A;
//Initialize Mat A;
A = A + 0.1;
The result of the addition is exactly the same matrix; the values are unchanged. This problem does not occur when I try to add scalars to raw_t_real types of matrices. By raw_t_ubit8 I mean the depth is CV_8UC1.
If, as you mentioned in the comments, the values contained in the matrix are scaled to fit the integer domain 0..255, then you should also scale the scalar value you add. Namely:
A = A + cv::Scalar(round(0.1 * 255) );
Or even better:
A += cv::Scalar(round(0.1 * 255) );
Note that cv::Scalar, as pointed out in the comments by Miki, is in any case made from doubles (it's a cv::Scalar_<double>).
The rounding could be omitted, but then you leave the choice of how to convert your double into an integer to the function implementation.
You should also check what happens when the values saturate.
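For example (a small sketch of mine to illustrate the saturation behaviour; the values are made up):
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // CV_8UC1 values saturate at 255 rather than wrapping around.
    cv::Mat1b A = (cv::Mat1b(1, 3) << 100, 200, 250);
    A += cv::Scalar(100);
    std::cout << A << std::endl; // prints [200, 255, 255]
    return 0;
}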
See the documentation for OpenCV matrix expressions.
As stated in the comments and in @Antonio's answer, you can't add 0.1 to an integer.
If you are using CV_8UC1 matrices but you want to work with floating point values, you should multiply by 255:
Mat1b A; // <-- type CV_8UC1
...
A += 0.1 * 255;
If the result of the operation needs to be cast, as in this case, then ultimately saturate_cast is called.
This is equivalent to @Antonio's answer, but it results in cleaner code (at least for me).
The same code will be used whether you add a double or a Scalar: a Scalar object will be created in both cases using:
template<typename _Tp> inline
Scalar_<_Tp>::Scalar_(_Tp v0)
{
    this->val[0] = v0;
    this->val[1] = this->val[2] = this->val[3] = 0;
}
However, if you need to add exactly 0.1 to your matrix (and not scale it by 255), you need to convert your matrix to CV_32FC1:
#include <opencv2/opencv.hpp>
using namespace cv;
int main(int, char** argv)
{
    Mat1b A = (Mat1b(3,3) << 1,2,3,4,5,6,7,8,9);
    Mat1f F;
    A.convertTo(F, CV_32FC1);
    F += 0.1;
    return 0;
}

Mat and Vec_ types multiplication

Is there any easy way to multiply a Mat and a Vec_ (provided that they have proper sizes)? E.g.:
Mat_<double> M = Mat(3,3,CV_32F);
Vec3f V(1,2,3);
result = M*V //?
Maybe there is some easy method of creating a row (or column) Mat based on a Vec3?
You can't just multiply Mat and Vec (or, more generally, Matx) objects directly. Cast the Vec object to Mat:
Mat_<float> M = Mat::eye(3,3,CV_32F);
Vec3f V(1,2,3);
Mat result = M*Mat(V);
Also, I noticed an error in your code: when constructing M, the type CV_32F corresponds to float elements, not double. This is also corrected in my code example.
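For completeness, a small self-contained sketch (mine, not part of the original answer) that also converts the 3x1 result back to a Vec3f:
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;

int main()
{
    Mat_<float> M = Mat::eye(3, 3, CV_32F);
    Vec3f V(1, 2, 3);
    // Mat(V) wraps the vector as a 3x1 column matrix, so M*Mat(V) is 3x1.
    Mat result = M * Mat(V);
    Vec3f r = result; // Mat converts back to a Vec when the sizes match
    std::cout << r << std::endl; // prints [1, 2, 3]
    return 0;
}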
Hope that helps.

Converting angular velocity to quaternion in OpenCV

I need the angular velocity expressed as a quaternion for updating the quaternion every frame with the following expression in OpenCV:
q(k)=q(k-1)*qwt;
My angular velocity is
Mat w; //1x3
I would like to obtain a quaternion form of the angles
Mat qwt; //1x4
I couldn't find information about this. Any ideas?
If I understand properly, you want to go from this axis-angle form to a quaternion.
As shown in the link, first you need to calculate the magnitude of the angular velocity (multiplied by the delta(t) between frames), and then apply the formulas.
A sample function for this would be:
// w is equal to angular_velocity*time_between_frames
void quatFromAngularVelocity(Mat& qwt, const Mat& w)
{
    const float x = w.at<float>(0);
    const float y = w.at<float>(1);
    const float z = w.at<float>(2);
    const float angle = sqrt(x*x + y*y + z*z); // magnitude of angular velocity
    if (angle > 0.0f) // the formulas from the link
    {
        qwt.at<float>(0) = x*sin(angle/2.0f)/angle;
        qwt.at<float>(1) = y*sin(angle/2.0f)/angle;
        qwt.at<float>(2) = z*sin(angle/2.0f)/angle;
        qwt.at<float>(3) = cos(angle/2.0f);
    }
    else // avoid dividing by zero; return the identity quaternion
    {
        qwt.at<float>(0) = qwt.at<float>(1) = qwt.at<float>(2) = 0.0f;
        qwt.at<float>(3) = 1.0f;
    }
}
Almost every transformation involving quaternions, 3D space, etc. is gathered at this website.
You will find time derivatives for quaternions there as well.
I find the explanation of the physical meaning of a quaternion useful: it can be seen as an axis-angle representation where
a = angle of rotation
x,y,z = axis of rotation.
Then the conversion uses:
q = cos(a/2) + i(x*sin(a/2)) + j(y*sin(a/2)) + k(z*sin(a/2))
It is explained thoroughly here.
Hope this helped to make it clearer.
One little trick to go with this, which also gets rid of those cos and sin calls: the time derivative of a quaternion q(t) is
dq(t)/dt = 0.5 * x(t) * q(t)
where, if the angular velocity is {w0, w1, w2}, then x(t) is the pure quaternion {0, w0, w1, w2}. See David H. Eberly's book, section 10.5, for a proof.
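A minimal sketch of an update step built on that identity (my code, not from the answers; it assumes the quaternion Mat is laid out as (x, y, z, w) like in the function above, that dt is the time between frames, and the renormalization is my addition to counter integration drift):
#include <opencv2/opencv.hpp>
#include <cmath>

// Euler-integrates dq/dt = 0.5 * x(t) * q(t) with x(t) = {0, w0, w1, w2}.
// q is a 1x4 float Mat laid out as (x, y, z, w); w is a 1x3 float Mat.
void integrateQuaternion(cv::Mat& q, const cv::Mat& w, float dt)
{
    cv::Vec3f qv(q.at<float>(0), q.at<float>(1), q.at<float>(2));
    float     qs = q.at<float>(3);
    cv::Vec3f wv(w.at<float>(0), w.at<float>(1), w.at<float>(2));

    // Hamilton product of the pure quaternion (wv, 0) with (qv, qs):
    cv::Vec3f dv = 0.5f * (qs * wv + wv.cross(qv)); // vector part
    float     ds = -0.5f * wv.dot(qv);              // scalar part

    qv += dt * dv;
    qs += dt * ds;

    float n = std::sqrt(qv.dot(qv) + qs*qs); // renormalize
    q.at<float>(0) = qv[0] / n;
    q.at<float>(1) = qv[1] / n;
    q.at<float>(2) = qv[2] / n;
    q.at<float>(3) = qs / n;
}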

iOS: Questions about camera information within GLKMatrix4MakeLookAt result

The iOS 5 documentation reveals that GLKMatrix4MakeLookAt operates the same as gluLookAt.
The definition is provided here:
static __inline__ GLKMatrix4 GLKMatrix4MakeLookAt(float eyeX, float eyeY, float eyeZ,
                                                  float centerX, float centerY, float centerZ,
                                                  float upX, float upY, float upZ)
{
    GLKVector3 ev = { eyeX, eyeY, eyeZ };
    GLKVector3 cv = { centerX, centerY, centerZ };
    GLKVector3 uv = { upX, upY, upZ };
    GLKVector3 n = GLKVector3Normalize(GLKVector3Add(ev, GLKVector3Negate(cv)));
    GLKVector3 u = GLKVector3Normalize(GLKVector3CrossProduct(uv, n));
    GLKVector3 v = GLKVector3CrossProduct(n, u);
    GLKMatrix4 m = { u.v[0], v.v[0], n.v[0], 0.0f,
                     u.v[1], v.v[1], n.v[1], 0.0f,
                     u.v[2], v.v[2], n.v[2], 0.0f,
                     GLKVector3DotProduct(GLKVector3Negate(u), ev),
                     GLKVector3DotProduct(GLKVector3Negate(v), ev),
                     GLKVector3DotProduct(GLKVector3Negate(n), ev),
                     1.0f };
    return m;
}
I'm trying to extract camera information from this:
1. Read the camera position
GLKVector3 cPos = GLKVector3Make(mx.m30, mx.m31, mx.m32);
2. Read the camera right vector as `u` in the above
GLKVector3 cRight = GLKVector3Make(mx.m00, mx.m10, mx.m20);
3. Read the camera up vector as `v` in the above
GLKVector3 cUp = GLKVector3Make(mx.m01, mx.m11, mx.m21);
4. Read the camera look-at vector as `n` in the above
GLKVector3 cLookAt = GLKVector3Make(mx.m02, mx.m12, mx.m22);
There are two questions:
The look-at vector seems to be negated compared with how they defined it, since they compute (eye - center) rather than (center - eye). Indeed, when I call GLKMatrix4MakeLookAt with a camera position of (0,0,-10) and a center of (0,0,1), my extracted look-at is (0,0,-1), i.e. the negative of what I expect. So should I negate what I extract?
The camera position I extract is the result of the view transformation matrix premultiplying the view rotation matrix, hence the dot products in their definition. I believe this is incorrect - can anyone suggest how else I should calculate the position?
Many thanks for your time.
Per its documentation, gluLookAt calculates centre - eye, uses that for some intermediate steps, then negates it for placement into the resulting matrix. So if you want centre - eye back, taking the negative is explicitly correct.
You'll also notice that the result returned is equivalent to a glMultMatrix with the rotational part of the result followed by a glTranslate by -eye. Since the classic OpenGL matrix operations post-multiply, that means gluLookAt is defined to post-multiply the rotational part by the translational part. So Apple's implementation is correct, and the same as first moving the camera and then rotating it, which is correct.
So if you define R as the matrix giving the rotational part of the operation and T as the translational analogue, the result is R.T. If you want to extract T you can premultiply by the inverse of R and then pull the results out of the final column, since matrix multiplication is associative.
As a bonus, because R is orthonormal, the inverse is just the transpose.
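Putting that together, a hypothetical helper (my sketch, not from the answer; it assumes the GLKit member layout shown above, where m30, m31, m32 hold the translation t = -R*eye):
// Recover the eye position from a look-at matrix M = R.T.
// R's rows are u, v, n, so eye = -(R^T)*t = -(t.x*u + t.y*v + t.z*n).
GLKVector3 cameraPositionFromLookAt(GLKMatrix4 m)
{
    GLKVector3 u = GLKVector3Make(m.m00, m.m10, m.m20); // right
    GLKVector3 v = GLKVector3Make(m.m01, m.m11, m.m21); // up
    GLKVector3 n = GLKVector3Make(m.m02, m.m12, m.m22); // eye - centre, normalized
    GLKVector3 t = GLKVector3Make(m.m30, m.m31, m.m32); // -R*eye
    GLKVector3 eye = GLKVector3Add(GLKVector3Add(GLKVector3MultiplyScalar(u, t.x),
                                                 GLKVector3MultiplyScalar(v, t.y)),
                                   GLKVector3MultiplyScalar(n, t.z));
    return GLKVector3Negate(eye);
}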
