I found this on Stack Overflow.
"You will probably need to use quaternions for composing rotations, if you are not doing so already. This avoids the problem of gimbal lock which you can get when orienting a camera by rotation around the 3 axes."
But how do I use the quaternion from the motion manager in OpenGL? The code was originally based on pitch and yaw only. Now I want to use roll as well, so the gyroscope can be used to look around. Could anybody help me with this one?
Thank you.
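For the motion-manager-to-OpenGL part, one option is to hand the attitude quaternion to GLKit and let it build the rotation matrix. A minimal sketch, assuming a GLKit/OpenGL ES setup; the reference frame, the update handler, and how the matrix gets combined with the rest of the pipeline are illustrative choices, not something from the post:

```swift
import CoreMotion
import GLKit

let motionManager = CMMotionManager()
motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
motionManager.startDeviceMotionUpdates(using: .xArbitraryZVertical, to: .main) { motion, _ in
    guard let q = motion?.attitude.quaternion else { return }

    // CMQuaternion and GLKQuaternion both expose x, y, z, w components,
    // so copying them across is enough.
    let glq = GLKQuaternionMake(Float(q.x), Float(q.y), Float(q.z), Float(q.w))

    // Use the inverse of the device attitude as the camera rotation so the
    // scene counter-rotates as the device turns (covers pitch, yaw, and roll).
    let rotation = GLKMatrix4MakeWithQuaternion(GLKQuaternionInvert(glq))

    // Multiply this into whatever model-view matrix the existing code builds,
    // in place of the old pitch/yaw-only rotations.
    _ = rotation
}
```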
My advice: don't use Euler angles. Just track orientation with vectors (forward, up, and right) for the object or camera.
To rotate, just apply relative rotations to the current forward, up, and right vectors, e.g. rotate right by 5 degrees.
Quaternions add extra operations, and physics doesn't work in terms of yaw, pitch, and roll; those are merely measurements that capture an orientation, not how things actually get oriented.
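A minimal sketch of that vector-based bookkeeping; the `Basis` struct and the use of GLKit math are my choices, not part of the answer:

```swift
import GLKit

struct Basis {
    var forward = GLKVector3Make(0, 0, -1)
    var up      = GLKVector3Make(0, 1,  0)
    var right   = GLKVector3Make(1, 0,  0)

    // "Rotate right by N degrees": spin forward and right around the current up axis.
    mutating func rotateRight(degrees: Float) {
        let q = GLKQuaternionMakeWithAngleAndVector3Axis(-degrees * .pi / 180, up)
        forward = GLKQuaternionRotateVector3(q, forward)
        right   = GLKQuaternionRotateVector3(q, right)
        // Re-orthonormalising the three vectors occasionally keeps drift in check.
    }
}

// Usage: var camera = Basis(); camera.rotateRight(degrees: 5)
```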
Related
Is there a way to use quaternions for only 2d rotations?
I have an iPhone app which should capture every rotation while avoiding gimbal lock, and I understand that the solution could be to use quaternions or a rotation matrix.
However, I find it difficult to understand how I can use quaternions for 2D rotation instead of 3D rotation.
Can you give me a suggestion?
Thank you in advance!
From an algorithmic point of view, you can start by understanding quaternions. Then you can look at how a 3D rotation reduces to a 2D rotation (a rotation about a single fixed axis). Finally, you can see how to use quaternions in iOS graphics via the GLKQuaternion Reference.
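To make that last point concrete: a "2D" rotation is just a 3D rotation about the screen's Z axis, so a quaternion built around a single fixed axis covers it. A small sketch with GLKQuaternion; the 30° angle and the sample point are arbitrary:

```swift
import GLKit

let angleRadians: Float = 30.0 * .pi / 180.0                      // hypothetical 30° turn
let q = GLKQuaternionMakeWithAngleAndAxis(angleRadians, 0, 0, 1)  // axis = Z, i.e. 2D rotation

let p = GLKVector3Make(1, 0, 0)                 // a 2D point (x, y) treated as (x, y, 0)
let rotated = GLKQuaternionRotateVector3(q, p)  // ≈ (0.866, 0.5, 0)
```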
I'm doing some work with a camera and video stabilization with OpenCV.
Let's suppose I know exactly (in meters) how much my camera has moved from one frame to another, and I want to use this to move the second frame back to where it should be.
I'm sure I have to do some math with this number before I build the translation matrix, but I'm a little lost with that... Any help?
Thanks.
EDIT: OK, I'll try to explain it better:
I want to remove the camera movement (shaking) from a video, and I know how much the camera has moved (and in which direction) from one frame to the next.
So what I want to do is move the second frame back to where it should be, using that information.
I have to build a translation matrix for each pair of frames and apply it to the second frame.
But here is where I'm unsure: the information I have is in meters and describes the camera's movement, and now I'm working with an image and pixels, so I think I have to do some conversion for the translation to be correct, but I'm not sure exactly what.
Knowing how much the camera has moved is not enough for creating a synthesized frame. For that you'll need the 3D model of the world as well, which I assume you don't have.
To demonstrate this, assume the camera movement is a pure translation and you are looking at two objects, one very far away (a few kilometers) and the other very close (a few centimeters). The far object will hardly move in the new frame, while the close one can move dramatically or even leave the field of view of the second frame. You need to know how much the viewing direction has changed for each point, and for that you need the 3D model.
Having sensor information may help in the case of rotation but it is not as useful for translations.
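To put a rough number on that depth dependence: under a simple pinhole-camera model (an assumption for illustration, not something stated in the question), a pure sideways translation shifts each point's image by an amount inversely proportional to its depth, which is exactly why a single translation cannot fix the whole frame:

```swift
// Hypothetical helper: approximate image shift caused by a sideways camera
// translation, for a point at a known depth, under a pinhole model.
//   shift_pixels ≈ focalLengthPixels * translationMetres / depthMetres
func approxPixelShift(focalLengthPixels: Double,
                      cameraTranslationMetres: Double,
                      pointDepthMetres: Double) -> Double {
    return focalLengthPixels * cameraTranslationMetres / pointDepthMetres
}

// With f = 800 px and a 0.05 m sideways shake:
// a point 100 m away moves ~0.4 px, a point 0.5 m away moves ~80 px.
```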
I'm working on an app that allows the user to rotate the iOS device like a steering wheel. I'm interested in getting a rough approximation of the degrees of the rotation (doesn't have to accurate at all). Is there an API for this?
Yes. It's called Core Motion.
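A rough sketch of what that could look like, assuming the device is held upright like a wheel with the screen facing the user; the gravity-based angle and the update interval are my choices, not the only way to do it:

```swift
import Foundation
import CoreMotion

let motionManager = CMMotionManager()
motionManager.deviceMotionUpdateInterval = 1.0 / 30.0
motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
    guard let g = motion?.gravity else { return }
    // Angle of gravity within the screen plane: roughly 0° when the device is
    // upright in portrait; sign and offset depend on how the device is held.
    let wheelDegrees = atan2(g.x, -g.y) * 180.0 / .pi
    print("steering ≈ \(wheelDegrees)°")
}
```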
I want to track the head of a player in order to move the camera inside XNA.
When the player rotates left or right, the camera inside XNA will respond to this action and will also rotate.
I tried using the head joint from the skeleton data and taking its X and Y values, but this is not an accurate solution. I need another approach that can rotate the camera inside XNA.
Any suggestions?
You could use the Face Tracking API and watch how a certain point on the user's face (like the nose) moves to decide whether or not the user has looked in a different direction. The API tracks a set of numbered points laid out across the face.
Then you can check whether that point's X value changed, and by how much, to drive the rotation.
(You might want to see Facial Recognition with Kinect)
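The Face Tracking API itself is C#/C++, so as a language-neutral illustration of the mapping only (the function name and scale factor below are made up, not from the Kinect SDK):

```swift
// Hypothetical helper: compare the tracked nose point's X against a calibrated
// "looking straight ahead" X and scale the offset into a camera yaw angle.
func headYawDegrees(noseX: Float, restingNoseX: Float, degreesPerUnit: Float) -> Float {
    return (noseX - restingNoseX) * degreesPerUnit
}
```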
I know the (pitch, yaw, roll) representation has its mathematical flaws. However, I'm hoping that something replacing or derived from these coordinates can serve as an alternative.
What I am trying to do is move the device (let's say it's an iPhone) around in the real world and figure out the yaw and pitch relative to the user's eye. Thus, the range of yaw should be (-180, 180) and the range of pitch should be (-90, 90). While I move the iPhone (always facing me) from bottom to front, CMDeviceMotion gives me pitch values going from 0 to 90, and while I move the iPhone (still facing me) from front to top, it gives me pitch values going from 90 back to 0. That is fine, and I am perfectly happy with the pitch data.
However, when pitch is close to 90, yaw is very shaky and unstable. Well, that alone is not the problem, because I can ignore the changes in yaw while pitch is around 90. The real problem is that the value of yaw changes dramatically between before pitch rises to 90 and after it drops back. I mean it is not only shaky, the mean itself changes, by something like 180 degrees (pi). I guess that is because the coordinates flip completely, but I am lost on the coordinate transformation now.
The messy workaround you just described is exactly why you should not use roll, pitch, and yaw.
Either you go further down this road and make the mess bigger, or you use rotation matrices or quaternions.
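As a concrete sketch of the quaternion/rotation-matrix route, assuming Core Motion on iOS as in the question: read the attitude's rotation matrix (or quaternion) and derive directions from it, so there is no singularity at pitch = ±90°. The reference frame and variable names below are illustrative:

```swift
import CoreMotion

let motionManager = CMMotionManager()
motionManager.startDeviceMotionUpdates(using: .xArbitraryCorrectedZVertical,
                                        to: .main) { motion, _ in
    guard let r = motion?.attitude.rotationMatrix else { return }
    // Depending on which convention you read it with, the columns (or rows)
    // of this matrix are the device axes expressed in the reference frame.
    // With the usual device-to-reference reading, this column is where the
    // device's +Z (screen normal) points, and it stays well defined even when
    // the device points straight up, exactly where pitch/yaw break down.
    let screenNormal = (x: r.m13, y: r.m23, z: r.m33)
    // Compute your "yaw and pitch toward the user's eye" from vectors like
    // this one instead of from CMAttitude's pitch/yaw/roll.
    _ = screenNormal
}
```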