Multiple bone rotation mystery - XNA

I have been working with quaternions and the XNA skinned model sample for weeks now. I receive two sets of quaternions from some open-source sensor boards that you can buy these days on the net. I have written code that receives these quaternions and uses them to rotate limbs. Now my problem is the following. I am using the upper right arm and lower right arm in my example, and I am able to rotate them separately. My initial position is the one depicted below, which is perfect.
http://i.imgur.com/c7qei.png "initial position"
Now when I rotate my right arm forward, I should end up in the final position shown on the right in the figure below. But somehow the result is the position on the left, even though my real "physical" arm is pointing forward.
http://i.imgur.com/tXCp6.png "ideal final position(right), real wrong position(left)"
Somehow the lower arm does not compensate for the rotation of the upper arm. I am sure I am missing one small step. Below is the crucial part of the code I am using:
protected override void Update(GameTime gameTime)
{
    HandleInput();
    UpdateCamera(gameTime);

    // Read gamepad inputs.
    float initposition = currentGamePadState.ThumbSticks.Right.X;
    float armRotation = Math.Max(currentGamePadState.ThumbSticks.Right.Y, 0);

    // These quaternions are received over Bluetooth.
    Upper.Z = Fq1;
    Upper.Y = -Fq2;
    Upper.X = -Fq3; // set 1 quaternions
    Upper.W = Fq4;

    forearm.Z = Uq1;
    forearm.Y = -Uq2;
    forearm.X = -Uq3;
    forearm.W = Uq4; // set 2 quaternions

    // Set initial position.
    if (initialpos == true)
    {
        initposition = 0.9f;
        R_forTransform = Matrix.CreateRotationY(initposition);
        R_forarminderinit = skinningData.BoneIndices["R_UpperArm"];
        L_forTransform = Matrix.CreateRotationY(-initposition);
        L_forTransform = Matrix.CreateRotationX(-initposition);
        L_forTransform = Matrix.CreateRotationZ(-initposition);
        L_forarminderinit = skinningData.BoneIndices["L_UpperArm"];
    }

    // Create rotation matrices for the upper and lower arm bones.
    Matrix upperarmTransform = Matrix.CreateFromQuaternion(Upper);
    Matrix forearmTransform = Matrix.CreateFromQuaternion(forearm);

    animationPlayer.GetBoneTransforms().CopyTo(boneTransforms, 0);
    if (initialpos == true)
    {
        boneTransforms[R_forarminderinit] = R_forTransform * boneTransforms[R_forarminderinit];
        boneTransforms[L_forarminderinit] = L_forTransform * boneTransforms[L_forarminderinit];
    }

    int forearmindex = skinningData.BoneIndices["R_Forearm"];
    int upperarmindex = skinningData.BoneIndices["R_UpperArm"];
    boneTransforms[upperarmindex] = upperarmTransform * boneTransforms[upperarmindex];
    boneTransforms[forearmindex] = forearmTransform * boneTransforms[forearmindex];

    animationPlayer.UpdateWorldTransforms(Matrix.Identity, boneTransforms);
    animationPlayer.UpdateSkinTransforms();
    UpdateBoundingSpheres();

    base.Update(gameTime);
}
I would like to ask if you could help me solve this mystery. I hope I have been as clear as possible in describing my problem. Furthermore, I would like to thank you in advance for your effort.
Yours
Dave

It looks to me like you have some mixed-up reference frames. Here's what I think I'm seeing:
Your external sensors report their orientation relative to the world. Your rendering code, on the other hand, deals with the lower arm in the upper arm's reference frame.
If we assume that the initial orientations are q_u = [0, 0, 0, 1] and q_l = [0, 0, 0, 1], then when you rotate your arm to point forward, the new orientations are both [0, 0.707, 0, 0.707], or something like that, because both arm segments have experienced a rotation of π/2 relative to the world.
When you render the arm, you rotate the entire arm (not just the upper arm) by q_u. This makes sense, since you want to make sure that the elbow stays connected. But then you rotate the lower arm by q_l, and it ends up rotated twice as far as it should, because q_l already contains the shoulder's rotation. If you were to hold your arm straight but turn your body around, you would see the same thing happen: the upper arm would rotate by the amount of body rotation, and the lower arm would rotate by that much again.
Perhaps the easiest way to deal with this is to remove q_u from q_l. If q_k is the rotation of the lower arm relative to the upper arm, then q_k = q_u' * q_l, where q_u' is the inverse of q_u (for a unit quaternion, just its conjugate: negate the x, y, and z components).
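In XNA terms, the correction might look something like the sketch below (a sketch only, assuming Upper and forearm hold the world-frame sensor quaternions from the question's code, and using XNA's row-vector matrix order):

// Both sensors report orientations relative to the world.
Matrix upperWorld = Matrix.CreateFromQuaternion(Upper);
Matrix forearmWorld = Matrix.CreateFromQuaternion(forearm);

// Strip the upper arm's world rotation from the forearm's world rotation,
// leaving the forearm's rotation relative to the upper arm
// (the matrix form of q_k = q_u' * q_l).
Matrix forearmRelative = forearmWorld * Matrix.Invert(upperWorld);

// Apply the relative transform, not the raw world transform, to the forearm bone:
boneTransforms[forearmindex] = forearmRelative * boneTransforms[forearmindex];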

Related

SceneKit - Rotate object around X and Z axis

I'm using ARKit with SceneKit. When the user presses a button, I create an anchor, and to the SCNNode corresponding to it I add a 3D object (loaded from a .scn file in the project).
The 3D object is placed facing the camera, with the same orientation the camera has. I would like to make it look like the object is lying on a plane surface and not inclined if it is that way. So, if I got it right, I'd need to apply a rotation transformation so that its rotation around the X and Z axes becomes 0.
My attempt at this is: take the node's x and z eulerAngles, invert them, and rotate by that amount around each axis:
let rotationTransformZ = rotationMatrixAroundZ(radians: -node.eulerAngles.z)
let rotationTransformX = rotationMatrixAroundX(radians: -node.eulerAngles.x)
let rotationTransform = simd_mul(rotationTransformX, rotationTransformZ)
node.transform = SCNMatrix4(simd_mul(simd_float4x4(node.transform), rotationTransform))
This works all right for most cases, but in some the object is rotated in completely strange ways. Should I be setting the rotation angle to anything other than just the inverse of the current Euler angle? Setting the angles to 0 directly did not work at all.
I've come across this and figured out I was running into gimbal lock. The solution was to rotate the node around one axis, parent it to another SCNNode(), then rotate the parent around the other axis. Hope that helps.
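A minimal sketch of that workaround (untested; container, sceneView, and the example angles are stand-ins for illustration):

// Wrap the node in a container so each node only ever rotates around a
// single axis; splitting the two rotations avoids gimbal lock.
let container = SCNNode()
container.position = node.position
node.position = SCNVector3Zero
sceneView.scene.rootNode.addChildNode(container)
node.removeFromParentNode()
container.addChildNode(node)

// One axis on the child, the other on the parent.
let xCorrection: Float = .pi / 6 // example values standing in for
let zCorrection: Float = .pi / 8 // your real inverted Euler angles
node.eulerAngles.x = xCorrection
container.eulerAngles.z = zCorrection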
You don't have to do the node transform with a matrix; you can simply rotate around a specific axis, and that might be a bit simpler in terms of the logic of doing the rotation.
You could do something like:
node.runAction(SCNAction.rotateBy(x: x, y: y, z: z, duration: 0.0))
Not sure if this is the kind of thing you're looking for, but it is simpler than doing the rotation with an SCNMatrix4.
Well, I managed a workaround, but I'm not truly happy with it, so I'll leave the question unanswered. Basically, I define a threshold of 2 degrees and keep applying those rotations until both Euler angles around X and Z are below that threshold.
func layDownNode(_ node: SCNNode) {
    let maxErrDegrees: Float = 2.0
    let maxErrRadians = GLKMathDegreesToRadians(maxErrDegrees)
    while abs(node.eulerAngles.x) > maxErrRadians || abs(node.eulerAngles.z) > maxErrRadians {
        let rotationZ = -node.eulerAngles.z
        let rotationX = -node.eulerAngles.x
        let rotationTransformZ = rotationMatrixAroundZ(radians: rotationZ)
        let rotationTransformX = rotationMatrixAroundX(radians: rotationX)
        let rotationTransform = simd_mul(rotationTransformX, rotationTransformZ)
        node.transform = SCNMatrix4(simd_mul(simd_float4x4(node.transform), rotationTransform))
    }
}

How to convert TangoXyzIjData into a matrix of z-values

I am currently using a Project Tango tablet for robotic obstacle avoidance. I want to create a matrix of z-values as they would appear on the Tango screen, so that I can use OpenCV to process the matrix. When I say z-values, I mean the distance each point is from the Tango. However, I don't know how to extract the z-values from the TangoXyzIjData and organize the values into a matrix. This is the code I have so far:
public void action(TangoPoseData poseData, TangoXyzIjData depthData) {
    byte[] buffer = new byte[depthData.xyzCount * 3 * 4];
    FileInputStream fileStream = new FileInputStream(
            depthData.xyzParcelFileDescriptor.getFileDescriptor());
    try {
        fileStream.read(buffer, depthData.xyzParcelFileDescriptorOffset, buffer.length);
        fileStream.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
    Mat m = new Mat(depthData.ijRows, depthData.ijCols, CvType.CV_8UC1);
    m.put(0, 0, buffer);
}
Does anyone know how to do this? I would really appreciate help.
The short answer is that it can't be done, at least not simply. The XYZij struct in the Tango API does not work completely yet: there is no "ij" data. Your retrieval of buffer will work as you have it coded. The contents are a set of X, Y, Z values for measured depth points, roughly 10,000+ per callback. Each X, Y, and Z value is of type float, so not CV_8UC1. The problem is that the points are not ordered in any way, so they do not correspond to an "image" or xy raster; they are a random list of depth points. There are ways to get them into some xy order, but it is not straightforward. I have done both of these:
render them to an image, with the depth encoded as color, and pull out the image as pixels
use the model/view/perspective matrices from OpenGL, multiply out the location of each point, and figure out its screen-space location (like OpenGL would during rendering); then sort the points by their xy screen position. Instead of the calculated screen-space depth, keep the Z value from the original buffer (see the sketch after this list).
or
wait until (if) the XYZij struct is fixed so that it returns ij values.
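A rough Java sketch of that second option (a sketch only; mvp, pointCount, width, and height are hypothetical and would come from your own render setup):

import java.nio.FloatBuffer;

public final class DepthRaster {
    // Project each depth point with a model-view-projection matrix and
    // scatter its original Z value into a width x height depth image.
    public static float[][] toDepthImage(FloatBuffer xyz, int pointCount,
                                         float[][] mvp, int width, int height) {
        float[][] depth = new float[height][width];
        for (int i = 0; i < pointCount; i++) {
            float x = xyz.get(3 * i);
            float y = xyz.get(3 * i + 1);
            float z = xyz.get(3 * i + 2);
            // Multiply [x y z 1] by the MVP matrix (row-major here).
            float cx = mvp[0][0] * x + mvp[0][1] * y + mvp[0][2] * z + mvp[0][3];
            float cy = mvp[1][0] * x + mvp[1][1] * y + mvp[1][2] * z + mvp[1][3];
            float cw = mvp[3][0] * x + mvp[3][1] * y + mvp[3][2] * z + mvp[3][3];
            if (cw <= 0) continue; // behind the camera
            // Perspective divide, then NDC [-1, 1] -> pixel coordinates.
            int px = (int) ((cx / cw * 0.5f + 0.5f) * (width - 1));
            int py = (int) ((1 - (cy / cw * 0.5f + 0.5f)) * (height - 1));
            if (px < 0 || px >= width || py < 0 || py >= height) continue;
            depth[py][px] = z; // keep the raw Z, not the projected depth
        }
        return depth;
    }
}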
I too wish to use Tango for object avoidance in robotics. I've had some success by simplifying the use case to be interested only in the distance to any object located at the center of the Tango device's view.
In Java:
private Double centerCoordinateMax = 0.020;
private TangoXyzIjData xyzIjData;

final FloatBuffer xyz = xyzIjData.xyz;
double cumulativeZ = 0.0;
int numberOfPoints = 0;
// Each point is three consecutive floats (x, y, z), so walk the buffer
// over all xyzCount * 3 floats in steps of 3.
for (int i = 0; i < xyzIjData.xyzCount * 3; i += 3) {
    float x = xyz.get(i);
    float y = xyz.get(i + 1);
    if (Math.abs(x) < centerCoordinateMax &&
            Math.abs(y) < centerCoordinateMax) {
        float z = xyz.get(i + 2);
        cumulativeZ += z;
        numberOfPoints++;
    }
}
Double distanceInMeters;
if (numberOfPoints > 0) {
    distanceInMeters = cumulativeZ / numberOfPoints;
} else {
    distanceInMeters = null;
}
Put simply, this code takes the average distance over a small square centered on the origin of the x and y axes.
centerCoordinateMax = 0.020 was determined to work based on observation and testing. The square typically contains 50 points in ideal conditions and fewer when held close to the floor.
I've tested this using version 2 of my tango-caminada application, and the depth measuring seems quite accurate. Standing 1/2 meter from a doorway, I slid towards the open door and the distance changed from 0.5 meters to 2.5 meters, which is the wall at the end of the hallway.
Simulating a robot being navigated, I moved the device towards a trash can in its path until there was 0.5 meters of separation, then rotated left until the distance was more than 0.5 meters, and proceeded forward. An oversimplified simulation, but the basis for obstacle avoidance using Tango depth perception.
You can do this by using the camera intrinsics to convert XY coordinates to normalized values; see this post: Google Tango: Aligning Depth and Color Frames. It's talking about texture coordinates, but it's exactly the same problem.
Once normalized, move to screen space (e.g. 1280x720), and then the Z coordinate can be used to generate a pixel value for OpenCV to chew on. You'll need to decide on your own how to color pixels that don't correspond to depth points, and advisedly, before you use the depth information to further colorize pixels.
The main thing is to remember that the raw coordinates returned are already using the basis vectors you want, i.e. you do not want the pose attitude or location
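As a rough sketch of that projection (assumptions: fx, fy, cx, cy come from the device's TangoCameraIntrinsics, and depthImage is a hypothetical width x height array):

// Pinhole projection of one depth point (x, y, z) into pixel coordinates:
// u = fx * x / z + cx, v = fy * y / z + cy.
int u = (int) (fx * x / z + cx);
int v = (int) (fy * y / z + cy);
if (u >= 0 && u < width && v >= 0 && v < height) {
    depthImage[v][u] = z; // store the raw z-value for OpenCV
}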

Issue with GLKVector2's

I'm having trouble setting up vectors for an object in my code. I tried modeling my code on the answer to this question: Game enemy move towards player, except that I'm using GLKVector2s. I thought my implementation seemed correct, but it's really only my first time using vectors with GLKit, and in general I haven't used them much before.
My current code looks something like:
GLKVector2 vector = GLKVector2Make(self.player.position.x - self.target.position.x,
                                   self.player.position.y - self.target.position.y);
float hypo = sqrtf(vector.x * vector.x + vector.y * vector.y);
float speed = 0.25f;
vector = GLKVector2Make(vector.x / hypo, vector.y / hypo);
vector = GLKVector2MultiplyScalar(vector, speed);
GLKVector2 sum = GLKVector2Add(vector, self.target.position);
self.target.moveVelocity = sum;
Is it possible that my logic just isn't correct here? I'd appreciate any help or suggestions. Thanks!
EDIT: Just for clarification, since I didn't really explain what happens: basically, the "enemy" shapes either stutter/jump or just stick. They aren't moving toward the other object at all.
EDIT 2:
If I try using GLKVector2Normalize, then nothing moves. If I do something like:
GLKVector2 vector = GLKVector2Make(self.player.position.x - self.target.position.x,
                                   self.player.position.y - self.target.position.y);
float speed = 0.10f;
// float distance = GLKVector2Distance(self.player.position, self.target.position);
// vector = GLKVector2Normalize(vector);
vector = GLKVector2MultiplyScalar(vector, speed);
self.target.moveVelocity = vector;
Then the movement works toward the player object, but it only updates the one time, even though it should be updating every second.
Two things:
There's no need to calculate the magnitude of the vector and divide yourself -- GLKit has a GLKVector2Normalize function, which takes a vector and returns the vector in the same direction with length 1. You can then use GLKVector2MultiplyScalar (as you do) to change the speed.
Your target's velocity should be set to vector, not sum, assuming that in the target's update method (which you should call once per timestep) you add self.moveVelocity.x to self.position.x and self.moveVelocity.y to self.position.y each timestep. As it is now, your sum variable holds the position your target should have one timestep in the future, not its velocity.
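Putting both points together, a minimal sketch (assuming position and moveVelocity are GLKVector2 properties, and that the target has an update method called once per timestep):

// Compute a per-timestep velocity pointing from the target to the player.
GLKVector2 toPlayer = GLKVector2Subtract(self.player.position, self.target.position);
self.target.moveVelocity = GLKVector2MultiplyScalar(GLKVector2Normalize(toPlayer), speed);

// Then, in the target's update method, once per timestep:
self.position = GLKVector2Add(self.position, self.moveVelocity);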

Setting the angle of a b2RevoluteJoint

From what I have read, in Box2D you get the angle of a revolute joint with the GetJointAngle function, but when trying to set the angle, the member m_referenceAngle is protected. Can the angle not be set programmatically?
I found that I can apply the angle from one joint to another body as:
float FirstAngle = firstArmJoint->GetJointAngle();
secondArmBody->SetTransform(b2Vec2((750.0 / PTM_RATIO), (520.0f + 100) / PTM_RATIO), FirstAngle);
I put this in ccTouchesMoved so that when the user drags the first object (from which FirstAngle is retrieved), the second object (secondArmBody) is moved as well.
What happens is that the second body rotates at the top of the image and not around the anchor point.
Any ideas?
SetTransform() can be used to set the position and rotation of a body. This happens completely independently of any joints on the body. For example, if you want to make sure a body is perfectly upright at a given moment, you can call
body->SetTransform(body->GetPosition(), 0);
passing 0 as the angle value (upright). I've never tried this for a body with a joint on it, but I doubt it would work properly.
When I ran into the problem of having to make a revolute joint point at a certain angle, I solved it by enabling the motor on the joint and adjusting the motor speed until the angle matched the one I wanted. This simulates realistic motion of the joint. Example:
Creating the joint
b2RevoluteJointDef armJointDef;
armJointDef.Initialize(body1, body2,
                       b2Vec2(body1->GetPosition().x,
                              body1->GetPosition().y / PTM_RATIO));
armJointDef.enableMotor = true;
armJointDef.enableLimit = true;
armJointDef.motorSpeed = 0.0f;
armJointDef.maxMotorTorque = 10000.0f;
armJointDef.lowerAngle = CC_DEGREES_TO_RADIANS(lowerArmAngle);
armJointDef.upperAngle = CC_DEGREES_TO_RADIANS(upperArmAngle);
_world->CreateJoint(&armJointDef);
Pointing the joint
float targetAngle = SOME_ANGLE;
b2JointEdge *j = body->GetJointList();
b2RevoluteJoint *r = (b2RevoluteJoint *)j->joint;
if (r->GetJointAngle() > targetAngle) {
    r->SetMotorSpeed(-1);
} else {
    r->SetMotorSpeed(1);
}
Basically, you see which side of the current angle the target angle is on, and then set the motor speed to move the joint in the correct direction. Hope that helps!
http://www.box2d.org/manual.html
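A common refinement (my own sketch, not from the manual) is to scale the motor speed by the remaining error, so the joint slows down as it approaches the target instead of oscillating around it:

// Proportional control: the farther the joint is from the target angle,
// the faster the motor runs; gain is a tuning constant found by experiment.
float gain = 5.0f;
float error = targetAngle - r->GetJointAngle();
r->SetMotorSpeed(gain * error);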

Moving a 3d camera on XNA

I'm doing some practice with XNA, and I created a class that represents a camera.
My objective is that when the user presses certain keys, the camera (not the target) makes a 90-degree translation around the X axis, to see an object that I placed in the scene from different angles. At the moment I can move the camera in X, Y, and Z without problems.
To set up my camera I use the following lines of code:
public void SetUpCamera()
{
    #region ## SET DEFAULTS ##
    this.FieldOfViewAngle = 45.0f;
    this.AspectRatio = 1f;
    this.NearPlane = 1.0f;
    this.FarPlane = 10000.0f;
    #endregion

    // Note: 16f / 9f rather than 16 / 9 -- the integer division truncates to 1.
    this.ProjectionMatrix = Matrix.CreatePerspectiveFieldOfView(MathHelper.ToRadians(this.FieldOfViewAngle), 16f / 9f, this.NearPlane, this.FarPlane);
    this.ViewMatrix = Matrix.CreateLookAt(new Vector3(this.PositionX, this.PositionY, this.PositionZ), new Vector3(this.TargetX, this.TargetY, this.TargetZ), Vector3.Up);
}
I have this method to move the camera:
public void UpdateView()
{
    this.ViewMatrix = Matrix.CreateLookAt(new Vector3(this.PositionX, this.PositionY, this.PositionZ), new Vector3(this.TargetX, this.TargetY, this.TargetZ), Vector3.Up);
}
Then in the game's update handler I have the following code:
if (keyboardstate.IsKeyDown(Keys.NumPad9))
{
    this.GameCamera.PositionZ -= 1.0f;
}
if (keyboardstate.IsKeyDown(Keys.NumPad3))
{
    this.GameCamera.PositionZ += 1.0f;
}
this.GameCamera.UpdateView();
I would like to know how to make this 90-degree camera translation so that the camera circles around an object that I placed in the scene.
To better explain the camera movement, here is a YouTube video that shows the exact movement I am trying to describe (see from the 14-second mark):
http://www.youtube.com/watch?v=19mbKZ0I5u4
Assuming the camera in the video is orbiting the car, here is how you would accomplish that in XNA.
For the sake of readability, we'll just use vectors instead of their individual components. So 'target' means a Vector3 that includes TargetX, TargetY, and TargetZ; the same goes for the camera's position. You can break the X, Y, Z values out into fields and make Vector3s out of them to plug into this code later if you want, but really it would be best to work at the vector level instead of the component level.
// To orbit the car (target):
cameraPosition = Vector3.Transform(cameraPosition - target, Matrix.CreateRotationY(0.01f)) + target;
this.ViewMatrix = Matrix.CreateLookAt(cameraPosition, target, Vector3.Up);
Since all matrix rotations act about an axis that intersects the world origin, to use a rotation matrix to rotate the camera around the car, the object of rotation has to be shifted so that the target (and thus the rotation axis) is located at the world origin. 'cameraPosition - target' accomplishes that. Now cameraPosition can be rotated a little bit. Once cameraPosition has been rotated, it needs to be sent back to the scene at hand; that's what the '+ target' at the end of the line is for.
The 0.01f can be adjusted to whatever rotation rate suits you.
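Tied into your update handler, it might look something like this (a sketch; the NumPad4 key is arbitrary, and GameCamera.Position / GameCamera.Target are assumed Vector3 properties standing in for your separate X/Y/Z fields):

if (keyboardstate.IsKeyDown(Keys.NumPad4))
{
    // Orbit the camera around the target by a small step each frame.
    Vector3 offset = this.GameCamera.Position - this.GameCamera.Target;
    offset = Vector3.Transform(offset, Matrix.CreateRotationY(0.01f));
    this.GameCamera.Position = this.GameCamera.Target + offset;
}
this.GameCamera.UpdateView();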
