Issue with GLKVector2s - iOS

I'm having trouble setting up vectors for an object in my code. I tried modeling my code on the answer to this question: Game enemy move towards player, except that I'm using GLKVector2s. I thought my implementation was correct, but this is my first time using vectors with GLKit and I haven't used them much in general.
My current code looks something like:
GLKVector2 vector = GLKVector2Make(self.player.position.x - self.target.position.x, self.player.position.y - self.target.position.y);
float hypo = sqrt(vector.x*vector.x + vector.y*vector.y);
float speed = 0.25;
vector = GLKVector2Make(vector.x/hypo, vector.y/hypo);
vector = GLKVector2MultiplyScalar(vector, speed);
GLKVector2 sum = GLKVector2Add(vector, self.target.position);
self.target.moveVelocity = sum;
Is it possible that my logic just isn't correct here? I'd appreciate any help or suggestions. Thanks!
EDIT: Just to clarify, since I didn't really explain what happens: the "enemy" shapes either stutter/jump or just stick in place. They aren't moving toward the other object at all.
EDIT 2:
If I try using GLKVector2Normalize, then nothing moves. If I do something like:
GLKVector2 vector = GLKVector2Make(self.player.position.x - self.target.position.x, self.player.position.y - self.target.position.y);
float speed = 0.10;
// float distance = GLKVector2Distance(self.player.position, self.target.position);
// vector = GLKVector2Normalize(vector);
vector = GLKVector2MultiplyScalar(vector, speed);
self.target.moveVelocity = vector;
Then the movement toward the player object works, but it only updates once even though it should be updating every second.

Two things:
There's no need to calculate the magnitude of the vector and do the division yourself -- GLKit has a GLKVector2Normalize function, which takes a vector and returns a vector in the same direction with length 1. You can then use GLKVector2MultiplyScalar (as you do) to scale it to the speed you want.
Your target's velocity should be set to vector, not sum, assuming that in the target's update method (which you should call once per timestep) you add self.moveVelocity.x to self.position.x and self.moveVelocity.y to self.position.y. As it is now, your sum variable holds the position the target should have one timestep in the future, not its velocity.
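A minimal sketch of both points, assuming your positions are GLKVector2 values and the target has an update method called once per timestep (the property and method layout beyond what you posted is a placeholder, not your actual API):
// Recompute the chase velocity whenever the player moves:
GLKVector2 toPlayer = GLKVector2Subtract(self.player.position, self.target.position);
GLKVector2 direction = GLKVector2Normalize(toPlayer);               // unit vector pointing at the player
self.target.moveVelocity = GLKVector2MultiplyScalar(direction, speed);

// Then, inside the target's per-timestep update method:
self.position = GLKVector2Add(self.position, self.moveVelocity);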

Related

How to convert TangoXyzIjData into a matrix of z-values

I am currently using a Project Tango tablet for robotic obstacle avoidance. I want to create a matrix of z-values as they would appear on the Tango screen, so that I can use OpenCV to process the matrix. When I say z-values, I mean the distance each point is from the Tango. However, I don't know how to extract the z-values from the TangoXyzIjData and organize the values into a matrix. This is the code I have so far:
public void action(TangoPoseData poseData, TangoXyzIjData depthData) {
    byte[] buffer = new byte[depthData.xyzCount * 3 * 4];
    FileInputStream fileStream = new FileInputStream(
            depthData.xyzParcelFileDescriptor.getFileDescriptor());
    try {
        fileStream.read(buffer, depthData.xyzParcelFileDescriptorOffset, buffer.length);
        fileStream.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
    Mat m = new Mat(depthData.ijRows, depthData.ijCols, CvType.CV_8UC1);
    m.put(0, 0, buffer);
}
Does anyone know how to do this? I would really appreciate help.
The short answer is that it can't be done, at least not simply. The XYZij struct in the Tango API does not work completely yet. There is no "ij" data. Your retrieval of buffer will work as you have it coded. The contents are a set of X, Y, Z values for measured depth points, roughly 10,000+ per callback. Each X, Y, and Z value is of type float, so not CV_8UC1. The problem is that the points are not ordered in any way, so they do not correspond to an "image" or xy raster. They are a random list of depth points. There are ways to get them into some xy order, but it is not straightforward. I have done both of these:
render them to an image, with the depth encoded as color, and pull out the image as pixels
use the model/view/perspective from OpenGL and multiply out the locations of each point, then figure out their screen-space locations (like OpenGL would during rendering). Sort the points by their xy screen space. Instead of the calculated screen-space depth, just keep the Z value from the original buffer.
or
wait until (if) the XYZij struct is fixed so that it returns ij values.
I too want to use Tango for obstacle avoidance in robotics. I've had some success by simplifying the use case to only care about the distance of whatever object is at the center of the Tango's view.
In Java:
private Double centerCoordinateMax = 0.020;
private TangoXyzIjData xyzIjData;

final FloatBuffer xyz = xyzIjData.xyz;
double cumulativeZ = 0.0;
int numberOfPoints = 0;
// xyz packs 3 floats (x, y, z) per point, so the buffer holds xyzCount * 3 values
for (int i = 0; i < xyzIjData.xyzCount * 3; i += 3) {
    float x = xyz.get(i);
    float y = xyz.get(i + 1);
    if (Math.abs(x) < centerCoordinateMax &&
        Math.abs(y) < centerCoordinateMax) {
        float z = xyz.get(i + 2);
        cumulativeZ += z;
        numberOfPoints++;
    }
}
Double distanceInMeters;
if (numberOfPoints > 0) {
    distanceInMeters = cumulativeZ / numberOfPoints;
} else {
    distanceInMeters = null;
}
Put simply, this code takes the average distance over a small square centered on the origin of the x and y axes.
centerCoordinateMax = 0.020 was determined to work through observation and testing. The square typically contains about 50 points in ideal conditions, and fewer when the device is held close to the floor.
I've tested this using version 2 of my tango-caminada application, and the depth measuring seems quite accurate. Standing 1/2 meter from a doorway, I slid towards the open door and the distance changed from 0.5 meters to 2.5 meters, which is the wall at the end of the hallway.
Simulating a robot being navigated, I moved the device towards a trash can in its path until there was 0.5 meters of separation, then rotated left until the distance was more than 0.5 meters and proceeded forward. An oversimplified simulation, but the basis for obstacle avoidance using Tango depth perception.
You can do this by using the camera intrinsics to convert XY coordinates to normalized values -- see this post: Google Tango: Aligning Depth and Color Frames. It's talking about texture coordinates, but it's exactly the same problem.
Once normalized, map to screen space (e.g. 1280x720); the Z coordinate can then be used to generate a pixel value for OpenCV to chew on. You'll need to decide on your own how to color the pixels that don't correspond to any depth point, and preferably before you use the depth information to colorize pixels further.
The main thing to remember is that the raw coordinates returned are already expressed in the basis you want, i.e. you do not want to apply the pose attitude or location.
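A rough sketch of that approach, assuming a connected Tango instance named tango and a point cloud in xyzIj (the intrinsics lookup, matrix handling, and variable names are assumptions, not code from the linked post):
// Project each depth point into pixel coordinates with the depth camera
// intrinsics, then write its raw Z into an OpenCV float matrix.
TangoCameraIntrinsics intr = tango.getCameraIntrinsics(TangoCameraIntrinsics.TANGO_CAMERA_DEPTH);
Mat depthImage = new Mat((int) intr.height, (int) intr.width, CvType.CV_32FC1, new Scalar(0));

FloatBuffer xyz = xyzIj.xyz;
for (int i = 0; i < xyzIj.xyzCount * 3; i += 3) {
    float x = xyz.get(i), y = xyz.get(i + 1), z = xyz.get(i + 2);
    if (z <= 0) continue;
    int u = (int) (intr.fx * (x / z) + intr.cx);   // pinhole projection
    int v = (int) (intr.fy * (y / z) + intr.cy);
    if (u >= 0 && u < intr.width && v >= 0 && v < intr.height) {
        depthImage.put(v, u, z);                   // keep the raw Z, as suggested above
    }
}
Pixels with no depth point stay at 0 here; as noted above, you have to decide how to treat them before doing anything further with the image.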

Cocos2d - move a sprite from point A to point B in a sine wave motion

What would be the best way to do this? I see the CCEaseSineInOut action, but it doesn't look like it can be used for this.
I need to move a sprite from one side of the screen to the other. The sprite should move in a sine-wave pattern across the screen.
I always like to have complete control over CCNode motion. I only use CCActions to do very basic things. While your case sounds simple enough to possibly do with CCActions, I will show you how to move a CCNode according to any function over time. You can also change scale, color, opacity, rotation, and even anchor point with the same technique.
@interface SomeLayer : CCLayer
{
    CCNode *nodeToMove;
    float t; // time elapsed
}
@end

@implementation SomeLayer

// Assumes nodeToMove has been created somewhere else
-(void)startAction
{
    t = 0;
    // updateNodeProperties: gets called at the framerate.
    // The exact time between calls is passed to the selector.
    [self schedule:@selector(updateNodeProperties:)];
}

-(void)updateNodeProperties:(ccTime)dt
{
    t += dt;

    // Option 1: Update properties "differentially"
    CGPoint velocity = ccp( Vx(t), Vy(t) ); // You have to provide Vx(t) and Vy(t)
    nodeToMove.position = ccpAdd(nodeToMove.position, ccpMult(velocity, dt));
    nodeToMove.rotation = ...
    nodeToMove.scale = ...
    ...

    // Option 2: Update properties non-differentially
    nodeToMove.position = ccp( x(t), y(t) ); // You have to provide x(t) and y(t)
    nodeToMove.rotation = ...
    nodeToMove.scale = ...
    ...

    // In either case, you may want to stop the action if some condition is met,
    // i.e.:
    if (nodeToMove.position.x > [[CCDirector sharedDirector] winSize].width) {
        [self unschedule:@selector(updateNodeProperties:)];
        // Perhaps you also want to call some other method now
        [self callToSomeMethod];
    }
}

@end
For your specific problem, you could use Option 2 with x(t) = k * t + c, and y(t) = A * sin(w * t) + d.
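For instance, inside updateNodeProperties: that parameterization might look like the following (the constants are placeholder values to tune, not part of the original answer):
// k = horizontal speed (points/s), c = starting x,
// A = wave amplitude, w = angular frequency, d = baseline y.
float k = 100.0f, c = 0.0f;
float A = 40.0f, w = 2.0f * M_PI, d = 160.0f;
nodeToMove.position = ccp(k * t + c, A * sinf(w * t) + d);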
Math note #1: x(t) and y(t) are called position parameterizations. Vx(t) and Vy(t) are velocity parameterizations.
Math note #2: If you have studied calculus, it will be readily apparent that Option 2 prevents accumulation of positional errors over time (especially for low framerates). When possible, use Option 2. However, it is often easier to use Option 1 when accuracy is not a concern or when user input is actively changing the parameterizations.
There are many advantages to using CCActions. They handle calling other functions at specific times for you. Cocos2d keeps track of them so that you can easily pause, restart, or count them.
But when you really need to manage nodes generally, this is the way to do it. For complex or intricate position formulas, for example, it is much easier to change the parameterization than to figure out how to express it with CCActions.

Attitude change - angles and axis issue - quaternion math

I have an app that records angles as the user walks around an object while (ideally) pointing the device at the center of the object.
The angle gets reset on the user's command, so the reference attitude gets reset.
Using Euler angles produces gimbal lock, so I am currently using quaternions and calculating angles this way:
double angleFromLastPosition = acos(fromLastPositionAttitude.quaternion.w) * 2.0f;
This gives good precision and works perfectly IF the device's pitch and yaw do not change. In other words, when the angle shows 360 degrees I end up at the same place where the circle started.
Problem 1: if the device's yaw and pitch change slightly (the user is not pointing directly at the center of the object), so does angleFromLastPosition.
I understand this part, as my angle formula just gives the angle between two device attitudes in 3D space.
Scenario:
I mark the start of rotation attitude and start moving in a circle around the object while pointing at the center
I stop at, say, 45 degrees and change the pitch of the device by pointing it higher or lower. The angle changes accordingly.
What I would love to see is: angle stays at 45 degrees, even if pitch or yaw changes.
Question 1: how can I calculate only the roll of the device using quaternions and disregard changes in the other two axes (at least within some reasonable number of degrees)?
Problem 2: if I rotate for a bit and then freeze the device completely (on a tripod, so there's no shaking at all), angleFromLastPosition drifts at a rate of about 1 degree per 10-20 seconds, and the drift does not appear to be linear. In other words, it drifts fast at first, then slows down considerably. Sometimes I get no drift at all - the angle is rock-solid while the device is stationary. This leaves me unsure about what's going on.
Question 2: what is going on here, and how can I take care of the drift?
I went through quite a few articles and tutorials, but quaternion math is beyond me at the moment. I hope someone will be able to help with a tip, a link, or a few lines of code.
I have tested this and it seems to work according to what you're looking for in Question 1, Andrei.
I set homeangle to 0 initially, and immediately after the first pass I store the angle returned by walkaroundAngleFromAttitude:fromHomeAngle: in homeangle for future use.
My testing included starting the device updates using a reference frame:
[_motionManager
    startDeviceMotionUpdatesUsingReferenceFrame:CMAttitudeReferenceFrameXArbitraryZVertical
                                        toQueue:operationQueue
                                    withHandler:handler];
and using the following methods called within handler:
- (CMQuaternion)multiplyQuanternion:(CMQuaternion)left withRight:(CMQuaternion)right {
    CMQuaternion newQ;
    newQ.w = left.w*right.w - left.x*right.x - left.y*right.y - left.z*right.z;
    newQ.x = left.w*right.x + left.x*right.w + left.y*right.z - left.z*right.y;
    newQ.y = left.w*right.y + left.y*right.w + left.z*right.x - left.x*right.z;
    newQ.z = left.w*right.z + left.z*right.w + left.x*right.y - left.y*right.x;
    return newQ;
}

- (float)walkaroundRawAngleFromAttitude:(CMAttitude *)attitude {
    CMQuaternion e = (CMQuaternion){0, 0, 1, 1};
    CMQuaternion quatConj = attitude.quaternion;
    quatConj.x *= -1; quatConj.y *= -1; quatConj.z *= -1;
    CMQuaternion quat1 = attitude.quaternion;
    CMQuaternion quat2 = [self multiplyQuanternion:quat1 withRight:e];
    CMQuaternion quat3 = [self multiplyQuanternion:quat2 withRight:quatConj];
    return atan2f(quat3.y, quat3.x);
}

- (float)walkaroundAngleFromAttitude:(CMAttitude *)attitude fromHomeAngle:(float)homeangle {
    float rawangle = [self walkaroundRawAngleFromAttitude:attitude];
    if (rawangle < 0) rawangle += M_PI * 2;
    if (homeangle < 0) homeangle += M_PI * 2;
    float finalangle = rawangle - homeangle;
    if (finalangle < 0) finalangle += M_PI * 2;
    return finalangle;
}
This uses some modified and extended code from Finding normal vector to iOS device.
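A minimal sketch of how these methods might be wired into the handler mentioned above (the homeAngle bookkeeping and block variables are assumptions, not part of the original answer):
__block float homeAngle = 0;
__block BOOL haveHome = NO;
CMDeviceMotionHandler handler = ^(CMDeviceMotion *motion, NSError *error) {
    if (!motion) return;
    float angle = [self walkaroundAngleFromAttitude:motion.attitude fromHomeAngle:homeAngle];
    if (!haveHome) {
        // First pass: remember where "zero" is, per the description above
        homeAngle = angle;
        haveHome = YES;
        return;
    }
    // angle is now the walk-around rotation (0 .. 2π) relative to the stored reference
    NSLog(@"walk-around angle: %f degrees", angle * 180.0 / M_PI);
};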
Edit to deal with Question 2 & Problem 2:
This may not be solvable. I've seen it in other apps (360 pano, for example) and have read about faulty gyro readings and such. If you try to compensate for it, you'll of course end up with a jittery experience whenever some genuine rotational movement gets thrown away by the compensation code. As far as I can tell from following this over the last few years, it is a hardware issue.

multiple bone rotation mystery

I am working with quaternions and the XNA skinned model example (for weeks now...). I receive two sets of quaternions from some open-source sensor boards that you can buy these days on the net. I was able to write some code so that I receive these quaternions and can rotate limbs with them. Now my problem is the following. I am using the upper right arm and lower right arm in my example, and I am able to rotate them separately. My initial position is the one depicted below, which is perfect.
http://i.imgur.com/c7qei.png "initial position"
Now, when I rotate my right arm forward, I should end up in the final position shown on the right in the figure below. But somehow the result is the position on the left, even though my real "physical" arm is pointing forward.
http://i.imgur.com/tXCp6.png "ideal final position(right), real wrong position(left)"
Somehow the lower arm does not compensate for the rotation of the upper arm. I am sure I am missing one small step. Below is the crucial part of the code I am using:
protected override void Update(GameTime gameTime)
{
    HandleInput();
    UpdateCamera(gameTime);

    // Read gamepad inputs.
    float initposition = currentGamePadState.ThumbSticks.Right.X;
    float armRotation = Math.Max(currentGamePadState.ThumbSticks.Right.Y, 0);

    // these quaternions are received from bluetooth
    Upper.Z = Fq1;
    Upper.Y = -Fq2;
    Upper.X = -Fq3; // set 1 quaternions
    Upper.W = Fq4;
    //***************************
    forearm.Z = Uq1;
    forearm.Y = -Uq2;
    forearm.X = -Uq3;
    forearm.W = Uq4; // set 2 quaternions

    // set initial position
    if (initialpos == true)
    {
        initposition = 0.9f;
        R_forTransform = Matrix.CreateRotationY(initposition);
        R_forarminderinit = skinningData.BoneIndices["R_UpperArm"];
        L_forTransform = Matrix.CreateRotationY(-initposition);
        L_forTransform = Matrix.CreateRotationX(-initposition);
        L_forTransform = Matrix.CreateRotationZ(-initposition);
        L_forarminderinit = skinningData.BoneIndices["L_UpperArm"];
    }

    // Create rotation matrices for the upper and lower arm bones.
    Matrix upperarmTransform = Matrix.CreateFromQuaternion(Upper);
    Matrix forearmTransform = Matrix.CreateFromQuaternion(forearm);

    animationPlayer.GetBoneTransforms().CopyTo(boneTransforms, 0);
    if (initialpos == true)
    {
        boneTransforms[R_forarminderinit] = R_forTransform * boneTransforms[R_forarminderinit];
        boneTransforms[L_forarminderinit] = L_forTransform * boneTransforms[L_forarminderinit];
    }

    int forearmindex = skinningData.BoneIndices["R_Forearm"];
    int upperarmindex = skinningData.BoneIndices["R_UpperArm"];
    boneTransforms[upperarmindex] = upperarmTransform * boneTransforms[upperarmindex];
    boneTransforms[forearmindex] = (forearmTransform) * boneTransforms[forearmindex];

    animationPlayer.UpdateWorldTransforms(Matrix.Identity, boneTransforms);
    animationPlayer.UpdateSkinTransforms();
    UpdateBoundingSpheres();
    base.Update(gameTime);
}
I would like to ask if you could help me solve this mystery. I hope I have been as clear as possible in describing my problem. Furthermore, I would like to thank you in advance for your effort.
Yours
Dave
It looks to me like you have some mixed-up reference frames. Here's what I think I'm seeing:
Your external sensors report their orientation relative to the world. Your rendering code, on the other hand, deals with the lower arm in the upper arm's reference frame.
If we assume that the initial orientations are q_u = [0,0,0,1] and q_l = [0,0,0,1], then when you rotate your arm to point forward, the new orientations are both [0, .707, 0, .707], or something like that, because both arm segments have experienced a rotation of π/2 relative to the world.
When you render the arm, you rotate the entire arm (not just the upper arm) by q_u. This makes sense, since you want to make sure that the elbow stays connected. But then you rotate the lower arm by q_l, and it ends up rotated twice as far as it should, because q_l already contains the shoulder's rotation. If you were to hold your arm straight but turn your body around, you would see the same thing happen: the upper arm would rotate by the amount of body rotation and the lower arm would rotate by that much again.
Perhaps the easiest way to deal with this is to remove q_u from q_l. If q_k is the rotation of the lower arm relative to the upper arm, then q_k = q_u' * q_l, where q_u' is the inverse of q_u (for a unit quaternion, the conjugate: negate the x, y, and z components).
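A hedged sketch of that step using the question's variable names (whether the inverse goes on the left or the right of the product depends on XNA's quaternion multiplication convention, so treat the operand order as something to verify):
// Remove the shoulder rotation from the sensed forearm quaternion so the
// forearm bone only receives the elbow's rotation relative to the upper arm.
Quaternion upperInverse = Quaternion.Inverse(Upper);   // conjugate for a unit quaternion
Quaternion forearmLocal = upperInverse * forearm;      // q_k = q_u' * q_l (verify operand order)

Matrix upperarmTransform = Matrix.CreateFromQuaternion(Upper);
Matrix forearmTransform  = Matrix.CreateFromQuaternion(forearmLocal);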

setting the angle of a b2revolutejoint

From what I have read, in Box2d, you get the angle of a revolute joint with the GetJointAngle function, but when trying to set the angle the member m_referenceAngle is protected. Can the angle not be programmatically set?
I found that I can apply the angle from one joint to another body as:
float FirstAngle = firstArmJoint->GetJointAngle();
secondArmBody->SetTransform(b2Vec2(750.0 / PTM_RATIO, (520.0f + 100) / PTM_RATIO), FirstAngle);
I put this in ccTouchesMoved so that when the user drags the first object (from which FirstAngle is retrieved) the second object (secondArmBody) is also moved.
What happens is that the second body rotates at the top of the image and not at the anchor point.
Any ideas?
SetTransform() can be used to set the position and rotation of a body. This happens completely independently of any joints on the body. For example, if you want to make sure a body is perfectly upright at a given moment, you can call
body->SetTransform(body->GetPosition(), 0);
passing 0 as the angle value (upright). I've never tried this for a body with a joint on it, but I doubt it would work properly.
When I ran into the problem of having to make a revolute joint point at a certain angle, I solved it by enabling the motor on the joint and adjusting the motor speed until the angle matched the one I wanted. This simulates realistic motion of the joint. Example:
Creating the joint
b2RevoluteJointDef armJointDef;
armJointDef.Initialize(body1, body2,
                       b2Vec2(body1->GetPosition().x,
                              body1->GetPosition().y));
armJointDef.enableMotor = true;
armJointDef.enableLimit = true;
armJointDef.motorSpeed = 0.0f;
armJointDef.maxMotorTorque = 10000.0f;
armJointDef.lowerAngle = CC_DEGREES_TO_RADIANS(lowerArmAngle);
armJointDef.upperAngle = CC_DEGREES_TO_RADIANS(upperArmAngle);
_world->CreateJoint(&armJointDef);
Pointing the joint
float targetAngle = SOME_ANGLE;
b2JointEdge *j = body->GetJointList();
b2RevoluteJoint *r = (b2RevoluteJoint *)j->joint;
if (r->GetJointAngle() > targetAngle) {
    r->SetMotorSpeed(-1);
} else {
    r->SetMotorSpeed(1);
}
Basically, you see what side of the current angle the target angle is on, and then set the motor speed to move the joint in the correct direction. Hope that helps!
http://www.box2d.org/manual.html
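If you want the joint to settle on the target instead of oscillating around it, one common refinement (a sketch of the general technique, not something from the answer above) is to scale the motor speed by the remaining error:
// Proportional control: the motor slows down as the joint approaches the
// target angle, so it settles rather than ping-ponging across it each step.
void DriveJointToAngle(b2RevoluteJoint* joint, float targetAngle)
{
    const float gain = 4.0f;                            // tuning value: higher = snappier
    float error = targetAngle - joint->GetJointAngle();
    joint->SetMotorSpeed(gain * error);                 // radians per second
}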
