Unexpected results from GLKQuaternion conversion (from CMQuaternion) - iOS

I'm working on an iOS app that will use CoreMotion to calculate range of motion.
I quickly abandoned Euler angles due to gimbal lock. So, now I'm trying to use quaternions.
As you probably know, CMMotionManager reports device motion data through the CMDeviceMotion class. The pertinent property here (for my purposes) is the attitude property, an instance of CMAttitude. From this, of course, you can access pitch, roll, and yaw. I still log these values just to get a better idea of the data coming off the device (since they're fairly intuitive to envision). CMAttitude also provides a quaternion property, which is an instance of CMQuaternion.
Based on many hours of research here and elsewhere, I was convinced that using quaternions was the correct approach for getting correct results in any orientation. The problem is that I have found the CMQuaternion class/structure to be very dense and difficult to understand. Apple's documentation is sparse, and I was never able to back out what I considered a valid axis-angle representation from a CMQuaternion. (If someone has the math for calculating an axis-angle representation from a CMQuaternion, I'm all ears! The closest I've come is the textbook conversion sketched below, and I'm not sure it's right.)
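For reference, the textbook formulas for a unit quaternion are angle = 2·acos(w) and axis = (x, y, z)/√(1 − w²). A minimal sketch of that in C (my own, untested; the clamping and the near-identity guard are assumptions):
@import CoreMotion; // for CMQuaternion
#include <math.h>

// Textbook axis-angle extraction from a unit quaternion (a sketch, not an Apple API).
// Assumes the quaternion is normalized, which CMAttitude's quaternion should be.
static void AxisAngleFromCMQuaternion(CMQuaternion q, double axis[3], double *angle) {
    double w = fmax(-1.0, fmin(1.0, q.w));   // clamp against floating-point drift
    *angle = 2.0 * acos(w);
    double s = sqrt(1.0 - w * w);            // equals sin(angle / 2)
    if (s < 1e-6) {
        // Near-identity rotation: the axis is mathematically undefined, so pick one.
        axis[0] = 1.0; axis[1] = 0.0; axis[2] = 0.0;
    } else {
        axis[0] = q.x / s;
        axis[1] = q.y / s;
        axis[2] = q.z / s;
    }
}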
I thought I had this issue solved when I stumbled across Apple's GLKQuaternion structure in their GLKit library. GLKit has methods that provide the axis and angle from an instance of GLKQuaternion. There's even a nice constructor method: GLKQuaternionMake(x, y, z, w).
Since CMQuaternion had x, y, z, and w properties, I reasoned that I could use this method to basically "cast" between instances of CMQuaternion and GLKQuaternion. I'm still not sure if that was correct or not.
In any case, I was logging results from my iPhone when I came across some particularly weird results.
The code I've written is meant to capture an initial attitude (when the user taps a button), then sample the CoreMotion data and determine the difference between the starting position and the current position.
Here's the code:
- (void)sampleDeviceMotion {
    // I've tested this, and "self.startingAttitude" is set/reset correctly
    if (self.startingAttitude == nil) {
        self.startingAttitude = self.motionManager.deviceMotion.attitude;
    }

    CMQuaternion quaternion1 = self.startingAttitude.quaternion;
    GLKQuaternion q1 = GLKQuaternionMake(quaternion1.x, quaternion1.y, quaternion1.z, quaternion1.w);
    GLKVector3 v1 = GLKQuaternionAxis(q1);
    float angle1 = GLKQuaternionAngle(q1);

    CMQuaternion quaternion2 = self.motionManager.deviceMotion.attitude.quaternion;
    GLKQuaternion q2 = GLKQuaternionMake(quaternion2.x, quaternion2.y, quaternion2.z, quaternion2.w);
    GLKVector3 v2 = GLKQuaternionAxis(q2);
    float angle2 = GLKQuaternionAngle(q2);

    float dotProduct = GLKVector3DotProduct(v1, v2);
    float length1 = GLKVector3Length(v1);
    float length2 = GLKVector3Length(v2);
    float cosineOfTheta = dotProduct / (length1 * length2);
    float theta = acosf(cosineOfTheta);
    theta = radiansToDegrees(theta);

    float rotationDelta = angle2 - angle1;
    rotationDelta = radiansToDegrees(rotationDelta);

    printf("v1: (%.02f, %.02f, %.02f) v2: (%.02f, %.02f, %.02f) angle1: %.02f, angle2: %.02f - vectorDelta: %dº. rotationDelta: %dº (pitch: %.02f, roll: %.02f, yaw: %.02f)\n",
           v1.x, v1.y, v1.z, v2.x, v2.y, v2.z, angle1, angle2, (int)theta, (int)rotationDelta,
           self.motionManager.deviceMotion.attitude.pitch, self.motionManager.deviceMotion.attitude.roll, self.motionManager.deviceMotion.attitude.yaw);
}
I've looked that code over 20 times, and I don't see any flaws in it, unless my assumption about the ability to move between CMQuaternion and GLKQuaternion is flawed.
I would expect that when the device is lying flat on a table, tapping the button and leaving the device still would give me results where q1 and q2 are essentially identical. I would understand a slight amount of "wobble" in the data, but if the device doesn't move, the axis and angle of the different quaternions should be (almost) the same.
Sometimes I get that (when the button is tapped and the device is left still), but sometimes the "initial" (self.startingAttitude) and subsequent values are off by a lot (40º-70º).
Also, in one case where the initial and subsequent values were in line, I got a weird result when I then attempted to measure a rotation.
I "spun" the phone (rotation around the "Z" axis) and got the following results:
BEFORE:
v1: (-0.03, -0.03, -1.00) v2: (0.01, -0.04, -1.00) angle1: 0.02, angle2: 0.01 - vectorDelta: 2º. rotationDelta: 0º (pitch: 0.00, roll: -0.00, yaw: -0.01)
AFTER:
v1: (-0.03, -0.03, -1.00) v2: (-0.00, -0.01, 1.00) angle1: 0.02, angle2: 0.14 - vectorDelta: 176º. rotationDelta: 7º (pitch: -0.00, roll: -0.00, yaw: 0.14)
I believe the pitch/roll/yaw data to be correct both times. That is, I imparted a small amount of yaw rotation to the phone. So why did the Z axis flip completely with just a small change in yaw?

Related

How to use OpenCV triangulatePoints and GPS data properly?

I am trying to estimate the 3D position of a world coordinate from 2 frames. The frames are captured with the same camera from different positions. The problem is, the estimation is wrong.
I have
Camera Intrinsic parameters
K = [4708.29296875, 0, 1218.51806640625;
0, 4708.8935546875, 1050.080322265625;
0, 0, 1]
Translation and Rotation data:
Frame X-Coord Y-Coord Z-Coord(altitude) Pitch Roll Yaw
1 353141.23 482097.85 38.678 0.042652439 1.172694124 16.72142499
2 353141.82 482099.69 38.684 0.097542931 1.143224387 16.79931141
Note: the GPS data uses a Cartesian coordinate system; the (X, Y, Z) coordinates are in meters, based on the British National Grid system.
To get the rotation matrix I used
https://stackoverflow.com/a/56666686/16432598 which is based on http://www.tobias-weis.de/triangulate-3d-points-from-3d-imagepoints-from-a-moving-camera/.
Using the above data, I calculate the extrinsic parameters and the projection matrices as follows.
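In other words (my notation, not taken from the linked posts), the relationship I'm relying on is the standard pinhole projection:

$$ s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = P \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}, \qquad P = K \, [\, R \mid t \,] $$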
Rt0 = [-0.5284449976982357, 0.308213375891041, -0.7910438668806931, 353141.21875;
-0.8478960766271159, -0.2384055118949635, 0.4735346398506075, 482097.84375;
-0.04263950806535898, 0.9209600028339713, 0.3873171123665929, 38.67800140380859]
Rt1 = [-0.4590975294881605, 0.3270290779984009, -0.8260032933114635, 353141.8125;
-0.8830316937622665, -0.2699087096524321, 0.3839326975722462, 482099.6875;
-0.097388326965866, 0.905649640091175, 0.4126914624432091, 38.68399810791016]
P = K * Rt;
P1 = [-2540.030877954028, 2573.365272473235, -3252.513377560185, 1662739447.059914;
-4037.427278644764, -155.5442017945203, 2636.538291686695, 2270188044.171295;
-0.04263950806535898, 0.9209600028339713, 0.3873171123665929, 38.67800140380859]
P2 = [-2280.235105924588, 2643.299156802081, -3386.193495224041, 1662742249.915956;
-4260.36781710715, -319.9665173096691, 2241.257388910372, 2270196732.490808;
-0.097388326965866, 0.905649640091175, 0.4126914624432091, 38.68399810791016]
triangulatePoints(Points2d, projection_matrices, out);
Now, I pick the same point in both images for triangulation
p2d_1(205,806) and p2d_2(116,813)
For the 3D position of this particular point I expect something like:
[353143.7, 482130.3, 40.80]
whereas I calculate
[549845.5109014747, -417294.6070425579, -201805.410744677]
I know that my intrinsic parameters and GPS data are very accurate.
Can anybody tell me what is missing, or what I am doing wrong here?
Thanks

How to convert TangoXyzIjData into a matrix of z-values

I am currently using a Project Tango tablet for robotic obstacle avoidance. I want to create a matrix of z-values as they would appear on the Tango screen, so that I can use OpenCV to process the matrix. When I say z-values, I mean the distance each point is from the Tango. However, I don't know how to extract the z-values from the TangoXyzIjData and organize the values into a matrix. This is the code I have so far:
public void action(TangoPoseData poseData, TangoXyzIjData depthData) {
    byte[] buffer = new byte[depthData.xyzCount * 3 * 4];
    FileInputStream fileStream = new FileInputStream(
            depthData.xyzParcelFileDescriptor.getFileDescriptor());
    try {
        fileStream.read(buffer, depthData.xyzParcelFileDescriptorOffset, buffer.length);
        fileStream.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
    Mat m = new Mat(depthData.ijRows, depthData.ijCols, CvType.CV_8UC1);
    m.put(0, 0, buffer);
}
Does anyone know how to do this? I would really appreciate help.
The short answer is it can't be done, at least not simply. The XYZij struct in the Tango API does not work completely yet. There is no "ij" data. Your retrieval of buffer will work as you have it coded. The contents are a set of X, Y, Z values for measured depth points, roughly 10000+ each callback. Each X, Y, and Z value is of type float, so not CV_8UC1. The problem is that the points are not ordered in any way, so they do not correspond to an "image" or xy raster. They are a random list of depth points. There are ways to get them into some xy order, but it is not straightforward. I have done both of these:
render them to an image, with the depth encoded as color, and pull out the image as pixels
use the model/view/perspective from OpenGL and multiply out the locations of each point and then figure out their screen space location (like OpenGL would during rendering). Sort the points by their xy screen space. Instead of the calculated screen-space depth just keep the Z value from the original buffer.
or
wait until (if) the XYZij struct is fixed so that it returns ij values.
I too wish to use Tango for object avoidance for robotics. I've had some success by simplifying the use case to be only interested in the distance of any object located at the center view of the Tango device.
In Java:
private Double centerCoordinateMax = 0.020;
private TangoXyzIjData xyzIjData;

final FloatBuffer xyz = xyzIjData.xyz;
double cumulativeZ = 0.0;
int numberOfPoints = 0;
for (int i = 0; i < xyzIjData.xyzCount; i += 3) {
    float x = xyz.get(i);
    float y = xyz.get(i + 1);
    if (Math.abs(x) < centerCoordinateMax &&
            Math.abs(y) < centerCoordinateMax) {
        float z = xyz.get(i + 2);
        cumulativeZ += z;
        numberOfPoints++;
    }
}
Double distanceInMeters;
if (numberOfPoints > 0) {
    distanceInMeters = cumulativeZ / numberOfPoints;
} else {
    distanceInMeters = null;
}
Said simply, this code takes the average distance over a small square centered at the origin of the x and y axes.
centerCoordinateMax = 0.020 was determined to work based on observation and testing. The square typically contains 50 points in ideal conditions and fewer when held close to the floor.
I've tested this using version 2 of my tango-caminada application and the depth measuring seems quite accurate. Standing 1/2 meter from a doorway, I slid towards the open door and the distance changed from 0.5 meters to 2.5 meters, which is the wall at the end of the hallway.
Simulating a robot being navigated I moved the device towards a trash can in the path until 0.5 meters separation and then rotated left until the distance was more than 0.5 meters and proceeded forward. An oversimplified simulation, but the basis for object avoidance using Tango depth perception.
You can do this by using camera intrinsics to convert XY coordinates to normalized values -- see this post - Google Tango: Aligning Depth and Color Frames - it's talking about texture coordinates but it's exactly the same problem
Once normalized, map to screen space (e.g. 1280x720), and then the Z coordinate can be used to generate a pixel value for OpenCV to chew on. You'll need to decide on your own how to color pixels that don't correspond to depth points, and advisedly do so before you use the depth information to further colorize pixels.
The main thing to remember is that the raw coordinates returned are already using the basis vectors you want, i.e. you do not want the pose attitude or location.
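For what it's worth, the projection step described above is language-agnostic; here is a rough C sketch (the Intrinsics fields follow the usual pinhole convention of fx, fy, cx, cy, and everything here is illustrative rather than taken from the Tango API):
#include <math.h>

/* Rough sketch: project one depth point (X, Y, Z in the camera frame) to pixel
   coordinates using pinhole intrinsics. All names and values are illustrative. */
typedef struct { double fx, fy, cx, cy; int width, height; } Intrinsics;

/* Returns 1 and writes (u, v) if the point lands inside the image; the caller
   can then store Z into a float matrix at row v, column u. */
int projectToPixel(Intrinsics in, float X, float Y, float Z, int *u, int *v) {
    if (Z <= 0.0f) return 0;                 /* behind the camera / invalid depth */
    double un = X / Z, vn = Y / Z;           /* normalized image coordinates */
    int px = (int)lround(in.fx * un + in.cx);
    int py = (int)lround(in.fy * vn + in.cy);
    if (px < 0 || px >= in.width || py < 0 || py >= in.height) return 0;
    *u = px;
    *v = py;
    return 1;
}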

Map device tilt to physicsWorld gravity?

I'm building a "marble" labyrinth game in order to learn spritekit basics.
I want to map the gravity of the game to the tilt of the device. I've been trying to figure out how to do it but I've only been able to map the y axis successfully:
class func obtainGravity(motionManager: CMMotionManager) {
    var vec = CGVectorMake(0, 0)
    if let attitude = motionManager.deviceMotion?.attitude? {
        let y = CGFloat(-attitude.pitch * 2 / M_PI) // This works, it returns 1/-1 when the device is vertical (1 when the home button is upside down)
        let x = CGFloat(attitude.roll * 2 / M_PI) // This doesn't work
        physicsWorld.gravity = CGVectorMake(x, y)
    }
}
I could map the Y axis, which makes the ball go "up" or "down" (relative to portrait mode); however, I don't understand how to map the X axis (the pull from the side of the device).
For example, when the device is lying flat on the table, (x, y) should be (0, 0), and when it's lying on the table screen-down it should also be (0, 0); however, attitude.roll returns -179. Also, if I hold the device vertically (in portrait mode) and turn on my feet while keeping the device still, gravity should remain (x: 0, y: 1), yet x keeps changing because it's based on attitude.roll.
How can this be done?
The most straightforward way would be to take accelerometer updates, not device motion updates, and directly read the gravity vector from there — that's exactly what the accelerometer captures: a measure of gravity in the device's coordinate system.
Sadly I'm a Swift thickie so I can't provide example code, but you're looking to provide a block of type CMAccelerometerHandler, which will receive a CMAccelerometerData object from which you can obtain a CMAcceleration struct; that is, directly, the gravity vector to apply.
if let data = motionManager.accelerometerData? {
    vec = CGVectorMake(CGFloat(data.acceleration.x), CGFloat(data.acceleration.y))
}
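A rough Objective-C sketch of the same idea (this assumes it runs inside the SKScene subclass, that self.motionManager already exists, and that scaling to SpriteKit's default gravity of roughly 9.8 is appropriate; it is untested):
// Rough sketch: feed raw accelerometer readings into SpriteKit's physics gravity.
self.motionManager.accelerometerUpdateInterval = 1.0 / 60.0;
[self.motionManager startAccelerometerUpdatesToQueue:[NSOperationQueue mainQueue]
                                         withHandler:^(CMAccelerometerData *data, NSError *error) {
    if (data == nil) { return; }
    CMAcceleration a = data.acceleration;   // roughly the gravity vector, in g's, device coordinates
    // Scale from g's to SpriteKit's default units (about 9.8 m/s²).
    self.physicsWorld.gravity = CGVectorMake(a.x * 9.8, a.y * 9.8);
}];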

Issue with GLKVector2's

I'm having trouble setting up vectors for an object in my code. I tried modeling my code on the answer to this question: Game enemy move towards player, except that I'm using GLKVector2s. I thought my implementation seemed correct, but it's really only my first time using vectors with GLKit, and in general I haven't used them much before.
My current code looks something like:
GLKVector2 vector = GLKVector2Make(self.player.position.x - self.target.position.x, self.player.position.y - self.target.position.y);
float hypo = sqrt(vector.x*vector.x + vector.y*vector.y);
float speed = 0.25;
vector = GLKVector2Make(vector.x/hypo, vector.y/hypo);
vector = GLKVector2MultiplyScalar(vector, speed);
GLKVector2 sum = GLKVector2Add(vector, self.target.position);
self.target.moveVelocity = sum;
Is it possible that my logic just isn't correct here? I'd appreciate any help or suggestions. Thanks!
EDIT: just for clarification since I didn't really explain what happens.. Basically the "enemy" shapes either stutter/jump or just stick. They aren't moving toward the other object at all.
EDIT 2:
If I try using GLKVector2Normalize, then nothing moves. If I do something like:
GLKVector2 vector = GLKVector2Make(self.player.position.x - self.target.position.x, self.player.position.y - self.target.position.y);
float speed = 0.10;
// float distance = GLKVector2Distance(self.player.position, self.target.position);
// vector = GLKVector2Normalize(vector);
vector = GLKVector2MultiplyScalar(vector, speed);
self.target.moveVelocity = vector;
Then the movement works toward the player object, but it only updates once, even though it should be updating every second.
Two things:
There's no need to calculate the magnitude of the vector and do the division yourself: GLKit has a GLKVector2Normalize function, which takes a vector and returns a vector in the same direction with length 1. You can then use GLKVector2MultiplyScalar (as you do) to set the speed.
Your target's velocity should be set to vector, not sum, assuming that in the target's update method (which you should call once per timestep) you add self.moveVelocity.x to self.position.x and self.moveVelocity.y to self.position.y. As it is now, your sum variable holds the position your target should have one timestep in the future, not its velocity.
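Putting both points together, a rough sketch (assuming position and moveVelocity are GLKVector2 properties, which the question's code doesn't fully show):
float speed = 0.25f;

// When (re)computing the direction toward the player:
GLKVector2 toPlayer = GLKVector2Subtract(self.player.position, self.target.position);
GLKVector2 direction = GLKVector2Normalize(toPlayer);          // unit-length direction
self.target.moveVelocity = GLKVector2MultiplyScalar(direction, speed);

// Then, in the target's per-timestep update method:
self.target.position = GLKVector2Add(self.target.position, self.target.moveVelocity);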

Attitude change - angles and axis issue - quaternion math

I have an app that records angles as user is walking around an object, while pointing device (preferably) at the center of the object.
Angle gets reset on user's command - so reference attitude gets reset.
Using Euler angles produces gimbal lock, so I am currently using quaternions and calculating angles this way:
double angleFromLastPosition = acos(fromLastPositionAttitude.quaternion.w) * 2.0f;
This gives good precision and it works perfectly if the device's pitch and yaw do not change. In other words, when the angle shows 360 degrees I end up in the same place as the start of the circle.
Problem 1: if the device's yaw and pitch change slightly (the user isn't pointing directly at the center of the object), so does angleFromLastPosition.
I understand this part, as my angle formula just shows the angle in between two device attitudes in 3D space.
Scenario:
I mark the start of rotation attitude and start moving in a circle around the object while pointing at the center
I stop at, say, 45 degrees and change the pitch of the device by pointing it higher or lower. The angle changes accordingly.
What I would love to see is: angle stays at 45 degrees, even if pitch or yaw changes.
Question 1 is: how can I calculate only the roll of the device using quaternions, and disregard changes in the other two axes (at least within some reasonable number of degrees)?
Problem 2: if I rotate for a bit and then freeze the device completely (on a tripod, so there's no shaking at all), the angleFromLastPosition drifts at a rate of about 1 degree per 10-20 seconds, and the drift does not appear to be linear. In other words, it drifts quickly at first, then slows down considerably. Sometimes I get no drift at all; the angle is rock-solid while the device is stationary. This leaves me at a loss as to what's going on.
Question 2, what is going on here, and how can I take care of the drift?
I went through quite a few articles and tutorials, and quaternion math is beyond me at the moment. I hope someone will be able to help with a tip, a link, or a few lines of code.
I have tested this and it seems to work according to what you're looking for in Question 1, Andrei.
I set the homeangle initially 0, and immediately after the first pass I store the angle returned from walkaroundAngleFromAttitude:fromHomeAngle: in homeangle, for future use.
My testing included starting the device updates using a reference frame:
[_motionManager
startDeviceMotionUpdatesUsingReferenceFrame:CMAttitudeReferenceFrameXArbitraryZVertical
toQueue:operationQueue
withHandler:handler];
and using the following methods called within handler:
- (CMQuaternion)multiplyQuanternion:(CMQuaternion)left withRight:(CMQuaternion)right {
    CMQuaternion newQ;
    newQ.w = left.w*right.w - left.x*right.x - left.y*right.y - left.z*right.z;
    newQ.x = left.w*right.x + left.x*right.w + left.y*right.z - left.z*right.y;
    newQ.y = left.w*right.y + left.y*right.w + left.z*right.x - left.x*right.z;
    newQ.z = left.w*right.z + left.z*right.w + left.x*right.y - left.y*right.x;
    return newQ;
}

- (float)walkaroundRawAngleFromAttitude:(CMAttitude *)attitude {
    CMQuaternion e = (CMQuaternion){0, 0, 1, 1};
    CMQuaternion quatConj = attitude.quaternion;
    quatConj.x *= -1; quatConj.y *= -1; quatConj.z *= -1;
    CMQuaternion quat1 = attitude.quaternion;
    CMQuaternion quat2 = [self multiplyQuanternion:quat1 withRight:e];
    CMQuaternion quat3 = [self multiplyQuanternion:quat2 withRight:quatConj];
    return atan2f(quat3.y, quat3.x);
}

- (float)walkaroundAngleFromAttitude:(CMAttitude *)attitude fromHomeAngle:(float)homeangle {
    float rawangle = [self walkaroundRawAngleFromAttitude:attitude];
    if (rawangle < 0) rawangle += M_PI * 2;
    if (homeangle < 0) homeangle += M_PI * 2;
    float finalangle = rawangle - homeangle;
    if (finalangle < 0) finalangle += M_PI * 2;
    return finalangle;
}
This is using some modified and extended code from Finding normal vector to iOS device
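For completeness, a rough sketch of how the handler might tie these together (homeAngle and hasHomeAngle are assumed properties, not part of the original code):
// Hedged sketch of the handler wiring described above.
CMDeviceMotionHandler handler = ^(CMDeviceMotion *motion, NSError *error) {
    if (motion == nil) { return; }
    float angle = [self walkaroundAngleFromAttitude:motion.attitude
                                      fromHomeAngle:self.homeAngle];
    if (!self.hasHomeAngle) {
        // First pass: keep the returned angle as the reference for later readings.
        self.homeAngle = angle;
        self.hasHomeAngle = YES;
    }
    NSLog(@"walkaround angle: %.1f degrees", angle * 180.0f / M_PI);
};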
Edit to deal with Question 2 & Problem 2:
This may not be solvable. I've seen it in other apps (360 pano, for example) and have read about faulty gyro readings and the like. If you tried to compensate for it, you would of course end up with a jittery experience whenever some authentic rotational movement gets tossed out by the compensation code. As far as I can tell from following this over the last few years, it is a hardware-based issue.
