I am using the iOS Core Motion framework to detect whether the device is tilted forward or backward. See image for details: http://i.stack.imgur.com/2Ojw5.jpg
Using the pitch value I can detect this movement, but I cannot distinguish between forward and backward.
More details:
I am trying to detect whether there is a movement (tilting forward and backward) in either the forward area or the backward area (see updated sketch).
The problem with the pitch is that it starts at a value of about 1.6 when the device is in an upright position, and the value decreases in the same way when I tilt it towards a horizontal position, whether forward or backward. The same behavior applies to the accelerometer y value.
It looks like I am missing something in the whole Core Motion thing. Any ideas?
Thanks, Christian
Using attitude.pitch, leaning forward and backward are indistinguishable. However, with quaternions you can calculate a pitch that, once converted from radians to degrees, behaves as follows:
0 means the device is on its back
90 means it's standing up
180 means it's on its face
The opposite hemisphere of rotation is 0 to -180. Here's the code:
func radiansToDegrees(_ radians: Double) -> Double {
    return radians * (180.0 / Double.pi)
}
let quat = motionData.attitude.quaternion
let qPitch = CGFloat(radiansToDegrees(atan2(2 * (quat.x * quat.w + quat.y * quat.z), 1 - 2 * quat.x * quat.x - 2 * quat.z * quat.z)))
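In case it helps, here is a minimal sketch (my own, not from the original answer) of how that snippet might be driven from a CMMotionManager update callback; the motionManager instance and the use of the main queue are assumptions:
import CoreMotion

let motionManager = CMMotionManager()
motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
motionManager.startDeviceMotionUpdates(to: .main) { motionData, _ in
    guard let motionData = motionData else { return }
    let quat = motionData.attitude.quaternion
    let qPitch = radiansToDegrees(atan2(2 * (quat.x * quat.w + quat.y * quat.z),
                                        1 - 2 * quat.x * quat.x - 2 * quat.z * quat.z))
    // ~0° on its back, ~90° standing up, ~180° on its face; negative values
    // cover the opposite hemisphere, so forward and backward are distinguishable.
    print("quaternion pitch: \(qPitch)°")
}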
Try this:
// Create a CMMotionManager
CMMotionManager *mManager = [(AppDelegate *)[[UIApplication sharedApplication] delegate] sharedManager];

// Check whether the accelerometer is available
if ([mManager isAccelerometerAvailable] == YES) {
    [mManager setAccelerometerUpdateInterval:.02];
    [mManager startAccelerometerUpdatesToQueue:[NSOperationQueue mainQueue] withHandler:^(CMAccelerometerData *accelerometerData, NSError *error) {
        [self updateGravityWithAccelerometerX:accelerometerData.acceleration.x y:accelerometerData.acceleration.y z:accelerometerData.acceleration.z];
    }];
}
This will call updateGravityWithAccelerometerX:y:z: every 0.02 seconds. You should be able to create that method and use NSLog to watch the values change to decipher what you are looking for. I believe you are looking for the acceleration.y value.
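If it helps, here is a minimal Swift sketch (mine, not from the answer above) of the kind of handler being described; the tilt-direction logic relies on the sign of acceleration.z, which is an assumption worth verifying on a device:
// Called from the accelerometer handler; x, y, z are the raw acceleration values.
func updateGravity(x: Double, y: Double, z: Double) {
    // y alone is symmetric for forward/backward tilt (as the question observes);
    // the sign of z is what differs between the two directions.
    if y > -0.9 { // the device has left the upright position (assumed threshold)
        let direction = z > 0 ? "forward (screen toward the floor)" : "backward (screen toward the sky)"
        print("Tilting \(direction), y: \(y), z: \(z)")
    }
}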
You are right, pitch works that way, considering the 4 typical quadrants (http://en.wikipedia.org/wiki/Quadrant_(plane_geometry)) and the counterclockwise direction:
Quadrant I, values range from 0 to PI/2 (or 0 to 90 in degrees).
Quadrant II, values range from PI/2 to 0 (or 90 to 0 in degrees).
Quadrant III, values range from 0 to -PI/2 (or 0 to -90 in degrees).
Quadrant IV, values range from -PI/2 to 0 (or -90 to 0 in degrees).
Considering this, it seems pretty obvious that you cannot differentiate between the phone leaning forwards or backwards.
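One common workaround (a sketch of my own, not part of this answer) is to keep using pitch for the amount of tilt and use the sign of gravity.z from CMDeviceMotion to tell the two hemispheres apart:
import CoreMotion

let manager = CMMotionManager()
manager.deviceMotionUpdateInterval = 1.0 / 60.0
manager.startDeviceMotionUpdates(to: .main) { motion, _ in
    guard let m = motion else { return }
    let pitch = m.attitude.pitch          // symmetric: same value leaning forward or backward
    let leaningForward = m.gravity.z > 0  // screen turning toward the floor (assumes portrait use)
    print("pitch: \(pitch) rad, leaning \(leaningForward ? "forward" : "backward")")
}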
I have recently faced the same problem for an iOS app that counts the number of flips the phone does. Apple rejected it, so I published it on GitHub; it may be useful for you:
Flip Your Phone! -
https://github.com/apascual/flip-your-phone
You don't want to read the accelerometer for tilt. The accelerometer is for detecting differences in movement. You want the gyroscope so you can determine the absolute attitude (i.e. yaw, pitch and roll). In your case it sounds like you just want roll.
Use startDeviceMotionUpdatesToQueue and then attitude.roll for front and back and attitude.pitch for side to side. Here is what I did in Swift:
func motion(data: CMDeviceMotion) {
    let pitch = data.attitude.pitch
    let roll = data.attitude.roll
    let dampener: Float = -0.25 // the ball was rolling too fast
    let forward_force = Float(1.6 - roll) * dampener // 1.6 is vertical
    let side_force = Float(pitch) * dampener // 0 is untilted when rotating cw/ccw
    ballNode.physicsBody?.applyForce(SCNVector3Make(side_force, 0, forward_force), atPosition: SCNVector3Make(0, 0, 0), impulse: true)
}
With this you can see whether it is tilted forward or backward based on whether the roll is greater or less than 1.6, which is approximately straight up.
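As a tiny illustration (my own), the check could look like this inside the handler above; which side of 1.6 maps to forward versus backward depends on how the device is held, so verify on hardware:
let upright = 1.6 // roughly vertical, per the note above
if data.attitude.roll > upright {
    // tilted past vertical in one direction
} else if data.attitude.roll < upright {
    // tilted the other way
}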
I'm trying to get the four vectors that make up the boundaries of the frustum in ARKit, and the solution I came up with is as follows:
Find the field of view angles of the camera
Then find the direction and up vectors of the camera
Using this information, find the four vectors using cross products and rotations
This may be a sloppy way of doing it; however, it is the best one I have so far.
I am able to get the FOV angles and the direction vector from the ARCamera.intrinsics and ARCamera.transform properties. However, I don't know how to get the up vector of the camera at this point.
Below is the piece of code I use to find the FOV angles and the direction vector:
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    if xFovDegrees == nil || yFovDegrees == nil {
        let imageResolution = frame.camera.imageResolution
        let intrinsics = frame.camera.intrinsics
        xFovDegrees = 2 * atan(Float(imageResolution.width) / (2 * intrinsics[0,0])) * 180 / Float.pi
        yFovDegrees = 2 * atan(Float(imageResolution.height) / (2 * intrinsics[1,1])) * 180 / Float.pi
    }

    let cameraTransform = SCNMatrix4(frame.camera.transform)
    let cameraDirection = SCNVector3(-1 * cameraTransform.m31,
                                     -1 * cameraTransform.m32,
                                     -1 * cameraTransform.m33)
}
I am also open to suggestions for other ways to find the four vectors I'm trying to get.
I had not understood how this line worked:
let cameraDirection = SCNVector3(-1 * cameraTransform.m31,
                                 -1 * cameraTransform.m32,
                                 -1 * cameraTransform.m33)
This gives the direction vector of the camera because the 3rd row of the transformation matrix gives where the new z-direction of the transformed camera points. We multiply it by -1 because the default direction of the camera is the negative z-axis.
Considering this information and the fact that the default up vector for a camera is the positive y-axis, the 2nd row of the transformation matrix gives us the up vector of the camera. The following code gives me what I want:
let cameraUp = SCNVector3(cameraTransform.m21,
                          cameraTransform.m22,
                          cameraTransform.m23)
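For what it's worth, here is a consolidated sketch (my own) pulling all three camera basis vectors out of the same transform; together with the tangents of the half-FOV angles from the question, these give the directions of the four frustum corner rays:
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    let t = SCNMatrix4(frame.camera.transform)

    // Rows of the rotation part are the camera's axes expressed in world space.
    let cameraRight     = SCNVector3(t.m11, t.m12, t.m13)     // +x axis
    let cameraUp        = SCNVector3(t.m21, t.m22, t.m23)     // +y axis
    let cameraDirection = SCNVector3(-t.m31, -t.m32, -t.m33)  // the camera looks down -z

    // Corner rays ≈ cameraDirection ± tan(xFov/2) * cameraRight ± tan(yFov/2) * cameraUp
}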
It could be that I'm misunderstanding what you're trying to do, but I'd like to offer an alternative solution (the method and result is different than your answer).
For my purposes, I define the up vector as (0, 1, 0) when the phone is pointing straight up - basically I want the unit vector that is pointing straight out of the top of the phone. ARKit defines the up vector as (0, 1, 0) when the phone is horizontal to the left - so the y-axis is pointing out of the right side of the phone - supposedly because they expect AR apps to prefer horizontal orientation.
camera.transform returns the camera's orientation relative to its initial orientation when the AR session started. It is a 4x4 matrix - the first 3x3 of which is the rotation matrix - so when you write cameraTransform.m21 etc. you are referencing part of the rotation matrix, which is NOT the same as the up vector (however you define it).
So if I define the up vector as the unit y-vector where the y axis is pointing out of the top of the phone, I have to write this as (-1, 0, 0) in ARKit space. Then simply multiplying this vector (slightly modified... see below) by the camera's transform will give me the "up vector" that I'm looking for. Below is an example of using this calculation in a ARSessionDelegate callback.
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    // the unit y vector is appended with an extra element
    // for multiplying with the 4x4 transform matrix
    let unitYVector = float4(-1, 0, 0, 1)
    let upVectorH = frame.camera.transform * unitYVector

    // drop the 4th element
    let upVector = SCNVector3(upVectorH.x, upVectorH.y, upVectorH.z)
}
You can use let unitYVector = float4(0, 1, 0, 1) if you are working with ARKit's horizontal orientation.
You can also do the same sort of calculation to get the "direction vector" (pointing out of the front of the phone) by multiplying unit vector (0, 0, 1, 1) by the camera transform.
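For completeness, the direction-vector version could look like this (a sketch of mine); note that with 1 as the fourth component the camera's translation is baked in, exactly as in the up-vector code above, so I use 0 instead to get a pure direction once the camera has moved away from the origin:
// w = 0 keeps the translation out, giving a pure direction vector
let unitZVector = float4(0, 0, 1, 0)
let dirVectorH = frame.camera.transform * unitZVector
let directionVector = SCNVector3(dirVectorH.x, dirVectorH.y, dirVectorH.z)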
I want to get the azimuth from the back of the phone (-Z axis) for an augmented reality app. My application only runs in Landscape Right. Testing this on iPhone 5S.
Currently, I'm using the following approach:
CoreLocation heading based on back camera (Augmented reality)
I have 2 problems with this approach:
If I'm pointing the back of the device towards north such that I'm currently at 0 degrees, then rotate it clockwise (yaw) a full 360 degrees, I'm now at -20 degrees. Counterclockwise rotations add 20 degrees. This pattern repeats itself such that rotating 720 degrees from 0 now yields -40 degrees and so on. Also, even if I don't necessarily do these clear rotations, but instead move the phone chaotically (spinning, shaking, etc), but end up in the same spot where I was initially, I can't even predict what value it will show.
The other problem is what I think is called gyro drift. If I don't move the device at all, I can clearly see how the value slowly changes over time, by let's say 0.1 degrees every few seconds, sometimes in one direction, sometimes the other, until a certain point where it decides to stop.
The problem is, I don't have the mathematical background to know how to account for these changes. It's especially problematic that I can't seem to compute the rotation matrix from yaw/pitch/roll from deviceMotion.attitude. I tried:
float w = motion.attitude.yaw;
float v = motion.attitude.pitch;
float u = motion.attitude.roll;
r.m11 = cos(v) * cos(w);
r.m12 = sin(u) * sin(v) * cos(w) - cos(u) * sin(w);
r.m13 = sin(u) * sin(w) + cos(u) * sin(v) * cos(w);
r.m21 = cos(v) * sin(w);
r.m22 = cos(u) * cos(w) + sin(u) * sin(v) * sin(w);
r.m23 = cos(u) * sin(v) * sin(w) - sin(u) * cos(w);
r.m31 = -sin(v);
r.m32 = sin(u) * cos(v);
r.m33 = cos(u) * cos(v);
I've tried every Tait–Bryan combination (u-v-w, u-w-v, v-u-w, v-w-u, w-v-u, w-u-v), some of them came close, but still not close enough.
From my observations, it seems like the magneticHeading from CLLocationManager is much more accurate than the computed heading from CMMotionManager, but again, even if I got the correct angle, I don't know where I should start to get the equivalent angle in a different coordinate system reference frame. Any help would be greatly appreciated.
When loading a screen in FaceUp Orientation I need to know the angle of the iPhone.
The iPhone is flat on the table but I just need to know if it is in vertical or horizontal position.
I can't use statusBarOrientation since I have a fixed orientation; the orientation of the status bar is always the same.
This may be a good time to use Core Motion. It looks like reading CMRotationRate may give you what you want:
From the docs:
/*
* CMRotationRate
*
* Discussion:
* A structure containing 3-axis rotation rate data.
*
* Fields:
* x:
* X-axis rotation rate in radians/second. The sign follows the right hand
* rule (i.e. if the right hand is wrapped around the X axis such that the
* tip of the thumb points toward positive X, a positive rotation is one
* toward the tips of the other 4 fingers).
* y:
* Y-axis rotation rate in radians/second. The sign follows the right hand
* rule (i.e. if the right hand is wrapped around the Y axis such that the
* tip of the thumb points toward positive Y, a positive rotation is one
* toward the tips of the other 4 fingers).
* z:
* Z-axis rotation rate in radians/second. The sign follows the right hand
* rule (i.e. if the right hand is wrapped around the Z axis such that the
* tip of the thumb points toward positive Z, a positive rotation is one
* toward the tips of the other 4 fingers).
*/
Quick example of how to get these values:
private lazy var motionManager: CMMotionManager = {
    return CMMotionManager()
}()

// An operation queue for the motion updates (assumed here; any queue works)
private let opQueue = OperationQueue()

func recordMotion() {
    motionManager.startDeviceMotionUpdates(to: opQueue) { deviceMotion, error in
        if let motion = deviceMotion {
            print(motion.rotationRate.x)
            print(motion.rotationRate.y)
            print(motion.rotationRate.z)
        }
    }
}
Trying to use Core Motion to correctly rotate a SceneKit camera. The scene I've built is rather simple ... all I do is create a bunch of boxes, distributed in an area, and the camera just points down the Z axis.
Unfortunately, the data coming back from device motion doesn't seem to relate to the device's physical position and orientation in any way. It just seems to meander randomly.
As suggested in this SO post, I'm passing the attitude's quaternion directly to the camera node's orientation property.
Am I misunderstanding what data Core Motion is giving me here? Shouldn't the attitude reflect the device's physical orientation? Or is it incremental movement and I should be building upon the prior orientation?
This snippet here might help you:
let motionManager = CMMotionManager()
motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
motionManager.startDeviceMotionUpdates(to: OperationQueue.main) { motion, error in
    guard let currentAttitude = motion?.attitude else { return }
    let roll = Float(currentAttitude.roll) + (0.5 * Float.pi)
    let yaw = Float(currentAttitude.yaw)
    let pitch = Float(currentAttitude.pitch)
    self.cameraNode.eulerAngles = SCNVector3(x: -roll, y: yaw, z: -pitch)
}
This setting is for the device in landscape right. You can play around with different orientations by changing the + and - signs.
Import CoreMotion.
For anyone who stumbles on this, here's a more complete answer so you can understand the need for negations and pi/2 shifts. You first need to know your reference frame. Spherical coordinate systems define points as vectors angled away from the z- and x- axes. For the earth, let's define the z-axis as the line from the earth's center to the north pole and the x-axis as the line from the center through the equator at the prime meridian (mid-Africa in the Atlantic).
For (lat, lon, alt), we can then define roll and yaw around the z- and y- axes in radians:
let roll = lon * Float.pi / 180
let yaw = (90 - lat) * Float.pi / 180
I'm pairing roll, pitch, and yaw with z, x, and y, respectively, as defined for eulerAngles.
The extra 90 degrees accounts for the north pole being at 90 degrees latitude instead of zero.
To place my SCNCamera on the globe, I used two SCNNodes: an 'arm' node and the camera node:
let scnCamera = SCNNode()
scnCamera.camera = SCNCamera()
scnCamera.position = SCNVector3(x: 0.0, y: 0.0, z: alt + EARTH_RADIUS)

let scnCameraArm = SCNNode()
scnCameraArm.position = SCNVector3(x: 0, y: 0, z: 0)
scnCameraArm.addChildNode(scnCamera)
The arm is positioned at the center of the earth, and the camera is placed alt + EARTH_RADIUS away, i.e. the camera is now at the north pole. To move the camera on every location update, we can now just rotate the arm node with new roll and yaw values:
scnCameraArm.eulerAngles.z = roll
scnCameraArm.eulerAngles.y = yaw
Without changing the camera's orientation, its virtual lens is always facing the ground and its virtual 'up' direction is pointed westward.
To change the virtual camera's orientation, the CMMotion callback returns a CMAttitude with roll, pitch, and yaw values relative to a different z- and x-axis reference of your choosing. The magnetometer-based ones use a z-axis pointed away from gravity and an x-axis pointed at the north pole. So a phone with zero pitch, roll, and yaw would have its screen facing away from gravity, its back camera pointed at the ground, and the right side of its portrait mode facing north. Notice that this orientation is relative to gravity, not to the phone's portrait/landscape mode (which is also relative to gravity). So portrait/landscape is irrelevant.
If you imagine the phone's camera in this orientation near the north pole on the prime meridian, you'll notice that the CMMotion reference is in a different orientation than the virtual camera (SCNCamera). Both cameras are facing the ground, but their respective y-axes (and x) are 180 degrees apart. To line them up, we need to spin one around its respective z-axis, i.e. add/subtract 180 degrees to the roll ...or, since they're expressed in radians, negate them for the same effect.
Also, as far as I can tell, CMAttitude doesn't explicitly document that its roll value means a rotation about the z-axis coming out of the phone's screen, and from experimenting, it seems that attitude.roll and attitude.yaw have opposite definitions than defined in eulerAngles, but maybe this is an artifact of the order that the rotational transformations are applied in virtual space with eulerAngles (?). Anyway, the callback:
motionManager?.startDeviceMotionUpdates(using: .xTrueNorthZVertical, to: OperationQueue.main, withHandler: { (motion: CMDeviceMotion?, err: Error?) in
    guard let m = motion else { return }
    scnCamera.eulerAngles.z = Float(m.attitude.yaw - Double.pi)
    scnCamera.eulerAngles.x = Float(m.attitude.pitch)
    scnCamera.eulerAngles.y = Float(m.attitude.roll)
})
You can also start with a different reference frame for your virtual camera, e.g. z-axis pointing through the prime meridian at the equator and x-axis pointing through the north pole (i.e. the CMMotion reference), but you'll still need to invert the longitude somewhere.
With this set up, you can build a scene heavily reliant on GPS locations pretty easily.
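As a usage sketch of my own (EARTH_RADIUS, scnCamera and scnCameraArm are the names assumed above, and CoreLocation/SceneKit are imported), a CLLocationManager callback could drive the whole thing like this:
func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
    guard let location = locations.last else { return }
    let lat = Float(location.coordinate.latitude)
    let lon = Float(location.coordinate.longitude)
    let alt = Float(location.altitude)

    // Same conversion as above: roll about z from longitude, yaw about y from latitude.
    let roll = lon * Float.pi / 180
    let yaw = (90 - lat) * Float.pi / 180

    scnCamera.position = SCNVector3(x: 0, y: 0, z: alt + EARTH_RADIUS)
    scnCameraArm.eulerAngles.z = roll
    scnCameraArm.eulerAngles.y = yaw
}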
I'm developing an AR app using the gyro. I have used Apple's code example pARk. It uses the rotation matrix to calculate the position of the coordinates and it does this really well, but now I'm trying to implement a "radar" and I need to rotate it as a function of the device heading. I'm using the CLLocationManager heading but it's not correct.
The question is: how can I get the heading of the device using the CMAttitude so that it reflects exactly what I see on the screen?
I'm new to rotation matrices and that kind of thing.
This is part of the code used to calculate the AR coordinates. Update the cameraTransform with the attitude:
CMDeviceMotion *d = motionManager.deviceMotion;
if (d != nil) {
    CMRotationMatrix r = d.attitude.rotationMatrix;
    transformFromCMRotationMatrix(cameraTransform, &r);
    [self setNeedsDisplay];
}
and then in the drawRect code:
mat4f_t projectionCameraTransform;
multiplyMatrixAndMatrix(projectionCameraTransform, projectionTransform, cameraTransform);

int i = 0;
for (PlaceOfInterest *poi in [placesOfInterest objectEnumerator]) {
    vec4f_t v;
    multiplyMatrixAndVector(v, projectionCameraTransform, placesOfInterestCoordinates[i]);

    float x = (v[0] / v[3] + 1.0f) * 0.5f;
    float y = (v[1] / v[3] + 1.0f) * 0.5f;

    // ... position the POI view using x and y ...
    i++;
}
I also rotate the view with the pitch angle.
The motion updates are started using the true north reference frame:
[motionManager startDeviceMotionUpdatesUsingReferenceFrame:CMAttitudeReferenceFrameXTrueNorthZVertical];
So I think it must be possible to get the "roll"/heading of the device in any position (with any pitch and yaw...), but I don't know how.
There are a few ways to calculate heading from the rotation matrix returned by CMDeviceMotion. This assumes you use the same definition as Apple's compass, where the +y direction (top of the iPhone) pointing due north returns a heading of 0, and rotating the iPhone to the right increases the heading, so East is 90, South is 180, and so forth.
First, when you start updates, be sure to check to make sure headings are available:
if (([CMMotionManager availableAttitudeReferenceFrames] & CMAttitudeReferenceFrameXTrueNorthZVertical) != 0) {
...
}
Next, when you start the motion manager, ask for attitude as a rotation from X pointing true North (or Magnetic North if you need that for some reason):
[motionManager startDeviceMotionUpdatesUsingReferenceFrame:CMAttitudeReferenceFrameXTrueNorthZVertical
                                                   toQueue:self.motionQueue
                                               withHandler:dmHandler];
When the motion manager reports a motion update, you want to find out how much the device has rotated in the X-Y plane. Since we are interested in the top of the iPhone, we'll pick a point in that direction and rotate it using the returned rotation matrix to get the point after rotation:
[m11 m12 m13] [0] [m12]
[m21 m22 m23] [1] = [m22]
[m31 m32 m33] [0] [m32]
The funky brackets are matrices; it's the best I can do using ASCII. :)
The heading is the angle between the rotated point and true North. We can use the X and Y coordinates of the rotated point to extract the arc tangent, which gives the angle between the point and the X axis. This is actually 180 degrees off from what we want, so we have to adjust accordingly. The resulting code looks like this:
CMDeviceMotionHandler dmHandler = ^(CMDeviceMotion *aMotion, NSError *error) {
    // Check for an error.
    if (error) {
        // Add error handling here.
    } else {
        // Get the rotation matrix.
        CMAttitude *attitude = self.motionManager.deviceMotion.attitude;
        CMRotationMatrix rm = attitude.rotationMatrix;

        // Get the heading.
        double heading = M_PI + atan2(rm.m22, rm.m12);
        heading = heading * 180 / M_PI;
        printf("Heading: %5.0f\n", heading);
    }
};
There is one gotcha: if the top of the iPhone is pointed straight up or straight down, the heading is undefined. The result is that m12 and m22 are zero, or very close to it. You need to decide what this means for your app and handle the condition accordingly. You might, for example, switch to a heading based on the -Z axis (behind the iPhone) when m12*m12 + m22*m22 is close to zero.
This all assumes you want to rotate about the X-Y plane, as Apple usually does for their compass. It works because you are using the rotation matrix returned by the motion manager to rotate a vector pointed along the Y axis, which is this matrix:
[0]
[1]
[0]
To rotate a different vector--say, one pointed along -Z--use a different matrix, like
[0]
[0]
[-1]
Of course, you also have to take the arc tangent in a different plane, so instead of
double heading = M_PI + atan2(rm.m22, rm.m12);
you would use
double heading = M_PI + atan2(-rm.m33, -rm.m13);
to get the rotation in the X-Z plane.
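Putting the two cases together, a Swift sketch of my own that falls back to the -Z heading when the Y-axis heading becomes ill-defined might look like this:
import CoreMotion

func heading(from rm: CMRotationMatrix) -> Double {
    // Squared length of the rotated Y axis projected onto the world X-Y plane;
    // near zero means the top of the phone points almost straight up or down.
    let planarLength = rm.m12 * rm.m12 + rm.m22 * rm.m22
    let radians: Double
    if planarLength > 1e-4 {
        radians = Double.pi + atan2(rm.m22, rm.m12)   // heading of the +Y axis (top of the phone)
    } else {
        radians = Double.pi + atan2(-rm.m33, -rm.m13) // fall back to the -Z axis (behind the phone)
    }
    return radians * 180 / Double.pi
}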