When loading a screen in FaceUp Orientation I need to know the angle of the iPhone.
The iPhone is flat on the table but I just need to know if it is in vertical or horizontal position.
I can't use statusBarOrientation since my app has a fixed orientation, so the status bar orientation is always the same.
This may be a good time to use Core Motion. It looks like reading CMRotationRate may give you what you want:
From the docs:
/*
* CMRotationRate
*
* Discussion:
* A structure containing 3-axis rotation rate data.
*
* Fields:
* x:
* X-axis rotation rate in radians/second. The sign follows the right hand
* rule (i.e. if the right hand is wrapped around the X axis such that the
* tip of the thumb points toward positive X, a positive rotation is one
* toward the tips of the other 4 fingers).
* y:
* Y-axis rotation rate in radians/second. The sign follows the right hand
* rule (i.e. if the right hand is wrapped around the Y axis such that the
* tip of the thumb points toward positive Y, a positive rotation is one
* toward the tips of the other 4 fingers).
* z:
* Z-axis rotation rate in radians/second. The sign follows the right hand
* rule (i.e. if the right hand is wrapped around the Z axis such that the
* tip of the thumb points toward positive Z, a positive rotation is one
* toward the tips of the other 4 fingers).
*/
Quick example of how to get these values (assumed to live in a view controller or similar; the opQueue property is added here so the snippet is self-contained):

import CoreMotion

private lazy var motionManager = CMMotionManager()
private let opQueue = OperationQueue()

func recordMotion() {
    motionManager.startDeviceMotionUpdates(to: opQueue) { deviceMotion, error in
        if let motion = deviceMotion {
            print(motion.rotationRate.x)
            print(motion.rotationRate.y)
            print(motion.rotationRate.z)
        }
    }
}
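When you no longer need updates, stop them to save battery. A minimal sketch; calling it from deinit is my choice, not part of the original answer:

deinit {
    // Stop the updates started in recordMotion()
    motionManager.stopDeviceMotionUpdates()
}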
I'm trying to get the four vectors that make up the boundaries of the frustum in ARKit, and the solution I came up with is as follows:
Find the field of view angles of the camera
Then find the direction and up vectors of the camera
Using this information, find the four vectors using cross products and rotations
This may be a sloppy way of doing it; however, it's the best I've got so far.
I am able to get the FOV angles and the direction vector from the ARCamera.intrinsics and ARCamera.transform properties. However, I don't know how to get the up vector of the camera at this point.
Below is the piece of code I use to find the FOV angles and the direction vector:
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    // xFovDegrees and yFovDegrees are optional Float properties of this class
    if xFovDegrees == nil || yFovDegrees == nil {
        let imageResolution = frame.camera.imageResolution
        let intrinsics = frame.camera.intrinsics
        xFovDegrees = 2 * atan(Float(imageResolution.width) / (2 * intrinsics[0, 0])) * 180 / Float.pi
        yFovDegrees = 2 * atan(Float(imageResolution.height) / (2 * intrinsics[1, 1])) * 180 / Float.pi
    }

    let cameraTransform = SCNMatrix4(frame.camera.transform)
    let cameraDirection = SCNVector3(-1 * cameraTransform.m31,
                                     -1 * cameraTransform.m32,
                                     -1 * cameraTransform.m33)
}
I am also open to suggestions for other ways to find the four vectors I'm trying to get.
I had not understood how this line worked:
let cameraDirection = SCNVector3(-1 * cameraTransform.m31,
                                 -1 * cameraTransform.m32,
                                 -1 * cameraTransform.m33)
This gives the direction vector of the camera because the 3rd row of the transformation matrix tells us where the transformed camera's z-axis points. We multiply it by -1 because the camera's default viewing direction is along the negative z-axis.
Considering this information and the fact that the default up vector for a camera is the positive y-axis, the 2nd row of the transformation matrix gives us the up vector of the camera. The following code gives me what I want:
let cameraUp = SCNVector3(cameraTransform.m21,
                          cameraTransform.m22,
                          cameraTransform.m23)
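For completeness, the 1st row gives the camera's right vector the same way; a minimal sketch (my addition, using the same cameraTransform):

// 1st row of the transform: the camera's x-axis (its "right" vector)
let cameraRight = SCNVector3(cameraTransform.m11,
                             cameraTransform.m12,
                             cameraTransform.m13)

With the right, up, and direction vectors plus the FOV angles, the four frustum edge vectors can be obtained by rotating the direction vector by half the FOV about the up and right axes, as outlined in the question.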
It could be that I'm misunderstanding what you're trying to do, but I'd like to offer an alternative solution (the method and result are different from your answer).
For my purposes, I define the up vector as (0, 1, 0) when the phone is pointing straight up: basically, I want the unit vector pointing straight out of the top of the phone. ARKit defines the up vector as (0, 1, 0) when the phone is horizontal, rotated to the left (so the y-axis points out of the right side of the phone), presumably because AR apps are expected to prefer landscape orientation.
camera.transform returns the camera's orientation relative to its initial orientation when the AR session started. It is a 4x4 matrix - the first 3x3 of which is the rotation matrix - so when you write cameraTransform.m21 etc. you are referencing part of the rotation matrix, which is NOT the same as the up vector (however you define it).
So if I define the up vector as the unit y-vector where the y-axis points out of the top of the phone, I have to write this as (-1, 0, 0) in ARKit space. Then simply multiplying this vector (slightly modified... see below) by the camera's transform will give me the "up vector" I'm looking for. Below is an example of using this calculation in an ARSessionDelegate callback.
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    // the unit y vector is appended with a 4th element for multiplying
    // with the 4x4 transform matrix; using 0 (rather than 1) keeps the
    // transform's translation out of the result, since we want a
    // direction, not a point
    let unitYVector = simd_float4(-1, 0, 0, 0)
    let upVectorH = frame.camera.transform * unitYVector
    // drop the 4th element
    let upVector = SCNVector3(upVectorH.x, upVectorH.y, upVectorH.z)
}
You can use let unitYVector = simd_float4(0, 1, 0, 0) if you are working with ARKit's landscape orientation.
You can also do the same sort of calculation to get the "direction vector" (pointing out of the front of the phone) by multiplying the unit vector (0, 0, 1, 0) by the camera transform.
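A minimal sketch of that direction-vector calculation, under the same assumptions and inside the same delegate callback:

// Same pattern as the up vector: w = 0 so only the rotation applies
let unitZVector = simd_float4(0, 0, 1, 0)
let directionH = frame.camera.transform * unitZVector
let directionVector = SCNVector3(directionH.x, directionH.y, directionH.z)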
I'm trying to create a paper folding effect in Swift using CALayers and CATransform3DRotate. There are some libraries out there, but those are pretty outdated and don't fit my needs (they don't have symmetric folds, for example).
My content view controller will squeeze to the right half side of the screen, revealing the menu at the left side.
Everything went well until I applied perspective: then the dimensions I calculate are no longer correct.
To explain the problem, I created a demo to show you what I'm doing.
This is the content view controller with three squares. I will use three folds, so each square will be on a separate fold.
The even folds will get anchor point (0, 0.5) and the odd folds will get anchor point (1, 0.5), plus they'll receive a shadow.
When fully folded, the content view will be half of the screen's width.
On an iPhone 7, each fold/plane will be 125 points wide when unfolded and 62.5 points when fully folded, viewed head-on.
To calculate the rotation needed to achieve this 62.5 points width, we can use a trigonometric function. To illustrate, look at this top-down view:
We know the original plane size (125) and the 2D width (62.5), so we can calculate the angle α using arccos:
let angle = acos(width / originalWidth)
The result is 1.04719755 rad or 60 degrees.
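As a quick sanity check with the iPhone 7 numbers above (my addition, not from the original post):

let angle = acos(62.5 / 125)   // = π/3 ≈ 1.04719755 rad = 60°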
When using this formula with CATransform3DRotate, I get the correct result:
Now for the problem: when I add perspective, my calculation is no longer correct. The planes come out bigger, probably because of the now different projection.
You can see the planes are now overlapping and being clipped.
I reconstructed the desired result on the right by playing with the angle, but the correction needed is not consistent, unfortunately.
Here's the code I use. It works perfectly without perspective.
// Loop layers
for i in 0..<self.layers.count {
    // Get layer
    let layer = self.layers[i]

    // Get dimensions
    let width = self.frame.size.width / CGFloat(self.numberOfFolds)
    let originalWidth = self.sourceView.frame.size.width / CGFloat(self.numberOfFolds)

    // Calculate angle
    let angle = acos(width / originalWidth)

    // Set transform
    layer.transform = CATransform3DIdentity
    layer.transform.m34 = 1.0 / -500
    layer.transform = CATransform3DRotate(layer.transform, angle * (i % 2 == 0 ? -1 : 1), 0, 1, 0)

    // Update position
    if i % 2 == 0 {
        layer.position = CGPoint(x: width * CGFloat(i), y: layer.position.y)
    } else {
        layer.position = CGPoint(x: width * CGFloat(i + 1), y: layer.position.y)
    }
}
So my question is: how do I achieve the desired result? Do I need to correct the angle, or should I calculate the projected/2D width differently?
Thanks in advance! :)
I want to get the azimuth from the back of the phone (-Z axis) for an augmented reality app. My application only runs in Landscape Right. I'm testing this on an iPhone 5S.
Currently, I'm using the following approach:
CoreLocation heading base on back camera (Augmented reality)
I have 2 problems with this approach:
If I point the back of the device towards north, so that I'm at 0 degrees, then rotate it a full 360 degrees clockwise (yaw), I end up at -20 degrees. Counterclockwise rotations add 20 degrees instead. The pattern repeats: rotating 720 degrees from 0 yields -40 degrees, and so on. Also, even if I don't do these clean rotations but instead move the phone chaotically (spinning, shaking, etc.) and end up in the same spot where I started, I can't predict what value it will show.
The other problem is what I think is called gyro drift. If I don't move the device at all, I can clearly see the value slowly change over time, by say 0.1 degrees every few seconds, sometimes in one direction, sometimes the other, until at some point it stops.
The problem is, I don't have the mathematical background to know how to account for these changes. It's especially problematic that I can't seem to reproduce the rotation matrix from the yaw/pitch/roll in deviceMotion.attitude. I tried:
// Rz(yaw) * Ry(pitch) * Rx(roll)
float w = motion.attitude.yaw;
float v = motion.attitude.pitch;
float u = motion.attitude.roll;

r.m11 = cos(v) * cos(w);
r.m12 = sin(u) * sin(v) * cos(w) - cos(u) * sin(w);
r.m13 = sin(u) * sin(w) + cos(u) * sin(v) * cos(w);
r.m21 = cos(v) * sin(w);
r.m22 = cos(u) * cos(w) + sin(u) * sin(v) * sin(w);
r.m23 = cos(u) * sin(v) * sin(w) - sin(u) * cos(w);
r.m31 = -sin(v);
r.m32 = sin(u) * cos(v);
r.m33 = cos(u) * cos(v);
I've tried every Tait–Bryan combination (u-v-w, u-w-v, v-u-w, v-w-u, w-v-u, w-u-v); some of them came close, but still not close enough.
From my observations, the magneticHeading from CLLocationManager is much more accurate than the heading computed from CMMotionManager. But again, even if I had the correct angle, I don't know where to start to get the equivalent angle in a different coordinate system reference frame. Any help would be greatly appreciated.
I have an angle that I am calculating based on the positioning of a view from the centre of the screen. I need a way to move the view from its current position off the screen in the direction of the angle.
I'm sure there is a fairly simple way of calculating the new x and y values, but I haven't been able to figure out the maths. I want to do it with an animation, but I can figure that part out myself once I have the coordinates.
Anyone have any suggestions?
If you have the angle, you can calculate the new coordinates from its sine and cosine. Try the following code:
let pathLength: CGFloat = 50   // total distance the view should move
let angle: CGFloat = 90        // direction in which you need to move it, in degrees
let radians = angle * .pi / 180

// outView is the view you want to animate; note that with sin for x and
// cos for y, the angle is measured from straight down the screen
let xCoord = outView.frame.origin.x + pathLength * sin(radians)
let yCoord = outView.frame.origin.y + pathLength * cos(radians)

UIView.animate(withDuration: 1, delay: 0, options: .curveEaseInOut, animations: {
    self.outView.frame = CGRect(x: xCoord, y: yCoord,
                                width: self.outView.frame.size.width,
                                height: self.outView.frame.size.height)
}, completion: nil)
To me it sounds like what you need to do is convert a vector from polar representation (angle and radius) to cartesian representation (x and y coordinates), which should be fairly easy.
You already have the angle, so you only need the radius, which is the length of the vector. In your case (if I understand it correctly) that is the distance from the current center of the view to its new position. While it may be complex to know that exactly (because it is part of what you are trying to calculate), you can stay on the safe side and pick a value large enough to surely throw the view out of its superview's frame. The length of the superview's diagonal plus the length of the animated view's diagonal will do, or, even simpler, just take the sum of the heights and widths of both views.
Once you have the complete polar representation of the vector (angle and radius), you can use the simple formulas x = r * cos(a) and y = r * sin(a) to convert to cartesian representation, and finally add that vector's coordinates to the center of the view you need to animate.
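A minimal sketch of that conversion (the function name and parameters are mine, not from the answer):

import UIKit

// Convert polar (angle in radians, radius) to a cartesian offset
// and add it to the view's current center.
func destination(from center: CGPoint, angle: CGFloat, radius: CGFloat) -> CGPoint {
    return CGPoint(x: center.x + radius * cos(angle),
                   y: center.y + radius * sin(angle))
}

// Usage: animate the view's center off screen along the angle.
// UIView.animate(withDuration: 1) {
//     view.center = destination(from: view.center, angle: angle, radius: radius)
// }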
I'm developing an AR app using the gyro. I have used Apple's code example pARk. It uses the rotation matrix to calculate the position of the coordinates, and it does that really well. But now I'm trying to implement a "radar", and I need to rotate it according to the device heading. I'm using the CLLocationManager heading, but it's not correct.
The question is: how can I get the heading of the device using CMAttitude, so it reflects exactly what I see on the screen?
I'm new to rotation matrices and that kind of thing.
This is part of the code used to calculate the AR coordinates. It updates the cameraTransform with the attitude:
CMDeviceMotion *d = motionManager.deviceMotion;
if (d != nil) {
    CMRotationMatrix r = d.attitude.rotationMatrix;
    transformFromCMRotationMatrix(cameraTransform, &r);
    [self setNeedsDisplay];
}
and then in the drawRect code:
mat4f_t projectionCameraTransform;
multiplyMatrixAndMatrix(projectionCameraTransform, projectionTransform, cameraTransform);

int i = 0;
for (PlaceOfInterest *poi in [placesOfInterest objectEnumerator]) {
    vec4f_t v;
    multiplyMatrixAndVector(v, projectionCameraTransform, placesOfInterestCoordinates[i]);

    float x = (v[0] / v[3] + 1.0f) * 0.5f;
    float y = (v[1] / v[3] + 1.0f) * 0.5f;
    // ... (rest of the drawing code)
    i++;
}
I also rotate the view with the pitch angle.
The motion updates are started using true north as the reference:
[motionManager startDeviceMotionUpdatesUsingReferenceFrame:CMAttitudeReferenceFrameXTrueNorthZVertical];
So I think it must be possible to get the "roll"/heading of the device in any position (with any pitch and yaw...), but I don't know how.
There are a few ways to calculate a heading from the rotation matrix returned by CMDeviceMotion. This assumes you use the same definition as Apple's compass, where the +y direction (the top of the iPhone) pointing due north returns a heading of 0, and rotating the iPhone to the right increases the heading, so east is 90, south is 180, and so forth.
First, before you start updates, check that the reference frame you need is available:

if (([CMMotionManager availableAttitudeReferenceFrames] & CMAttitudeReferenceFrameXTrueNorthZVertical) != 0) {
    ...
}
Next, when you start the motion manager, ask for attitude as a rotation from X pointing true North (or Magnetic North if you need that for some reason):
[motionManager startDeviceMotionUpdatesUsingReferenceFrame: CMAttitudeReferenceFrameXTrueNorthZVertical
                                                   toQueue: self.motionQueue
                                               withHandler: dmHandler];
When the motion manager reports a motion update, you want to find out how much the device has rotated in the X-Y plane. Since we are interested in the top of the iPhone, we'll pick a point in that direction and rotate it using the returned rotation matrix to get the point after rotation:
[m11 m12 m13] [0] [m12]
[m21 m22 m23] [1] = [m22]
[m31 m32 m33] [0] [m32]
The funky brackets are matrices; it's the best I can do using ASCII. :)
The heading is the angle between the rotated point and true North. We can use the X and Y coordinates of the rotated point to extract the arc tangent, which gives the angle between the point and the X axis. This is actually 180 degrees off from what we want, so we have to adjust accordingly. The resulting code looks like this:
CMDeviceMotionHandler dmHandler = ^(CMDeviceMotion *aMotion, NSError *error) {
    // Check for an error.
    if (error) {
        // Add error handling here.
    } else {
        // Get the rotation matrix from the motion passed to the handler.
        CMAttitude *attitude = aMotion.attitude;
        CMRotationMatrix rm = attitude.rotationMatrix;

        // Get the heading.
        double heading = M_PI + atan2(rm.m22, rm.m12);
        heading = heading * 180.0 / M_PI;
        printf("Heading: %5.0f\n", heading);
    }
};
There is one gotcha: if the top of the iPhone is pointed straight up or straight down, the direction is undefined. In that case m12 and m22 are zero, or very close to it. You need to decide what this means for your app and handle the condition accordingly. You might, for example, switch to a heading based on the -Z axis (out of the back of the iPhone) when m12*m12 + m22*m22 is close to zero.
This all assumes you want the rotation in the X-Y plane, as Apple usually does for their compass. It works because you are using the rotation matrix returned by the motion manager to rotate a vector pointed along the Y axis, which is this column vector:
[0]
[1]
[0]
To rotate a different vector, say one pointed along -Z, use a different column vector, like
[0]
[0]
[-1]
Of course, you also have to take the arc tangent in a different plane, so instead of

double heading = M_PI + atan2(rm.m22, rm.m12);

you would use

double heading = M_PI + atan2(-rm.m33, -rm.m13);
to get the rotation in the X-Z plane.
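Putting the pieces together, here is a minimal Swift sketch of the same math (CMRotationMatrix works the same from Swift; the degenerate-case threshold is my assumption, not from the original answer):

import CoreMotion

func heading(from rm: CMRotationMatrix) -> Double {
    // Rotated +Y vector (top of the phone) is (m12, m22, m32)
    var h = Double.pi + atan2(rm.m22, rm.m12)
    // Top of the phone near-vertical: fall back to the -Z axis
    if rm.m12 * rm.m12 + rm.m22 * rm.m22 < 1e-4 {
        h = Double.pi + atan2(-rm.m33, -rm.m13)
    }
    return h * 180 / .pi   // radians to degrees
}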