Compass on 3D map in SceneKit - iOS

I want a compass like the one in Google Earth. I'm using SceneKit to create an Earth with a camera that moves around it. The sphere is at a fixed position at (0, 0, 0). I move the camera using quaternions, applying the new orientation to an empty node (see the scene scheme).
I want to show the camera's orientation relative to the north pole in a compass, like this (compass behavior).
I've tried calculating the angle with the up vector but I got wrong values.
let worldUp = earthOrbitalCamera.orbitalNode.worldUp
let angle = atan2(worldUp.y, worldUp.x)
With this angle I update the needle position, but I'm getting wrong values.
For example, when the camera is aligned with the north pole, the needle points 40 degrees to the west.
Any help will be appreciated. Thanks.

If the spherical model is aligned so that its north pole always points in the +Y direction in world space, the following code might work:
let worldUp = earthOrbitalCamera.orbitalNode.worldUp
let worldRight = earthOrbitalCamera.orbitalNode.worldRight
let angle = atan2(worldRight.y, worldUp.y)
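To drive the needle from that angle, a minimal sketch like the following might work, assuming a UIImageView named needleImageView (a hypothetical name) whose artwork points north when unrotated:
import UIKit
import SceneKit

func updateCompass(cameraNode: SCNNode, needleImageView: UIImageView) {
    // Same angle as above: the camera frame's heading around the world +Y axis,
    // taken from its world-space basis vectors.
    let angle = atan2f(Float(cameraNode.worldRight.y), Float(cameraNode.worldUp.y))
    // Rotate the needle image; flip the sign if the needle should turn
    // with the camera rather than against it.
    DispatchQueue.main.async {
        needleImageView.transform = CGAffineTransform(rotationAngle: CGFloat(-angle))
    }
}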

Related

Real Distance of object from camera using camera matrix

How can I calculate the distance of an object of known size (e.g. an ArUco marker of 0.14 m printed on paper) from the camera? I know the camera matrix (camMatx) and my fx, fy ≈ 600 px, assuming no distortion. From this data I am able to calculate the pose of the ArUco marker and have obtained [R|t]. Now the task is to get the distance of the ArUco marker from the camera. I also know the height of the camera above the ground plane (15 m).
How should I go about solving this problem? Any help would be appreciated. Please note I have also seen the similar-triangles approach, but that relies on knowing the distance to the object, which doesn't apply in my case since the distance is exactly what I have to calculate.
N.B.: I don't know the camera's sensor height, but I do know how high the camera is located above the ground.
I know the dimensions of the area in which my object is moving (70 m x 45 m). In the end I would like to plot the coordinates of the moving object on a 2D map drawn to scale.
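For what it's worth, since the pose [R|t] expresses the marker's origin in camera coordinates, the camera-to-marker distance is simply the Euclidean norm of the translation vector t. A minimal sketch of that step (in Swift with simd, with an illustrative translation vector):
import simd

// Translation part of the marker pose [R|t], in metres (illustrative values).
let t = simd_double3(0.3, -0.1, 4.2)
// Straight-line distance from the camera centre to the marker origin.
let distance = simd_length(t)   // sqrt(tx*tx + ty*ty + tz*tz)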

How can I get direction from camera to an anchor

I'm new to ARKit. I want to get the direction from anchor 1 to anchor 2. Currently, I can get the position from transform.columns.3. However, this works only for a fixed axis (the z-axis always points toward the user).
How can I compare two anchors with respect to orientation as well (pitch, yaw, roll)? What should I read to get more detailed information about this?
func showDirection(of object: ARAnchor) { // only work for fixed axis
    if let currentFrame = sceneView.session.currentFrame {
        print("diff(x) = \(currentFrame.camera.transform.columns.3.x - object.transform.columns.3.x)")
        print("diff(y) = \(currentFrame.camera.transform.columns.3.y - object.transform.columns.3.y)")
        print("diff(z) = \(currentFrame.camera.transform.columns.3.z - object.transform.columns.3.z)")
    }
}
I think my answer to this other user's question may be helpful. Basically, using SceneKit or ARKit, you can find the orientations of the camera and of your target anchor, and do some quaternion math to find the axis and angle of the relative rotation between them on x, y and z axes. My example assumed a SceneKit/ARKit app, which allows you to use quaternions instead of matrices, but the math should essentially be the same for ARKit transforms. If you use ARKit's simd_float4x4 transform matrices, you could find one matrix in the space of the other (A.inverse * B) and use the resulting matrix to glean relative position and orientation.
Your question was a little hard to follow, as I'm not sure if the orientation of the anchor you're targeting matters in your case, but this should help as far as comparing two anchors with respect to pitch, yaw and roll.
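A minimal sketch of the matrix route described above, assuming simd_float4x4 transforms straight from ARKit (the function and parameter names are illustrative):
import ARKit
import simd

func relativePose(of anchor: ARAnchor, inSpaceOf camera: simd_float4x4) -> (position: simd_float3, orientation: simd_quatf) {
    // Express the anchor's transform in the camera's space: camera.inverse * anchor
    // (the "A.inverse * B" mentioned above).
    let relative = simd_mul(camera.inverse, anchor.transform)
    // The relative position is the last column of the 4x4 matrix.
    let position = simd_float3(relative.columns.3.x, relative.columns.3.y, relative.columns.3.z)
    // The rotation part can be read off as a quaternion, whose .axis and .angle
    // give the relative rotation between the two frames.
    let orientation = simd_quatf(relative)
    return (position, orientation)
}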

Is it possible to position an ARKit plane to the center of the screen?

I have a plane I am trying to position using SCNVector3. I need to force the plane to show in the middle of the screen once it is detected so it can be seen at all times. I am only trying to detect the ground, so multiple surfaces should not be an issue. I have tried many things, like forcing the SCNVector3 to be positioned using a CGFloat, but it will not accept those parameters.
You can attach your plane to the camera node (with z = -1) so it will always be visible and follow the phone's position + angles.
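A minimal sketch of that approach, assuming an ARSCNView named sceneView (a hypothetical name); parenting the node to the camera's point-of-view node keeps it one metre straight ahead of the camera at all times:
import ARKit
import SceneKit

func pinToCamera(_ planeNode: SCNNode, in sceneView: ARSCNView) {
    guard let cameraNode = sceneView.pointOfView else { return }
    // Child node positions are relative to the camera, so z = -1
    // places the plane one metre in front of the lens.
    planeNode.position = SCNVector3(0, 0, -1)
    cameraNode.addChildNode(planeNode)
}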

ARKit : Translate a 3D Object on camera view

I'm currently developing an AR application in Xcode with ARKit.
My iPad is held in a particular orientation, and when I add an SCNNode at (0, 0, 0) to my SCNScene with an ARWorldTrackingSessionConfiguration, it appears in front of the camera when the iPad is perpendicular to the ground, like so:
The iPad is perpendicular to the ground and the 3D object is at (0, 0, 0).
I would like my SCNNode to appear directly on the iPad screen when I launch the ARScene, like this:
The iPad is oriented toward the flower pot and I had to set the coordinates manually.
How can I do that?
I imagine I would have to do some kind of coordinate translation, but I don't know how.
If it helps, I can get the distance between the camera and the flower pot.
Thanks in advance! :)
You need to pass the coordinates of the object in SCNMatrix4 form, as follows:
let translationMatrix = SCNMatrix4Translate(theNode.worldTransform, 0.1, 0.1, 0.1) // the last three arguments are the translations along the x, y and z axes
and then set theNode.transform = translationMatrix
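Alternatively, a minimal sketch of placing the node in front of the camera at launch, assuming an ARSCNView named sceneView (a hypothetical name): take the camera's current transform and multiply in a translation along its -z axis, which is the direction the device is looking:
import ARKit
import SceneKit
import simd

func placeInFrontOfCamera(_ node: SCNNode, in sceneView: ARSCNView, distance: Float = 0.5) {
    guard let camera = sceneView.session.currentFrame?.camera else { return }
    // A pure translation of `distance` metres along -z.
    var translation = matrix_identity_float4x4
    translation.columns.3.z = -distance
    // Combine the camera pose with the translation and assign it to the node.
    node.simdTransform = simd_mul(camera.transform, translation)
}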

Translating screen coordinates to sprite coordinates in XNA

I have a sprite object in XNA.
It has a size, position and rotation.
How do I translate a point from screen coordinates to the sprite's coordinates?
Thanks,
SW
You need to calculate the transform matrix for your sprite, invert that (so the transform now goes from world space -> local space) and transform the mouse position by the inverted matrix.
Matrix transform = Matrix.CreateScale(scale) * Matrix.CreateRotationZ(rotation) * Matrix.CreateTranslation(translation);
Matrix inverseTransform = Matrix.Invert(transform);
Vector3 transformedMousePosition = Vector3.Transform(mousePosition, inverseTransform);
You might find the following XNA picking sample useful:
http://creators.xna.com/en-us/sample/picking
One solution is to hit test against the sprite's original, unrotated bounding box.
So given the 2D screen vector (x,y):
translate the 2D vector into local sprite space: (x,y) - (spritex,spritey)
apply inverse sprite rotation
perform hit testing against bounding box
The hit test can of course be made more accurate by taking into account the sprite shape.
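A minimal sketch of those three steps (written here in Swift with CoreGraphics types, since the technique itself is not XNA-specific; the Sprite struct is illustrative):
import CoreGraphics

struct Sprite {
    var position: CGPoint   // sprite origin in screen space
    var rotation: CGFloat   // radians
    var size: CGSize
}

func hitTest(_ screenPoint: CGPoint, sprite: Sprite) -> Bool {
    // 1. Translate the screen point into the sprite's local space.
    let local = CGPoint(x: screenPoint.x - sprite.position.x,
                        y: screenPoint.y - sprite.position.y)
    // 2. Undo the sprite's rotation.
    let c = cos(-sprite.rotation)
    let s = sin(-sprite.rotation)
    let unrotated = CGPoint(x: local.x * c - local.y * s,
                            y: local.x * s + local.y * c)
    // 3. Hit test against the unrotated bounding box.
    return CGRect(origin: .zero, size: sprite.size).contains(unrotated)
}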
I think it may be as simple as using the Contains method on Rectangle, the rectangle being the bounding box of your sprite. I've implemented drag-and-drop this way in XNA; I believe Contains tests based on x and y being screen coordinates.
