ARKit: how to position nodes based on the horizon and not the camera orientation? - ios

When I add a new node with ARKit (ARSKView), the object is positioned based on the device camera. So if your phone is facing down or tilted, the object will be in that direction as well. How can I instead place the object based on the horizon?

For that, right after creating a new node, use the worldOrientation instance property, which controls the node's orientation relative to the scene's world coordinate space.
var worldOrientation: SCNQuaternion { get set }
This quaternion isolates the rotational aspect of the node's worldTransform matrix, which in turn is the conversion of the node's transform from local space to the scene's world coordinate space. That is, it expresses the difference in axis and angle of rotation between the node and the scene's rootNode.
let worldOrientation = sceneView.scene.rootNode.worldOrientation
yourNode.rotation = worldOrientation /* X, Y, Z, W components */
P.S. (as you updated your question):
If you're using SpriteKit, the 2D sprites you spawn in an ARSKView always face the camera. So, if the camera moves around a fixed point in the real scene, all the sprites rotate about their pivot points while still facing the camera.
Nothing can prevent you from using SceneKit and SpriteKit together.

Related

Augmented Reality - Distance between marker image and Camera in ArKit?

I want to calculate the distance between a marker image saved in the iOS project (the one used for image detection in augmented reality) and the current camera position, using ARKit.
As Apple’s documentation and sample code note:
A detected image is reported to your app as an ARImageAnchor object.
ARImageAnchor is a subclass of ARAnchor.
ARAnchor has a transform property, which indicates its position and orientation in 3D space.
ARKit also provides an ARCamera on every frame (or you can get it from the session’s currentFrame).
ARCamera also has a transform property, indicating the camera’s position and orientation in 3D space.
You can get the translation vector (position) from a 4x4 transform matrix by extracting the last column vector.
That should be enough for you to connect the dots...
If you're using SceneKit and have the SCNNode* that came with -(void)renderer:(id<SCNSceneRenderer>)renderer didAddNode:(SCNNode *)node forAnchor:(ARAnchor *)anchor, then [node convertPosition:SCNVector3Zero toNode:self.sceneView.pointOfView] gives the node's position in camera space, and its z component is the depth relative to the camera (negative when the node is in front of the camera).
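To connect those dots concretely, here is a minimal sketch in plain Swift. It uses a flat [Float] stand-in for the column-major 4x4 matrix so it stands alone; real code would do the same arithmetic on the simd_float4x4 values from anchor.transform and camera.transform, and the function names here are illustrative.

```swift
// Extract the translation (position) from a column-major 4x4 transform,
// stored here as a flat array of 16 Floats. The translation is the last
// column, which starts at index 12.
func position(of transform: [Float]) -> (x: Float, y: Float, z: Float) {
    (transform[12], transform[13], transform[14])
}

// Euclidean distance between the positions of two transforms,
// e.g. an image anchor's transform and the camera's transform.
func distance(between a: [Float], and b: [Float]) -> Float {
    let pa = position(of: a), pb = position(of: b)
    let dx = pa.x - pb.x, dy = pa.y - pb.y, dz = pa.z - pb.z
    return (dx * dx + dy * dy + dz * dz).squareRoot()
}
```

With an identity camera transform and an anchor translated by (1, 2, 2), this yields a distance of 3 meters.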

Apply ARCamera rotation transform to node (ARKit)

I want to apply the rotation of the ARCamera to a 3D node so that the node will always face the camera. How can I implement this code in Objective-C?
You can get an SCNNode to face the ARCamera by using an SCNBillboardConstraint:
An SCNBillboardConstraint object automatically adjusts a node’s orientation so that its local z-axis always points toward the pointOfView node currently being used to render the scene. For example, you can use a billboard constraint to efficiently render parts of a scene using two-dimensional sprite images instead of three-dimensional geometry—by mapping sprites onto planes affected by a billboard constraint, the sprites maintain their orientation with respect to the viewer. To attach constraints to an SCNNode object, use its constraints property.
Objective C:
SCNBillboardConstraint *lookAtConstraint = [SCNBillboardConstraint billboardConstraint];
node.constraints = @[lookAtConstraint];
Swift:
let lookAtConstraint = SCNBillboardConstraint()
node.constraints = [lookAtConstraint]
If you want an SCNNode to face another node then you can use an SCNLookAtConstraint:
For example, you can use a look-at constraint to ensure that a camera or spotlight always follows the movement of a game character. To attach constraints to an SCNNode object, use its constraints property.
A node points in the direction of the negative z-axis of its local coordinate system. This axis defines the view direction for nodes containing cameras and the lighting direction for nodes containing spotlights or directional lights, as well as the orientation of the node’s geometry and child nodes. When Scene Kit evaluates a look-at constraint, it updates the constrained node’s transform property so that the node’s negative z-axis points toward the constraint’s target node.
Objective C:
SCNLookAtConstraint *lookAtNode = [SCNLookAtConstraint lookAtConstraintWithTarget:secondNode];
firstNode.constraints = @[lookAtNode];
Swift:
let lookAtConstraint = SCNLookAtConstraint(target: secondNode)
firstNode.constraints = [lookAtConstraint]
I used
let lookAtConstraint = SCNBillboardConstraint()
node.constraints = [lookAtConstraint]
for a SceneKit directional light node to always shine the light from the view of the user as a 3D head scan model is rotated by the user. Without this, the directional light stays locked to the front of the face model and then there is always a dark shadow at the back of the head.

finding the depth in arkit with SCNVector3Make

The goal of the project is to create a drawing app. I want it so that when I touch the screen and move my finger, the paint follows the finger and leaves a cyan trail. I did create it, BUT there is one problem: the paint DEPTH is always randomly placed.
Here is the code; you just need to connect the sceneView with the storyboard.
https://github.com/javaplanet17/test/blob/master/drawingar
My question is: how do I make the program so that the depth will always be consistent? By consistent I mean there is always the same distance between the paint and the camera.
If you run the code above you will see that I have printed out all the SCNMatrix4 values, but none of them is the DEPTH.
I have tried to change hitTransform.m43, but it only messes up the x and y.
If you want to get a point some consistent distance in front of the camera, you don’t want a hit test. A hit test finds the real world surface in front of the camera — unless your camera is pointed at a wall that’s perfectly parallel to the device screen, you’re always going to get a range of different distances.
If you want a point some distance in front of the camera, you need to get the camera’s position/orientation and apply a translation (your preferred distance) to that. Then to place SceneKit content there, use the resulting matrix to set the transform of a SceneKit node.
The easiest way to do this is to stick to SIMD vector/matrix types throughout rather than converting between those and SCN types. SceneKit adds a bunch of new accessors in iOS 11 so you can use SIMD types directly.
There are at least a couple of ways to go about this, depending on what result you want.
Option 1
// set up z translation for 20 cm in front of whatever
// last column of a 4x4 transform matrix is translation vector
var translation = matrix_identity_float4x4
translation.columns.3.z = -0.2
// get camera transform the ARKit way (currentFrame is optional, so unwrap it;
// it's nil only before the session has delivered its first frame)
let cameraTransform = view.session.currentFrame!.camera.transform
// if we wanted, we could go the SceneKit way instead; result is the same
// let cameraTransform = view.pointOfView!.simdTransform
// set node transform by multiplying matrices
node.simdTransform = cameraTransform * translation
This option, using a whole transform matrix, not only puts the node a consistent distance in front of your camera, it also orients it to point the same direction as your camera.
Option 2
// distance vector for 20 cm in front of whatever
let translation = float3(x: 0, y: 0, z: -0.2)
// treat distance vector as in camera space, convert to world space
let worldTranslation = view.pointOfView!.simdConvertPosition(translation, to: nil)
// set node position (not whole transform)
node.simdPosition = worldTranslation
This option sets only the position of the node, leaving its orientation unchanged. For example, if you place a bunch of cubes this way while moving the camera, they’ll all be lined up facing the same direction, whereas with option 1 they’d all be in different directions.
Going beyond
Both of the options above are based only on the 3D transform of the camera — they don’t take the position of a 2D touch on the screen into account.
If you want to do that, too, you’ve got more work cut out for you — essentially what you’re doing is hit testing touches not against the world, but against a virtual plane that’s always parallel to the camera and a certain distance away. That plane is a cross section of the camera projection frustum, so its size depends on what fixed distance from the camera you place it at. A point on the screen projects to a point on that virtual plane, with its position on the plane scaling proportional to the distance from the camera (like in the below sketch):
So, to map touches onto that virtual plane, there are a couple of approaches to consider. (Not giving code for these because it’s not code I can write without testing, and I’m in an Xcode-free environment right now.)
Make an invisible SCNPlane that’s a child of the view’s pointOfView node, parallel to the local xy-plane and some fixed z distance in front. Use SceneKit hitTest (not ARKit hit test!) to map touches to that plane, and use the worldCoordinates of the hit test result to position the SceneKit nodes you drop into your scene.
Use Option 1 or Option 2 above to find a point some fixed distance in front of the camera (or a whole translation matrix oriented to match the camera, translated some distance in front). Use SceneKit’s projectPoint method to find the normalized depth value Z for that point, then call unprojectPoint with your 2D touch location and that same Z value to get the 3D position of the touch location with your camera distance. (For extra code/pointers, see my similar technique in this answer.)
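To give a rough feel for that virtual-plane mapping, here is an ARKit-free sketch of the underlying math. The function name and parameters are illustrative, not part of any API; real code would instead rely on projectPoint/unprojectPoint as described above.

```swift
import Foundation  // for tan

// Sketch of the "virtual plane" idea: map a normalized 2D touch point to a
// point on a camera-parallel plane a fixed distance in front of the camera.
// The plane is a cross section of the camera frustum, so its size grows with
// distance according to the camera's field of view.
func pointOnVirtualPlane(touchX: Float, touchY: Float,  // 0...1 screen coords
                         planeDistance: Float,          // meters in front of camera
                         verticalFOVDegrees: Float,
                         aspectRatio: Float) -> (x: Float, y: Float, z: Float) {
    let halfFOV = Double(verticalFOVDegrees) * .pi / 180 / 2
    let halfHeight = planeDistance * Float(tan(halfFOV))  // frustum half-height at that depth
    let halfWidth = halfHeight * aspectRatio
    // Map 0...1 to -1...1, flipping y because screen y grows downward.
    let x = (touchX * 2 - 1) * halfWidth
    let y = (1 - touchY * 2) * halfHeight
    return (x, y, -planeDistance)  // camera space: the camera looks down -z
}
```

A touch at the screen center always maps to (0, 0, -planeDistance) in camera space, whatever the field of view, which is exactly the "consistent distance in front of the camera" the question asks for.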

Scenekit: Angle between two SCNNode

I am trying to develop a robotic arm tracking system.
I used scenekit to develop the visualization and the control of the system.
The SCNNode hierarchy of my system is:
Shoulder--->Upper_arm--->Fore_arm--->Palm.
I could now rotate each node using the rotation property of each SCNNode.
And I am now interested in whether there's any existing API to compute the angle between two SCNNodes while the system is moving, e.g. the angle between the Upper_arm and Fore_arm?
Try SCNNode.eulerAngles; you will get an SCNVector3 with these components:
Pitch (the x component) is the rotation about the node’s x-axis.
Yaw (the y component) is the rotation about the node’s y-axis.
Roll (the z component) is the rotation about the node’s z-axis.
Fore_arm.eulerAngles will give you rotation angles relative to its parent node, Upper_arm.
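If the two segments are not a parent/child pair, the joint angle can also be computed from their direction vectors (e.g. each node's world-space forward axis) with the dot product. A small standalone sketch, using tuples as stand-ins for SCNVector3:

```swift
import Foundation  // for acos

// Angle in radians between two direction vectors:
// angle = acos(a.b / (|a||b|)).
func angleBetween(_ a: (x: Float, y: Float, z: Float),
                  _ b: (x: Float, y: Float, z: Float)) -> Float {
    let dot = a.x * b.x + a.y * b.y + a.z * b.z
    let magA = (a.x * a.x + a.y * a.y + a.z * a.z).squareRoot()
    let magB = (b.x * b.x + b.y * b.y + b.z * b.z).squareRoot()
    // Clamp to [-1, 1] to avoid NaN from floating-point drift.
    return Float(acos(Double(max(-1, min(1, dot / (magA * magB))))))
}
```

For perpendicular axes this returns pi/2; for parallel axes it returns 0.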

SceneKit - displaying 2d view inside scene at (x, y, z)

I need to display a view with a notification in my scene at a given position. This notification should stay the same size, no matter the distance. But most important is that it should look like a 2D object, no matter what rotation the camera has. I don't know if I can actually insert a 2D object; that would be great. So far I'm experimenting with SCNNodes containing a box. I don't know how to make them always rotate towards the camera (which rotates on every axis). I tried to use
let lookAt = SCNLookAtConstraint(target: self.cameraNode)
lookAt.isGimbalLockEnabled = true
notificationNode.constraints = [lookAt]
This almost works, but nodes are all rotated in some random angle. Looks like UIView with rotation applied. Can someone help me with this?
Put your 2-D object(s) on an SCNPlane. Make the plane node a child of your camera node. Position and rotate the plane as you like, then leave it alone. Anytime the camera moves or rotates, the plane will move and revolve with it always appearing the same.
OK, I know how to do it now: create an empty node without geometry, and add a child node with an SCNLookAtConstraint. Then I can move the invisible node with animation, and the subnode keeps looking at the camera.
