I am developing an iOS in-store navigation app for a retail store using ARKit with SceneKit, which should resemble Lowe's Vision Navigation. First, I want to programmatically plot the position of any product available in the store in the AR scene, irrespective of the camera's initial position, since the product's position in the store stays the same. I am totally new to both AR and SceneKit.
I am able to add an SCNNode to the ARSCNView, but the problem is that the node is plotted relative to my camera's initial position. Once this is done, I need to provide in-store navigation to the selected product from my current position inside the store, perhaps using iBeacons or an equivalent technology.
SCNNodes operate somewhat like UIViews in the sense that, as you said, positions are relative to the parent node/view. For cases where you want a position relative to the whole scene (the world), you can use worldTransform and -convertPosition:toNode:.
World Transform
Convert Position To Node
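A minimal Swift sketch of both approaches (productNode, parentNode, and the coordinates are placeholders):

// Read a node's origin in world (scene) coordinates, regardless of its parent;
// passing nil as the target node means "world space".
let worldPos = productNode.convertPosition(SCNVector3Zero, to: nil)

// Or go the other way: pin a node at a known world position by converting
// that point into its parent's local coordinate space first.
let fixedWorldPosition = SCNVector3(2.0, 0.0, -5.0)
productNode.position = parentNode.convertPosition(fixedWorldPosition, from: nil)

// worldTransform is the full 4x4 transform (rotation + translation) in world space.
let transform = productNode.worldTransform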
I'm working on an app with an AR feature. I want to be able to place a 3D model that I have on a horizontal plane that has been detected. So inside the renderer(_:didAdd:for:) delegate function, I added a node for my 3D model and set its position to the center of the plane anchor. However, when I run the app to test it, my model is floating above the plane instead of standing directly on top of it. My guess is that there is some translation that needs to be done with the coordinates, but I don't know the details. Can somebody give me some pointers?
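A hedged sketch of the usual fix, assuming the model is loaded from a placeholder file ("model.scn") and added inside renderer(_:didAdd:for:): keep only the plane anchor's x/z center, and drop the model by its bounding box so its base, not its origin, rests on the plane.

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor,
          let modelScene = SCNScene(named: "model.scn") else { return }

    let modelNode = modelScene.rootNode.clone()

    // The anchor node already sits on the plane, so y stays 0;
    // only offset to the detected plane's center in x/z.
    modelNode.position = SCNVector3(planeAnchor.center.x, 0, planeAnchor.center.z)

    // If the model's origin is at its center, it floats by half its height;
    // shift it so the lowest point of its bounding box ends up at y = 0
    // (assumes the model's scale is 1).
    let (minBounds, _) = modelNode.boundingBox
    modelNode.position.y -= minBounds.y

    node.addChildNode(modelNode)
}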
I have an app that is using ARImageAnchors to detect images with the camera. I've noticed that although the scene node's position in 3d space updates in real-time, the orientation (xyz rotation) of the node can take seconds to update. Any attached scene nodes "snap" to the new orientation as it updates.
Is there a way to animate between the changing anchor orientations to make the transitions smoother?
My setup is simple: I'm using renderer(_:didAdd:for:) to add a plane to the supplied node.
This is no longer necessary with ARKit 2.0. Simply use ARKit 2.0's image tracking instead of image recognition.
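As a rough sketch of that switch, assuming a reference image group named "AR Resources" in the asset catalog:

// ARKit 2.0 image tracking: the anchor follows the image continuously,
// so orientation updates smoothly instead of snapping.
let configuration = ARImageTrackingConfiguration()
if let trackingImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
                                                         bundle: nil) {
    configuration.trackingImages = trackingImages
}
configuration.maximumNumberOfTrackedImages = 1

sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])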
Once the user has scanned the environment and I have detected a plane, I would like the world origin, which is the device's position when the app opens (the origin of the 3D world), to be reset to where my device is right now, so that the user sees my AR objects in front of them.
(my objects are floating and not related to the floor but detecting a plane makes the objects more stable)
I didn't find a way to do that. It's linked to ARConfiguration but it doesn't seem like we can update the coordinate system without resetting all the tracking. Do you have any idea?
According to this post link, the documentation of the rootNode says:
You should not modify the transform property of the root node.
I still tried assigning the camera's position to the rootNode, but it didn't change anything.
So it seems like the only way is to create a new node and use it as the root node for my content from where we are.
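A minimal Swift sketch of that workaround, assuming an ARSCNView called sceneView (contentRoot is a placeholder name):

func resetContentOrigin() {
    // Take the camera's current pose and use it as the origin for all AR content,
    // instead of the world origin fixed when the session started.
    guard let cameraTransform = sceneView.session.currentFrame?.camera.transform else { return }

    let contentRoot = SCNNode()
    contentRoot.simdTransform = cameraTransform
    sceneView.scene.rootNode.addChildNode(contentRoot)

    // Add objects relative to this node rather than to rootNode,
    // e.g. one metre in front of where the device is at this moment.
    let object = SCNNode(geometry: SCNSphere(radius: 0.05))
    object.position = SCNVector3(0, 0, -1)
    contentRoot.addChildNode(object)
}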
I have a series of (flat plane) nodes in my scene that I need to have constantly facing the camera.
How can I adjust the transform/rotation to get this working?
Also, where do I make this calculation?
Currently I am trying to make it happen on user interaction in the SCNSceneRendererDelegate renderer:updateAtTime: delegate method.
How about an SCNBillboardConstraint? That restricts you to iOS 9/El Capitan/tvOS. Add the constraint to each of your flat plane (billboard) nodes.
From the SceneKit Framework Reference: https://developer.apple.com/library/ios/documentation/SceneKit/Reference/SCNBillboardConstraint_Class/index.html
An SCNBillboardConstraint object automatically adjusts a node’s orientation so that it always points toward the pointOfView node currently being used to render the scene.
In the more general case, SCNLookAtConstraint will keep any node's minus-Z axis pointed toward any other node.
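For example (billboardNode is a placeholder; restricting the free axes to y keeps the planes upright while they turn to face the camera):

let billboard = SCNBillboardConstraint()
billboard.freeAxes = .Y                      // rotate only around y, so the plane stays vertical
billboardNode.constraints = [billboard]

// Alternative for older OS versions: aim the node's -Z axis at the camera node.
// billboardNode.constraints = [SCNLookAtConstraint(target: sceneView.pointOfView)]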
I've been fiddling with SceneKit recently and I wanted to do the following:
When creating a Game template from Xcode, you get a scene with a ship.
I wanted to animate this ship and orient it according to the relative attitude of my iPhone after I tap the screen. So for instance, if I hold my iPhone horizontally and tap the screen, that captures the reference attitude of my horizontal iPhone. Then, when I lift it (changing the pitch), I want the ship to orient itself accordingly.
I've been trying to change my ship node's eulerAngles using the attitude's pitch, yaw, and roll, as in the following:
// Device attitude from Core Motion, relative to the reference attitude captured on tap.
CMAttitude *attitude = deviceMotion.attitude;
// Map pitch/yaw/roll (radians) onto the ship's Euler angles.
_ship.eulerAngles = SCNVector3Make(-attitude.pitch, attitude.yaw, attitude.roll);
Whenever I do that, the ship goes back to its original position in the scene. I can't seem to understand how to give it a speed in the direction it's facing without making it reset to its original position when I change its eulerAngles.
Ideally, the ship would have some sort of engine power accelerating it in the direction it's facing, while it would still be affected by gravity. How should I do that? Thanks!
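A rough Swift sketch of one way to model that, assuming the ship has a dynamic physics body and the scene's physics world supplies gravity (enginePower is a placeholder value):

// Call this every frame (e.g. from renderer(_:updateAtTime:)) while the engine is on.
func applyEngineForce(to ship: SCNNode) {
    guard let body = ship.physicsBody else { return }

    // worldFront is the ship's -Z axis in world space, i.e. the direction it is
    // facing after the eulerAngles update from the device attitude.
    let direction = ship.worldFront
    let enginePower: Float = 2.0

    let thrust = SCNVector3(direction.x * enginePower,
                            direction.y * enginePower,
                            direction.z * enginePower)

    // Apply the thrust continuously; gravity still pulls the ship down.
    body.applyForce(thrust, asImpulse: false)
}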