I'm currently developing an AR application in Xcode with ARKit.
My iPad is held at a particular orientation, and when I add an SCNNode at (0, 0, 0) to my SCNScene with an ARWorldTrackingSessionConfiguration, it appears in front of the camera only when the iPad is perpendicular to the ground, like so:
The iPad is perpendicular to the ground and the 3D object is at (0,0,0)
I would like my SCNNode to appear directly in front of the iPad screen when I launch the AR scene, like this:
The iPad is pointed toward the flower pot, and I had to set the coordinates manually
How can I do that?
I imagine I need to do some kind of coordinate translation, but I don't know how.
If it helps, I can get the distance between the camera and the flower pot.
Thanks in advance! :)
You need to pass the object's coordinates in SCNMatrix4 form, as follows:
let translationMatrix = SCNMatrix4Translate(theNode.worldTransform, 0.1, 0.1, 0.1) // tx, ty, tz are the translations along each axis
and then:
theNode.transform = translationMatrix
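If the goal is for the node to appear in front of the camera regardless of how the iPad is held, another option is to position the node relative to the camera's transform. A minimal sketch, assuming sceneView is your ARSCNView and distance is the known camera-to-flower-pot distance in meters (both names are placeholders):

import SceneKit

let distance: Float = 0.5 // placeholder: camera-to-object distance in meters
if let camera = sceneView.pointOfView {
    // A camera looks down its local -Z axis, so offset the node along -Z
    var translation = matrix_identity_float4x4
    translation.columns.3.z = -distance
    // Compose: start from the camera's transform, then push forward by `distance`
    theNode.simdTransform = simd_mul(camera.simdTransform, translation)
}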
I want a compass like Google Earth's. I'm using SceneKit to create an Earth with a camera that moves around it. The sphere is in a fixed position at (0, 0, 0). I move the camera using quaternions, applying the new orientation to an empty node.
I want to show the camera's orientation relative to the north pole in a compass, like this:
I've tried calculating the angle with the up vector but I got wrong values.
let worldUp = earthOrbitalCamera.orbitalNode.worldUp
let angle = atan2(worldUp.y, worldUp.x)
With this angle I update the needle position. The issue is that the values are wrong.
For example, when the camera is aligned with the north pole, the needle points 40 degrees to the west.
Any help will be appreciated. Thanks.
If the spherical model is aligned so that its north pole always points in the +Y direction in world space, the following code might work:
let worldUp = earthOrbitalCamera.orbitalNode.worldUp
let worldRight = earthOrbitalCamera.orbitalNode.worldRight
let angle = atan2(worldRight.y, worldUp.y)
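As a usage sketch, the needle can then be rotated by that angle; needleView here is a hypothetical UIImageView showing the needle:

// Rotate the 2D needle around the screen's z-axis by the computed angle
needleView.transform = CGAffineTransform(rotationAngle: CGFloat(angle))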
I am trying to find the distance between an iOS device's front-facing camera and the user's face in the real world.
So far I have tried ARKit/SceneKit: using ARFaceAnchor I am able to detect the user's face distance from the camera, but it only works in close proximity (up to about 88 cm). My application requires face distance detection up to 200 cm.
I am assuming this could be achieved without the use of TrueDepth data (which ARFaceAnchor relies on).
Can you point me in the right direction?
In order to get the distance between the device and the user's face, you should convert the position of the detected face into the camera's coordinate system. To do this, use SceneKit's convertPosition method to switch coordinate spaces, from face coordinate space to camera coordinate space:
let positionInCameraSpace = theFaceNode.convertPosition(pointInFaceCoordinateSpace, to: yourARSceneView.pointOfView)
theFaceNode is the SCNNode created by ARKit to represent the user's face. The pointOfView property of your ARSCNView returns the node from which the scene is viewed, i.e. the camera.
pointInFaceCoordinateSpace could be any vertex of the face mesh, or simply the position of theFaceNode (the origin of the face coordinate system). Here, positionInCameraSpace is an SCNVector3 representing the position of the point you passed, in camera coordinate space and expressed in meters. You can then get the distance between the point and the camera from the x, y and z components of this vector.
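For example, the straight-line distance is simply the Euclidean norm of that vector. A minimal sketch, assuming sceneView is your ARSCNView:

let p = theFaceNode.convertPosition(SCNVector3Zero, to: sceneView.pointOfView)
// Euclidean distance from the camera to the face origin, in meters
let distanceInMeters = sqrt(p.x * p.x + p.y * p.y + p.z * p.z)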
Here are some links that may help you:
- Distance between face and camera using ARKit
- https://github.com/evermeer/EVFaceTracker
- https://developer.apple.com/documentation/arkit/arfacetrackingconfiguration
- How to measure device distance from face with help of ARKit in iOS?
I want to calculate the distance between a marker image (saved in the iOS project and used for detection in augmented reality) and the current camera position, using ARKit.
As Apple’s documentation and sample code note:
A detected image is reported to your app as an ARImageAnchor object.
ARImageAnchor is a subclass of ARAnchor.
ARAnchor has a transform property, which indicates its position and orientation in 3D space.
ARKit also provides an ARCamera on every frame (you can get it from the session's currentFrame).
ARCamera also has a transform property, indicating the camera’s position and orientation in 3D space.
You can get the translation vector (position) from a 4x4 transform matrix by extracting the last column vector.
That should be enough for you to connect the dots...
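A minimal sketch of those dots connected, assuming imageAnchor is the detected ARImageAnchor and frame is the session's currentFrame (both placeholders):

import ARKit

func distance(to imageAnchor: ARImageAnchor, in frame: ARFrame) -> Float {
    // The last column of a 4x4 transform matrix holds the translation (position)
    let anchorPosition = simd_make_float3(imageAnchor.transform.columns.3)
    let cameraPosition = simd_make_float3(frame.camera.transform.columns.3)
    return simd_distance(anchorPosition, cameraPosition)
}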
If you're using SceneKit and have the SCNNode * delivered by -(void)renderer:(id<SCNSceneRenderer>)renderer didAddNode:(SCNNode *)node forAnchor:(ARAnchor *)anchor, then [node convertPosition:SCNVector3Zero toNode:self.sceneView.pointOfView].z is the distance to the camera (note that the camera looks down its -Z axis, so the value is negative for points in front of it).
I have a camera node in my SceneKit scene that is set up to let the user orbit by offsetting its pivot:
self.cameraNode.pivot = SCNMatrix4MakeTranslation(0, 0, -100);
The user then rotates the node to orbit the camera.
What I would like to work out is:
A) how to get the world-space direction the camera is facing at any given time, and then
B) how to convert that direction into x, y, z components I can use to move the camera node relative to the way the camera is facing (forward/back/left/right from the camera's perspective).
Thanks!
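A minimal sketch of one approach, in Swift, assuming iOS 11+ where SCNNode exposes worldFront and worldRight (speed is a placeholder step size):

// A) the camera faces along its node's local -Z axis; worldFront is that axis in world space
let forward = cameraNode.worldFront
let right = cameraNode.worldRight

// B) move relative to the camera's facing, e.g. one step forward and one step to the right
let speed: Float = 1.0
cameraNode.position = SCNVector3(
    cameraNode.position.x + (forward.x + right.x) * speed,
    cameraNode.position.y + (forward.y + right.y) * speed,
    cameraNode.position.z + (forward.z + right.z) * speed
)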
This is all in OpenGL ES 2.0...
I'm trying to mix a 3D perspective projection with a 2D orthographic projection so I can have a HUD sitting on top of the 3D scene in my game.
The game runs in landscape orientation, and I'm getting really confused about how to handle device orientations.
I am rendering the 3D stuff with a suitable projection matrix, and am rotating the modelView matrix and my lighting by 90 degrees so the 3D scene is the right way up. This bit all works fine.
My problem is I can't work out how to set up the 2D projection matrix properly so the origin is in the upper left corner when the device is in landscape with the home button on the left.
How do I correctly construct the orthographic matrix so this happens? I am currently using this:
// OrthoMatrix does the same as the old glOrthof function: (left, right, bottom, top, near, far)
projectionMatrix2D = mat4::OrthoMatrix(0, screenWidth, screenHeight, 0, -1.0, 1.0);
However, this only puts the origin in the top left when the device is in portrait with the home button at the bottom; rotating the device in my hand leaves everything on its side.
Should I be trying to alter the 2D projection matrix or is there something else I need to be doing?
I asked this on Gamedev, in a more general way, and received a helpful answer:
How can I create an orthographic display that handles different screen dimensions?
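For what it's worth, one common approach is to build the ortho box with the landscape extents (swapped, since the backbuffer is still portrait-oriented) and then apply to the 2D projection the same 90-degree roll the 3D pass applies to its model-view. A minimal Swift/simd sketch of the idea; the column-major layout, multiplication order, and rotation sign are assumptions to adapt to your mat4 conventions:

import simd

// Equivalent of glOrthof(left, right, bottom, top, near, far), column-major
func ortho(_ l: Float, _ r: Float, _ b: Float, _ t: Float, _ n: Float, _ f: Float) -> simd_float4x4 {
    return simd_float4x4(columns: (
        SIMD4<Float>(2 / (r - l), 0, 0, 0),
        SIMD4<Float>(0, 2 / (t - b), 0, 0),
        SIMD4<Float>(0, 0, -2 / (f - n), 0),
        SIMD4<Float>(-(r + l) / (r - l), -(t + b) / (t - b), -(f + n) / (f - n), 1)
    ))
}

let screenWidth: Float = 768   // placeholder portrait backbuffer size
let screenHeight: Float = 1024

// The same 90-degree roll the 3D scene uses, applied after the ortho projection
let rotate90 = simd_float4x4(simd_quatf(angle: -.pi / 2, axis: SIMD3<Float>(0, 0, 1)))
// Swapped extents: in landscape the HUD is screenHeight units wide and screenWidth tall
let projectionMatrix2D = rotate90 * ortho(0, screenHeight, screenWidth, 0, -1, 1)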