How can I calculate the distance between a marker image saved in the iOS project (used for image detection in Augmented Reality) and the current camera position, using ARKit?
As Apple’s documentation and sample code note:
A detected image is reported to your app as an ARImageAnchor object.
ARImageAnchor is a subclass of ARAnchor.
ARAnchor has a transform property, which indicates its position and orientation in 3D space.
ARKit also provides an ARCamera for every frame (you can get it from the session's currentFrame).
ARCamera also has a transform property, indicating the camera’s position and orientation in 3D space.
You can get the translation vector (position) from a 4x4 transform matrix by extracting the last column vector.
That should be enough for you to connect the dots...
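Connecting those dots, here is a minimal sketch (assuming a running ARSession with image detection enabled; distanceToImageAnchor is just an illustrative helper name):

import ARKit

// Distance, in meters, between a detected image anchor and the current camera position.
func distanceToImageAnchor(_ imageAnchor: ARImageAnchor, in session: ARSession) -> Float? {
    guard let frame = session.currentFrame else { return nil }
    // The translation (position) is the last column of each 4x4 transform.
    let anchorPosition = simd_make_float3(imageAnchor.transform.columns.3)
    let cameraPosition = simd_make_float3(frame.camera.transform.columns.3)
    return simd_distance(anchorPosition, cameraPosition)
}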
If you're using SceneKit and have the SCNNode that was passed to -(void)renderer:(id<SCNSceneRenderer>)renderer didAddNode:(SCNNode *)node forAnchor:(ARAnchor *)anchor, then [node convertPosition:SCNVector3Zero toNode:self.sceneView.pointOfView].z is the distance to the camera.
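In Swift, that SceneKit approach might look roughly like this (a sketch, assuming sceneView is your ARSCNView and this sits in your ARSCNViewDelegate):

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard anchor is ARImageAnchor, let cameraNode = sceneView.pointOfView else { return }
    // Express the node's origin in the camera's coordinate space.
    let position = node.convertPosition(SCNVector3Zero, to: cameraNode)
    // -position.z is the distance straight ahead of the camera; the full Euclidean
    // distance uses all three components.
    let distance = (position.x * position.x + position.y * position.y + position.z * position.z).squareRoot()
    print("Detected image is \(distance) m from the camera")
}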
I start with two pan/tilt/zoom cameras oriented so they can both see an ArUco marker. These cameras have been calibrated to determine the camera matrix, and the size of the marker is known. I can determine the rotation and translation vectors from the marker to each camera, and the cameras' current pan, tilt, and zoom positions are known.
If I move the marker and turn one camera to follow it, I can note the new pan/tilt/zoom position and determine the new rotation and translation vectors. Now I want to turn the second camera to face the marker. How can I determine the new pan and tilt settings required?
I think I understand how to build a combined transformation matrix from the two sets of rotation and translation vectors, but I don't know how to account for changing pan/tilt values.
I put some code in a Google Colab to better illustrate what I'm trying to do.
When I add a new node with ARKit (ARSKView), the object is positioned based on the device camera. So if your phone is facing down or tilted, the object will be oriented in that direction as well. How can I instead place the object based on the horizon?
For that, right after a new node's creation, use the worldOrientation instance property, which controls the node's orientation relative to the scene's world coordinate space.
var worldOrientation: SCNQuaternion { get set }
This quaternion isolates the rotational aspect of the node's worldTransform matrix, which in turn is the conversion of the node's transform from local space to the scene's world coordinate space. That is, it expresses the difference in axis and angle of rotation between the node and the scene's rootNode.
let worldOrientation = sceneView.scene.rootNode.worldOrientation
yourNode.worldOrientation = worldOrientation /* X, Y, Z, W quaternion components */
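A minimal sketch of applying this right after the node is created, assuming an ARSCNView delegate:

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    // The root node's world orientation is the identity of world space, so adopting it
    // keeps the new node level with the horizon instead of tilted with the camera.
    if let rootNode = renderer.scene?.rootNode {
        node.worldOrientation = rootNode.worldOrientation
    }
}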
P.S. (regarding your updated question):
If you're using SpriteKit, the 2D sprites you spawn in an ARSKView always face the camera. So if the camera moves around a fixed point in the real scene, all the sprites rotate about their pivot points so that they keep facing the camera.
Nothing prevents you from using SceneKit and SpriteKit together.
I want to convert pixel coordinates into real-world coordinates, and I found that the ARKit API provides a function on ARCamera called viewMatrix():
Returns a transform matrix for converting from world space to camera space.
Can this function be used to obtain the extrinsic matrix for the camera?
This may help:
self.sceneView.session.currentFrame?.camera.transform
The position and orientation of the camera in world coordinate space.
See the ARCamera.transform documentation.
You can read the eulerAngles directly from the camera, but you will have to extract the translation from the transform yourself.
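For example, a sketch of reading both from the current frame (assuming the session is running and sceneView is your ARSCNView):

if let camera = sceneView.session.currentFrame?.camera {
    // Orientation: ARKit already exposes Euler angles (roll, pitch, yaw) on the camera.
    let angles = camera.eulerAngles
    // Position: the last column of the camera-to-world 4x4 transform.
    let position = simd_make_float3(camera.transform.columns.3)
    print("Camera at \(position), rotated \(angles)")
}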
Why do you want to manually project pixels into world positions? (The transform alone obviously isn't going to help you there.)
I am trying to find distance between iOS device's front-facing camera and user's face in the real world.
So far, I have tried ARKit/SceneKit, and using ARFaceAnchor I am able to detect the user's face distance from the camera; but it works only in close proximity (up to about 88 cm). My application requires face distance detection up to 200 cm.
I am assuming this could be achieved without the use of trueDepth data (which is being used in ARFaceAnchor).
Can you point me in the right direction?
In order to get the distance between the device and the user's face, you should convert the position of the detected face into the camera's coordinate system. To do this, use the convertPosition method from SceneKit to switch coordinate spaces, from face coordinate space to camera coordinate space.
let positionInCameraSpace = theFaceNode.convertPosition(pointInFaceCoordinateSpace, to: yourARSceneView.pointOfView)
theFaceNode is the SCNNode created by ARKit representing the user's face. The pointOfView property of your ARSCNView returns the node from which the scene is viewed, basically the camera.
pointInFaceCoordinateSpace could be any vertex of the face mesh, or simply the position of theFaceNode (which is the origin of the face coordinate system). Here, positionInCameraSpace is an SCNVector3 representing the position of the point you gave, in camera coordinate space. You can then get the distance between the point and the camera using the x, y, and z values of this SCNVector3 (expressed in meters).
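For example, a sketch reusing the names from above:

if let cameraNode = yourARSceneView.pointOfView {
    // Position of the face origin, expressed in the camera's coordinate space.
    let facePosition = theFaceNode.simdConvertPosition(simd_float3(0, 0, 0), to: cameraNode)
    // Euclidean distance in meters: sqrt(x² + y² + z²).
    let distanceToFace = simd_length(facePosition)
    print("Face is \(distanceToFace) m from the camera")
}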
These are some links that may help you:
- Distance between face and camera using ARKit
- https://github.com/evermeer/EVFaceTracker
- https://developer.apple.com/documentation/arkit/arfacetrackingconfiguration
- How to measure device distance from face with help of ARKit in iOS?
I have an ARSCNView and I am tracking feature points in the scene. How would I get the 2D coordinates of the feature points (as in the coordinates of that point in the screen) from the 3D world coordinates of the feature point?
(Essentially the opposite of sceneView.hitTest)
Converting a point from 3D space (usually camera or world space) to 2D view (pixel) space is called projecting that point. (Because it involves a projection transform that defines how to flatten the third dimension.)
ARKit and SceneKit both offer methods for projecting points (and unprojecting points, the reverse transform that requires extra input on how to extrapolate the third dimension).
Since you're working with ARSCNView, you can just use the projectPoint method. (That's inherited from the superclass SCNView and defined in the SCNSceneRenderer protocol, but still applies in AR because ARKit world space is the same as SceneKit world/scene/rootNode space.) Note you'll need to convert back and forth between float3 and SCNVector3 for that method.
Also note the returned "2D" point is still a 3D vector — the x and y coordinates are screen pixels (well, "points" as in UIKit layout units), and the third is a relative depth value. Just make a CGPoint from the first two coordinates for something you can use with other UIKit API.
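A minimal sketch of that projection, assuming an ARSCNView and a feature point already expressed in world space (screenPosition is just an illustrative helper name):

import ARKit

// Convert a world-space point (e.g. a feature point) to 2D screen coordinates.
func screenPosition(of worldPoint: simd_float3, in sceneView: ARSCNView) -> CGPoint {
    // projectPoint expects an SCNVector3, so convert from float3 first.
    let projected = sceneView.projectPoint(SCNVector3(x: worldPoint.x, y: worldPoint.y, z: worldPoint.z))
    // x and y are screen points (UIKit layout units); z is a relative depth value.
    return CGPoint(x: CGFloat(projected.x), y: CGFloat(projected.y))
}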
BTW, if you're using ARKit without SceneKit, there's also a projectPoint method on ARCamera.