Why is "convertPosition" called to project a point found in SceneKit? - ios

I am working on 3D project/unproject logic and I am learning some of the fundamentals. I went over this question:
Scene Kit: projectPoint calculated is displaced
In that question part of the shown code is:
//world coordinates
let v1w = topSphereNode.convertPosition(v1, toNode: scene.rootNode)
let v2w = topSphereNode.convertPosition(v2, toNode: scene.rootNode)
My question is, why is that needed? Why not just use v1 and v2 as points since they are already valid 3D points in the scene? Why does the top sphere node's position need to be converted with respect to the root node's position?

since they are already valid 3D points in the scene?
They are not necessarily valid 3D points in the scene, but 3D points in the local space of the sphere. The code you showed converts them to the world space, i.e. the scene space. This is important when the sphere is a child object, rotated, scaled, and/or simply not in the center of the scene.
Rather than using code from a question, check out the answers here:
How to use iOS (Swift) SceneKit SCNSceneRenderer unprojectPoint properly
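To see concretely what the conversion does, here is a minimal sketch of the underlying math in plain Swift (no SceneKit types; the transform values are purely illustrative), assuming the sphere node is translated 2 units up and scaled by 0.5 relative to the root:

```swift
// Local-to-world conversion is just applying the node's world transform
// to the point. Plain Swift stand-ins for SceneKit types:
struct Vec3 { var x, y, z: Double }

// Row-major 4x4 matrix applied to a point (treated as [x, y, z, 1]).
func transform(_ p: Vec3, by m: [[Double]]) -> Vec3 {
    return Vec3(
        x: m[0][0]*p.x + m[0][1]*p.y + m[0][2]*p.z + m[0][3],
        y: m[1][0]*p.x + m[1][1]*p.y + m[1][2]*p.z + m[1][3],
        z: m[2][0]*p.x + m[2][1]*p.y + m[2][2]*p.z + m[2][3]
    )
}

// Suppose the sphere node sits 2 units above the root and is scaled by 0.5.
let sphereWorldTransform: [[Double]] = [
    [0.5, 0,   0,   0],
    [0,   0.5, 0,   2],
    [0,   0,   0.5, 0],
    [0,   0,   0,   1],
]

// A point in the sphere's local space…
let v1 = Vec3(x: 0, y: 1, z: 0)
// …lands somewhere else entirely in world (scene) space: (0, 2.5, 0).
let v1w = transform(v1, by: sphereWorldTransform)
```

Only when the node's transform is the identity do local and world coordinates happen to agree, which is why skipping the conversion sometimes appears to work.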

Related

ARKit: how to position nodes based on the horizon and not the camera orientation?

When I add a new node with ARKit (ARSKView), the object is positioned based on the device camera. So if your phone is facing down or tilted, the object will be in that direction as well. How can I instead place the object based on the horizon?
For that, right after a new node's creation, use the worldOrientation instance property, which controls the node's orientation relative to the scene's world coordinate space.
var worldOrientation: SCNQuaternion { get set }
This quaternion isolates the rotational aspect of the node's worldTransform matrix, which in turn is the conversion of the node's transform from local space to the scene's world coordinate space. That is, it expresses the difference in axis and angle of rotation between the node and the scene's rootNode.
let worldOrientation = sceneView.scene.rootNode.worldOrientation
yourNode.rotation = worldOrientation /* X, Y, Z, W components */
P.S. (as you updated your question) :
If you're using SpriteKit, 2D sprites you spawn in an ARSKView always face the camera. So, if the camera moves around a fixed point in the real scene, all the sprites rotate about their pivot points so they keep facing the camera.
Nothing can prevent you from using SceneKit and SpriteKit together.

How do you get the UIKit coordinates of an ARKit feature point?

I have an ARSCNView and I am tracking feature points in the scene. How would I get the 2D coordinates of the feature points (as in the coordinates of that point in the screen) from the 3D world coordinates of the feature point?
(Essentially the opposite of sceneView.hitTest)
Converting a point from 3D space (usually camera or world space) to 2D view (pixel) space is called projecting that point. (Because it involves a projection transform that defines how to flatten the third dimension.)
ARKit and SceneKit both offer methods for projecting points (and unprojecting points, the reverse transform that requires extra input on how to extrapolate the third dimension).
Since you're working with ARSCNView, you can just use the projectPoint method. (That's inherited from the superclass SCNView and defined in the SCNSceneRenderer protocol, but still applies in AR because ARKit world space is the same as SceneKit world/scene/rootNode space.) Note you'll need to convert back and forth between float3 and SCNVector3 for that method.
Also note the returned "2D" point is still a 3D vector — the x and y coordinates are screen pixels (well, "points" as in UIKit layout units), and the third is a relative depth value. Just make a CGPoint from the first two coordinates for something you can use with other UIKit API.
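A hedged sketch of that conversion in code (the function shape and the names `sceneView` and `featurePosition` are assumptions for illustration, not from the question):

```swift
import SceneKit
import UIKit

// Converts one world-space feature point to a UIKit screen point.
func screenPoint(for featurePosition: float3, in sceneView: SCNView) -> CGPoint {
    // projectPoint takes an SCNVector3 and returns one whose x/y are in
    // view points and whose z is the normalized depth we can discard.
    let projected = sceneView.projectPoint(SCNVector3(featurePosition))
    return CGPoint(x: CGFloat(projected.x), y: CGFloat(projected.y))
}
```

You could map this over `frame.rawFeaturePoints?.points` to get one CGPoint per tracked feature.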
BTW, if you're using ARKit without SceneKit, there's also a projectPoint method on ARCamera.

finding the depth in arkit with SCNVector3Make

The goal of the project is to create a drawing app. I want it so that when I touch the screen and move my finger, the paint will follow my finger and leave a cyan trail. I did create it, BUT there is one problem: the paint DEPTH is always randomly placed.
Here is the code; you just need to connect the sceneView with the storyboard.
https://github.com/javaplanet17/test/blob/master/drawingar
My question is: how do I make the program so that the depth will always be consistent? By consistent I mean there is always the same distance between the paint and the camera.
If you run the code above you will see that I have printed out all the SCNMatrix4 values, but none of them is the DEPTH.
I have tried to change hitTransform.m43, but it only messes up the x and y.
If you want to get a point some consistent distance in front of the camera, you don’t want a hit test. A hit test finds the real world surface in front of the camera — unless your camera is pointed at a wall that’s perfectly parallel to the device screen, you’re always going to get a range of different distances.
If you want a point some distance in front of the camera, you need to get the camera’s position/orientation and apply a translation (your preferred distance) to that. Then to place SceneKit content there, use the resulting matrix to set the transform of a SceneKit node.
The easiest way to do this is to stick to SIMD vector/matrix types throughout rather than converting between those and SCN types. SceneKit adds a bunch of new accessors in iOS 11 so you can use SIMD types directly.
There’s at least a couple of ways to go about this, depending on what result you want.
Option 1
// set up z translation for 20 cm in front of whatever
// last column of a 4x4 transform matrix is translation vector
var translation = matrix_identity_float4x4
translation.columns.3.z = -0.2
// get camera transform the ARKit way (currentFrame is optional mid-session)
let cameraTransform = view.session.currentFrame!.camera.transform
// if we wanted, we could go the SceneKit way instead; result is the same
// let cameraTransform = view.pointOfView.simdTransform
// set node transform by multiplying matrices
node.simdTransform = cameraTransform * translation
This option, using a whole transform matrix, not only puts the node a consistent distance in front of your camera, it also orients it to point the same direction as your camera.
Option 2
// distance vector for 20 cm in front of whatever
let translation = float3(x: 0, y: 0, z: -0.2)
// treat distance vector as in camera space, convert to world space
let worldTranslation = view.pointOfView!.simdConvertPosition(translation, to: nil)
// set node position (not whole transform)
node.simdPosition = worldTranslation
This option sets only the position of the node, leaving its orientation unchanged. For example, if you place a bunch of cubes this way while moving the camera, they’ll all be lined up facing the same direction, whereas with option 1 they’d all be in different directions.
Going beyond
Both of the options above are based only on the 3D transform of the camera — they don’t take the position of a 2D touch on the screen into account.
If you want to do that, too, you've got more work cut out for you: essentially what you're doing is hit testing touches not against the world, but against a virtual plane that's always parallel to the camera and a certain distance away. That plane is a cross section of the camera projection frustum, so its size depends on what fixed distance from the camera you place it at. A point on the screen projects to a point on that virtual plane, with its position on the plane scaling in proportion to its distance from the camera.
So, to map touches onto that virtual plane, there are a couple of approaches to consider. (Not giving code for these because it’s not code I can write without testing, and I’m in an Xcode-free environment right now.)
Make an invisible SCNPlane that’s a child of the view’s pointOfView node, parallel to the local xy-plane and some fixed z distance in front. Use SceneKit hitTest (not ARKit hit test!) to map touches to that plane, and use the worldCoordinates of the hit test result to position the SceneKit nodes you drop into your scene.
Use Option 1 or Option 2 above to find a point some fixed distance in front of the camera (or a whole translation matrix oriented to match the camera, translated some distance in front). Use SceneKit’s projectPoint method to find the normalized depth value Z for that point, then call unprojectPoint with your 2D touch location and that same Z value to get the 3D position of the touch location with your camera distance. (For extra code/pointers, see my similar technique in this answer.)
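The second of those approaches could be sketched like this (the function shape and the names `sceneView`, `touchLocation`, and `distance` are illustrative assumptions):

```swift
import SceneKit
import UIKit

// Returns the 3D world position of a 2D touch, pinned to a fixed
// distance in front of the camera.
func touchPosition(in sceneView: SCNView, touchLocation: CGPoint,
                   distance: Float = 0.2) -> SCNVector3? {
    guard let cameraNode = sceneView.pointOfView else { return nil }
    // A world-space point straight ahead of the camera at the chosen distance…
    let ahead = cameraNode.simdConvertPosition(float3(0, 0, -distance), to: nil)
    // …projected to screen space just to learn the normalized depth there.
    let depth = sceneView.projectPoint(SCNVector3(ahead)).z
    // Unproject the 2D touch with that depth to get its 3D world position.
    return sceneView.unprojectPoint(
        SCNVector3(Float(touchLocation.x), Float(touchLocation.y), depth))
}
```

Called from a touch or pan handler, this gives one world point per touch sample, all at the same camera distance, which is exactly the "consistent depth" the question asks for.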

scaling issue when projecting uiview in front of scenekit camera

content
First, I have a 360° (equirectangular) image viewer, built by applying the image as a texture onto a sphere with the SceneKit camera at the center.
Then I enter a "drawing mode" where I can use my finger to draw on a transparent UIView.
When I'm done, I take the drawing and apply it to my sphere as an annotation.
problem (with video example)
The problem is in this 3rd step: the scale isn't saved correctly.
https://www.dropbox.com/s/a2l3vvx92sa3cgh/drawing_defect_trimmed_480p.mp4?dl=0
temporary solution
I was able to add a magic number to the expected scale, which lessens the scaling problem, but it is still a little bit off and obviously suboptimal from a technical perspective,
e.g. “scale_used = expected_scale + magic_constant”
implementation details
I am projecting a UIView in front of a SceneKit camera at some custom distance in the SceneKit world and trying to make it so the new SceneKit node will have exactly the same visual size.

The approach is to calculate the perspective projection of the item, located in the world at drawingContentItem.distance, using the camera zNear: “(screenHeight * distance / Float(zNear))”.

Then we assume that the visible world of SceneKit runs from -1000 to 1000 and that the view angle is 60 degrees, and calculate the ratio of the SceneKit near-plane view to the UIView: “(sceneScreenHeight / nearPlaneHeightInWorlCoordinates)”.

That gives us the finalHeight of the drawing in world coordinates, and we use this to calculate the scale.

But it seems that there is some mistake in the formula and it causes the need for the magic number. :(
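For reference, the quantity this kind of formula ultimately needs is the height of the camera frustum's cross-section at the drawing's distance, which depends only on the distance and the vertical field of view. A minimal sketch of that math (the 60° value, the 1 m distance, and the function name are illustrative, not from the post):

```swift
import Foundation

// Height of the visible world at `distance` in front of a perspective
// camera with vertical field of view `fovDegrees`.
func frustumHeight(atDistance distance: Double, fovDegrees: Double) -> Double {
    return 2 * distance * tan(fovDegrees * .pi / 180 / 2)
}

// With a 60° camera, the frustum is about 1.1547 units tall at 1 unit away.
let height = frustumHeight(atDistance: 1.0, fovDegrees: 60)

// So a drawing meant to cover half the screen at that distance must be
// half the frustum height in world units.
let viewWorldHeight = 0.5 * height
```

If the scale is derived this way from the actual camera FOV and zNear rather than from assumed constants, there should be no residual error left for a magic number to absorb.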

SceneKit - displaying 2d view inside scene at (x, y, z)

I need to display a view with a notification in my scene at a given position. This notification should stay the same size no matter the distance. But most important, it should look like a 2D object no matter what rotation the camera has. I don't know if I can actually insert some 2D object; that would be great. So far I'm experimenting with SCNNodes containing a box. I don't know how to make them always rotate towards the camera (which rotates on every axis). I tried to use
let lookAt = SCNLookAtConstraint(target: self.cameraNode)
lookAt.gimbalLockEnabled = true
notificationNode.constraints = [lookAt]
This almost works, but the nodes are all rotated at some random angle. It looks like a UIView with a rotation applied. Can someone help me with this?
Put your 2-D object(s) on an SCNPlane. Make the plane node a child of your camera node. Position and rotate the plane as you like, then leave it alone. Anytime the camera moves or rotates, the plane will move and revolve with it always appearing the same.
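A hedged sketch of that setup (the plane size and half-meter offset are arbitrary illustrative values, and `cameraNode` is assumed to be the node holding your camera):

```swift
import SceneKit

// Pins a 2D notification plane in front of the camera.
func attachNotificationPlane(to cameraNode: SCNNode) {
    let plane = SCNPlane(width: 0.2, height: 0.1)
    plane.firstMaterial?.isDoubleSided = true
    let planeNode = SCNNode(geometry: plane)
    // Half a meter straight ahead of the camera, facing it by construction.
    planeNode.position = SCNVector3(0, 0, -0.5)
    // As a child of the camera node, it moves and rotates with the camera,
    // so it always appears at the same place and size on screen.
    cameraNode.addChildNode(planeNode)
}
```

Note that this keeps the view fixed on screen; if instead it must stay anchored at a world position while facing the camera, the SCNLookAtConstraint approach from the question (or SCNBillboardConstraint) is the right direction.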
OK, I know how to do it now: create an empty node without geometry, and add a child node with an SCNLookAtConstraint. Then I can move the invisible node with animation, and the subnode keeps looking at the camera.
