SceneKit 'clip to bounds' equivalent - iOS

Other than camera xFov, yFov, zNear and zFar, is there a way with SceneKit to have a node (and its descendants) 'clip to bounds', i.e. to specify an arbitrary 3D bounding box whereby any nodes that fall outside it would not be rendered?
My use case is as follows:
I have a SceneView with a clear color background that is displayed on top of a separate map view.
I only want to display nodes in my 3D model that are notionally 'above' the ground (i.e. above the map). When the map is flat, I can do that very easily by setting zFar on the SceneKit camera and positioning the camera directly on the positive Z-axis.
However, if the map is tilted, I need to adjust my scene camera to match the map tilt. At that point, the camera's z-axis is no longer orthogonal to the scene content, and zFar would 'reveal' nodes in the model that are notionally below the map.
(This would be easy if the map content were part of the SceneKit node hierarchy - just define a floor and render the map on the floor. But the map is displayed in a separate view that sits below the SceneKit view.)
Hence, I'm looking to define a plane or bounding box outside which SceneKit nodes are not rendered, but which can be controlled independently of the camera frustum.
(At least that's what I think I need - perhaps there's another way...)
Update
I found I can implement renderer(_:didSimulatePhysicsAtTime:) in the SCNView delegate and iterate through my scene nodes, testing each node's world position and hiding or showing it as needed. However, that feels a bit expensive and inefficient, so I'm wondering if there's a better approach?
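To make the per-frame workaround described above concrete, a delegate callback might look like the following sketch. Names like `clipMinZ` are illustrative assumptions, not part of any original code, and as noted this scales poorly with node count:

```swift
import SceneKit

// Sketch of the per-frame visibility test described above.
// `clipMinZ` is an assumed world-space "ground" height.
class ClippingDelegate: NSObject, SCNSceneRendererDelegate {
    let clipMinZ: Float = 0

    func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
        guard let scene = renderer.scene else { return }
        scene.rootNode.enumerateChildNodes { node, _ in
            // worldPosition requires iOS 11; on earlier systems, convert
            // node.position to world space with convertPosition(_:to:)
            node.isHidden = node.worldPosition.z < self.clipMinZ
        }
    }
}
```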

Related

How do you get the UIKit coordinates of an ARKit feature point?

I have an ARSCNView and I am tracking feature points in the scene. How would I get the 2D coordinates of the feature points (as in the coordinates of that point in the screen) from the 3D world coordinates of the feature point?
(Essentially the opposite of sceneView.hitTest)
Converting a point from 3D space (usually camera or world space) to 2D view (pixel) space is called projecting that point. (Because it involves a projection transform that defines how to flatten the third dimension.)
ARKit and SceneKit both offer methods for projecting points (and unprojecting points, the reverse transform that requires extra input on how to extrapolate the third dimension).
Since you're working with ARSCNView, you can just use the projectPoint method. (That's inherited from the superclass SCNView and defined in the SCNSceneRenderer protocol, but still applies in AR because ARKit world space is the same as SceneKit world/scene/rootNode space.) Note you'll need to convert back and forth between float3 and SCNVector3 for that method.
Also note the returned "2D" point is still a 3D vector — the x and y coordinates are screen pixels (well, "points" as in UIKit layout units), and the third is a relative depth value. Just make a CGPoint from the first two coordinates for something you can use with other UIKit API.
BTW, if you're using ARKit without SceneKit, there's also a projectPoint method on ARCamera.
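Putting the pieces above together, a minimal helper might look like this sketch (it assumes you already have a `sceneView` and a feature point in hand; the function name is illustrative):

```swift
import ARKit
import SceneKit

// Convert an ARKit feature point (world space) into a UIKit screen point
// using SCNSceneRenderer.projectPoint, as described above.
func screenPoint(for worldPoint: simd_float3, in sceneView: ARSCNView) -> CGPoint {
    // projectPoint takes and returns SCNVector3, so convert from float3
    let projected = sceneView.projectPoint(SCNVector3(worldPoint))
    // x and y are view points; z is normalized depth, which we drop here
    return CGPoint(x: CGFloat(projected.x), y: CGFloat(projected.y))
}
```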

finding the depth in arkit with SCNVector3Make

The goal of the project is to create a drawing app. I want it so that when I touch the screen and move my finger, it will follow the finger and leave cyan paint. I did create it, BUT there is one problem: the paint DEPTH is always randomly placed.
Here is the code; you just need to connect the sceneView with the storyboard.
https://github.com/javaplanet17/test/blob/master/drawingar
My question is: how do I make the program so that the depth will always be consistent? By consistent I mean there is always the same distance between the paint and the camera.
If you run the code above you will see that I have printed out all the SCNMatrix4 values, but none of them is the DEPTH.
I have tried changing hitTransform.m43, but it only messes up the x and y.
If you want to get a point some consistent distance in front of the camera, you don’t want a hit test. A hit test finds the real world surface in front of the camera — unless your camera is pointed at a wall that’s perfectly parallel to the device screen, you’re always going to get a range of different distances.
If you want a point some distance in front of the camera, you need to get the camera’s position/orientation and apply a translation (your preferred distance) to that. Then to place SceneKit content there, use the resulting matrix to set the transform of a SceneKit node.
The easiest way to do this is to stick to SIMD vector/matrix types throughout rather than converting between those and SCN types. SceneKit adds a bunch of new accessors in iOS 11 so you can use SIMD types directly.
There’s at least a couple of ways to go about this, depending on what result you want.
Option 1
// set up z translation for 20 cm in front of whatever
// last column of a 4x4 transform matrix is translation vector
var translation = matrix_identity_float4x4
translation.columns.3.z = -0.2
// get camera transform the ARKit way
// (currentFrame is optional; force-unwrapped here for brevity)
let cameraTransform = view.session.currentFrame!.camera.transform
// if we wanted, we could go the SceneKit way instead; result is the same
// let cameraTransform = view.pointOfView!.simdTransform
// set node transform by multiplying matrices
node.simdTransform = cameraTransform * translation
This option, using a whole transform matrix, not only puts the node a consistent distance in front of your camera, it also orients it to point the same direction as your camera.
Option 2
// distance vector for 20 cm in front of whatever
let translation = float3(x: 0, y: 0, z: -0.2)
// treat distance vector as in camera space, convert to world space
let worldTranslation = view.pointOfView!.simdConvertPosition(translation, to: nil)
// set node position (not whole transform)
node.simdPosition = worldTranslation
This option sets only the position of the node, leaving its orientation unchanged. For example, if you place a bunch of cubes this way while moving the camera, they'll all be lined up facing the same direction, whereas with option 1 they'd all be facing different directions.
Going beyond
Both of the options above are based only on the 3D transform of the camera — they don’t take the position of a 2D touch on the screen into account.
If you want to do that, too, you’ve got more work cut out for you — essentially what you’re doing is hit testing touches not against the world, but against a virtual plane that’s always parallel to the camera and a certain distance away. That plane is a cross section of the camera projection frustum, so its size depends on what fixed distance from the camera you place it at. A point on the screen projects to a point on that virtual plane, with its position on the plane scaling proportional to the distance from the camera.
So, to map touches onto that virtual plane, there are a couple of approaches to consider. (Not giving code for these because it’s not code I can write without testing, and I’m in an Xcode-free environment right now.)
Make an invisible SCNPlane that’s a child of the view’s pointOfView node, parallel to the local xy-plane and some fixed z distance in front. Use SceneKit hitTest (not ARKit hit test!) to map touches to that plane, and use the worldCoordinates of the hit test result to position the SceneKit nodes you drop into your scene.
Use Option 1 or Option 2 above to find a point some fixed distance in front of the camera (or a whole translation matrix oriented to match the camera, translated some distance in front). Use SceneKit’s projectPoint method to find the normalized depth value Z for that point, then call unprojectPoint with your 2D touch location and that same Z value to get the 3D position of the touch location with your camera distance. (For extra code/pointers, see my similar technique in this answer.)
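A rough sketch of that second approach, offered in the same untested spirit as the caveat above (`sceneView`, `touchLocation`, and the function name are assumptions):

```swift
import SceneKit

// Map a 2D touch to a 3D point a fixed distance in front of the camera,
// via projectPoint/unprojectPoint as described in the second option.
func touchPosition(in sceneView: SCNView, touchLocation: CGPoint,
                   distance: Float) -> SCNVector3? {
    guard let pov = sceneView.pointOfView else { return nil }
    // a point the desired distance straight ahead of the camera
    let ahead = pov.simdConvertPosition(float3(0, 0, -distance), to: nil)
    // find that point's normalized depth...
    let depth = sceneView.projectPoint(SCNVector3(ahead)).z
    // ...then unproject the touch location at that same depth
    return sceneView.unprojectPoint(
        SCNVector3(Float(touchLocation.x), Float(touchLocation.y), depth))
}
```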

Turn an entire SceneKit scene into an image suitable for a texture

I've written a little app using CoreMotion, AV and SceneKit to make a simple panorama. When you take a picture, it maps that onto a SK rectangle and places it in front of whatever CM direction the camera is facing. This is working fine, but...
I would like the user to be able to click a "done" button and turn the entire scene into a single image. I could then map that onto a sphere for future viewing rather than re-creating the entire set of objects. I don't need to stitch or anything like that, I want the individual images to remain separate rectangles, like photos glued to the inside of a ball.
I know about snapshot and tried using that with a really wide FOV, but that results in a fisheye view that does not map back properly (unless I'm doing it wrong). I assume there is some sort of transform I need to apply? Or perhaps there is an easier way to do this?
The key is "photos glued to the inside of a ball". You have a bunch of rectangles, suspended in space. Turning that into one image suitable for projection onto a sphere is a bit of work. You'll have to project each rectangle onto the sphere, and warp the image accordingly.
If you just want to reconstruct the scene for future viewing in SceneKit, use SCNScene's built-in serialization, write(to:options:delegate:progressHandler:) and SCNScene(named:).
To compute the mapping of images onto a sphere, you'll need some coordinate conversion. For each image, convert the coordinates of the corners into spherical coordinates, with the origin at your point of view. Change the radius of each corner's coordinate to the radius of your sphere, and you now have the projected corners' locations on the sphere.
It's tempting to repeat this process for each pixel in the input rectangular image. But that will leave empty pixels in the spherical output image. So you'll work in reverse. For each pixel in the spherical output image (within the 4 corner points), compute the ray (trivially done, in spherical coordinates) from POV to that point. Convert that ray back to Cartesian coordinates, compute its intersection with the rectangular image's plane, and sample at that point in your input image. You'll want to do some pixel weighting, since your output image and input image will have different pixel dimensions.
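The coordinate conversions above can be sketched as plain math, independent of SceneKit. This assumes the POV at the origin and one particular spherical convention (polar angle measured from +Y, azimuth around Y from +Z); any consistent convention works:

```swift
import Foundation
import simd

// Cartesian -> spherical, with the POV at the origin.
func toSpherical(_ p: simd_float3) -> (r: Float, theta: Float, phi: Float) {
    let r = simd_length(p)
    let theta = acos(p.y / r)   // polar angle from +Y
    let phi = atan2(p.x, p.z)   // azimuth around Y from +Z
    return (r, theta, phi)
}

// Spherical -> Cartesian, e.g. after snapping r to the sphere's radius.
func toCartesian(r: Float, theta: Float, phi: Float) -> simd_float3 {
    return simd_float3(r * sin(theta) * sin(phi),
                       r * cos(theta),
                       r * sin(theta) * cos(phi))
}
```

With a conversion like this, each image corner goes through toSpherical, has its r replaced by the sphere radius, and the reverse-mapping step per output pixel goes through toCartesian to build the ray.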

SceneKit - displaying 2d view inside scene at (x, y, z)

I need to display a view with a notification in my scene at a given position. The notification should stay the same size no matter the distance, and, most importantly, it should look like a 2D object no matter how the camera is rotated. I don't know if I can actually insert a 2D object; that would be great. So far I'm experimenting with SCNNodes containing a box. I don't know how to make them always rotate towards the camera (which rotates on every axis). I tried to use
let lookAt = SCNLookAtConstraint(target: self.cameraNode)
lookAt.gimbalLockEnabled = true
notificationNode.constraints = [lookAt]
This almost works, but nodes are all rotated in some random angle. Looks like UIView with rotation applied. Can someone help me with this?
Put your 2-D object(s) on an SCNPlane. Make the plane node a child of your camera node. Position and rotate the plane as you like, then leave it alone. Anytime the camera moves or rotates, the plane will move and revolve with it always appearing the same.
OK, I know how to do it now: create an empty node without geometry, and add a child node with the SCNLookAtConstraint. Then I can move the invisible node with animation, and the subnode stays looking at the camera.

Scenekit object that stays in the center of the camera view (Swift)

So I have a camera in my SceneKit project that is able to rotate and move around freely, and I have an object that, in some cases, needs to stay at a constant distance away from the camera and always be in the center of its view no matter how the camera rotates. Unfortunately, I'm new to SceneKit and don't know how to accomplish this.
So the key things I'm looking for are:
How to have the object always at the same distance from the camera
How to have the object always be in the center of the camera's view no matter what direction it's looking
At the moment, both the camera and the object (an SCNNode with a box geometry) are children of the same scene.
I'm coding in swift so I'd prefer an answer using that, but if you have a solution in objective-c, that works too.
Thanks a bunch!
Think about how you might solve this in the real world. Grab a two by four of the appropriate length. Use duct tape to attach the ball to one end, and to attach the camera (aimed at the ball) to the other end.
Now you can carry that rig around with you. The ball will always be in the center of the camera view, and will be a constant distance away from the camera.
You can build the same rig, virtually, in SceneKit. Create a new SCNNode to be the rig (taking the place of the two by four). Add the ball as a child node, at (0, 0, 0). Add the camera as a child node too, at (0, 0, 5) (camera looks down the -Z axis, so this position should put the ball in the center of the view). Now you can move the rig node anywhere in the scene you want, and you'll have a consistent ball position.
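The rig described above might be set up like this (a sketch; `scene`, the geometry, and the distances are assumptions):

```swift
import SceneKit

let rigNode = SCNNode()                    // the virtual two-by-four
let ballNode = SCNNode(geometry: SCNSphere(radius: 0.5))
ballNode.position = SCNVector3(0, 0, 0)    // ball at the rig's origin
let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
cameraNode.position = SCNVector3(0, 0, 5)  // looks down -Z at the ball
rigNode.addChildNode(ballNode)
rigNode.addChildNode(cameraNode)
scene.rootNode.addChildNode(rigNode)
// Move or rotate rigNode; the ball stays centered in view, 5 units away.
```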
