SceneKit: too much memory persisting

I'm out of ideas here: SceneKit keeps piling on memory and I'm only getting started. I'm displaying SCNNodes that are stored in arrays so I can separate components of the molecule for animation. These trees model molecules, of which I will ultimately have maybe 50 to display, say one per "chapter". The issue is that when I move to another chapter, the molecules from previous chapters persist in memory.
The molecule nodes are trees of child nodes. About half of the nodes are empty containers used for orientation. Otherwise, the geometries are SceneKit primitives (spheres, capsules, and cylinders). Each geometry has a specular and a diffuse material consisting of a UIColor; no textures are used.
When the app first boots, these molecules are constructed in code and archived into a dictionary. Then, on that boot and on subsequent ones, the archived dictionary is read into a local dictionary for use by the view controller. (I'm removing safety features in this post for brevity.)
moleculeDictionary = Molecules.readFile() as! [String: [SCNNode]]
When a chapter wants to display a molecule it calls a particular function that loads the needed components for a given molecule from the local dictionary into local SCNNode properties.
// node stores (reusable)
var atomsNode_1 = SCNNode()
var atomsNode_2 = SCNNode()
. . .

func lysozyme() { // called by a chapter to display this molecule
    . . .
    components = moleculeDictionary["lysozyme"]
    atomsNode_1 = components[0] // protein w/CPK color
    baseNode.addChildNode(atomsNode_1)
    atomsNode_2 = components[2] // NAG
    baseNode.addChildNode(atomsNode_2)
    . . .
}
Before the next molecule is to be displayed, I call a “clean up” function:
atomsNode_1.removeFromParentNode()
atomsNode_2.removeFromParentNode()
. . .
When I investigate in Instruments, most of the bloated memory is in 32 kB chunks allocated by C3DMeshCreateFromProfile and 80 kB chunks from C3DMeshCreateCopyWithInterleavedSources.
I also have leaks to trace, which point to the NSKeyedUnarchiver decoding of the archive. I need to deal with those as well, but they are a fraction of the memory use that accumulates with each molecule call.
If I return to a previously viewed molecule, there is no further increase in memory usage; it all just accumulates and persists.
I've tried declaring atomsNode_1 and its kin as optionals and setting them to nil at clean-up time. No help. I've tried, in the clean-up function,
atomsNode_1.enumerateChildNodesUsingBlock({ node, stop in
    node.removeFromParentNode()
})
Well, the memory goes back down, but the nodes now seem to be permanently gone from the loaded dictionary. Damn reference types!
So maybe I need a way to archive the [SCNNode] arrays such that I can unarchive and retrieve them individually. In this scenario I would clear them out of memory when done and reload them from the archive when revisiting that molecule. But I don't yet know how to do either of these. I'd appreciate comments on this before investing more time in being frustrated.

Spheres, capsules, and cylinders all have fairly dense meshes. Do you need all that detail? Try reducing the various segment count properties (segmentCount, radialSegmentCount, etc.). As a quick test, substitute SCNPyramid for all of your primitive types (that's the primitive with the lowest vertex count). You should see a dramatic reduction in memory use if this is a factor. (It will look ugly, but it gives you immediate feedback on whether you're on a usable track.) Can you use a long SCNBox instead of a cylinder?
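For example, a quick segment-count experiment might look like this (a sketch; 48 is the documented default for SCNSphere's segmentCount and SCNCylinder's radialSegmentCount):
let atomSphere = SCNSphere(radius: 1.0)
atomSphere.segmentCount = 12          // down from the default 48
let bondCylinder = SCNCylinder(radius: 0.2, height: 1.5)
bondCylinder.radialSegmentCount = 8   // down from the default 48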
Another optimization step would be to use SCNLevelOfDetail to substitute low-vertex-count geometry when an object is far away. That would be more work than simply reducing the segment counts uniformly, but would pay off if you sometimes need greater detail.
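A minimal SCNLevelOfDetail sketch (the distance is a placeholder you'd tune):
let fullDetail = SCNSphere(radius: 1.0)   // default 48 segments
let lowDetail = SCNSphere(radius: 1.0)
lowDetail.segmentCount = 8
fullDetail.levelsOfDetail = [
    SCNLevelOfDetail(geometry: lowDetail, worldSpaceDistance: 10.0)
]
// SceneKit swaps in lowDetail whenever the node is more than
// 10 units from the camera.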
Instead of managing the components yourself in arrays, use the node hierarchy to do that. Create each molecule, or animatable piece of a molecule, as a tree of SCNNodes. Give it a name. Make a flattenedClone. Now archive that. Read the node tree from the archive when you need it; don't worry about arrays of nodes.
Consider writing two programs. One is your iOS program that manipulates/displays the molecules. The other is a Mac (or iOS?) program that generates your molecule node trees and archives them. That will give you a bunch of SCNNode tree archives that you can embed, as resources, in your display program, with no on-the-fly generation.
An answer to "SceneKit memory management using Swift" notes the need to nil out "textures" (the materials or firstMaterial properties?) to release a node. Seems worth a look, although since you're just using UIColor I doubt it's a factor.
Here's an example of creating a compound node and archiving it. In real code you'd separate the archiving from the creation. Note also the use of a long skinny box to simulate a line. Try a chamfer radius of 0!
extension SCNNode {

    public class func gizmoNode(axisLength: CGFloat) -> SCNNode {
        let offset = CGFloat(axisLength / 2.0)
        let axisSide = CGFloat(0.1)
        let chamferRadius = CGFloat(axisSide)

        let xBox = SCNBox(width: axisLength, height: axisSide, length: axisSide, chamferRadius: chamferRadius)
        xBox.firstMaterial?.diffuse.contents = NSColor.redColor()
        let yBox = SCNBox(width: axisSide, height: axisLength, length: axisSide, chamferRadius: chamferRadius)
        yBox.firstMaterial?.diffuse.contents = NSColor.greenColor()
        let zBox = SCNBox(width: axisSide, height: axisSide, length: axisLength, chamferRadius: chamferRadius)
        zBox.firstMaterial?.diffuse.contents = NSColor.blueColor()

        let xNode = SCNNode(geometry: xBox)
        xNode.name = "X axis"
        let yNode = SCNNode(geometry: yBox)
        yNode.name = "Y axis"
        let zNode = SCNNode(geometry: zBox)
        zNode.name = "Z axis"

        let result = SCNNode()
        result.name = "Gizmo"
        result.addChildNode(xNode)
        result.addChildNode(yNode)
        result.addChildNode(zNode)
        xNode.position.x = offset
        yNode.position.y = offset
        zNode.position.z = offset

        let data = NSKeyedArchiver.archivedDataWithRootObject(result)
        let filename = "gizmo"

        // Save data to file
        let documentDirURL = try! NSFileManager.defaultManager().URLForDirectory(.DocumentDirectory, inDomain: .UserDomainMask, appropriateForURL: nil, create: true)
        // Made the extension "plist" so you can easily inspect it by opening in Finder.
        // Could just as well be "scn" or "node"; ".scn" can be opened in the Xcode Scene Editor.
        let fileURL = documentDirURL.URLByAppendingPathComponent(filename).URLByAppendingPathExtension("plist")
        print("FilePath:", fileURL.path)
        if !data.writeToURL(fileURL, atomically: true) {
            print("oops")
        }
        return result
    }
}
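And a hedged sketch of reading that archive back in (same Swift 2 era APIs as the code above; error handling omitted, and `scene` is assumed to be your SCNScene):
let documentDirURL = try! NSFileManager.defaultManager().URLForDirectory(.DocumentDirectory, inDomain: .UserDomainMask, appropriateForURL: nil, create: false)
let fileURL = documentDirURL.URLByAppendingPathComponent("gizmo").URLByAppendingPathExtension("plist")
if let data = NSData(contentsOfURL: fileURL) {
    if let gizmo = NSKeyedUnarchiver.unarchiveObjectWithData(data) as? SCNNode {
        scene.rootNode.addChildNode(gizmo)
    }
}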

I also experienced a lot of memory bloat from SceneKit in my app, with similar memory chunks showing in Instruments (C3DGenericSourceCreateDeserializedDataWithAccessors, C3DMeshSourceCreateMutable, etc.). I found that setting the geometry property to nil on the SCNNode objects before letting Swift deinitialize them solved it.
In your case, in your cleanup function, do something like:
atomsNode_1.removeFromParentNode()
atomsNode_1.geometry = nil
atomsNode_2.removeFromParentNode()
atomsNode_2.geometry = nil
Another example of how you may implement the cleaning:
class ViewController: UIViewController {

    @IBOutlet weak var sceneView: SCNView!
    var scene: SCNScene!
    // ...

    override func viewDidLoad() {
        super.viewDidLoad()
        scene = SCNScene()
        sceneView.scene = scene
        // ...
    }

    deinit {
        scene.rootNode.cleanup()
    }
    // ...
}

extension SCNNode {
    func cleanup() {
        for child in childNodes {
            child.cleanup()
        }
        geometry = nil
    }
}
If that doesn't work, you may have better success by setting the node's texture to nil, as reported in "SceneKit memory management using Swift".

Related

CAAnimation on multiple SceneKit nodes simultaneously

I am creating an application wherein I am using SceneKit content in an AR app. I have multiple nodes placed at different places in my scene. They may or may not be inside one parent node. The user has to choose the correct node, as per a challenge set by the application. If the user chooses the correct node, it goes through one kind of animation, and the incorrect ones (there may be several) undergo another set of animations. I am accomplishing the animations using CAAnimation directly, which is all good. To accomplish this, I am creating an array of all nodes and using them for animation.
DispatchQueue.global(qos: .userInteractive).async { [weak self] in
    for node in (self?.nodesAddedInScene.keys)! {
        for index in 1...node.childNodes.count - 1 {
            if node.childNodes[index].childNodes.first?.name == "target" {
                self?.riseUpSpinAndFadeAnimation(on: node.childNodes[index])
            } else {
                self?.fadeAnimation(on: node.childNodes[index])
            }
        }
    }
}
When user chooses "target" node, that node goes through one set of animation and others go through another set of animations.
RiseUpSpinAndFadeAnimation:
private func riseUpSpinAndFadeAnimation(on shape: SCNNode) {
    let riseUpAnimation = CABasicAnimation(keyPath: "position")
    riseUpAnimation.fromValue = SCNVector3(shape.position.x, shape.position.y, shape.position.z)
    riseUpAnimation.toValue = SCNVector3(shape.position.x, shape.position.y + 0.5, shape.position.z)

    let spinAnimation = CABasicAnimation(keyPath: "eulerAngles.y")
    spinAnimation.toValue = shape.eulerAngles.y + 180.0
    spinAnimation.autoreverses = true

    let fadeAnimation = CABasicAnimation(keyPath: "opacity")
    fadeAnimation.toValue = 0.0

    let riseUpSpinAndFadeAnimation = CAAnimationGroup()
    riseUpSpinAndFadeAnimation.animations = [riseUpAnimation, fadeAnimation, spinAnimation]
    riseUpSpinAndFadeAnimation.duration = 1.0
    riseUpSpinAndFadeAnimation.fillMode = kCAFillModeForwards
    riseUpSpinAndFadeAnimation.isRemovedOnCompletion = false
    shape.addAnimation(riseUpSpinAndFadeAnimation, forKey: "riseUpSpinAndFade")
}
FadeAnimation:
private func fadeAnimation(on shape: SCNNode) {
    let fadeAnimation = CABasicAnimation(keyPath: "opacity")
    fadeAnimation.toValue = 0.0
    fadeAnimation.duration = 0.5
    fadeAnimation.fillMode = kCAFillModeForwards
    fadeAnimation.isRemovedOnCompletion = false
    shape.addAnimation(fadeAnimation, forKey: "fade")
}
The animations themselves work. The issue is that, since the nodes are in an array, the animations do not start at the same time for all nodes. The small differences in start times make for a poor UI.
What I am looking for is a way to attach the animations to all nodes and then trigger them together later, say when the user taps the correct node. Arrays don't seem a wise choice to me. However, I am afraid that if I make all of these nodes children of an empty node and run the animation on that empty node, it would be difficult to manage the placement of the children in the first place, since they are supposed to be kept at random distances and not necessarily close together. Given that this ultimately drives the AR experience, that is a real drawback.
So: are there methods to attach animations to multiple selected nodes (even sequentially) but RUN them together? I used shape.addAnimation(fadeAnimation, forKey: "fade"); can the "forKey" argument be put to use in such a case? Any pointers appreciated.
I've had up to fifty SCNNodes animating in perfect harmony by using CAKeyframe animations that are paused (.speed = 0) and setting each animation's time offset (.timeOffset) inside an SCNSceneRendererDelegate renderer(_:updateAtTime:) function.
It's pretty amazing that you can add a paused animation with a time offset, every 1/60th of a second, for a large number of nodes. Hats off to the SceneKit developers for having so little overhead on adding and removing CAAnimations.
I tried many different CAAnimation/SCNAction techniques before settling on this. In the other methods the animations would drift out of sync over time.
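A minimal sketch of that technique, assuming a hypothetical SyncedAnimator class (and using a plain CABasicAnimation for brevity; the description above uses CAKeyframeAnimation):
import SceneKit

final class SyncedAnimator: NSObject, SCNSceneRendererDelegate {
    let animatedNodes: [SCNNode]       // the nodes to animate in lockstep
    private var startTime: TimeInterval?

    init(nodes: [SCNNode]) {
        self.animatedNodes = nodes
        super.init()
    }

    func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
        if startTime == nil { startTime = time }
        // Build one paused animation frozen at the shared clock's offset,
        // then (re-)add it to every node so they all sample the same time.
        let fade = CABasicAnimation(keyPath: "opacity")
        fade.fromValue = 1.0
        fade.toValue = 0.0
        fade.duration = 1.0
        fade.speed = 0                                 // paused; we drive it manually
        fade.timeOffset = min(time - startTime!, fade.duration)
        for node in animatedNodes {
            node.removeAnimation(forKey: "fade")
            node.addAnimation(fade, forKey: "fade")
        }
    }
}
Set an instance as your SCNView's delegate (keeping a strong reference to it), and it scrubs every node to the same animation time each frame.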
Manganese,
I am just taking a guess here, or it might spark an idea for you :-)
I am focusing on this part of your question:
"What I am looking for is a way to attach the animations to all nodes and then trigger them together later, say when the user taps the correct node."
I wonder if SCNTransaction (https://developer.apple.com/documentation/scenekit/scntransaction) might do the trick,
or maybe DispatchQueue sync or async (totally guessing... but it could help): https://developer.apple.com/documentation/dispatch
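For what it's worth, a minimal SCNTransaction sketch of that idea (assuming a nodes array that already holds the nodes to animate):
// Batch the property changes in one transaction so the implicit
// animations all commit, and therefore start, together.
SCNTransaction.begin()
SCNTransaction.animationDuration = 1.0
for node in nodes {
    node.opacity = 0.0
}
SCNTransaction.commit()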
or I am way off the mark :-)
just trying to help out....
We all learn by sharing what we know
RAD

IOS11 Beta ARKit can't scale Scene object

I created a basic scene and added a .dae file.
First, every time I run or save the project I get the popup:
The document “billboard.dae” could not be saved.
It still runs, though, but it is annoying.
The bigger issue is that I can't scale the object.
I have tried different values, 0.5 and also > 1, but nothing seems to work. Here is my code:
override func viewDidLoad() {
    super.viewDidLoad()
    sceneView.delegate = self
    sceneView.showsStatistics = true
    let scene = SCNScene(named: "art.scnassets/billboard.dae")!
    let billboardNode = scene.rootNode.childNode(withName: "billboard", recursively: true)
    // billboardNode?.position = SCNVector3Make(0, 0, 1)
    billboardNode?.position.z = 10
    billboardNode?.scale.z = 0.5
    // billboardNode?.scale = SCNVector3Make(0.4, 0.4, 0.4)
    sceneView.scene = scene
}
Any ideas?
Thanks
Have you verified that billboardNode is not nil? You're sending position and scale messages to an optional (the result of looking for a child node with a given name), but if it's nil (because finding the child node failed), they won't have any effect.
The error suggests to me that there was some problem converting the .dae file, which might explain why the scene can't locate the asset by name. Or it might be as simple as "billboard" vs. "Billboard".
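A quick way to check, as a sketch (dropping this into the viewDidLoad above; the node name is taken from the question):
guard let billboardNode = scene.rootNode.childNode(withName: "billboard", recursively: true) else {
    fatalError("No node named \"billboard\"; check the node names inside billboard.dae")
}
billboardNode.scale = SCNVector3(0.4, 0.4, 0.4)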

ARKit hide objects behind walls

How can I use the horizontal and vertical planes tracked by ARKit to hide objects behind walls or behind real objects? Currently, added 3D objects can be seen through walls when you leave a room, and in front of real objects they should be behind. Is it possible to use the data ARKit gives me to provide a more natural AR experience, without objects appearing through walls?
You have two issues here.
(And you didn't even use regular expressions!)
How to create occlusion geometry for ARKit/SceneKit?
If you set a SceneKit material's colorBufferWriteMask to an empty value ([] in Swift), any objects using that material won't appear in the view, but they'll still write to the z-buffer during rendering, which affects the rendering of other objects. In effect, you'll get a "hole" shaped like your object, through which the background shows (the camera feed, in the case of ARSCNView), but which can still obscure other SceneKit objects.
You'll also need to make sure the occluder renders before any other nodes it's supposed to obscure. You can do this using the node hierarchy (I can't remember offhand whether parent nodes render before their children or the other way around, but it's easy enough to test). Nodes that are peers in the hierarchy don't have a deterministic order, but you can force an order regardless of hierarchy with the renderingOrder property. That property defaults to zero, so setting it to -1 will render before everything. (Or for finer control, set the renderingOrder of several nodes to a sequence of values.)
How to detect walls/etc so you know where to put occlusion geometry?
In iOS 11.3 and later (aka "ARKit 1.5"), you can turn on vertical plane detection. (Note that when you get vertical plane anchors back from that, they're automatically rotated. So if you attach models to the anchor, their local "up" direction is normal to the plane.) Also new in iOS 11.3, you can get a more detailed shape estimate for each detected plane (see ARSCNPlaneGeometry), regardless of its orientation.
However, even if you have the horizontal and the vertical, the outer limits of a plane are just estimates that change over time. That is, ARKit can quickly detect where part of a wall is, but it doesn't know where the edges of the wall are without the user spending some time waving the device around to map out the space. And even then, the mapped edges might not line up precisely with those of the real wall.
So... if you use detected vertical planes to occlude virtual geometry, you might find places where virtual objects that are supposed to be hidden show through, either by not quite hiding right at the edge of the wall, or by being visible through places where ARKit hasn't mapped the entire real wall. (The latter issue you might be able to solve by assuming a larger extent than ARKit does.)
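A hedged sketch of that padding idea (planeAnchor is an ARPlaneAnchor from plane detection; the padding value is a guess you would tune):
let padding: Float = 0.5   // metres beyond ARKit's estimated extent
let occluder = SCNPlane(width: CGFloat(planeAnchor.extent.x + padding),
                        height: CGFloat(planeAnchor.extent.z + padding))
occluder.firstMaterial?.colorBufferWriteMask = []   // invisible, but still writes depth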
To create an occlusion material (also known as a "black hole" or blocking material), use the following instance properties: .colorBufferWriteMask, .readsFromDepthBuffer, .writesToDepthBuffer, and .renderingOrder.
You can use them this way:
plane.geometry?.firstMaterial?.isDoubleSided = true
plane.geometry?.firstMaterial?.colorBufferWriteMask = .alpha
plane.geometry?.firstMaterial?.writesToDepthBuffer = true
plane.geometry?.firstMaterial?.readsFromDepthBuffer = true
plane.renderingOrder = -100
...or this way:
func occlusion() -> SCNMaterial {
    let occlusionMaterial = SCNMaterial()
    occlusionMaterial.isDoubleSided = true
    occlusionMaterial.colorBufferWriteMask = []
    occlusionMaterial.readsFromDepthBuffer = true
    occlusionMaterial.writesToDepthBuffer = true
    return occlusionMaterial
}

plane.geometry?.firstMaterial = occlusion()
plane.renderingOrder = -100
Creating an occlusion material is really simple:
let boxGeometry = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0)
// Define an occlusion material
let occlusionMaterial = SCNMaterial()
occlusionMaterial.colorBufferWriteMask = []
boxGeometry.materials = [occlusionMaterial]
self.box = SCNNode(geometry: boxGeometry)
// Set the rendering order so this box renders before the other models
self.box.renderingOrder = -1
Great solution:
GitHub: arkit-occlusion
Worked for me.
But in my case I wanted to set the walls in code. So if you don't want the user to set the walls, use plane detection to detect them and place the occluding walls in code.
Alternatively, within a range of about 4 meters the iPhone's depth sensing works, and you can detect obstacles with an ARHitTest.
ARKit and the LiDAR scanner
You can hide any object behind a virtual invisible wall that replicates real wall geometry. iPhones and iPad Pros equipped with a LiDAR scanner help us reconstruct a 3D topological map of the surrounding environment. The LiDAR scanner greatly improves the quality of the Z channel, which allows you to occlude or remove humans from the AR scene.
LiDAR also improves features such as object occlusion, motion tracking, and raycasting. With a LiDAR scanner you can reconstruct a scene even in an unlit environment, or in a room with featureless white walls. 3D reconstruction of the surrounding environment is possible thanks to the sceneReconstruction instance property (available since ARKit 3.5 on LiDAR-equipped devices). With a reconstructed mesh of your walls it's now super easy to hide any object behind the real walls.
To activate the sceneReconstruction option, use the following code:
@IBOutlet var arView: ARView!

arView.automaticallyConfigureSession = false

guard ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) else { return }

let config = ARWorldTrackingConfiguration()
config.sceneReconstruction = .mesh

arView.debugOptions.insert([.showSceneUnderstanding])
arView.environment.sceneUnderstanding.options.insert([.occlusion])
arView.session.run(config)
Also, if you're using SceneKit, try the following approach:
@IBOutlet var sceneView: ARSCNView!

func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    guard let meshAnchor = anchor as? ARMeshAnchor else { return nil }

    let geometry = SCNGeometry(arGeometry: meshAnchor.geometry)
    geometry.firstMaterial?.diffuse.contents = colorizer.assignColor(to: meshAnchor.identifier)

    let node = SCNNode()
    node.name = "Node_\(meshAnchor.identifier)"
    node.geometry = geometry
    return node
}

func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let meshAnchor = anchor as? ARMeshAnchor else { return }

    let newGeometry = SCNGeometry(arGeometry: meshAnchor.geometry)
    newGeometry.firstMaterial?.diffuse.contents = colorizer.assignColor(to: meshAnchor.identifier)
    node.geometry = newGeometry
}
And here are SCNGeometry and SCNGeometrySource extensions:
extension SCNGeometry {
    convenience init(arGeometry: ARMeshGeometry) {
        let verticesSource = SCNGeometrySource(arGeometry.vertices, semantic: .vertex)
        let normalsSource = SCNGeometrySource(arGeometry.normals, semantic: .normal)
        let faces = SCNGeometryElement(arGeometry.faces)
        self.init(sources: [verticesSource, normalsSource], elements: [faces])
    }
}

extension SCNGeometrySource {
    convenience init(_ source: ARGeometrySource, semantic: Semantic) {
        self.init(buffer: source.buffer,
                  vertexFormat: source.format,
                  semantic: semantic,
                  vertexCount: source.count,
                  dataOffset: source.offset,
                  dataStride: source.stride)
    }
}
...and SCNGeometryElement and SCNGeometryPrimitiveType extensions:
extension SCNGeometryElement {
    convenience init(_ source: ARGeometryElement) {
        let pointer = source.buffer.contents()
        let byteCount = source.count * source.indexCountPerPrimitive * source.bytesPerIndex
        let data = Data(bytesNoCopy: pointer, count: byteCount, deallocator: .none)
        self.init(data: data,
                  primitiveType: .of(source.primitiveType),
                  primitiveCount: source.count,
                  bytesPerIndex: source.bytesPerIndex)
    }
}

extension SCNGeometryPrimitiveType {
    // Unlabeled parameter so the call site above (.of(source.primitiveType)) compiles.
    static func of(_ type: ARGeometryPrimitiveType) -> SCNGeometryPrimitiveType {
        switch type {
        case .line: return .line
        case .triangle: return .triangles
        @unknown default: return .triangles   // fallback for future primitive types
        }
    }
}

How to add SCNNodes without blocking main thread?

I'm creating and adding a large number of SCNNodes to a SceneKit scene, which causes the app to freeze for a second or two.
I thought I could fix this by putting all the action in a background thread using DispatchQueue.global(qos: .background).async(), but no dice. It behaves exactly the same.
I saw this answer and put the nodes through SCNView.prepare() before adding them, hoping it would slow down the background thread and prevent blocking. It didn't.
Here's a test function that reproduces the problem:
func spawnNodesInBackground() {
    // put all the action in a background thread
    DispatchQueue.global(qos: .background).async {
        var nodes = [SCNNode]()
        for i in 0...5000 {
            // create a simple SCNNode
            let node = SCNNode()
            node.position = SCNVector3(i, i, i)
            let geometry = SCNSphere(radius: 1)
            geometry.firstMaterial?.diffuse.contents = UIColor.white.cgColor
            node.geometry = geometry
            nodes.append(node)
        }
        // run the nodes through prepare()
        self.mySCNView.prepare(nodes, completionHandler: { _ in
            // nodes are prepared; add them to the scene
            for node in nodes {
                self.myRootNode.addChildNode(node)
            }
        })
    }
}
When I call spawnNodesInBackground() I expect the scene to continue rendering normally (perhaps at a reduced frame rate) while new nodes are added at whatever pace the CPU is comfortable with. Instead the app freezes completely for a second or two, then all the new nodes appear at once.
Why is this happening, and how can I add a large number of nodes without blocking the main thread?
I don't think this problem is solvable using DispatchQueue. If I substitute some other task for creating SCNNodes, it works as expected, so I think the problem is specific to SceneKit.
The answers to this question suggest that SceneKit has its own private background thread to which it batches all changes. So regardless of which thread I use to create my SCNNodes, they all end up in the same queue on the same thread as the render loop.
The ugly workaround I'm using is to add the nodes a few at a time in SceneKit's delegate method renderer(_:updateAtTime:) until they're all done, as sketched below.
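A minimal sketch of that workaround (pendingNodes and myRootNode are assumed properties on whatever class acts as the SCNSceneRendererDelegate; real code would synchronize access to pendingNodes):
var pendingNodes = [SCNNode]()   // filled from the background thread

func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    // Drain a few nodes per frame to spread the cost over many frames.
    let batchSize = 25
    for _ in 0..<min(batchSize, pendingNodes.count) {
        myRootNode.addChildNode(pendingNodes.removeLast())
    }
}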
I poked around on this and didn't solve the freeze (I did reduce it a bit).
I expect that prepare() is going to exacerbate the freeze, not reduce it, because it's going to load all resources into the GPU immediately, instead of letting them be lazily loaded. I don't think you need to call prepare() from a background thread, because the doc says it already uses a background thread. But creating the nodes on a background thread is a good move.
I did see pretty good performance improvement by moving the geometry outside the loop, and by using a temporary parent node (which is then cloned), so that there's only one call to add a new child to the scene's root node. I also reduced the sphere's segment count to 10 (from the default of 48).
I started with the spinning spaceship sample project, and triggered the addition of the spheres from the tap gesture. Before my changes, I saw 11 fps, 7410 draw calls per frame, 8.18M triangles. After moving the geometry out of the loop and flattening the sphere tree, I hit 60 fps, with only 3 draw calls per frame and 1.67M triangles (iPhone 6s).
Do you need to build these objects at run time? You could build this scene once, archive it, and then embed it as an asset. Depending on the effect you want to achieve, you might also consider using SCNSceneRenderer's present(_:with:incomingPointOfView:completionHandler:) to replace the entire scene at once.
func spawnNodesInBackgroundClone() {
    print(Date(), "starting")
    DispatchQueue.global(qos: .background).async {
        let tempParentNode = SCNNode()
        tempParentNode.name = "spheres"
        let geometry = SCNSphere(radius: 0.4)
        geometry.segmentCount = 10
        geometry.firstMaterial?.diffuse.contents = UIColor.green.cgColor
        for x in -10...10 {
            for y in -10...10 {
                for z in 0...20 {
                    let node = SCNNode()
                    node.position = SCNVector3(x, y, -z)
                    node.geometry = geometry
                    tempParentNode.addChildNode(node)
                }
            }
        }
        print(Date(), "cloning")
        let scnView = self.view as! SCNView
        let cloneNode = tempParentNode.flattenedClone()
        print(Date(), "adding")
        DispatchQueue.main.async {
            print(Date(), "main queue")
            print(Date(), "prepare()")
            scnView.prepare([cloneNode], completionHandler: { _ in
                scnView.scene?.rootNode.addChildNode(cloneNode)
                print(Date(), "added")
            })
            // only do this once, on the simulator
            // let sceneData = NSKeyedArchiver.archivedData(withRootObject: scnView.scene!)
            // try! sceneData.write(to: URL(fileURLWithPath: "/Users/hal/scene.scn"))
            print(Date(), "queued")
        }
    }
}
I have an asteroid simulation with 10,000 nodes and ran into this issue myself. What worked for me was creating the container node, then passing it to a background process to fill it with child nodes.
That background process uses an SCNAction on the container node to add each of the generated asteroids to it:
// SCNAction.run(_:) passes in the node the action runs on
let action = SCNAction.run { container in
    // generate the asteroid nodes, then:
    for node in generatedNodes {
        container.addChildNode(node)
    }
}
containerNode.runAction(action)
I also used a shared level-of-detail node with an uneven-sided block as its geometry, so that the scene can draw those nodes in a single pass.
I also pre-generate 50 asteroid shapes that get random transformations applied during the background generation process. That process simply grabs a pregenerated block at random, applies a random simd transformation, and stores the result for adding to the scene later.
I'm considering using a pyramid for the LOD, but the 5 x 10 x 15 block works for my purpose. This method can also easily be throttled to add only a set number of blocks at a time, by creating and passing multiple actions to the node. Initially I passed each node as its own action, but this way works too.
Showing the entire field of 10,000 still costs 10 to 20 FPS, but at that point the container node's own LOD comes into effect, showing a single ring.
Add all of the nodes when the application starts, but position them where the camera can't see them. When you need them, move them to where they should be.

How to make new instances of a SCNNode with different texture

I load meshes from COLLADA files into SceneKit. Let's say I have a cube with a material that has a certain texture. In code, I then want to make new copies of this SCNNode (I have used clone so far) and set a new texture on each copy. This is where it gets problematic: if I get the named material of one of the cloned cubes and update its texture (thematerialofmycube.diffuse.contents = @"somefile.png"), it sets the same texture on all instances of the cube. clone obviously does not deep copy things like geometry, materials, and textures.
So what I have tried is making a copy of the geometry itself, and also making a new material, setting a new texture on the new material, and adding that material to the materials array of the new geometry while removing the old one. There seems to be no straightforward way to do this (the materials are named, but they exist in an array, so several materials could theoretically have the same name, which leads to some bulky adding/removing of objects from the array). And when I do it, the new textures show up, but they are upside down, and the order of the materials seems incorrect, as I get "backside" textures in place of frontside textures and vice versa. I hope I don't have to draw all this stuff in my 3D editor; there must be a good way of making new instances with arbitrarily specified textures in code.
What I'm specifically doing: I drew a trump card in my 3D editor and exported it to COLLADA. Now I have 52 PNGs of trump card faces, and I need to replace the faces of new trump card instances.
It seems that the order of the materials in that array is (very) important. If I insert the new material with the updated texture at the same array index as the one I remove, essentially doing a replace, then the textures show up on the correct faces and aren't upside down, i.e. they show up the same way as on the source SCNNode. I have yet to run this for a longer time to see if it works consistently.
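A hedged sketch of that index-preserving replacement (the material name "face" and cardFaceImage are hypothetical):
if let geometry = cardNode.geometry,
   let index = geometry.materials.firstIndex(where: { $0.name == "face" }) {
    let newMaterial = geometry.materials[index].copy() as! SCNMaterial
    newMaterial.diffuse.contents = cardFaceImage
    // Replace at the same index so face assignments stay correct.
    geometry.materials[index] = newMaterial
}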
[SCNNode clone] works this way for efficiency reasons. Try this code (you could make it an SCNNode category):
//********************************************************
// SCNNode duplicate
//********************************************************
- (SCNNode *)duplicateNode:(SCNNode *)node
{
    SCNNode *newNode = [node clone];
    newNode.geometry = [node.geometry copy];
    newNode.geometry.firstMaterial = [node.geometry.firstMaterial copy];
    return newNode;
}
Swift answer:
fileprivate func deepCopyNode(node: SCNNode) -> SCNNode {
    let clone = node.clone()
    clone.geometry = node.geometry?.copy() as? SCNGeometry
    if let g = node.geometry {
        clone.geometry?.materials = g.materials.map { $0.copy() as! SCNMaterial }
    }
    return clone
}
Been a long time since this was asked. But thought I'd offer a helper function I have:
func deepCopyNode(_ node: SCNNode) -> SCNNode {
    // internal function for recursive calls
    func deepCopyInternals(_ node: SCNNode) {
        node.geometry = node.geometry?.copy() as? SCNGeometry
        if let g = node.geometry {
            node.geometry?.materials = g.materials.map { $0.copy() as! SCNMaterial }
        }
        for child in node.childNodes {
            deepCopyInternals(child)
        }
    }

    // CLONE the main node (and all its children).
    // The issue is that geometry and materials are still linked;
    // in our deepCopyNode we want new copies of everything.
    let clone = node.clone()

    // We use the internal function to update both geometry and materials,
    // and to process all children. This is the *deep* part of "deepCopy".
    deepCopyInternals(clone)
    return clone
}
