Is it possible to assign widthSegmentCount (or the height or chamfer equivalents) to a custom geometry object created in Blender? My geometry looks rather sharp when imported into SceneKit, though it looks great in Blender. The sharpness is shown in the pictures.
The object is moving, so setting jitteringEnabled to true doesn't help.
I tried using this code since my object is basically a box:
let box = boxNode.geometry as! SCNBox
box.widthSegmentCount = 150
box.heightSegmentCount = 150
box.chamferSegmentCount = 150
and I'm getting an error: Thread 1: signal SIGABRT
Is this the best SceneKit can do or do I need to export my object from Blender differently?
This has nothing to do with the quality of your mesh. You are just seeing individual pixels without smoothing.
SCNView exposes the antialiasingMode property that will help you get smoother edges (try .Multisampling2X or .Multisampling4X).
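For example, assuming sceneView is the SCNView displaying your scene, it's a one-time setting:
sceneView.antialiasingMode = .Multisampling4X // or .Multisampling2X for a lower cost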
I'm currently working on an iPadOS app that uses SceneKit to render some 3D models, nothing too fancy, but I hit a bit of a snag when it comes to shading these models...
Basically what I do is just set up a SceneKit scene (using pretty much all the default settings, I don't even have a camera) and instantiate the 3D objects from some .obj files I have, like so:
let url = <URL of my .obj file>
let asset = MDLAsset(url: url)
let object = asset.object(at: 0)
let node = SCNNode(mdlObject: object)
let texture = UIImage(data: try Data(contentsOf: <URL of the texture>))
let material = SCNMaterial()
material.diffuse.contents = texture
node.geometry!.materials = [material]
self.sceneView.scene!.rootNode.addChildNode(node)
The textures are loaded manually because, unfortunately, that's how the client set up the files for me to use. The code works fine and I can see the mesh and its texture; however, it also looks like this:
As you can see the shading is not smooth at all... and I have no idea how to fix it.
The client has been bothering me to implement Phong shading, and according to Apple's Documentation this is how you do it.
material.lightingModel = .phong
Unfortunately that's still what it looks like with Phong enabled. I'm an absolute beginner when it comes to 3D rendering so this might be laughably easy but I swear I cannot figure out how to get a smoother shading on this model.
I have looked all over, and the only thing that has had any noticeable effect was using subdivisionLevel to increase the number of faces in the geometry, but that does not scale well: the actual app needs to load a ton of these meshes, and it runs out of memory fast even with subdivisionLevel set to just 1.
Surely there must be a way to smooth out that shading without changing the actual geometry?
Thanks in advance!
Shading requires having correct normals for your geometry. Have you tried using Model IO to generate them?
https://developer.apple.com/documentation/modelio/mdlmesh/1391284-addnormals
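For example, something along these lines (a sketch based on the loading code above, assuming the first object in the asset is an MDLMesh; the crease threshold is just a starting value):
import SceneKit
import SceneKit.ModelIO
import ModelIO

let asset = MDLAsset(url: url)
let mesh = asset.object(at: 0) as! MDLMesh
// Generate smooth per-vertex normals; a nonzero creaseThreshold keeps genuinely hard edges sharp
mesh.addNormals(withAttributeNamed: MDLVertexAttributeNormal, creaseThreshold: 0.5)
let node = SCNNode(mdlObject: mesh)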
We are working on an app similar to iPhone's Animoji app. We have created our own 3D model, which looks quite nice; the only issue is that there are edges around the eyes and nose of the model which are not smooth. The model has been prepared in Maya.
See the screenshot
You can see the rough edges around the eyes and around the eyebrows.
We are trying the following code to smooth these edges:
let node = self.childNodes[0]
let geometry = node.geometry
let vertices = self.change(geometry: geometry!)
geometry?.edgeCreasesSource = SCNGeometrySource(vertices: vertices)
geometry?.subdivisionLevel = 0
guard let element = geometry?.elements.first else {
    return
}
//print(element.debugDescription)
geometry?.edgeCreasesElement = SCNGeometryElement(data: element.data, primitiveType: .line, primitiveCount: element.primitiveCount, bytesPerIndex: element.bytesPerIndex)
We are trying to set subdivisionLevel so that the edges can be smoothed out. Is this the right approach, or do we need to do something else to fix this programmatically? The 3D designer's model has very smooth edges when we view it in Maya.
Using subdivision level is the right approach to smooth out geometries. In the code snippet however, you're not setting any subdivision level that's higher than 0. If you're working with a small number of objects, it's a good idea to use SceneKit editor and set the subdivision level in the attributes inspector.
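Programmatically that comes down to something like this (assuming node is the node loaded from the Maya export):
let geometry = node.geometry!
// 0 leaves the mesh as authored; 1 or 2 is usually enough to visibly smooth hard edges
geometry.subdivisionLevel = 1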
I have a Metal-backed SceneKit project where I'd like to draw a curve that is modified on the CPU every frame. This seems like it should be a simple task; one that I've done in OpenGL many times.
Something like:
array_of_points[0].x = 25
array_of_points[0].y = 35
upload_array_to_gpu()
render_frame()
It seems that modifying the SCNGeometrySource of an SCNNode is not possible. Recreating and setting the node.geometry works but the position lags / jitters compared to other nodes in the scene. Recreating the entire node each frame also lags.
One possible avenue might be a geometry shader modifier, but I'd still have to somehow pass in my CPU-computed curve positions.
Another option is to use metal APIs directly. Though this approach seems like it will require lots more code, and I couldn't find too much info about mixing SceneKit with Metal.
I also tried setting the preferredRenderingAPI to OpenGL, but it doesn't change the renderingAPI.
We are porting a game to SpriteKit and I've run into a bit of a problem. Some objects in our game have triangle-strip trails attached to them. The vertex buffers of the trails are continuously updated as the objects move in the world to create a seamless and flowing effect (there are constraints on how many vertices are in the buffer and how often we emit new vertices).
In the previous implementation we simply updated the affected vertices in the corresponding buffer whenever we emitted new vertices. In SceneKit it seems I am unable to update geometry sources, unless I use geometrySourceWithBuffer:vertexFormat:semantic:vertexCount:dataOffset:dataStride:. To do this however, it seems I need a Metal device, command queue and encoder, to be able to submit commands to update my buffer before it renders.
Is there any way I can do this with a normal SCNView, or do I have to do everything manually with a CAMetalLayer, creating my own metal device etc?
In essence, everything we need, bar this trail geometry is available in SpriteKit, so I was hoping there was some way I could get a hold of the metal device and command queue used by the SKView and simply use that to upload my geometry. This would make things a lot simpler.
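For reference, that buffer-backed initializer looks roughly like this in Swift (a sketch with an assumed maximum vertex count; here the device comes from MTLCreateSystemDefaultDevice(), whereas I'd rather reuse the one the view already renders with):
import SceneKit
import Metal

let maxVertices = 256
let device = MTLCreateSystemDefaultDevice()!
let vertexBuffer = device.makeBuffer(length: maxVertices * MemoryLayout<SIMD3<Float>>.stride,
                                     options: .storageModeShared)!
let positionSource = SCNGeometrySource(buffer: vertexBuffer,
                                       vertexFormat: .float3,
                                       semantic: .vertex,
                                       vertexCount: maxVertices,
                                       dataOffset: 0,
                                       dataStride: MemoryLayout<SIMD3<Float>>.stride)
// New trail positions would then be written straight into vertexBuffer.contents() each frame.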
I've used a particle system to get a similar effect. The particle emitter is attached to the moving object, with particles set to fade out and eventually die. You might need to set the emitter's targetNode to your scene, depending on the effect you want.
emitter = SKEmitterNode()
emitter.name = marbleNodeNames.trail.rawValue
emitter.particleTexture = SKTexture(imageNamed: "spark")
emitter.particleAlphaSpeed = -1.0
emitter.particleLifetime = 2
emitter.particleScale = 0.2
marbleSprite.addChild(emitter)
I am currently trying to set up a rotating ball in SceneKit. I have created the ball and applied a texture to it.
ballMaterial.diffuse.contents = UIImage(named: ballTexture)
ballMaterial.doubleSided = true
ballGeometry.materials = [ballMaterial]
The current ballTexture is a semi-transparent texture as I am hoping to see the back face roll around.
However, I get some strange culling where only half of the back-facing polygons are shown, even though the doubleSided property is set to true.
Any help would be appreciated, thanks.
This happens because the effects of transparency are draw-order dependent. SceneKit doesn't know to draw the back-facing polygons of the sphere before the front-facing ones. (In fact, it can't really do that without reorganizing the vertex buffers on the GPU for every frame, which would be a huge drag on render performance.)
The vertex layout for an SCNSphere has it set up like the lat/long grid on a globe: the triangles render in order along the meridians from 0° to 360°, so depending on how the sphere is oriented with respect to the camera, some of the faces on the far side of the sphere will render before the nearer ones.
To fix this, you need to force the rendering order — either directly, or through the depth buffer. Here's one way to do that, using a separate material for the inside surface to illustrate the difference.
// add two balls, one a child of the other
let node = SCNNode(geometry: SCNSphere(radius: 1))
let node2 = SCNNode(geometry: SCNSphere(radius: 1))
scene.rootNode.addChildNode(node)
node.addChildNode(node2)
// cull back-facing polygons on the first ball
// so we only see the outside
let mat1 = node.geometry!.firstMaterial!
mat1.cullMode = .Back
mat1.transparent.contents = bwCheckers
// my "bwCheckers" uses black for transparent, white for opaque
mat1.transparencyMode = .RGBZero
// cull front-facing polygons on the second ball
// so we only see the inside
let mat2 = node2.geometry!.firstMaterial!
mat2.cullMode = .Front
mat2.diffuse.contents = rgCheckers
// sphere normals face outward, so to make the inside respond
// to lighting, we need to invert them
let shader = "_geometry.normal *= -1.0;"
mat2.shaderModifiers = [SCNShaderModifierEntryPointGeometry: shader]
(The shader modifier bit at the end isn't required — it just makes the inside material get diffuse shading. You could just as well use a material property that doesn't involve normals or lighting, like emission, depending on the look you want.)
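For example, instead of the shader modifier you could simply do:
mat2.emission.contents = rgCheckers // emission ignores normals and lighting, so no modifier is needed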
You can also do this using a single node with a double-sided material by disabling writesToDepthBuffer, but that could also lead to undesirable interactions with the rest of your scene content — you might also need to mess with renderingOrder in that case.
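A minimal sketch of that alternative, on top of the double-sided material from the question (ballNode is an assumed name for the node holding ballGeometry):
ballMaterial.writesToDepthBuffer = false // don't write depth, so the far side isn't rejected by the depth test
ballNode.renderingOrder = 1 // may need adjusting so the ball draws after opaque scene content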
macOS 10.13 and iOS 11 added SCNTransparencyMode.dualLayer which as far as I can tell doesn't even require setting isDoubleSided to true (the documentation doesn't provide any information at all). So a simple solution that's working for me would be:
ballMaterial.diffuse.contents = UIImage(named: ballTexture)
ballMaterial.transparencyMode = .dualLayer
ballGeometry.materials = [ballMaterial]