Placing SCNNode with geometry - iOS

I am creating a box to place in my AR Scene with this code below:
let box = SCNBox(width: side, height: side, length: side, chamferRadius: 0.008)
box.firstMaterial?.diffuse.contents = UIColor(red: 220/255, green: 65/255, blue: 23/255, alpha: 0.6)
// Note: this overwrites the color assigned above.
box.firstMaterial?.diffuse.contents = UIImage(named: "test1")!
let nodo = SCNNode(geometry: box)
nodo.position = position
What I am trying to figure out is how to make the box keep a constant size on screen (in image space).
I would like the box node placed in the 3D scene to always look the same size, let's say 30x30 pixels...
So no matter how far from or close to the box the ARKit camera moves as I move the phone, I want the box to always show up the same size on screen.
How can I achieve that?

If you need your object to be in 3D, the solution is to scale it every frame using the distance between the object and the camera. There's a similar answer here.
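A minimal sketch of that per-frame scaling approach, assuming an ARSCNView named sceneView whose delegate adopts SCNSceneRendererDelegate (boxNode and referenceDistance are hypothetical names):
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    guard let camera = sceneView.pointOfView else { return }
    // Apparent size is proportional to realSize / distance, so grow the node
    // linearly with distance to keep its on-screen size constant.
    let distance = simd_distance(boxNode.simdWorldPosition, camera.simdWorldPosition)
    let referenceDistance: Float = 1.0  // distance at which the box has its authored size
    boxNode.simdScale = simd_float3(repeating: distance / referenceDistance)
}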
If you want a 2D plane, you could place it using UIKit and update its position every frame by projecting the 3D point into 2D and using the result as the view's coordinates. (Reference)
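And a sketch of the 2D option, assuming a UIView named markerView overlaid on the ARSCNView and updated from the same per-frame callback:
// projectPoint maps a world-space point to view coordinates (z carries depth).
let projected = sceneView.projectPoint(boxNode.worldPosition)
// Remember to hop to the main thread when touching UIKit from the render loop.
markerView.center = CGPoint(x: CGFloat(projected.x), y: CGFloat(projected.y))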

Related

iOS ARKit + SceneKit physics contact detection scaling issue

I have a simple 3D area that contains 4 walls; each is an SCNNode with a simple rectangular SCNBox geometry and a matching SCNPhysicsBody attached. The SCNPhysicsBody uses SCNPhysicsShape.ShapeType.boundingBox and is set to the static type. Here is a code snippet:
// Wall sizes along x (geometryA) and z (geometryB). The SCNVector3 arithmetic
// below relies on custom operator overloads (-, *, +=) not in SceneKit itself.
let size = (self.levelNode.boundingBox.max - self.levelNode.boundingBox.min) * self.levelNode.scale
let geometryA = SCNBox(width: CGFloat(size.x), height: CGFloat(1 * self.levelNode.scale.x), length: 0.01, chamferRadius: 0)
let geometryB = SCNBox(width: CGFloat(size.z), height: CGFloat(1 * self.levelNode.scale.x), length: 0.01, chamferRadius: 0)
geometryA.firstMaterial?.diffuse.contents = UIColor(red: 0.0, green: 0.2, blue: 1.0, alpha: 0.65)
geometryB.firstMaterial?.diffuse.contents = UIColor(red: 0.0, green: 0.2, blue: 1.0, alpha: 0.65)
let nodeA = SCNNode(geometry: geometryA)
nodeA.position += self.levelNode.position
nodeA.position += SCNVector3(0, 0.25 * self.levelNode.scale.y, -size.z/2)
nodeA.name = "Boundary-01"
let nodeB = SCNNode(geometry: geometryA)
nodeB.position += self.levelNode.position
nodeB.position += SCNVector3(0, 0.25 * self.levelNode.scale.y, size.z/2)
nodeB.name = "Boundary-03"
let nodeC = SCNNode(geometry: geometryB)
nodeC.position += self.levelNode.position
nodeC.position += SCNVector3(-size.x/2, 0.25 * self.levelNode.scale.y, 0)
nodeC.eulerAngles = SCNVector3(0, -Float.pi/2, 0)
nodeC.name = "Boundary-02"
let nodeD = SCNNode(geometry: geometryB)
nodeD.position += self.levelNode.position
nodeD.position += SCNVector3(size.x/2, 0.25 * self.levelNode.scale.y, 0)
nodeD.eulerAngles = SCNVector3(0, Float.pi/2, 0)
nodeD.name = "Boundary-04"
let nodes = [nodeA, nodeB, nodeC, nodeD]
for node in nodes {
// Attach a static physics body built from the node's bounding box.
let shape = SCNPhysicsShape(geometry: node.geometry!, options: [
SCNPhysicsShape.Option.type : SCNPhysicsShape.ShapeType.boundingBox])
let body = SCNPhysicsBody(type: .static, shape: shape)
node.physicsBody = body
node.physicsBody?.isAffectedByGravity = false
node.physicsBody?.categoryBitMask = Bitmask.boundary.rawValue
node.physicsBody?.contactTestBitMask = Bitmask.edge.rawValue
node.physicsBody?.collisionBitMask = 0
scene.rootNode.addChildNode(node)
node.physicsBody?.resetTransform()
}
Inside this area, I spawn entities at a regular interval. Each also has an SCNBox geometry, a cube this time, smaller than the walls, with the same physics body parameters as above.
To simplify the behaviour of my entities inside this game area, I calculate their paths to travel, then apply an SCNAction to the relevant node to move them. The SCNAction moves both the node and the physics body attached to it.
I am using the SCNPhysicsWorld contact delegate to detect when an entity reaches one of the boundary walls. I then calculate a random trajectory for it away from that wall, clear its actions, and apply a new move SCNAction.
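A rough sketch of that flow (real code should check the category bit masks to tell the entity from the boundary; pickRandomTrajectory is a hypothetical helper):
func physicsWorld(_ world: SCNPhysicsWorld, didBegin contact: SCNPhysicsContact) {
    // Assume nodeB is the entity; check categoryBitMask in real code.
    let entityNode = contact.nodeB
    entityNode.removeAllActions()
    let destination = pickRandomTrajectory(from: contact.contactPoint)  // hypothetical helper
    entityNode.runAction(SCNAction.move(to: destination, duration: 2.0))
}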
This is where it gets interesting...
When this 'world' is at 1:1 scale, the contacts are detected as normal, both in a standard SCNScene and in a scene projected using ARKit. The visible contact, i.e. the visible change in direction of the entity, appears close to the boundary, as expected. When I check the contact.penetrationDistance of each contact, the values are e.g. 0.00294602662324905.
BUT when I change the scale of this 'world' to something smaller, say the equivalent of 10cm width, in ARKit, the simulation falls apart.
The contacts between an entity and a boundary node have a comparatively huge visible gap between them when the contact is detected. Yet the contact.penetrationDistance is of the same magnitude as before.
I switched on the ARSCNView debug options to show the physics shapes in the render, and they all appear to be the correct proportions, matching the bounding box of their node.
As you can see from the code example above, the boundary nodes are generated after I have scaled the level, during my AR user setup. They are added to the root node of the scene, not as a child of the level node. The same code is being used to generate the entities.
Previously I had tried using the resetTransform() function on the physics bodies, but this did not produce a reliable scaling of the physics bodies after I had scaled the level, so I decided to generate the nodes for the boundaries and entities after the level has been scaled.
Apple's documentation does state that if the SCNPhysicsBody does not use a custom shape, it will adopt the scale of the node's geometry. I am not affected by this, as I am generating the geometries and their respective nodes after the scaling has been applied to the level.
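For what it's worth, SCNPhysicsShape also accepts an explicit scale option when building the shape yourself; a hedged sketch with an illustrative value:
let scaledShape = SCNPhysicsShape(geometry: node.geometry!, options: [
    SCNPhysicsShape.Option.type: SCNPhysicsShape.ShapeType.boundingBox,
    SCNPhysicsShape.Option.scale: NSValue(scnVector3: SCNVector3(0.1, 0.1, 0.1))  // illustrative value
])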
One of my assumptions at the moment is that the physics simulation falls apart at such a small scale. But I am not relying on the simulation of forces to move the bodies...
Is there a more appropriate way to scale the physics world?
Or am I staring at a bug in SCNPhysicsWorld that is, at the moment, beyond my control?
One solution I did think about was to run the entire simulation at 1:1 scale but hidden, then apply those movements to the smaller entities. As you can imagine, that will affect the performance of the entire scene...
The penetration distance of the first contact is a negative value, suggesting there is a gap. This gap does not appear to scale as you scale down the size of the simulation.
To work around this, I have implemented an additional check on the contacts in the contact delegate: rather than taking the first contact detected for a particular category, I ensure the penetrationDistance value is positive, so that the two objects actually overlap, before triggering a change in direction of the entity that hit a boundary.
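A minimal sketch of that guard in the contact delegate:
func physicsWorld(_ world: SCNPhysicsWorld, didBegin contact: SCNPhysicsContact) {
    // Ignore the first, gapped contact; only react once the shapes actually overlap.
    guard contact.penetrationDistance > 0 else { return }
    // ... compute the new trajectory and move the entity as before ...
}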

SceneKit: how to recreate lighting from Google Poly for same OBJ file?

The goal is to recreate the lighting for this OBJ file: https://poly.google.com/view/cKryD9VnDEZ
Code to load the OBJ file into SceneKit (the file can be downloaded from the link above):
let modelPath = "model.obj"
// Note: `url` is never used; SCNScene(named:) loads the bundled OBJ via Model I/O.
let url = NSURL(string: modelPath)
let scene = SCNScene(named: modelPath)!
sceneView.autoenablesDefaultLighting = true
sceneView.allowsCameraControl = true
sceneView.scene = scene
sceneView.backgroundColor = UIColor.white
Options tried so far:
1) The default ambient lighting is much harsher than the Google Poly lighting. Removing the ambient lighting rendered everything too flat.
2) Using four directional lights: one in front, one behind, one below, and one above the model, all angled to point at the model. This was the best so far, but it still left some shadows and harsher areas not seen on Google Poly.
3) Adding two more lights to option #2, to the left and right. This was worse than option #2: the extra lights combined with the four existing ones and washed out the model.
UPDATE AFTER FOLLOWING SUGGESTIONS:
The code now implements an ambient light and a directional light.
Adding the directional light to the camera node, versus the scene root node, made no difference for some reason.
The light code is below.
There are two problems:
1) In Screenshot 1, the right side of the chest is too bright and shows no edges, and the far left face is too dark. The face with the best lighting is in the center. How can you get that center-face lighting on all faces (or better match the Google Poly lighting)?
2) In Screenshot 2, the directional light appears to have no effect. How can you ensure the back of the model is as well lit as the front with the suggested architecture of one ambient light and one directional light?
SCREENSHOT 1:
SCREENSHOT 2:
CODE:
// Create ambient light
let ambientLightNode = SCNNode()
ambientLightNode.light = SCNLight()
ambientLightNode.light!.type = .ambient
ambientLightNode.light!.color = UIColor(white: 0.50, alpha: 1.0)
// Add ambient light to scene
scene.rootNode.addChildNode(ambientLightNode)
// Create directional light
let directionalLight = SCNNode()
directionalLight.light = SCNLight()
directionalLight.light!.type = .directional
directionalLight.light!.color = UIColor(white: 0.40, alpha: 1.0)
directionalLight.eulerAngles = SCNVector3(x: Float.pi, y: 0, z: 0)
// Add directional light
scene.rootNode.addChildNode(directionalLight)
OBJ files loaded through Model I/O use physically based lighting by default. This model has a cartoonish look and uses a lot of ambient lighting with a few specular highlights.
You should start by converting all your materials to the lambert lighting model.
Then add an ambient light to your scene. There's a lot of ambient lighting in this scene; every part of the object is lit. A color of 75% white will do.
Finally attach a directional light to the camera to highlight the polygons facing the user. A color of 50% white sounds about right.
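A hedged sketch of that setup, reusing scene and sceneView from the question (the traversal converts every material to lambert):
// Convert all materials to the lambert lighting model.
scene.rootNode.enumerateHierarchy { node, _ in
    node.geometry?.materials.forEach { $0.lightingModel = .lambert }
}
// Ambient light: most of the object is lit, so 75% white.
let ambient = SCNLight()
ambient.type = .ambient
ambient.color = UIColor(white: 0.75, alpha: 1.0)
let ambientNode = SCNNode()
ambientNode.light = ambient
scene.rootNode.addChildNode(ambientNode)
// Directional light attached to the camera node so it always
// highlights the polygons facing the user.
let headlight = SCNLight()
headlight.type = .directional
headlight.color = UIColor(white: 0.5, alpha: 1.0)
sceneView.pointOfView?.light = headlight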
In addition to MNuage's answer, try enabling screen space ambient occlusion on the camera. The following enables it for the current point of view:
if let camera = scnView.pointOfView?.camera {
    camera.screenSpaceAmbientOcclusionIntensity = 1.7
    camera.screenSpaceAmbientOcclusionNormalThreshold = 0.1
    camera.screenSpaceAmbientOcclusionDepthThreshold = 0.08
    camera.screenSpaceAmbientOcclusionBias = 0.33
    camera.screenSpaceAmbientOcclusionRadius = 3.0
}
You will probably have to tweak the values a bit to get the results you want; the above is just what works for me in one particular scene.

Can I make a shadow that shows through a transparent object with SceneKit and ARKit?

I made a transparent object with SceneKit and linked it with ARKit.
I made a shadow with a lighting material, but the shadow can't be seen through the transparent object.
I made a plane, placed the object on it, and pointed the light at the transparent object.
The shadow appears behind the object, but it cannot be seen through the object.
Here's the code that makes the shadow:
let light = SCNLight()
light.type = .directional
light.castsShadow = true
light.shadowRadius = 200
light.shadowColor = UIColor(red: 0, green: 0, blue: 0, alpha: 0.3)
light.shadowMode = .deferred
let constraint = SCNLookAtConstraint(target: model)
lightNode = SCNNode()
lightNode!.light = light
lightNode!.position = SCNVector3(model.position.x + 10, model.position.y + 30, model.position.z+30)
lightNode!.eulerAngles = SCNVector3(45.0, 0, 0)
lightNode!.constraints = [constraint]
sceneView.scene.rootNode.addChildNode(lightNode!)
And the below code is for making a floor under the bottle.
let floor = SCNFloor()
floor.reflectivity = 0
let material = SCNMaterial()
material.diffuse.contents = UIColor.white
material.colorBufferWriteMask = SCNColorMask(rawValue:0)
floor.materials = [material]
self.floorNode = SCNNode(geometry: floor)
self.floorNode!.position = SCNVector3(x, y, z)
self.sceneView.scene.rootNode.addChildNode(self.floorNode!)
I think it can be solved with a simple property, but I can't figure out which one.
How can I solve this?
A known issue with deferred shading is that it doesn't work with transparency, so you may have to remove that line and use the default forward shading again. That said, the "simple property" you are looking for is the .renderingOrder property on SCNNode. Set it to 99, for example. Normally the rendering order doesn't matter because the z-buffer is used to determine which pixels are in front of others. For the shadow to show up through the transparent part of the object, you need to make sure the object is rendered last.
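A minimal sketch of those two changes (bottleNode is a hypothetical name for the transparent object's node):
light.shadowMode = .forward     // deferred shadows don't work with transparency
bottleNode.renderingOrder = 99  // render the transparent object last so the shadow shows through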
On a different note, assuming you used some of the material settings I posted on your other question, try setting the shininess value to something like 0.4.
Note that this will still create a shadow as if the object was not transparent at all, so it won’t create a darker shadow for the label and cap. For additional realism you could opt to fake the shadow entirely, as in using a texture for the shadow and drop that on a plane which you rotate and skew as needed. For even more realism, you could fake the caustics that way too.
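A rough sketch of the faked-shadow idea, assuming a pre-baked "shadow" texture (hypothetical asset name):
let shadowPlane = SCNPlane(width: 0.3, height: 0.3)
shadowPlane.firstMaterial?.diffuse.contents = UIImage(named: "shadow")
let shadowNode = SCNNode(geometry: shadowPlane)
shadowNode.eulerAngles.x = -Float.pi / 2  // lay the plane flat on the floor
shadowNode.position.y = 0.001             // just above the floor to avoid z-fighting
sceneView.scene.rootNode.addChildNode(shadowNode)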
You may also want to add a reflection map to the reflective property of the material. It's almost the same as the texture map but in grayscale, where the label and cap are dark gray (not very reflective) and the glass portion is a lighter gray (otherwise it will look like the label is on the inside of the glass). Last tip: use a Shell modifier (that's what it's called in 3ds Max, anyway) to give the glass model some thickness.

SKSpriteNode not responding to set color & blendModeFactor (SpriteKit & Swift)

I know there are several other posts about this, but my case is kind of specific and I haven't seen this one yet.
In my game I have a ball-shaped sprite; whenever I tap on it, I would like to add a colorized version of the very same sprite, with a fade-in and fade-out effect.
Here's some example code:
self.ball = SKSpriteNode(imageNamed: "ball")
self.ball.position = CGPoint(x: midX, y: midY)
self.ball.zPosition = 1
self.ball.size = CGSize(width: 100, height: 100)
self.touchEffect = SKSpriteNode(imageNamed: "ball")
self.touchEffect.position = CGPoint(x: 0, y: 0)
self.touchEffect.zPosition = 2
self.touchEffect.size = CGSize(width: 100, height: 100)
self.touchEffect.color = UIColor.white
self.touchEffect.colorBlendFactor = 1
self.touchEffect.alpha = 0
self.ball.addChild(self.touchEffect)
self.addChild(self.ball)
Now, up to this point... I can't even see the touchEffect sprite colorized (if I set alpha to 1); it shows the same colors as the original sprite. Why is this?
In touchesBegan I do something like this:
func showTapEffect() {
let fadeIn = SKAction.fadeIn(withDuration: 0.3)
let fadeOut = SKAction.fadeOut(withDuration: 0.3)
let sequence = SKAction.sequence([fadeIn, fadeOut])
self.touchEffect.run(sequence)
}
I have also used this in different scenarios within the very same game, and it worked; I don't know why this particular case doesn't. Any hints? If you need more example code, let me know. This isn't a copy-paste of my current code, though; I typed it from memory, so you might see a typo in there.
(And my sprite isn't dark or black.)
Thanks in advance.
I am posting this as an answer because it is too large for a comment, and I can post code in here if need be.
I just reread your question three times. You are blending with white; what color are you expecting to get? If you blend blue and white, you get blue; if you blend purple and blue, you get blue; if you blend blue and gray, you get a darker blue. It is all percentage multiplication. I do not believe you get to pick a blend mode for colors. How it works: it takes each pixel of the texture and breaks it up into R, G, B, then takes your color and breaks that up into R, G, B (let's call it CR, CG, CB). The math becomes (R * CR, G * CG, B * CB) on a per-pixel level.
You are doing (R * 1, G * 1, B * 1), which is (R, G, B): no visible change.
If you want to colorize your sprite, you need your sprite to be a grayscale image, and you should use color only where you want pixels to stay that color (to a degree, because blending will still apply to them; you need to work out the math for how you want it to blend).
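A short sketch of that grayscale approach ("ball_gray" is a hypothetical grayscale asset):
let touchEffect = SKSpriteNode(imageNamed: "ball_gray")
touchEffect.color = .red          // each gray pixel is multiplied by this color
touchEffect.colorBlendFactor = 1  // 0 = original texture, 1 = fully tinted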

Hide SCNFloor but show shadow with SceneKit (swift)

I am trying to display a shadow of my character on a map I have. I have an ambient light and an omni light. If I add a floor, it shows the shadow/reflection, but the floor covers the map.
Without a floor, I don't get any shadow/reflection.
I add floor like this:
floor = SCNFloor()
floor.reflectionFalloffEnd = 10
floor.reflectivity = 0.5
let floorNode = SCNNode(geometry: floor)
floorNode.position = SCNVector3(x: 0, y: -1.0, z: 0)
self.rootNode.addChildNode(floorNode)
The map is created with Mapbox iOS SDK (MGLMapView).
In your screenshots I don't see any shadow; I only see the reflection. For shadows you need either a directional or a spot light. For the reflections over your map, did you try setting the map texture on your SCNFloor? Another option is to use an SCNFloor with a material transparency of 0, but that will have a cost due to the overdraw.
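A hedged sketch combining those suggestions: a shadow-casting directional light, plus a floor material that writes no color (a colorBufferWriteMask variant of the invisible-floor idea, the same shadow-catcher setup as in the transparent-object question above), so the map stays visible while still catching the shadow:
let sun = SCNLight()
sun.type = .directional
sun.castsShadow = true
sun.shadowMode = .deferred          // required when the floor writes no color
let sunNode = SCNNode()
sunNode.light = sun
sunNode.eulerAngles = SCNVector3(-Float.pi / 3, 0, 0)  // angle the light down at the scene
self.rootNode.addChildNode(sunNode)

let catcher = SCNMaterial()
catcher.colorBufferWriteMask = []   // draw nothing, but still receive shadows
floor.materials = [catcher]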
