Collision not detected correctly with multi-geometry shapes - iOS

I have a spaceship object with a complex geometry, and since SceneKit's physics doesn't work well with complex bodies, I have adopted a workaround: I'm using some basic shapes like cylinders and cubes to simulate the whole spaceship's body. In Blender I created a set of objects that approximate the shape of the spaceship.
Then when I load the scene I remove these objects, but use their geometry to construct a SCNPhysicsShape to be used as the physics body of the spaceship:
// First I retrieve all of these bodies, which I named "Body1" up to 9:
let bodies = _scene.rootNode.childNodes(passingTest: { (node: SCNNode, stop: UnsafeMutablePointer<ObjCBool>) -> Bool in
    if let name = node.name {
        return name.contains("Body")
    }
    return false
})
// Then I create an array of SCNPhysicsShape objects, and an array
// containing the transform associated with each shape
var shapes = [SCNPhysicsShape]()
var transforms = [NSValue]()
for body in bodies {
    shapes.append(SCNPhysicsShape(geometry: body.geometry!, options: nil))
    transforms.append(NSValue(scnMatrix4: body.transform))
    // I remove it from the scene because it shouldn't be visible; its sole
    // purpose is to simulate the spaceship's physics
    body.removeFromParentNode()
}
// Finally I create a SCNPhysicsShape that contains all of the shapes
let shape = SCNPhysicsShape(shapes: shapes, transforms: transforms)
let body = SCNPhysicsBody(type: .dynamic, shape: shape)
body.isAffectedByGravity = false
body.categoryBitMask = SpaceshipCategory
body.collisionBitMask = 0
body.contactTestBitMask = RockCategory
self.spaceship.physicsBody = body
The SCNPhysicsShape object should contain all the shapes that I created in the Blender file. But when I test the program, the spaceship just behaves like an empty body, and collisions are not detected.
PS: my goal is only to detect collisions. I don't want the physics engine to simulate physics.

You want to use a concave shape for your ship; see the example below. In order to use a concave shape, your body must be static or kinematic. Kinematic is ideal for animated objects.
There is no need for:
body.collisionBitMask = 0
A contact test bit mask is all you will need for contact detection only.
The code below should work for what you are looking for. This is Swift 3 code, FYI.
let bodyType = SCNPhysicsBodyType.kinematic
let shape = SCNPhysicsShape(node: spaceship, options: [SCNPhysicsShape.Option.type: SCNPhysicsShape.ShapeType.concavePolyhedron])
spaceship.physicsBody = SCNPhysicsBody(type: bodyType, shape: shape)
Without seeing your bit mask, I am unable to tell if you are handling those properly. You should have something like:
spaceship.physicsBody?.categoryBitMask = 1 << 1
Anything you want to contact your spaceship should have:
otherObject.physicsBody?.contactTestBitMask = 1 << 1
This will only register a contact of the "otherObject" with the "spaceship", not of the "spaceship" with the "otherObject", so make sure your physics world contact delegate handles it that way (or vice versa). There is no need for both objects to look for each other; that would be less efficient.
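For concreteness, here is a minimal sketch of such one-directional contact masks. The constant and node names (rockCategory, rock) are illustrative, not from the question:
// Hypothetical category constants; any two non-overlapping bits will do
let spaceshipCategory = 1 << 1
let rockCategory = 1 << 2
spaceship.physicsBody?.categoryBitMask = spaceshipCategory
rock.physicsBody?.categoryBitMask = rockCategory
// Only the rock "looks for" the spaceship; one direction is enough for the contact delegate to fire
rock.physicsBody?.contactTestBitMask = spaceshipCategory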
Example of delegate below:
func physicsWorld(_ world: SCNPhysicsWorld, didBegin contact: SCNPhysicsContact) {
    var tempOtherObject: SCNNode!
    var tempSpaceship: SCNNode!
    // This will assign tempOtherObject and tempSpaceship to the proper contact nodes
    if contact.nodeA == otherObject {
        tempOtherObject = contact.nodeA
        tempSpaceship = contact.nodeB
    } else {
        tempOtherObject = contact.nodeB
        tempSpaceship = contact.nodeA
    }
    print("tempOtherObject = ", tempOtherObject)
    print("tempSpaceship = ", tempSpaceship)
}

Related

Can't detect collision between rootNode and pointOfView child nodes in SceneKit / ARKit

In an AR app, I want to detect collisions between the user walking around and the walls of an AR node that I construct. In order to do that, I create an invisible cylinder right in front of the user and set it all up to detect collisions.
The walls are all part of a node which is a child of sceneView.scene.rootNode.
I want the cylinder to be a child of sceneView.pointOfView so that it always follows the camera.
However, when I do so, no collisions are detected.
I know that I set it all up correctly, because if instead I set the cylinder node as a child of sceneView.scene.rootNode as well, I do get collisions correctly. In that case, I continuously move that cylinder node to always be in front of the camera in a renderer(updateAtTime ...) function. So I do have a workaround, but I'd prefer it to be a child of pointOfView.
Is it impossible to detect collisions if nodes are children of different root nodes?
Or maybe I'm missing something in my code?
The contactDelegate is set like that:
sceneView.scene.physicsWorld.contactDelegate = self, so maybe this only covers nodes under sceneView.scene and excludes sceneView.pointOfView?
Is that the issue?
Here's what I do:
I have a separate file to create and configure my cylinder node which I call pov:
import Foundation
import SceneKit
func createPOV() -> SCNNode {
    let pov = SCNNode()
    pov.geometry = SCNCylinder(radius: 0.1, height: 4)
    pov.geometry?.firstMaterial?.diffuse.contents = UIColor.blue
    pov.opacity = 0.3 // will be set to 0 when it'll work correctly
    pov.physicsBody = SCNPhysicsBody(type: .kinematic, shape: nil)
    pov.physicsBody?.isAffectedByGravity = false
    pov.physicsBody?.mass = 1
    pov.physicsBody?.categoryBitMask = BodyType.cameraCategory.rawValue
    pov.physicsBody?.collisionBitMask = BodyType.wallsCategory.rawValue
    pov.physicsBody?.contactTestBitMask = BodyType.wallsCategory.rawValue
    pov.simdPosition = simd_float3(0, -1.5, -0.3) // this position only makes sense when setting as a child of pointOfView, otherwise the position will always be changed by renderer
    return pov
}
Now in my viewController.swift file I call this function and set it as a child of one of the two parent nodes:
pov = createPOV()
sceneView.pointOfView?.addChildNode(pov!)
(Don't worry for now about the force unwrapping and the lack of checks.)
The above does not detect collisions.
But if instead I add it like so:
sceneView.scene.rootNode.addChildNode(pov!)
then collisions are detected just fine.
But then I need to always move this cylinder to be in front of the camera and I do it like that:
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    guard let pointOfView = sceneView.pointOfView else { return }
    let currentPosition = pointOfView.simdPosition
    let currentTransform = pointOfView.simdTransform
    let orientation = SCNVector3(-currentTransform.columns.2.x, -currentTransform.columns.2.y, -currentTransform.columns.2.z)
    // note: this uses a custom `+` operator defined for SCNVector3
    let currentPositionOfCamera = orientation + SCNVector3(currentPosition)
    DispatchQueue.main.async {
        self.pov?.position = currentPositionOfCamera
    }
}
For completeness, here's the code I use to configure the node of walls in ViewController (they're built elsewhere in another function):
node?.physicsBody = SCNPhysicsBody(type: .dynamic, shape: SCNPhysicsShape(node: node!, options: nil))
node?.physicsBody?.isAffectedByGravity = false
node?.physicsBody?.mass = 1
node?.physicsBody?.damping = 1.0 // remove linear velocity, needed to stop moving after collision
node?.physicsBody?.angularDamping = 1.0 // remove angular velocity, needed to stop rotating after collision
node?.physicsBody?.velocityFactor = SCNVector3(1.0, 0.0, 1.0) // will allow movement only in X and Z coordinates
node?.physicsBody?.angularVelocityFactor = SCNVector3(0.0, 1.0, 0.0) // will allow rotation only around Y axis
node?.physicsBody?.categoryBitMask = BodyType.wallsCategory.rawValue
node?.physicsBody?.collisionBitMask = BodyType.cameraCategory.rawValue
node?.physicsBody?.contactTestBitMask = BodyType.cameraCategory.rawValue
And here's my physicsWorld(didBegin contact:) code:
func physicsWorld(_ world: SCNPhysicsWorld, didBegin contact: SCNPhysicsContact) {
    if contact.nodeA.physicsBody?.categoryBitMask == BodyType.wallsCategory.rawValue || contact.nodeB.physicsBody?.categoryBitMask == BodyType.wallsCategory.rawValue {
        print("Begin COLLISION")
        contactBeginLabel.isHidden = false
    }
}
So I print something to the console and I also turn on a label on the view so I'll see that collision was detected (and the walls indeed move as a whole when it works).
So again, it all works fine when the pov node is a child of sceneView.scene.rootNode, but not if it's a child of sceneView.pointOfView.
Am I doing something wrong or is this a limitation of collision detection?
Is there something else I can do to make this work, besides the workaround I already implemented?
Thanks!
Regarding the positioning of your cylinder:
Instead of using the renderer update-at-time callback, it's better to use a position constraint to make your cylinder node follow the point of view. The result is the same as if it were a child of the point of view, but collisions will be detected, because you add the node to the main root node's scene graph.
let constraint = SCNReplicatorConstraint(target: pointOfView) // must be a node
constraint.positionOffset = positionOffset // some SCNVector3
constraint.replicatesOrientation = false
constraint.replicatesScale = false
constraint.replicatesPosition = true
cylinder.constraints = [constraint]
There is also an influence factor you can configure. By default the influence is 100%, so the position follows immediately.
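For example, to smooth the follow motion (a small sketch; influenceFactor is a standard SCNConstraint property in the 0...1 range):
constraint.influenceFactor = 0.5 // apply half of the correction per frame, giving a smoothed, slightly lagging follow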

ARKit hide objects behind walls

How can I use the horizontal and vertical planes tracked by ARKit to hide objects behind walls / behind real objects? Currently the added 3D objects can be seen through walls when you leave a room, and/or in front of objects that they should be behind. So is it possible to use the data ARKit gives me to provide a more natural AR experience without the objects appearing through walls?
You have two issues here.
(And you didn't even use regular expressions!)
How to create occlusion geometry for ARKit/SceneKit?
If you set a SceneKit material's colorBufferWriteMask to an empty value ([] in Swift), any objects using that material won't appear in the view, but they'll still write to the z-buffer during rendering, which affects the rendering of other objects. In effect, you'll get a "hole" shaped like your object, through which the background shows (the camera feed, in the case of ARSCNView), but which can still obscure other SceneKit objects.
You'll also need to make sure that the occluder renders before any other nodes it's supposed to obscure. You can do this using the node hierarchy (I can't remember offhand whether parent nodes render before their children or the other way around, but it's easy enough to test). Nodes that are peers in the hierarchy don't have a deterministic order, but you can force an order regardless of hierarchy with the renderingOrder property. That property defaults to zero, so setting it to -1 will render before everything. (Or for finer control, set the renderingOrders for several nodes to a sequence of values.)
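A minimal sketch of that idea, assuming a hypothetical occluder node you've already positioned over the real-world surface:
let occlusionMaterial = SCNMaterial()
occlusionMaterial.colorBufferWriteMask = []        // write nothing to the color buffer...
occluder.geometry?.materials = [occlusionMaterial] // ...but still write to the depth buffer
occluder.renderingOrder = -1                       // render before the nodes it should hide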
How to detect walls/etc so you know where to put occlusion geometry?
In iOS 11.3 and later (aka "ARKit 1.5"), you can turn on vertical plane detection. (Note that when you get vertical plane anchors back from that, they're automatically rotated. So if you attach models to the anchor, their local "up" direction is normal to the plane.) Also new in iOS 11.3, you can get a more detailed shape estimate for each detected plane (see ARSCNPlaneGeometry), regardless of its orientation.
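Enabling that looks roughly like this (a sketch, assuming an ARSCNView named sceneView):
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = [.horizontal, .vertical] // .vertical requires iOS 11.3 or later
sceneView.session.run(configuration)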
However, even if you have the horizontal and the vertical, the outer limits of a plane are just estimates that change over time. That is, ARKit can quickly detect where part of a wall is, but it doesn't know where the edges of the wall are without the user spending some time waving the device around to map out the space. And even then, the mapped edges might not line up precisely with those of the real wall.
So... if you use detected vertical planes to occlude virtual geometry, you might find places where virtual objects that are supposed to be hidden show through, either by being not quite hiding right at the edge of the wall, or being visible through places where ARKit hasn't mapped the entire real wall. (The latter issue you might be able to solve by assuming a larger extent than ARKit does.)
For creating an occlusion material (also known as blackhole material or blocking material) you have to use the following instance properties: .colorBufferWriteMask, .readsFromDepthBuffer, .writesToDepthBuffer and .renderingOrder.
You can use them this way:
plane.geometry?.firstMaterial?.isDoubleSided = true
plane.geometry?.firstMaterial?.colorBufferWriteMask = .alpha
plane.geometry?.firstMaterial?.writesToDepthBuffer = true
plane.geometry?.firstMaterial?.readsFromDepthBuffer = true
plane.renderingOrder = -100
...or this way:
func occlusion() -> SCNMaterial {
    let occlusionMaterial = SCNMaterial()
    occlusionMaterial.isDoubleSided = true
    occlusionMaterial.colorBufferWriteMask = []
    occlusionMaterial.readsFromDepthBuffer = true
    occlusionMaterial.writesToDepthBuffer = true
    return occlusionMaterial
}
plane.geometry?.firstMaterial = occlusion()
plane.renderingOrder = -100
Creating an occlusion material is really simple:
let boxGeometry = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0)
// Define an occlusion material
let occlusionMaterial = SCNMaterial()
occlusionMaterial.colorBufferWriteMask = []
boxGeometry.materials = [occlusionMaterial]
self.box = SCNNode(geometry: boxGeometry)
// Set rendering order to present this box in front of the other models
self.box.renderingOrder = -1
Great solution:
GitHub: arkit-occlusion
Worked for me.
But in my case I wanted to set up the walls in code. So if you don't want the user to place the walls manually, use plane detection to detect the walls and create them in code.
Alternatively, within a range of about 4 meters the iPhone's depth sensing works, and you can detect obstacles with an ARKit hit test.
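A rough sketch of that approach, using the ARSCNView delegate to turn each detected vertical plane into invisible occlusion geometry (the sizing is approximate and the names are illustrative):
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor, planeAnchor.alignment == .vertical else { return }
    let wall = SCNPlane(width: CGFloat(planeAnchor.extent.x), height: CGFloat(planeAnchor.extent.z))
    wall.firstMaterial?.colorBufferWriteMask = []  // invisible, but still writes depth
    let wallNode = SCNNode(geometry: wall)
    wallNode.simdPosition = planeAnchor.center
    wallNode.eulerAngles.x = -.pi / 2              // lay the plane into the anchor's x-z plane
    wallNode.renderingOrder = -1                   // render before the virtual content it should hide
    node.addChildNode(wallNode)
}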
ARKit 3.5+ and the LiDAR scanner
You can hide any object behind a virtual invisible wall that replicates the real wall's geometry. iPhones and iPad Pros equipped with a LiDAR scanner can reconstruct a 3D topological map of the surrounding environment. The LiDAR scanner greatly improves the quality of the Z channel, which allows you to occlude or remove people from the AR scene.
LiDAR also improves features such as object occlusion, motion tracking and raycasting. With the LiDAR scanner you can reconstruct a scene even in an unlit environment, or in a room with white walls and no features at all. 3D reconstruction of the surrounding environment became possible starting in ARKit 3.5, thanks to the sceneReconstruction instance property. Having a reconstructed mesh of your walls, it's now super easy to hide any object behind the real walls.
To activate the sceneReconstruction property (this example uses RealityKit's ARView), use the following code:
@IBOutlet var arView: ARView!
arView.automaticallyConfigureSession = false
guard ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh)
else { return }
let config = ARWorldTrackingConfiguration()
config.sceneReconstruction = .mesh
arView.debugOptions.insert([.showSceneUnderstanding])
arView.environment.sceneUnderstanding.options.insert([.occlusion])
arView.session.run(config)
Also if you're using SceneKit try the following approach:
@IBOutlet var sceneView: ARSCNView!
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    guard let meshAnchor = anchor as? ARMeshAnchor else { return nil }
    let geometry = SCNGeometry(arGeometry: meshAnchor.geometry)
    geometry.firstMaterial?.diffuse.contents = colorizer.assignColor(to: meshAnchor.identifier)
    let node = SCNNode()
    node.name = "Node_\(meshAnchor.identifier)"
    node.geometry = geometry
    return node
}
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let meshAnchor = anchor as? ARMeshAnchor else { return }
    let newGeometry = SCNGeometry(arGeometry: meshAnchor.geometry)
    newGeometry.firstMaterial?.diffuse.contents = colorizer.assignColor(to: meshAnchor.identifier)
    node.geometry = newGeometry
}
And here are SCNGeometry and SCNGeometrySource extensions:
extension SCNGeometry {
    convenience init(arGeometry: ARMeshGeometry) {
        let verticesSource = SCNGeometrySource(arGeometry.vertices, semantic: .vertex)
        let normalsSource = SCNGeometrySource(arGeometry.normals, semantic: .normal)
        let faces = SCNGeometryElement(arGeometry.faces)
        self.init(sources: [verticesSource, normalsSource], elements: [faces])
    }
}
extension SCNGeometrySource {
    convenience init(_ source: ARGeometrySource, semantic: Semantic) {
        self.init(buffer: source.buffer,
                  vertexFormat: source.format,
                  semantic: semantic,
                  vertexCount: source.count,
                  dataOffset: source.offset,
                  dataStride: source.stride)
    }
}
...and SCNGeometryElement and SCNGeometryPrimitiveType extensions:
extension SCNGeometryElement {
    convenience init(_ source: ARGeometryElement) {
        let pointer = source.buffer.contents()
        let byteCount = source.count * source.indexCountPerPrimitive * source.bytesPerIndex
        let data = Data(bytesNoCopy: pointer, count: byteCount, deallocator: .none)
        self.init(data: data,
                  primitiveType: .of(source.primitiveType),
                  primitiveCount: source.count,
                  bytesPerIndex: source.bytesPerIndex)
    }
}
extension SCNGeometryPrimitiveType {
    static func of(_ type: ARGeometryPrimitiveType) -> SCNGeometryPrimitiveType {
        switch type {
        case .line: return .line
        case .triangle: return .triangles
        @unknown default: return .triangles // ARGeometryPrimitiveType may gain cases in future SDKs
        }
    }
}

iOS 8 detect if SKSpriteNode is lit by SKLightNode

I have several sprite nodes in my scene that are casting shadows. I also have a sprite that is in one of the shadows. I want to be able to tell if the user moves a sprite out of a shadow into the light. Is there any way to do this in Swift? Thanks.
Unfortunately this functionality still isn't included in SpriteKit; however, it is possible to implement a decent solution with some caveats.
To determine if a sprite casts a shadow, its shadowCastBitMask property "is tested against the light's categoryBitMask property by performing a logical AND operation." SpriteKit appears to generate the exact same mask data for lighting and physics body calculations, based on the description given in the documentation for the shadowColor property defined on SKLightNode:
When lighting is calculated, shadows are created as if a ray was cast out from the light node's position. If a sprite casts a shadow, the rays are blocked when they intersect with the sprite's physics body. Otherwise, the sprite's texture is used to generate a mask, and any pixel in the sprite node's texture that has an alpha value that is nonzero blocks the light.
SKPhysicsWorld has a method, enumerateBodies(alongRayStart:end:using:), for performing this kind of ray intersection test performantly. That means we can test if a sprite is shadowed by any sprite with a physics body. So we can write a method like this to extend SKSpriteNode:
func isLit(by light: SKLightNode) -> Bool {
    guard light.isEnabled else {
        return false
    }
    var shadowed = false
    scene?.physicsWorld.enumerateBodies(alongRayStart: light.position, end: position) { (body, _, _, stop) in
        // ignore this sprite's own physics body, which the ray always reaches at its endpoint
        if let sprite = body.node as? SKSpriteNode, sprite !== self,
            light.categoryBitMask & sprite.shadowCastBitMask != 0 {
            shadowed = true
            stop.pointee = true
        }
    }
    return !shadowed
}
We can also retrieve which lights in the scene are lighting a particular sprite:
func lights(affecting sprite: SKSpriteNode) -> [SKLightNode] {
    let lights = sprite.scene?.children.compactMap { $0 as? SKLightNode } ?? []
    return lights.filter { sprite.isLit(by: $0) }
}
It'd be great if SpriteKit provided a way to retrieve this information without coupling it to the physics API, or else requiring developers to roll their own ray cast implementations.

How to make collision in scenekit

I am trying to detect a collision between two objects, but func physicsWorld(world: SCNPhysicsWorld, didBeginContact contact: SCNPhysicsContact) is not being called.
My code is:
let carbonNode = SCNNode(geometry: carbonAtom())
carbonNode.position = SCNVector3Make(-6, 8, 0)
let coneAtomNode = SCNNode(geometry: coneAtom())
pinNode = coneAtomNode
pinNode.physicsBody = SCNPhysicsBody.dynamicBody()
pinNode.physicsBody?.restitution = 0.9;
pinNode.categoryBitMask = 0x4;
pinNode.physicsBody?.collisionBitMask = ~(0x4);
coneAtomNode.position = SCNVector3Make(-6, -15, 0)
scene.rootNode.addChildNode(coneAtomNode)
balloonNode = carbonNode
sceneView.scene = scene
sceneView.scene?.physicsWorld.contactDelegate = self
pinNode.runAction(SCNAction.repeatAction(SCNAction.moveTo(SCNVector3Make(-6, 10+5, 0), duration: 1.5), count: 1), completionHandler: {
})
You can't move "dynamic" bodies programmatically (i.e no action, no animation and no manual updates of position/rotation/scale). You can either move dynamic bodies with forces or use a kinematicBody instead.
Kinematic bodies behave just like static bodies but you can move them programmatically.
Also if you want to get physics contacts between two nodes, the two nodes need to have a physicsBody.
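Applied to the code above, that would look roughly like this (a sketch; the bit mask values are just examples):
// Use a kinematic body so the cone can still be moved with SCNAction
pinNode.physicsBody = SCNPhysicsBody.kinematicBody()
pinNode.physicsBody?.categoryBitMask = 0x4      // set the mask on the physics body, not the node
pinNode.physicsBody?.contactTestBitMask = 0x2
// The other node needs a physics body too, or no contact will be reported
balloonNode.physicsBody = SCNPhysicsBody.staticBody()
balloonNode.physicsBody?.categoryBitMask = 0x2
balloonNode.physicsBody?.contactTestBitMask = 0x4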

SceneKit get texture coordinate after touch with Swift

I want to manipulate 2D textures in a 3D SceneKit scene.
Therefore I used this code to get local coordinates:
@IBAction func tap(sender: UITapGestureRecognizer) {
    var arr: NSArray = my3dView.hitTest(sender.locationInView(my3dView), options: NSDictionary(dictionary: [SCNHitTestFirstFoundOnlyKey: true]))
    var res: SCNHitTestResult = arr.firstObject as SCNHitTestResult
    var vect: SCNVector3 = res.localCoordinates
}
I have the texture read out from my scene with:
var mat:SCNNode = myscene.rootNode.childNodes[0] as SCNNode
var child:SCNNode = mat.childNodeWithName("ID12", recursively: false)
var geo:SCNMaterial = child.geometry.firstMaterial
var channel = geo.diffuse.mappingChannel
var textureimg:UIImage = geo.diffuse.contents as UIImage
and now I want to draw at the touch point on the texture...
How can I do that? How can I transform my touch coordinate to the texture image?
Sounds like you have two problems. (Without even having used regular expressions. :))
First, you need to get the texture coordinates of the tapped point -- that is, the point in 2D texture space on the surface of the object. You've almost got that right already. SCNHitTestResult provides those with the textureCoordinatesWithMappingChannel method. (You're using localCoordinates, which gets you a point in the 3D space owned by the node in the hit-test result.) And you already seem to have found the business about mapping channels, so you know what to pass to that method.
Problem #2 is how to draw.
You're doing the right thing to get the material's contents as a UIImage. Once you've got that, you could look into drawing with UIGraphics and CGContext functions -- create an image with UIGraphicsBeginImageContext, draw the existing image into it, then draw whatever new content you want to add at the tapped point. After that, you can get the image you were drawing with UIGraphicsGetImageFromCurrentImageContext and set it as the new diffuse.contents of your material. However, that's probably not the best way -- you're schlepping a bunch of image data around on the CPU, and the code is a bit unwieldy, too.
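For illustration, here's a rough sketch of that CPU-side approach, in the same (old) Swift dialect as the question. It reuses geo and textureimg from the question's snippets and assumes texcoord came from textureCoordinatesWithMappingChannel:
UIGraphicsBeginImageContextWithOptions(textureimg.size, false, 0)
// redraw the existing texture, then draw a dot at the tapped texture coordinate
textureimg.drawInRect(CGRect(origin: CGPointZero, size: textureimg.size))
let p = CGPoint(x: texcoord.x * textureimg.size.width,
                y: texcoord.y * textureimg.size.height) // texcoords are 0...1, so scale to pixels
UIColor.greenColor().setFill()
UIBezierPath(ovalInRect: CGRect(x: p.x - 5, y: p.y - 5, width: 10, height: 10)).fill()
geo.diffuse.contents = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()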
A better approach might be to take advantage of the integration between SceneKit and SpriteKit. This way, all your 2D drawing is happening in the same GPU context as the 3D drawing -- and the code's a bit simpler.
You can set your material's diffuse.contents to a SpriteKit scene. (To use the UIImage you currently have for that texture, just stick it on an SKSpriteNode that fills the scene.) Once you have the texture coordinates, you can add a sprite to the scene at that point.
var nodeToDrawOn: SCNNode!
var skScene: SKScene!

func mySetup() { // or viewDidLoad, or wherever you do setup
    // whatever else you're doing for setup, plus:
    // 1. remember which node we want to draw on
    nodeToDrawOn = myScene.rootNode.childNodeWithName("ID12", recursively: true)
    // 2. set up that node's texture as a SpriteKit scene
    let currentImage = nodeToDrawOn.geometry!.firstMaterial!.diffuse.contents as UIImage
    skScene = SKScene(size: currentImage.size)
    nodeToDrawOn.geometry!.firstMaterial!.diffuse.contents = skScene
    // 3. put the currentImage into a background sprite for the skScene
    let background = SKSpriteNode(texture: SKTexture(image: currentImage))
    background.position = CGPoint(x: skScene.frame.midX, y: skScene.frame.midY)
    skScene.addChild(background)
}

@IBAction func tap(sender: UITapGestureRecognizer) {
    let results = my3dView.hitTest(sender.locationInView(my3dView), options: [SCNHitTestFirstFoundOnlyKey: true]) as [SCNHitTestResult]
    if let result = results.first {
        if result.node === nodeToDrawOn {
            // 1. get the texture coordinates
            let channel = nodeToDrawOn.geometry!.firstMaterial!.diffuse.mappingChannel
            let texcoord = result.textureCoordinatesWithMappingChannel(channel)
            // 2. place a sprite there
            let sprite = SKSpriteNode(color: SKColor.greenColor(), size: CGSize(width: 10, height: 10))
            // scale coords: texcoords go 0.0-1.0, skScene space is in pixels
            sprite.position.x = texcoord.x * skScene.size.width
            sprite.position.y = texcoord.y * skScene.size.height
            skScene.addChild(sprite)
        }
    }
}
For more details on the SpriteKit approach (in Objective-C) see the SceneKit State of the Union Demo from WWDC14. That shows a SpriteKit scene used as the texture map for a torus, with spheres of paint getting thrown at it -- whenever a sphere collides with the torus, it gets a SCNHitTestResult and uses its texcoords to create a paint splatter in the SpriteKit scene.
Finally, some Swift style comments on your code (unrelated to the question and answer):
Use let instead of var wherever you don't need to reassign a value, and the optimizer will make your code go faster.
Explicit type annotations (res: SCNHitTestResult) are rarely necessary.
Swift dictionaries are bridged to NSDictionary, so you can pass them directly to an API that takes NSDictionary.
Casting to a Swift typed array (hitTest(...) as [SCNHitTestResult]) saves you from having to cast the contents.
