Why is the physicsBody bigger than the visual model in sceneView? - ios

I am working on a game with a spaceship flying around the Earth and destroying objects. I wanted to add a reaction when an object hits the spaceship, but the contact is detected before the object touches the visual model of the spaceship. I found out that the physics body of the node is bigger than the visual model. What should I do to make them the same size?
Here's some more info:
I created a model built of many basic shapes/nodes in the SceneKit editor in Xcode.
I set the physics body in the editor with these options: dynamic type, default shape, bounding box, scale 1.
Here's the part of my code that works with the ship node:
// Fetch the ship node and rebuild its physics shape from the node itself:
shipNode = scene.rootNode.childNode(withName: "ship", recursively: true)!
shipNode.physicsBody!.physicsShape = SCNPhysicsShape(node: shipNode, options: nil)
shipNode.physicsBody!.categoryBitMask = 4
[Screenshot: the game UI with the physics shapes made visible]

Just to close this out and mark it as solved:
Before, the spaceship was too big, so I changed the scale of the "ship reference" node in the main scene. The textures got smaller, but the physics body didn't. After your tip, I changed the size of the ship in the original .scn file instead, and that solved all the problems.
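If editing the original .scn file isn't an option, the shape can also be rebuilt in code after scaling. A minimal sketch, assuming the ship is scaled programmatically and that an existing physics shape does not pick up later changes to the node's scale (the 0.5 factor is illustrative):
shipNode.scale = SCNVector3(0.5, 0.5, 0.5)
// Rebuild the shape so it matches the node's new scale; the .scale option
// takes an SCNVector3 wrapped in an NSValue.
shipNode.physicsBody?.physicsShape = SCNPhysicsShape(
    node: shipNode,
    options: [.keepAsCompound: true,
              .scale: NSValue(scnVector3: shipNode.scale)])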

Related

SCNPhysicsShape(shapes:transforms:) creates MULTIPLE shapes?

My goal is to create a single physics body out of several of SceneKit's SCNBox geometries.
My understanding is that when I pass an SCNPhysicsShape(shapes:transforms:) to an SCNPhysicsBody(type:shape:), it should create a single physics body.
However, I end up with something that seems suspiciously like it's not a single physics body at all, but rather several separate physics bodies -- one for each shape I passed into SCNPhysicsShape(shapes:transforms:).
When I turn on scnView.debugOptions = .showPhysicsShapes, I can clearly see red lines defining the separate bodies in question. On its own, this isn't very convincing evidence (it's conceivable that those lines could be shown for whatever reason while still being a single physics body).
But there's another piece of data, here: The project in which I'm encountering this issue features a small ball that rolls around the scene -- and when that ball rolls over the red lines in question, the ball bounces up into the air a bit. So, it's quite obvious that, whatever is actually going on, there are edges where I would expect to see none.
This behavior was clearly visible in a GIF I captured: each colored block is a separate SCNBox geometry with its own physics body, every block has the exact same position.z, and the ball bounces considerably as it crosses the point where one geometry meets another.
Here's some code illustrating the issue. parent is an SCNNode that holds the child nodes, and is the node to which I assign the physics body. Please assume that all properties are defined; I'm omitting things that aren't terribly relevant.
let childShape1 = SCNBox(width: 6, height: 2, length: 6, chamferRadius: 0.0)
// Other child shapes defined here...
// Wrap each geometry in its own SCNPhysicsShape:
let childPhysicsShape1 = SCNPhysicsShape(geometry: childShape1, options: nil)
// Set up the positional translation relative to the child node's parent
// (each transform is an SCNMatrix4 wrapped in an NSValue):
let translateMatrixShape1 = NSValue(scnMatrix4: SCNMatrix4MakeTranslation(childNode1.position.x, childNode1.position.y, childNode1.position.z))
// Other child shapes and translations defined here...
let parentShape = SCNPhysicsShape(shapes: [childPhysicsShape1, childPhysicsShape2, childPhysicsShape3, childPhysicsShape4], transforms: [translateMatrixShape1, translateMatrixShape2, translateMatrixShape3, translateMatrixShape4])
parent.physicsBody = SCNPhysicsBody(type: .static, shape: parentShape)
Now, parentShape is four rectangular boxes arranged around a central point, creating a sort of picture-frame-shaped object.
The ball is an SCNNode with an SCNSphere geometry and a dynamic physics body.
Question: Does anyone have any idea what might be going on, here? Have I somehow misunderstood how this whole thing works, or is this a limitation of SceneKit?
Usually you can create one single physics body from several geometry objects by making a flattenedClone() of (for example) your parent node. The new node will then have one single geometry, and you can use that geometry for the physics body. In addition, I recommend using a concavePolyhedron shape for the static body type. (I hope I understood you correctly.)
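A minimal sketch of that suggestion, assuming parent already contains the four box child nodes (the names are illustrative):
// Collapse the children into one node with a single combined geometry,
// then build a concave shape from it.
let flattened = parent.flattenedClone()
let shape = SCNPhysicsShape(geometry: flattened.geometry!,
                            options: [.type: SCNPhysicsShape.ShapeType.concavePolyhedron])
flattened.physicsBody = SCNPhysicsBody(type: .static, shape: shape)
Note that concavePolyhedron is only supported for static bodies, which fits this case.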

Positioning virtual assets with ARKit and SceneKit

I'm trying to understand SceneKit and ARKit a little better and have a barebones Xcode 9 augmented reality app deployed and working on my iPhone (which I'm using as a simple test device).
This app's source code is here.
Basically, the app starts, the camera is initialized, and it renders a 3D fighter jet inside the scene (world view), similar to how Pokemon Go injects monsters into your camera viewport (wherever you're pointing the camera). Pretty cool!
This code was all auto-generated for me by Xcode, so I'm trying to understand where the logic lives that determines where to position/orient the fighter jet (the .scn file titled art.scnassets/ship.scn). Here is where the jet is loaded:
override func viewDidLoad() {
    super.viewDidLoad()
    // Set the view's delegate
    sceneView.delegate = self
    // Show statistics such as fps and timing information
    sceneView.showsStatistics = true
    // Create a new scene
    print("Hello there Mr. Zac")
    let scene = SCNScene(named: "art.scnassets/ship.scn")!
    // Set the scene to the view
    sceneView.scene = scene
}
But I don't understand how the app chooses where to place the jet/ship and to orient it in which direction. I ask because as a first step I'd like to try repositioning the jet and then also swapping it out for my own asset files.
The "logic" for that, such as there is any, lives in two places.
The ship.scn file defines not just a model, but the model's position in the scene. (That is, in a global "world" coordinate space.)
In the scn file that ships in that Xcode project template, the model is positioned at something like 0, 0, -0.5, so if a camera is placed at the origin of the coordinate system, the ship appears directly in front of the camera, half a meter away.
ARKit itself defines scene/world space relative to the initial real-world position/orientation of the device. By default, that coordinate system's z-axis matches the initial orientation of the device, so anything placed "in front of" the coordinate origin will appear in front of the camera when you start the AR session.
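So, to reposition the jet you can either edit ship.scn in the Xcode scene editor or move the node in code after loading the scene. A minimal sketch of the code route, assuming the template's model node is named "ship" (check the actual name in the scene editor):
if let shipNode = scene.rootNode.childNode(withName: "ship", recursively: true) {
    // One meter in front of, and slightly below, the session's world origin.
    shipNode.position = SCNVector3(0, -0.2, -1)
}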

How to scale DAE models correctly in ARKit and SceneKit?

I'm currently trying to combine the following sources:
Apple's SceneKit Vehicle Demo (resp. its Swift version), and
ARKit by Example (resp. its Swift version).
Each project on its own works like a charm (although I changed the vehicle demo so that the car can be controlled by on-screen buttons).
Now, when I try to combine both projects to create an augmented reality racing game, I run into problems regarding the size of the .dae model of the car: it's too big.
I can scale the model using the (chassis) node's .scale property, but as soon as I add the SCNPhysicsVehicle properties and behaviour, the car gets reset(?) to its original size. I tried to scale the model in Xcode (open the .dae file, change the scale), but its bounding box remains the same, which tells me the rescaling didn't work properly.
Any hints?
1) You can scale .dae models directly in art.scnassets:
art.scnassets -> car.dae -> node inspector -> transforms -> scale the object
2) You can scale the 3D model with an SCNAction:
let scene = SCNScene(named: "art.scnassets/cup.dae")!
let node = scene.rootNode.childNode(withName: "cup", recursively: true)!
// sender here is, e.g., a UIPinchGestureRecognizer
let action = SCNAction.scale(by: sender.scale, duration: 1.0)
node.runAction(action)
What I like to do is use Blender or some other 3D modeling program to resize your .dae model to work in meters. Everything in ARKit is based on meters, so by sticking to the same unit you can get all your models to play well together without having to guess what the scale factor needs to be.
I'm not sure how to fix the model directly in Xcode. However, you can fix it in Blender. Start by importing the object into Blender. Select the object and observe its dimensions. Scale the object to the desired dimensions and apply them by hitting Ctrl + A and selecting Scale. Alternatively, from the Object menu, you can select Apply -> Scale. Now you can export your model with the corrected size.
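If the chassis is scaled in code instead, the physics shape has to be given the same scale explicitly, since (as the question observes) it doesn't follow the node's scale on its own. A hedged sketch, with chassisNode and the 0.2 factor both illustrative:
chassisNode.scale = SCNVector3(0.2, 0.2, 0.2)
// Pass the scale to the shape so it matches the shrunken model.
let chassisShape = SCNPhysicsShape(
    node: chassisNode,
    options: [.scale: NSValue(scnVector3: chassisNode.scale)])
chassisNode.physicsBody = SCNPhysicsBody(type: .dynamic, shape: chassisShape)
// Then attach the SCNPhysicsVehicle behaviour to this body as usual.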

Modify the Bounding Box Size of 3D model

This function is supposed to add a 3D model to the screen in a confined space:
func addCharacter(gridPosition: SCNVector3, levelNode: SCNNode) {
    let carScene = SCNScene(named: "assets.scnassets/Models/cube.dae")
    // Create a material using the model_texture.tga image
    let carMaterial = SCNMaterial()
    carMaterial.diffuse.contents = UIImage(named: "assets.scnassets/Textures/model_texture.tga")
    carMaterial.locksAmbientWithDiffuse = false
    // Clone the Cube node of the carScene - you need a clone because you add many cars
    let carNode = carScene!.rootNode.childNodeWithName("Cube", recursively: false)!.clone() as SCNNode
    carNode.name = "Cube"
    carNode.position = gridPosition
    // Set the material
    carNode.geometry!.firstMaterial = carMaterial
    // Create a physics body for collision detection
    let carPhysicsBodyShape = SCNPhysicsShape(geometry: SCNBox(width: 0.30, height: 0.20, length: 0.16, chamferRadius: 0.0), options: nil)
    carNode.physicsBody = SCNPhysicsBody(type: SCNPhysicsBodyType.Kinematic, shape: carPhysicsBodyShape)
    carNode.physicsBody!.categoryBitMask = PhysicsCategory.Car
    carNode.physicsBody!.collisionBitMask = PhysicsCategory.Player
    levelNode.addChildNode(carNode)
}
The code works if I change the dimensions of the .dae file before adding it to Xcode, and by dimensions I mean the Bounding Box Size property shown in Xcode. I wanted to know if it is possible to change this bounding box size directly from Xcode. It does not seem possible from the UI. Is it possible in code? Or, even better, can I scale the object down, maintaining proportions, so that its XYZ size falls in the range 0.3 to 0.7? At the moment my objects show a box size of over 45 for XYZ. Furthermore, if I were to use a .scn file instead of a .dae in the code above, would that still work?
EDIT:
If I change the size via code, would it have an impact on efficiency? I notice that for larger .dae models the fps drops from 60 to 30 and the game slows down.
Changing carNode's scale property will reduce the apparent size of the car.
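For the asker's 0.3 to 0.7 target, here is a minimal sketch (in current Swift) that derives a uniform scale from the node's bounding box; the 0.5 target is illustrative:
// Scale uniformly so the largest bounding-box dimension becomes ~0.5 units.
let (boxMin, boxMax) = carNode.boundingBox
let largest = max(boxMax.x - boxMin.x, boxMax.y - boxMin.y, boxMax.z - boxMin.z)
let factor = 0.5 / largest
carNode.scale = SCNVector3(factor, factor, factor)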
However, I think you'll be better off loading your .dae into the Xcode scene editor. That will allow you to scale it down ahead of time (in the Node Inspector, Option-Command-3). You can also, if you want, add your texture. Then save it as a .scn file, which is compressed and should load faster.
Changing the scaling, either in code or in the Xcode scene editor, won't affect efficiency. Reducing the complexity of the car (the number of polygons/vertices) will speed things up, though.
Consider creating SCNLevelOfDetail instances for your car node. That will cause the renderer to use substitute, lower resolution geometry when the node is far from the camera. The WWDC 2014 slide deck demonstrates this, slide 38, AAPLSlideLOD.m.
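A minimal SCNLevelOfDetail sketch; lowPolyCarGeometry is a hypothetical pre-decimated copy of the car mesh:
// Switch to the cheaper geometry when the car covers less than ~50 screen points.
let lod = SCNLevelOfDetail(geometry: lowPolyCarGeometry, screenSpaceRadius: 50)
carNode.geometry?.levelsOfDetail = [lod]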

Detecting touch inside bounding box (alpha mask) of Sprite using SpriteKit

I am starting to learn Swift 2 to develop on iOS using SpriteKit, and I cannot seem to detect whether I touched a visible part of my sprite.
I can find which node was touched like this:
let touchedNode = self.nodeAtPoint(location)
But this detects a touch anywhere on the sprite's frame. I would like to detect the touch only on the "not transparent" parts of the sprite.
I tried creating a physics body with an alpha-mask bounding box and testing whether the bounding box of the node I selected contains the location of the touch, like this:
sprite.physicsBody = SKPhysicsBody(texture: sprite.texture!, size: sprite.size)
sprite.physicsBody?.affectedByGravity = false
if touchedNode.containsPoint(location) {
    // handle the touch...
}
But it didn't help: if I click on a transparent part of my sprite, the event still triggers.
The documentation says "A new physics body is created that includes all of the texels in the texture that have a nonzero alpha value.", so shouldn't it work?
Thanks for your time.
PS: I also tried being more generous with my alpha threshold (in case my transparency wasn't perfect), but that did not work either.
UPDATE:
To add a little more detail: I am building a level editor. This means I will create many nodes, depending on what the user chooses, and I need to be able to select/move/rotate/etc. those nodes. I am using SKSpriteNodes because my PNG pictures are automatically added to the xcassets catalog that way. I decided to use the physics body's Alpha Mask bounding box because this value is shown in the Scene Editor (Xcode) when you select a node; when selected, the Alpha Mask highlights exactly the part of my sprite where I want to be able to detect a touch.
If I am using the wrong ideas/techniques, please tell me. This is possible as I am only starting to use Swift and SpriteKit.
UPDATE 2
I queried the physicsWorld (as recommended) to get the right physics body, and printed the name of the attached node like this:
let body = self.physicsWorld.bodyAtPoint(location)
print(body?.node?.name)
But this still prints the name of the node even when I touch outside the bounding box, which makes no sense to me.
Thank you anyway for your help.
The SKPhysicsBody doesn't define the bounds of the SKNode. You can do the hit check by querying the SKPhysicsWorld of your scene, calling bodyAtPoint on it; that will return the SKPhysicsBody you are interested in.
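A minimal sketch of that approach in current Swift (the question uses the Swift 2-era names nodeAtPoint/bodyAtPoint):
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let location = touches.first?.location(in: self) else { return }
    // Hit-test against physics bodies instead of node frames.
    if let node = physicsWorld.body(at: location)?.node {
        print("touched \(node.name ?? "unnamed")")
    }
}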
I was having the same problem. You actually can't perform the touch test on the alpha mask of a sprite node, and it says so right in the Apple documentation. You have to use a composite box. I used a shape node and changed its line width to 0, so the composite box is essentially invisible. Make sure the composite box's z position is higher than the sprite's, then run the touch event through the composite box shape node instead of the original node.
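A sketch of that workaround under the answerer's assumptions; note that the hit area here is a plain rectangle, not an alpha-accurate outline:
// An invisible SKShapeNode stacked above the sprite receives the touches.
let hitArea = SKShapeNode(rectOf: sprite.size)
hitArea.lineWidth = 0                // no visible outline
hitArea.fillColor = .clear
hitArea.position = sprite.position
hitArea.zPosition = sprite.zPosition + 1
sprite.parent?.addChild(hitArea)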
