Blender + SceneKit (how-to) - iOS

A few questions for game developers. I am a complete beginner at this. I want to create a game level, for example a green plane with trees. I have played a little with Blender and SceneKit, and I know that I can export a .dae file from Blender and import it into Xcode. My questions:
Should I delete the camera and light nodes before export? Why?
Should I design the whole level in one .dae file or split it up? For example, one .dae for the plane and four different trees in four separate .dae files. How do I merge them in Xcode?
Can I reuse a single .dae many times to generate, for example, a forest? How?
If designing the pieces separately is the better way, how do I keep the proportions between them, so I don't end up with a man bigger than a tree?
I will be very grateful if someone takes the time to answer these questions. It will cut down the time I need to learn the basics. Thanks in advance. :)

I'll tell you how I do it:
1) Use .dae files only for models (trees, characters, buildings, etc.).
2) Build the game scene (floor, models, lights, camera, obstacles) with the Xcode scene editor, in code, or a mix of both, depending on the scene.
3) Depending on the size of the world/level, it can be split into several scenes (visible/invisible to the player). You can then create one blank scene and load/unload these sub-scenes at runtime.
4) Create a reference to a model and then build the forest out of references to the tree. If you later need to change the color of the tree, all trees in all scenes will be updated.
5) For each model (SCNNode) loaded from a .dae file you can set the scale attribute (in code or in the Xcode scene editor) - see the sketch just after this list.
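A minimal sketch of points 4) and 5) in code, assuming a tree model at art.scnassets/tree.dae whose root node is named "tree" (both names are hypothetical):

```swift
import SceneKit

// Load the tree once, then clone it to build a forest.
// clone() shares geometry between copies, which keeps memory low.
func makeForest(in scene: SCNScene) {
    guard let treeScene = SCNScene(named: "art.scnassets/tree.dae"),
          let treeNode = treeScene.rootNode.childNode(withName: "tree", recursively: true)
    else { return }

    for x in stride(from: -10, through: 10, by: 5) {
        for z in stride(from: -10, through: 10, by: 5) {
            let tree = treeNode.clone()
            tree.position = SCNVector3(Float(x), 0, Float(z))
            tree.scale = SCNVector3(0.5, 0.5, 0.5) // point 5): keep proportions consistent
            scene.rootNode.addChildNode(tree)
        }
    }
}
```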
Also, the book 3D Apple Games by Tutorials is very good for getting started.

Related

How to align an SCNScene to a physical table using ARKit?

I'm trying to find the best strategy to align an SCNScene to a physical table, just like the ARKit app WWF Free Rivers does.
Currently I'm just testing by mapping a simple plane model with the same dimensions as the table. If I draw out the plane that ARKit detects, I can see that it is not very accurate at the edges: it always extends beyond them (image below).
So I can't really rely on that plane to simply place the model at its center. The model is not rotated correctly either (image below).
I had another idea: use the ARReferenceImage technique, take a picture of the table-top texture, and let ARKit find and match this "image" of the table. But even with the wood-grain texture, there wasn't enough data for ARKit to recognize it. And ARKit just fails in these cases; it doesn't even attempt a bad match.
How can I go about doing this?
Ideas I've had so far:
Take an image of the table and use the ARReferenceImage feature to match it. This didn't work. Maybe it would if I added some more distinctive feature points to the table, like QR codes in the corners.
Detect the plane, then tap the four corners on the table to map out a square, and use that (a rough sketch of the tap handling follows after this list).
Do as the WWF app does: just place the object randomly on the plane, then let the user scale, move, and rotate the model into the correct placement.
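For reference, a rough sketch of the tap handling for the second idea, assuming an ARSCNView and the classic hitTest(_:types:) API; all names here are illustrative:

```swift
import UIKit
import ARKit

// Collect four tapped corners as world positions on an already-detected plane.
var corners: [simd_float3] = []

func handleTap(_ gesture: UITapGestureRecognizer, in sceneView: ARSCNView) {
    let point = gesture.location(in: sceneView)
    // Hit-test against detected planes (pre-iOS 14 API; raycasting replaces it later).
    guard let hit = sceneView.hitTest(point, types: .existingPlaneUsingExtent).first else { return }
    let t = hit.worldTransform.columns.3
    corners.append(simd_float3(t.x, t.y, t.z))
    if corners.count == 4 {
        // Derive the table's center and orientation from the four corners
        // and fit the model inside them.
    }
}
```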
Any more ideas? What do you think will be the best approach to this?
Two options I can think of that you could use:
You could create an ARWorldMap (iOS 12+ only) and use it instead of the ARReferenceImage: walk around the area while creating a map that subsequent ARKit sessions will remember. You can experiment a little with how to fit your models within the four corners of the table (this is slightly tedious without much help from the SceneView editor). However, when you load the saved ARWorldMap and localize against it (just like with an ARReferenceImage), your model should fit within the four corners of the table every time. A minimal save/load sketch follows below.
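Here is that sketch; the file name table.worldmap is hypothetical:

```swift
import ARKit

// Where the serialized map is stored (hypothetical location).
let mapURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    .appendingPathComponent("table.worldmap")

// Save the current world map once the session has mapped the area.
func saveWorldMap(from session: ARSession) {
    session.getCurrentWorldMap { worldMap, _ in
        guard let map = worldMap,
              let data = try? NSKeyedArchiver.archivedData(withRootObject: map,
                                                           requiringSecureCoding: true)
        else { return }
        try? data.write(to: mapURL)
    }
}

// Load it in a later session; ARKit will try to relocalize against it.
func loadWorldMap(into session: ARSession) {
    guard let data = try? Data(contentsOf: mapURL),
          let map = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self, from: data)
    else { return }
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = map
    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}
```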
If you use something like Unity (and its ARKit plugin), it has much more powerful editor tools (a 3D viewer/designer). There are tools that can save a map just like ARWorldMap and then bring the details of that map into the editor, so you can line things up really easily. Placenote's Spatial Capture toolkit can help here. Placenote (iOS 11+) creates its own "world map", but it exposes the visual details in the Unity editor, making it easier to line things up and then localize against (Example). The map is also stored on a managed cloud from the get-go, which makes sharing across phones much easier.
P.S.: Both of these options require you to keep the environment generally static (no large lighting changes, etc.), though the same constraint applies when using an ARReferenceImage.

Adding animation to a 3D model via ARKit

I have a 3D model of a human being standing. I imported it into a project using ARKit and can place it somewhere in the room. So far so good, but I would like to add an animation to the 3D model, for example so that it starts dancing when I press the buttonDance button. Not just moving it up and down, but adding a real animation to it.
What are the keywords to make this work, or does anyone know a brief way of doing this? Maybe which software to use, or is it possible within SceneKit?
You can use services such as Mixamo to generate an animation for your character.
I would advise you to use 3D models in Collada (.dae) format, because this format can include all your animations. You will have to clean up the .dae file to collect all the bone animations into one animation; more info here.
You will then need to read the animation from the .dae file and add it to the node (your 3D model); a minimal sketch follows below. Esteban Herrera has a great blog post on how to animate 3D models with ARKit.
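Here is that sketch, assuming a cleaned-up file named dancing.dae and an animation identifier "danceAnimation-1" (both hypothetical; check yours in the Xcode scene editor):

```swift
import SceneKit

// Pull a named CAAnimation out of a .dae scene file.
func loadAnimation(named identifier: String, fromSceneNamed file: String) -> CAAnimation? {
    guard let url = Bundle.main.url(forResource: file, withExtension: "dae"),
          let source = SCNSceneSource(url: url, options: nil)
    else { return nil }
    return source.entryWithIdentifier(identifier, withClass: CAAnimation.self)
}

// Usage, e.g. inside the buttonDance action:
let characterNode = SCNNode() // stand-in for your placed 3D model
if let dance = loadAnimation(named: "danceAnimation-1", fromSceneNamed: "dancing") {
    dance.repeatCount = .infinity
    characterNode.addAnimation(dance, forKey: "dance")
}
```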

Terrain Generation using SceneKit

I am trying to make a terrain map for a 3D game in SceneKit. I have tried a few options, starting with using MapKit as the floor, but it didn't work out very well. I am not sure what the proper way of doing this is. I have seen a few search results that talk about procedural generation, but none with SceneKit.
I am reaching out to you guys to point me in the right direction. I need to know how to start a project that renders a randomly generated terrain. What kind of components should I have in my scene to do this? I am not after fully functional code, but some established ideas on how this is normally done. I would also like to know what kind of resources I need to generate a map like that. For 2D I can use tile maps, which put tile images together randomly, but how is it done in SceneKit?
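One common way to start (a sketch, not a full answer): build the terrain as a custom SCNGeometry from a grid of height values. The sine-based height function below is a stand-in for real noise such as GKNoise:

```swift
import UIKit
import SceneKit

// Build a (size+1) x (size+1) grid of vertices, displace them vertically,
// and stitch them into triangles.
func makeTerrainNode(size: Int = 32, spacing: Float = 1.0) -> SCNNode {
    func height(_ x: Int, _ z: Int) -> Float {
        // Placeholder "noise"; swap in GKNoise or similar for real terrain.
        return sinf(Float(x) * 0.4) * cosf(Float(z) * 0.4)
    }

    var vertices: [SCNVector3] = []
    for z in 0...size {
        for x in 0...size {
            vertices.append(SCNVector3(Float(x) * spacing, height(x, z), Float(z) * spacing))
        }
    }

    var indices: [Int32] = []
    let row = Int32(size + 1)
    for z in 0..<size {
        for x in 0..<size {
            let tl = Int32(z) * row + Int32(x)
            // Two triangles per grid cell.
            indices += [tl, tl + row, tl + 1, tl + 1, tl + row, tl + row + 1]
        }
    }

    let geometry = SCNGeometry(sources: [SCNGeometrySource(vertices: vertices)],
                               elements: [SCNGeometryElement(indices: indices,
                                                             primitiveType: .triangles)])
    let material = SCNMaterial()
    material.diffuse.contents = UIColor.green
    material.isDoubleSided = true // ignore winding order for this sketch
    geometry.materials = [material]
    return SCNNode(geometry: geometry)
}
```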

How to Visualize zPositions in iOS

My team and I are working on a SpriteKit-based iOS game of medium complexity. There are lots of layers and nodes in the design of the game, and the zPositioning of the nodes has gotten sloppy. One task I have agreed to take on is revamping our zPosition strategy: moving to constants instead of magic numbers (something like the sketch below), having a holistic zPosition scheme for the app, etc. But first I want to analyze where we are now. So here is my question:
I vaguely recall watching a WWDC video (or some other tutorial, maybe) in which the presenter used some part of Instruments (or some other tool) to show a 3D rendering of an app, seen from an isometric angle, based on the zPosition of the SKNodes (or UIKit elements?) in the app.
Does anyone here know what tool this is? And if not, what is the best way to visualize the current state of zPositions in a SpriteKit-based app? Thanks!
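For illustration, a minimal sketch of the constants approach mentioned above; all layer names and values are made up:

```swift
import SpriteKit

// One central definition of the game's z-order layers.
enum ZLayer {
    static let background: CGFloat = 0
    static let terrain: CGFloat = 100
    static let characters: CGFloat = 200
    static let effects: CGFloat = 300
    static let hud: CGFloat = 1000
}

// Usage ("player" is a placeholder asset name):
let player = SKSpriteNode(imageNamed: "player")
player.zPosition = ZLayer.characters
```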

Cocos2d: graphics tool

I started to learn Cocos2d to develop games, and also Box2d. I read some tutorials and saw that two pairs of tools are used: "LevelHelper-SpriteHelper" and "PhysicsEditor-TexturePacker".
I noticed that LevelHelper-SpriteHelper are "simpler" and organize levels and physics objects very well.
With PhysicsEditor-TexturePacker, on the other hand, I had some difficulties, and the approach is not very clear to me.
So which of the two, "LevelHelper-SpriteHelper" or "PhysicsEditor-TexturePacker", are the better tools?
And what are the differences? Can you explain them to me? Thanks.
This should answer your questions: http://abitofcode.com/2012/07/cocos2d-useful-tools/
PhysicsEditor is a program that you use to create a tracing around a sprite that isn't a simple polygon. For example, it could trace an image of a car, so that when you detect a collision between your car and another object with a physics engine (something like Box2d), it registers a collision just with the car and not with a square surrounding the car. Here is a picture that shows what it does: http://www.codeandweb.com/physicseditor/features.
TexturePacker is used to put all the sprites that you use in your game into one spritesheet. This allows you to minimize the amount of memory that your sprites take up.
http://www.codeandweb.com/texturepacker That picture shows what it does. Instead of having to add all your individual sprite images to your game, you put them all on a spritesheet, which trims the space around each image and puts everything into a file that cocos2d and the iPhone can work with.
This is helpful because cocos2d only takes images whose dimensions are powers of two (2, 4, 8, 16, ...). If you had a sprite that was 50x50, it would actually take up 64x64 worth of space in your game.
Here is a tutorial that explains most of that better than I did: http://www.raywenderlich.com/2361/how-to-create-and-optimize-sprite-sheets-in-cocos2d-with-texture-packer-and-pixel-formats
And here is a project where both are used: http://www.raywenderlich.com/7261/monkey-jump
And here is one with LevelHelper and SpriteHelper: http://www.raywenderlich.com/6929/how-to-make-a-game-like-jetpack-joyride-using-levelhelper-spritehelper-part-1
For a list of more tools, go here: http://www.learn-cocos2d.com/2011/06/complete-list-cocos2d-tools/
SpriteHelper is essentially the same tool as TexturePacker. Both create a single large texture from individual images.
LevelHelper is an editing tool to design your game visually. It also allows editing of the physics world.
PhysicsEditor is a tool to create the (collision) shapes of physics bodies from images. No more, no less.
