My team and I are working on a SpriteKit-based iOS game of medium complexity. The game's design involves lots of layers and nodes, and the zPositioning of the nodes has gotten sloppy. One task I have agreed to take on is revamping our zPosition strategy: moving to constants instead of magic numbers, adopting a holistic zPosition scheme for the whole app, and so on. But first I want to analyze where we are now, so here is my question:
I vaguely recall watching a WWDC video (or some other tutorial, maybe) in which the presenter used some aspect of Instruments (or some other tool) to show a 3D rendering of an app, seen from an isometric angle, based on the zPosition of the SKNodes (or UIKit elements?) in the app.
Does anyone here know what tool this is? And if not, what is the best way to visualize the current state of zPositions in a SpriteKit based app? Thanks!
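As a side note on the constants part of the plan, here is a minimal sketch (layer names and values are just placeholders) of the kind of centralised zPosition scheme I have in mind:

```swift
import SpriteKit

// Hypothetical central table of zPositions; layer names and values are placeholders.
enum ZPosition {
    static let background: CGFloat = 0
    static let terrain: CGFloat = 100
    static let characters: CGFloat = 200
    static let effects: CGFloat = 300
    static let hud: CGFloat = 1_000
}

// Usage: playerNode.zPosition = ZPosition.characters
```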
I've been working on an ARKit project for 4 months now.
I noticed that when adding a child to my scene's rootNode, there is an FPS drop. The device freezes for less than a second.
I did a lot of research and trials, noticed that all Apple's code examples have this FPS drop too when placing an object.
It does not matter if the node is added directly (scene.rootNode.addChild(child)) or if it's added in the renderer loop at different phases (didUpdateAtTime, didApplyAnimations etc...).
I found that once an object has been added to a scene, the next added object renders immediately. I use a 3D model created in the SceneKit editor and clone it to generate my different nodes before adding them as children. I do this loading work before placing the objects.
Instruments shows that the renderer loop is busy for the duration of the freeze.
The only solution that I found is to add my nodes to the scene behind a loading screen before starting the whole experience.
Is it normal behavior in game programming to render nodes before using them?
Thanks guys
With the release of ARKit 3.0 and its companion RealityKit (a framework with an optimised rendering engine and a revised scene hierarchy, written in Swift and therefore without an Objective-C binding), the frame drop when adding a child is reduced to an imperceptible value.
Such predictable behaviour from the ARKit 3 / RealityKit combination is especially true for devices with A12 Bionic and A13 Bionic processors manufactured on a 7 nm process (and, of course, thanks to their latest-generation Neural Engines and powerful GPUs).
For devices with less powerful processors (A9, A10, A11), it is advisable to use 3D models with no more than 10K polygons per model, and with standard shaders like .blinn or .phong (not PBR).
I believe it's quite common practice for games and apps that use game engines to first load (or cache) all the necessary game assets (3D models, textures, sound files, etc.) into RAM before using them. For further details please read this article and this article.
However, it's worth saying that AR games consume considerably more processing power than VR games, so they need to be carefully optimised. So you're absolutely right: rendering nodes before using them is normal behaviour in game programming.
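As a rough sketch of that pre-loading idea (the asset name and view property are assumptions, not Apple's sample code), doing something like this behind your loading screen should take the hit before the experience starts:

```swift
import ARKit
import SceneKit

// Hedged sketch: load a model template once and ask the renderer to prepare it
// (upload geometry and textures to the GPU) before it is ever added to the scene.
func preloadModel(into sceneView: ARSCNView, completion: @escaping (SCNNode?) -> Void) {
    guard let modelScene = SCNScene(named: "art.scnassets/robot.scn"),                  // hypothetical asset
          let template = modelScene.rootNode.childNode(withName: "robot", recursively: true) else {
        completion(nil)
        return
    }
    sceneView.prepare([template]) { success in
        completion(success ? template : nil)
    }
}

// Later, cloning the prepared template and adding it should no longer stall the render loop:
// let node = template.clone()
// sceneView.scene.rootNode.addChildNode(node)
```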
As an introduction and for context: I'm currently a novice iOS app developer, and I want to make sure I'm not reinventing the wheel too much as I make this app (reinventing wheels can get very expensive).
The app will allow the user to download our videos off the internet and store them for offline use. The problem with storing these videos on the device is that many of them will be too long, and thus too big, to be practical to store.
The videos are quite simple however, consisting of a couple short "real" video clips at the beginning and end, with the bulk of the video being still images animated around the screen. The animations would consist solely of opacity and simple transformation keyframes (translate, scale, rotate around static anchor point), and would require a variety of easing functions for each transition.
The hardest part will likely be that the "video" player will also have to track an audio player's timecode, and will have to support seeking to any arbitrary point like a normal video player.
So, now that I've described the problem, here's the solution I've come up with so far. Hopefully doing it this way will reduce the probability of XY problems. :)
The idea is to basically do a dumbed-down version of what Final Cut and other editing programs do with animations—have a bunch of clips, sometimes overlapping, and be able to animate the position, scale, rotation, and opacity of each using keyframes.
My first instinct, as far as implementation goes, is to use one of iOS's game-engine frameworks for the animations (maybe SceneKit, because it seems to allow animations to use scene time as opposed to real time, despite the fact that it's primarily 3D and I'm doing 2D animations), and to manually handle syncing time with the audio player, as well as adding and removing nodes from the scene when seeking through the video and when clips begin or end.
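To make the idea concrete, something like this sketch is what I have in mind (the node, keyframe values, and names are placeholders): animations flagged to use the scene time base are driven by the renderer's sceneTime, so seeking would just mean setting sceneTime from the audio player's timecode.

```swift
import SceneKit

// Placeholder sketch: an opacity keyframe on an image plane, driven by scene time.
let imageNode = SCNNode(geometry: SCNPlane(width: 4, height: 3))

let fade = CABasicAnimation(keyPath: "opacity")
fade.fromValue = 0.0
fade.toValue = 1.0
fade.beginTime = 2.0               // seconds into the "video" timeline
fade.duration = 1.5
fade.usesSceneTimeBase = true      // advance with the renderer's sceneTime, not wall-clock time
fade.isRemovedOnCompletion = false // keep it around so scrubbing back and forth still works
fade.fillMode = .both
imageNode.addAnimation(fade, forKey: "fadeIn")

// Elsewhere, scrub the whole scene to the audio player's timecode:
// scnView.sceneTime = audioPlayer.currentTime
```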
What are some built-in systems, plugins, etc. that I can take advantage of to make this easier and faster to develop and maintain? Double points if I don't have to transcode the animations by hand to some custom format.
As I mentioned in my comment, your question is rather broad and contains multiple questions in one, so I will address what you mentioned is likely the hardest part:
https://developer.apple.com/documentation/avfoundation/avplayeritem
https://developer.apple.com/documentation/avfoundation/avasset
Instead of SceneKit, take a look at SpriteKit and its SKVideoNode.
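For example, here is a minimal sketch (the file name is an assumption) of driving an SKVideoNode from an AVPlayer, which also gives you seeking through the player:

```swift
import SpriteKit
import AVFoundation

// Sketch: AVPlayer provides timing and seeking; SKVideoNode displays it in a SpriteKit scene.
let url = Bundle.main.url(forResource: "intro-clip", withExtension: "mp4")!   // hypothetical asset
let player = AVPlayer(url: url)
let videoNode = SKVideoNode(avPlayer: player)

let scene = SKScene(size: CGSize(width: 1280, height: 720))
videoNode.position = CGPoint(x: scene.size.width / 2, y: scene.size.height / 2)
scene.addChild(videoNode)
videoNode.play()

// Seeking goes through the player, e.g.:
// player.seek(to: CMTime(seconds: 12.5, preferredTimescale: 600))
```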
Also, research Metal video processing. There are quite a few example projects available that you could use as a starting point.
A few questions for game developers. I am a complete beginner at this. I want to create a game level, for example a green plane with trees. I have played a little in Blender and SceneKit. I know that I can export a .dae from Blender and import it into Xcode. My questions:
Should I delete the camera and light nodes before export? Why?
Should I design the whole level in one .dae file or build the pieces separately? For example, one .dae for the plane and four different trees in four .dae files. How do I merge them in Xcode?
Can I reuse one .dae many times to generate, for example, a forest? How?
If building the pieces separately is the better way, how do I keep the proportions between them so I don't end up with a man bigger than a tree?
I will be very grateful if someone dedicates some time to these questions. It will cut down my time learning the basics. Thanks in advance. :)
I'll tell you how I do it:
1) Use .dae files only for models (trees, characters, buildings, etc.).
2) Build the game scene (floor, models, lights, camera, obstacles) with the Xcode scene editor, in code, or a mix of both, depending on the scene.
3) Depending on the size of the world/level, it can be split into several scenes (visible/invisible to the player). Then you can create one blank scene and load/unload those scenes at runtime.
4) For a model, create a reference node and build the forest using references to the tree. If you need to change the colour of the tree in the future, all trees in all scenes will be updated.
5) For each model (SCNNode) loaded from a .dae file you can set a scale attribute (in code or in the Xcode scene editor); see the sketch after this list.
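Here is the sketch mentioned in point 5) (the file name and positions are just examples): one tree model reused through reference nodes, each instance with its own position and scale.

```swift
import SceneKit

// Example: reuse one tree .dae via reference nodes and scale each instance.
let levelScene = SCNScene()
let treeURL = Bundle.main.url(forResource: "tree", withExtension: "dae")!   // hypothetical asset

for i in 0..<20 {
    guard let tree = SCNReferenceNode(url: treeURL) else { continue }
    tree.load()                                              // resolve the referenced .dae content
    tree.position = SCNVector3(Float(i % 5) * 2.0, 0, Float(i / 5) * 2.0)
    tree.scale = SCNVector3(0.8, 0.8, 0.8)                   // keep proportions consistent with other models
    levelScene.rootNode.addChildNode(tree)
}
```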
Also, 3D Apple Games by Tutorials is very good for starting.
I want to make a 2D model in iOS programmatically. Like this:
This is taken from the app Gomoji.
I googled it but did not find a proper solution.
The character is also animated, so it can move its hands and legs, and meanwhile I want to be able to change the colour of the hands, etc.
Would this be possible with SpriteKit, SceneKit, GIF, SVG, or anything else?
This is an incredible amount of work in code, with SpriteKit and actions.
You might be better off using the puppet features of After Effects to create motion frame sequences, then bring them into SpriteKit, string them together, and jump between the sequences as necessary.
Start here, to understand the puppetry tools in AE:
https://helpx.adobe.com/after-effects/using/animating-puppet-tools.html
Once you've learnt the lingo, head on over to youtube to pick up tips on how to do 2D arms, head wobbles, etc.
There's also a face animator in the latest versions of After Effects, that might be helpful, too.
Generally speaking, this is still a lot of work, and a lot of fiddling to get it to look "just so". But doing this visually, with manual mouse controls and instant playback before exporting image sequences from AE, will be light years faster than attempting to do it with joints and code in SpriteKit or any other game engine.
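On the SpriteKit side, playing the exported frame sequences is straightforward; here is a minimal sketch (the atlas and texture names are assumptions):

```swift
import SpriteKit

// Sketch: play a frame sequence exported from After Effects as a looping animation.
let atlas = SKTextureAtlas(named: "ArmWave")                         // hypothetical atlas of exported frames
let frames = atlas.textureNames.sorted().map { atlas.textureNamed($0) }

let character = SKSpriteNode(texture: frames.first)
let wave = SKAction.animate(with: frames, timePerFrame: 1.0 / 24.0)
character.run(SKAction.repeatForever(wave), withKey: "wave")

// Jumping to another sequence is just removing this action and running a new one:
// character.removeAction(forKey: "wave")
```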
I want to make a 2-player, split-screen mode, like Tiny Wings HD did, where each side of an iPad gets a flipped-orientation view of the current level.
I wanted to also implement it on tvOS (without the flipped orientation), as I feel TV begs for this sort of gameplay; it's pretty classic to have this style of gameplay on a TV (e.g. Mario Kart 64 or GoldenEye).
Over on the Apple Developer forum, someone suggested that it could be done as follows, but there were no other responses.
"You can have two views attached to the main window (add a subview in your viewcontroller). To both views you can present a copy of the scene. Then you can exchange game data between scenes via singletons."
I was looking for a more in-depth explanation as I don't exactly understand what the answer is saying.
I'd just like to be able to have two cameras both rendering the same scene but one focusing on player 1 and the other player 2.
Obviously this isn't a simple answer, so I don't expect a full in-depth tutorial.
Unfortunately I could find no info on this.
Has anyone tried this?
A sample project would be ideal or some documentation/links that might help.
I'm sure a demonstration of this would be valuable to quite a lot of people.
No Cameras involved or necessary
The players just look like they're moving along the x-axis because the backgrounds are scrolling by. You can allow the players to move up and down on the y-axis, whether jumping, ducking, rolling, or following a path like in Tiny Wings, but each player never leaves their x position. You can even have each half of the screen's background scrolling at a different speed to show that one player is moving faster than the other.
In your scene's update method you can scroll your backgrounds, and in your touches methods you can make the players jump, duck, etc.
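A minimal sketch of that idea (image names, speeds, and positions are assumptions): each half of the screen scrolls its own background strip at its own speed, so the two players appear to move at different rates without any cameras.

```swift
import SpriteKit

class SplitScreenScene: SKScene {
    let topBackground = SKSpriteNode(imageNamed: "hills")      // player 1's half
    let bottomBackground = SKSpriteNode(imageNamed: "hills")   // player 2's half
    var player1Speed: CGFloat = 3.0
    var player2Speed: CGFloat = 4.5

    override func didMove(to view: SKView) {
        topBackground.position = CGPoint(x: size.width / 2, y: size.height * 0.75)
        bottomBackground.position = CGPoint(x: size.width / 2, y: size.height * 0.25)
        addChild(topBackground)
        addChild(bottomBackground)
    }

    override func update(_ currentTime: TimeInterval) {
        // Scroll each strip left at its player's speed; reset once it has moved off-screen
        // (a real implementation would tile two copies per strip to avoid a visible jump).
        topBackground.position.x -= player1Speed
        bottomBackground.position.x -= player2Speed
        if topBackground.position.x <= -topBackground.size.width / 2 {
            topBackground.position.x = size.width + topBackground.size.width / 2
        }
        if bottomBackground.position.x <= -bottomBackground.size.width / 2 {
            bottomBackground.position.x = size.width + bottomBackground.size.width / 2
        }
    }
}
```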
Instead of using an SKView to present an SKScene, you can use SKRenderer and MTKView. SKRenderer renders a scene into a Metal pipeline, which in turn can be presented by an MTKView.
Crucially, you can decide if SKRenderer updates the scene, allowing you to render the same scene frame multiple times (possibly using different cameras).
So a pipeline might look like this:
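Here is a hedged sketch of that pipeline (the class, property names, and the two-viewport split are my own assumptions, not Apple's sample code): one SKRenderer advances the scene once per frame and then renders it twice into the MTKView's drawable, once per player's half of the screen.

```swift
import SpriteKit
import MetalKit
import QuartzCore

class SplitScreenRenderer: NSObject, MTKViewDelegate {
    let device = MTLCreateSystemDefaultDevice()!
    lazy var commandQueue = device.makeCommandQueue()!
    lazy var renderer: SKRenderer = {
        let r = SKRenderer(device: device)
        r.scene = SKScene(fileNamed: "GameScene")   // hypothetical scene file
        return r
    }()

    func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {}

    func draw(in view: MTKView) {
        guard let descriptor = view.currentRenderPassDescriptor,
              let drawable = view.currentDrawable,
              let commandBuffer = commandQueue.makeCommandBuffer() else { return }

        // Advance the simulation once...
        renderer.update(atTime: CACurrentMediaTime())

        // ...then render it twice: top half for player 1, bottom half for player 2.
        // (Point the scene's camera at the corresponding player before each pass.)
        let size = view.drawableSize
        let topHalf = CGRect(x: 0, y: 0, width: size.width, height: size.height / 2)
        let bottomHalf = CGRect(x: 0, y: size.height / 2, width: size.width, height: size.height / 2)

        renderer.render(withViewport: topHalf, commandBuffer: commandBuffer, renderPassDescriptor: descriptor)
        descriptor.colorAttachments[0].loadAction = .load   // don't clear player 1's half on the second pass
        renderer.render(withViewport: bottomHalf, commandBuffer: commandBuffer, renderPassDescriptor: descriptor)

        commandBuffer.present(drawable)
        commandBuffer.commit()
    }
}
```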
Apple actually talks about this option in Choosing a SpriteKit Scene Renderer. There's also a section about using SKRenderer in Going Beyond 2D with SpriteKit from WWDC 2017, which is quite helpful. This answer also shows how to use SKRenderer (albeit in Objective-C).