I've finished developing a card game called Up and Down the River in SpriteKit. It is a fairly simple card game with a few animations such as the action of dealing and playing a card.
According to the debugger tools, energy utilization is generally Very High, and it averages nearly 170 wakes per second. (Shown below.)
What is typical for a SpriteKit game? Should a simple card game be using this much energy? If not, what should I be looking for in order to reduce the energy usage?
Note: This is being run on macOS, but the game is cross-platform (iOS and macOS). I get similar results when running on an iOS device.
When SpriteKit is running it is constantly updating the screen (usually at 60 frames per second).
If you do not need this high a rate, you can reduce it to 30, 20, or fewer frames per second by setting preferredFramesPerSecond on the SKView; see https://developer.apple.com/reference/spritekit/skview
If your game is completely static while waiting for user input you can even set isPaused on the SKView to stop updates completely while you are waiting.
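A minimal sketch of both suggestions, assuming `skView` is the SKView hosting the card game (the function names here are just for illustration):

```swift
import SpriteKit

// Cap the render loop below 60 fps; card animations still read fine at 30 fps
// and the app wakes the CPU/GPU far less often.
func configureForLowEnergy(_ skView: SKView) {
    skView.preferredFramesPerSecond = 30
}

// Stop rendering entirely while waiting for the player to act,
// and resume when the next deal or play animation needs to run.
func setWaitingForInput(_ waiting: Bool, on skView: SKView) {
    skView.isPaused = waiting
}
```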
Related
I have been running a simple many-body (800 bodies, 2x2 pixels each) SpriteKit simulation using the PhysicsWorld under iOS 11 / Swift 4 - a rudimentary particle system.
It contains a simple radial gravity node, with 800 objects orbiting it with no collision or contact detection employed.
On my test hardware, an iPhone 6S, I have been really impressed to attain 60 FPS and a super smooth simulation, which opens the way for much more exploration.
The only problem I have been running up against is that the performance is really inconsistent.
If I switch the gravity node's field strength to a high value mid-run (meaning objects move around a lot faster), then in about 25% of runs the simulation still gets 60 FPS with no problems, but in the other 3 out of 4 runs the frame rate drops to 3-4 FPS.
If I revert to a low field strength (slower movements), then it always gets 60 FPS again.
I have profiled in Xcode and can see that sometimes 3 or 4 threads are given to the app, yielding 60 FPS, but at other times only a single thread is given to the task, meaning gnarly performance.
I have experimented with Grand Central Dispatch and thread priorities, to no avail; the only code running each frame is the PhysicsWorld anyway, nothing I'm doing frame to frame.
Any ideas out there on how I can get a consistent 60 FPS? Clearly the hardware is capable of it, but only when it feels like it!
Additional note: if the device is plugged into the dev Mac / on charge / over USB / with Xcode running, etc., the performance is more often good, with all threads utilised. But running the exact same app, say the next day, standalone on the device (not attached to the dev machine), the performance is nearly always very poor. Major head-scratcher!
Performance issues with nodes are common when physics bodies are used. If you have used physics bodies, try the standard body shapes (circle, rectangle) instead of custom bodies traced from a path or texture. Hope this helps.
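To make the distinction concrete, here is a minimal sketch; the "particle" texture name and the sizes are placeholders, not anything from the question:

```swift
import SpriteKit

// Hypothetical sprite; "particle" is a placeholder texture name.
let particle = SKSpriteNode(imageNamed: "particle")
particle.size = CGSize(width: 2, height: 2)

// Cheap: a standard circular body, resolved with simple geometric tests.
particle.physicsBody = SKPhysicsBody(circleOfRadius: 1)

// Expensive: a body traced from the sprite's texture. SpriteKit derives a
// polygon outline from the alpha channel and has to test against that outline.
// particle.physicsBody = SKPhysicsBody(texture: particle.texture!,
//                                      size: particle.size)
```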
In my app, I have several simple scenes (each a single 80-segment sphere with a 500×1000 px texture, rotating once a minute) displaying at once. When I open the app, everything goes smoothly: I get a constant 120 fps with less than 50 MB of memory usage and around 30% CPU usage.
However, if I minimize the app and come back to it a minute later, or just stop interacting with the app for a while, the scenes all lag terribly at around 4 fps, despite Xcode reporting 30 fps, normal memory usage, and super low (~3%) CPU usage.
I get this behavior when testing on a real iPhone 7 running iOS 10.3.1, and I'm not sure whether this behavior exists on other devices or on the simulator.
Here is a sample project I pulled together to demonstrate this issue. (link here) Am I doing something wrong here? How can I make the scenes wake up and resume using as much CPU as they need to maintain a good frame rate?
I probably won't answer the question you've asked directly, but I can give you some points to think about.
I launched your demo app on my 6th-generation iPod touch (64-bit), running iOS 10.3.1, and it lags from the very beginning, at 2-3 FPS, for up to about a minute. Then after some time it starts to spin smoothly. The same happens after going to the background and back to the foreground. This could be explained by some caching of textures.
I resized one of the SCNViews so that it fills the screen (the other views stayed behind it) and set v4.showsStatistics = true.
And here is what I got:
As you can see, Metal flush takes about 18.3 ms per frame, and that is for only one SCNView.
According to this answer on Stack Overflow:
"So, if my interpretation is correct, that would mean that 'Metal flush' measures the time the CPU spends waiting on video memory to free up so it can push more data and request operations to the GPU."
So we might suspect that the problem lies in 4 different SCNViews working with the GPU simultaneously.
Let's check. Compared to the second point, I've deleted the 3 SCNViews behind and put the 3 planets from those views into the front one, so that one SCNView has 4 planets at once. And here is the screenshot:
As you can see, Metal flush now takes at most 5 ms, right from the beginning, and everything goes smoothly. Also, you may notice that the number of triangles (top-right figure) is four times what we can see in the first screenshot.
To sum up: just try to combine all the SCNNodes into one SCNView and you'll possibly get a speed-up.
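A rough sketch of that consolidation, assuming `mainView` is the SCNView you keep on screen and `otherScenes` holds the scenes whose views you remove (both names are placeholders):

```swift
import SceneKit

func consolidate(into mainView: SCNView, from otherScenes: [SCNScene]) {
    guard let targetScene = mainView.scene else { return }
    for scene in otherScenes {
        for node in scene.rootNode.childNodes {
            // Move each planet node into the single surviving scene;
            // reposition afterwards so the planets don't overlap.
            node.removeFromParentNode()
            targetScene.rootNode.addChildNode(node)
        }
    }
}
```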
So, I finally figured out a partially functional solution, even though it's not what I thought it would be.
The first thing I tried was to keep all the nodes in a single global scene as suggested by Sander's answer and set the delegate on one of the SCNViews as suggested in the second answer to this question. Maybe this used to work or it worked in a different context, but it didn't work for me.
How Sander ended up helping me was the use of the performance statistics, which I didn't know existed. I enabled them for one of my scenes, and something stood out to me about performance:
In the first few seconds of running, before the app hits the dramatic frame drops, the performance display read 240 fps. "Why is that?", I thought. Who would need 240 fps on a mobile phone with a 60 Hz display, especially when the SceneKit default is 60? Then it hit me: 60 × 4 = 240.
What I guess was happening is that each update in a single scene triggered a "Metal flush", meaning that each scene was being flushed 240 times per second. I would guess that this slowly fills the GPU buffer (or memory? I have no idea), and eventually SceneKit needs to start clearing it out, and 240 fps across 4 views is simply too much for it to keep up with (which explains why it initially gets good performance before dropping completely).
My solution (and this is why I said "partial solution") was to set preferredFramesPerSecond for each SCNView to 15, for a total of 60 (I can also get away with 30 on my phone, but I'm not sure that holds up on weaker devices). Unfortunately 15 fps is noticeably choppy, but way better than the terrible performance I was getting originally.
Maybe in the future Apple will enable independent refreshes per SCNView.
TL;DR: set preferredFramesPerSecond so that it sums to 60 across all of your SCNViews.
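A minimal sketch of that workaround, assuming `sceneViews` holds the four SCNViews (the name and the helper function are just for illustration):

```swift
import SceneKit

// Split a 60 Hz budget evenly across all of the views,
// e.g. 15 fps each for four views.
func capFrameRates(for sceneViews: [SCNView], totalBudget: Int = 60) {
    guard !sceneViews.isEmpty else { return }
    let perView = max(1, totalBudget / sceneViews.count)
    for view in sceneViews {
        view.preferredFramesPerSecond = perView
    }
}
```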
Many times, when I swipe iOS notifications down from the top and then go back to the app, the SceneKit frame rate drops to 40 fps. Sometimes it also happens after minimising and maximising the app. How come? Can this be overcome somehow? I tried pausing SceneKit when the app becomes inactive and resuming when it comes back, but this does not help.
I've noticed this too, in several different frameworks that use constant frame refreshing or are tied to CADisplayLink as part of their presentation process.
I suspect, but have never been able to confirm, that iOS polls the performance of frame-refreshing apps when they start, when they come out of inactive states, and when they're re-revealed after being obscured by system functions (like the notifications), and then throttles them to a guesstimated hiccup-free rate based on the polling results.
You can read more about my earlier thoughts on the matter here:
SpriteKit scene with low fps on start
and here:
Inconsistent SceneKit framerate
There are many comments on similar experiences in SpriteKit; you can read them here: https://forums.developer.apple.com/thread/14487
So it's not just SceneKit, and there's no apparent answer, and the problem varies in extent and nature between iOS versions, but has been lurking for many years.
I'm almost finished with my iOS game written in Swift + SpriteKit.
It's quite a simple game, 30-32 nodes at most. Only one thing has physics; the rest is a few animated clouds (around 6). The CPU usage is around 2-3%, with a max RAM usage of 75-80 MB.
On top of that, I also get frame drops when changing from one scene to another. Why could that be?
(I'm pre-loading all the textures and sounds during game init, and not on the scenes)
When I use the simulator (for the 5S up to the 6S Plus), I don't see any frame drops there. So that's weird. It looks like it's not my game but my iPhone 6S?
Now, I also have other games from different developers installed on the same device, and I frequently get random frame drops in them too: they lag for 2-3 seconds and then come back to 60 fps.
Does anyone know if this is something that started happening after a particular iOS update? I was even wondering whether this may be some kind of background service running that's killing my phone: Facebook, WhatsApp, Messenger, etc.
Is there any way I could possibly check on what's going on?
Could this be caused by newer versions of SpriteKit defaulting to the Metal render mode rather than OpenGL? For example, do your problems go away when PrefersOpenGL=YES is added to Info.plist? I covered a bit of this performance issue in my blog post about a SpriteKit repeat shader. Note that you should only be testing on an actual iOS device, not the simulator.
In image-processing applications, what is considered real time? Is 33 fps real time? Is 20 fps? If 33 and 20 fps are considered real time, then is 1 or 2 fps also real time?
Can anyone shed some light on this?
In my experience, it's a pretty vague term. Often, what is meant is that the algorithm will run at the rate of the source (e.g. a camera) supplying the images; however, I would prefer to state this explicitly ("the algorithm can process images at the frame rate of the camera").
Real time image processing = produce output simultaneously with the input.
The input may be 25 fps, but you may choose to process 1 of every 5 frames (that makes 5 fps processing) and your application is still real time (see the sketch after this list).
TV streaming software: all the frames are processed.
Security application where the input is CCTV security cams: you may choose to skip some frames to fit the performance budget.
3D game or simulation: fps changes depending on the current scene.
And they are all real time.
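As a rough illustration of the frame-skipping idea from the CCTV example (in Swift, with `processFrame` standing in for whatever per-frame analysis the application does):

```swift
// Process only every Nth frame from, say, a 25 fps source, so the analysis
// keeps up with the input even though it only runs at 5 fps.
struct FrameSkipper {
    let stride: Int            // e.g. 5 to process 1 of every 5 frames
    private var counter = 0

    mutating func shouldProcess() -> Bool {
        counter += 1
        if counter == stride {
            counter = 0
            return true
        }
        return false
    }
}

// Usage inside the capture callback (processFrame is hypothetical):
// var skipper = FrameSkipper(stride: 5)
// if skipper.shouldProcess() { processFrame(frame) }
```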
Strictly speaking, I would say real-time means that the application is generating images based on user input as it occurs, e.g. a mouse movement which changes the facing of an avatar.
How successful it is at this task - 1 fps, 10 fps, 100 fps, etc - is actually another question.
Real-time describes an approach, not a performance metric.
If, however, you ask what the slowest fps is that still passes as usable by a human, the answer is about 15, I think.
I think it depends on what the real-time application is. If the app is showing a slideshow with 1 picture every 3 seconds, and the app can process a picture within those 3 seconds and show it, then it is real-time processing.
If the movie is 29.97 frames per second, and the app can process all 29.97 frames within the second, then it is also real time.
For example, if an app can take the movie from a VCR's or cable box's analog output, compress it into 29.97-frames-per-second video, and also send all that data to a remote location for another person to watch, then it is real-time processing.
(Hard) Real time is when an outcome has no value when delivered too early or too late.
Any FPS is real time provided that displayed frames represent what should be displayed at the very instant they are displayed.
The notion of real-time display is not really tied to a specific frame rate - it could be defined as the minimum frame rate at which movement is perceived as being continuous. So for slow moving objects in a visual frame (e.g. ships in a harbour, or stars in the night sky) a relatively slow frame rate might suffice, whereas for rapid movement (e.g. a racing car simulator) a much higher frame rate would be needed.
There is also a secondary consideration of latency. A real-time display must have sufficiently low latency in relation to other events (e.g. behaviour of a real-time simulation) that there is no perceptible lag in display updates.
That's not actually an easy question (even without taking into account differences between individuals).
Wikipedia has a good article explaining why. For what it's worth, I think cinema films run at 24 fps, so if you're happy with that, that's what I'd consider real time.
It depends on what exactly you are trying to do. For some purposes 1 fps, or even 2 spf (seconds per frame), could be considered real time. For others that's way too slow...
That said, real-time means that it takes as long (or less) to process x frames as it would take to just present those x frames.
It depends.
Automatic aircraft cannon: 1000 fps
Monitoring: 10-15 fps
Authentication: 1 fps
Medical devices: 1 fph (one frame per hour)
I guess the term is used with different meanings in different contexts. In industrial image processing, real-time processing is usually the opposite of offline processing. In offline processing applications, you record images (many of them) and process them at a later time. In real-time processing, the system that acquires the images also processes them, at the same time, so the processing must keep pace with the acquisition frame rate.
Real-time means your implementation is fast enough to meet some deadline. The deadline is part of your system's specification. If it's an interactive UI and the users are not too picky, a 15 Hz update can be OK, although it can feel laggy. If you're using it to drive a car along the motorway, 30 Hz is about right. If it's a missile, well, maybe 100 Hz?
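One way to make the deadline idea concrete is to time each frame against its budget; a minimal sketch, with `process` standing in for the real image-processing work:

```swift
import Foundation

// Deadline for a 30 Hz system: each frame must finish within ~33 ms.
let deadline: TimeInterval = 1.0 / 30.0

func handleFrame(_ process: () -> Void) {
    let start = Date()
    process()                                          // the actual work
    let elapsed = Date().timeIntervalSince(start)
    if elapsed > deadline {
        // A hard real-time system would treat this late result as worthless.
        print("Missed deadline by \((elapsed - deadline) * 1000) ms")
    }
}
```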