iOS: Instruments, allocations peaking to a flat line

I'm trying to learn how to use Instruments, and I wonder if I can get some opinions or insight as to what is going on here.
Firstly, shortly after 02:00 my app crashes due to creating many high-resolution views. You can see the graph peak when the views are created: around 20-30 rendered views at approximately 1000 points each, I think.
My question is this: note how the graph peaks and then flattens at the end of the track (see the red arrow), starting just before 02:00 when the views are created. Does this mean that the device (an iPhone 5) has "run out of memory"? I see that "All Allocations" is listed as 17.76 MB. Could this be the reason for the crash, or is it the graphics crashing?

Does this mean that the device (an iPhone 5) has "run out of memory"?
No. The graph is scaled relative to the peak active memory used in that run. As an illustration: the allocation level shown at the 1-minute mark would likely "shrink" once you start creating all those views around the 2-minute mark, because the whole graph rescales to the new peak.

Related

6S+ GPU traces: cannot account for huge ms difference

FAST trace
SLOW trace
These two traces were captured a couple of minutes apart on an iPhone 6S+ on a more or less static menu. One completes in around 8 ms, the other in around 14 ms. The game will sit quite happily at a 7-8 ms render time and then, for no apparent reason, rise to a 14 ms frame time for a while.
Bafflingly, individual draw calls take almost twice as long in the 'slow' trace as in the 'fast' trace, though I can see no difference between them. The first draw call in particular is noteworthy: a screen-filling quad with a very simple shader that completes in either 1.x ms or 2.x ms, with (so far as I can tell) identical GL settings in force both times.
I can load both traces into Xcode and hit 'analyse', and the results are totally reproducible: the slow trace is always slow, the fast trace is always fast, and I can't see what's different!
Notes: Yes, there are some redundant GL calls generated by our engine. They're the same in both traces, so they're not the focus of this investigation. And yes, the first two calls are a terrible way to achieve a full-screen fill; I've already talked to the designers :) Again: not the focus here, because the question is why those calls take twice as long in one trace compared to the other.
The game will sit quite happily at a 7-8 ms render time and then, for no apparent reason, rise to a 14 ms frame time for a while.
I think what you're seeing here is the dynamic energy saving on the device. If you're comfortably hitting your framerate target, then the OS can reduce the clock speed of the CPU/GPU to reduce energy consumption and heat generation.
Unfortunately, this behaviour makes it very difficult to profile performance. I'm not aware of any way to disable this for performance measuring, or to view the current CPU/GPU clock speed (or the number of cores that have been shut down) to confirm the cause of confusing measurements.
I've not looked at your traces, but it's possible that the one you think is slower is actually marginally faster, and comes just under the threshold where the OS decides to halve the clock speeds to save energy.
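There's no public API I know of for reading the current clock speed, but you can at least log your own per-frame CPU cost over time and watch it drift. A minimal sketch, assuming you can wrap your engine's per-frame work in a closure (renderFrame() is a placeholder, not from the poster's engine):

import QuartzCore

// Times one unit of per-frame work so CPU-side cost drift (e.g. ~8 ms
// creeping up to ~14 ms on a static scene) becomes visible in the log.
func measureFrameWork(_ label: String, _ work: () -> Void) {
    let start = CACurrentMediaTime()
    work()
    let elapsedMs = (CACurrentMediaTime() - start) * 1000
    print(String(format: "%@ took %.2f ms", label, elapsedMs))
}

// Usage, inside the per-frame callback:
// measureFrameWork("render") { renderFrame() }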

SceneKit scenes lag when resuming app

In my app, I have several simple scenes (a single 80-segment sphere with a 500 px by 1000 px texture, rotating once a minute) displaying at once. When I open the app, everything goes smoothly: I get a constant 120 fps with less than 50 MB of memory usage and around 30% CPU usage.
However, if I minimize the app and come back to it a minute later, or just stop interacting with the app for a while, the scenes all lag terribly at around 4 fps, despite Xcode reporting 30 fps, normal memory usage, and super low (~3%) CPU usage.
I get this behavior when testing on a real iPhone 7 running iOS 10.3.1, and I'm not sure whether it also occurs on other devices or in the simulator.
Here is a sample project I pulled together to demonstrate the issue. (link here) Am I doing something wrong here? How can I make the scenes wake up and resume using as much CPU as they need to maintain a good frame rate?
I probably won't answer the question you've asked directly, but I can give you some points to think about.
I launched your demo app on my 6th-generation iPod touch (64-bit, iOS 10.3.1), and it lags from the very beginning, running at 2-3 FPS for about a minute. Then it starts to spin smoothly. The same happens after going to the background and back to the foreground. This can be explained by some caching of textures.
I resized one of the SCNViews so that it fills the screen, leaving the other views behind it, and set v4.showsStatistics = true.
Here is what I got:
As you can see, "Metal flush" takes about 18.3 ms for one frame, and that's for just one SCNView.
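For reference, that debugging step looks roughly like this in Swift; v4 is the SCNView named in the answer above, everything else is illustrative:

import SceneKit
import UIKit

// Bring one SCNView to the front, make it fill its container, and enable
// SceneKit's on-screen statistics (FPS, draw calls, "Metal flush" time).
func promoteForDebugging(_ scnView: SCNView, in container: UIView) {
    scnView.frame = container.bounds
    container.bringSubviewToFront(scnView)
    scnView.showsStatistics = true
}

// e.g. promoteForDebugging(v4, in: view)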
According to this answer on Stack Overflow:
So, if my interpretation is correct, that would mean that "Metal flush" measures the time the CPU spends waiting on video memory to free up so it can push more data and request operations to the GPU.
So we might suspect that the problem lies in four different SCNViews working with the GPU simultaneously.
Let's check that. Compared with the setup in point 2, I deleted the three SCNViews behind and moved their three planets into the front view, so that one SCNView shows all four planets at once. Here is the screenshot:
As you can see, "Metal flush" now takes at most about 5 ms, right from the beginning, and everything runs smoothly. You may also notice that the triangle count (top-right icon) is four times what we saw in the first screenshot.
To sum up, try combining all the SCNNodes into one SCNView; you may well get a speed-up.
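A minimal sketch of that consolidation, assuming the planets are plain SCNNodes (the function and the position offsets are illustrative, not from the sample project):

import SceneKit

// Reparent every planet node into a single SCNView's scene so only one
// view (and one render loop) talks to the GPU.
func consolidate(planets: [SCNNode], into mainView: SCNView) {
    let scene = mainView.scene ?? SCNScene()
    mainView.scene = scene
    for (index, planet) in planets.enumerated() {
        planet.removeFromParentNode()
        // Spread the planets out so they don't overlap; offsets are arbitrary.
        planet.position = SCNVector3(Float(index) * 3.0 - 4.5, 0, 0)
        scene.rootNode.addChildNode(planet)
    }
}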
So, I finally figured out a partially functional solution, even though it's not what I thought it would be.
The first thing I tried was to keep all the nodes in a single global scene, as suggested by Sander's answer, and to set the delegate on one of the SCNViews, as suggested in the second answer to this question. Maybe this used to work, or it worked in a different context, but it didn't work for me.
Where Sander's answer did end up helping me was the performance statistics, which I didn't know existed. I enabled them for one of my scenes, and something stood out to me about performance:
In the first few seconds of running, before the app got dramatic frame drops, the performance display read 240 fps. "Why is that?", I thought. Who would need 240 fps on a mobile phone with a 60 Hz display, especially when the SceneKit default is 60? Then it hit me: 60 * 4 = 240.
What I guess was happening is that each update in a single scene triggered a "Metal flush", meaning the scenes were being flushed 240 times per second in total. I would guess that this slowly fills the GPU buffer (or memory? I have no idea), eventually SceneKit needs to start clearing it out, and 240 fps across 4 views is simply too much for it to keep up with (which explains why performance is initially good before dropping completely).
My solution (and this is why I said "partially functional") was to set preferredFramesPerSecond on each SCNView to 15, for a total of 60. (I can also get away with 30 on my phone, but I'm not sure that holds up on weaker devices.) Unfortunately, 15 fps is noticeably choppy, but it's way better than the terrible performance I was getting originally.
Maybe in the future Apple will allow independent refresh rates per SCNView.
TL;DR: set preferredFramesPerSecond so that it sums to 60 across all of your SCNViews.
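A sketch of that workaround, assuming the SCNViews are collected in an array (the helper and its names are illustrative):

import SceneKit

// Split a 60 fps budget evenly across the visible SCNViews so their
// combined render rate sums to roughly 60 frames per second.
func capFrameRates(of views: [SCNView], totalBudget: Int = 60) {
    guard !views.isEmpty else { return }
    let perView = max(1, totalBudget / views.count)
    for view in views {
        view.preferredFramesPerSecond = perView // 15 when there are four views
    }
}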

Xcode Memory Graph - showing increasing memory use - what exactly does it show?

When watching the debug gauge in Xcode 6 (and probably 5 too) while running my application, the memory use continues to rise as I place more of a certain object on the screen and animate its movement. It does not seem to decrease when I remove the objects, and once they are removed, I believe there are no more references to them.
See screenshot:
http://i.stack.imgur.com/SnhbK.png
However, when I use Instruments to try to identify what's going on, there's only around 12 MB persisting, and Total Bytes continues to rise, as expected.
See screenshot:
http://i.stack.imgur.com/VBwce.png
Is this normal behaviour? What exactly is the graph in Xcode showing? Am I overlooking something?
In Instruments I have Allocation Lifespan set to All Allocations and Allocation Type set to All Heap and Anonymous VM for the screenshots above.
UPDATE
By running Instruments with the Activity Monitor instrument I was able to see that "Real Memory" was increasing at the same rate as displayed in Xcode. Mark Szymczyk pointed out that OpenGL ES texture memory allocations are not shown in the Allocations instrument.
By purging the texture cache at regular intervals with the following command in Cocos2D 3.1, memory use consistently drops back down to around 18 MB and begins increasing again as I add more sprites:
// Evict Cocos2D's cached textures and other cached data, releasing their memory
[[CCDirector sharedDirector] purgeCachedData];
Credits go to Mark Szymczyk for pointing me in this direction - thanks!
Looking at your screenshots, the Xcode graph is probably showing the equivalent of the Total Bytes column in your Instruments screenshot. When you remove an object, the persistent bytes will decrease, but the total bytes won't. That would explain why the memory use never goes down in the Xcode graph.
The Persistent Bytes column in Instruments is what you should be looking at to determine your app's memory usage.

Interpreting downward spikes in Time Profiler

I'd appreciate some help on how to interpret some results I get from Time Profiler and Activity Monitor. I couldn't find anything on this on the site, probably because the question is rather specific. However, I imagine I'm not the only one unsure what to read into the spikes they see in Time Profiler.
I'm trying to figure out why my game is having regular hiccups on the iPhone 4. I'm trying to run it at 60 FPS, so I know it's tricky on such an old device, but I know some other games manage that fine. I'm using Unity, but this is a more general question about interpreting Instruments results. I don't have enough reputation to post images, and I can only post two links, so I can't post everything I'd like.
Here is what I get running my game on Time Profiler:
Screenshot of Time Profiler running my game
As far as I understand (but please correct me if I'm wrong), this graph is showing how much CPU my game uses during each sample the Time Profiler takes (I've set the samples to be taken once per millisecond). As you can see, there are frequent downward spikes in that graph, which (based on looking at the game itself as it plays) coincide with the hiccups in the game.
Additionally, the spikes are more common while I touch the device, especially if I move my finger on it continuously (which is what I did while playing our game above). (I couldn't make a comparable non-touching version because my game requires touching, but see below for a comparison.)
What confuses me here is that the spikes are downward: if my code were inefficient, doing too many calculations on some frames, I'd expect to see upward spikes, not downward ones. So here are the theories I've managed to come up with:
1) The downward spikes represent something else stealing CPU time (a background task, the CPU's clock speed itself varying, or something similar). Because less time is available for my processing, I get hiccups, and it also shows up as my app using less CPU.
2) My code is in fact inefficient, causing spikes every now and then. Because the processing isn't finished within one frame, it continues into the next, where it only needs a little extra time. That means that on the second frame it uses less CPU, resulting in a downward spike. (It is my understanding that iOS frames are always of equal length, say 1/60 s, so the third frame cannot start early even if we spent just a little extra time on the second.)
3) This is just a sampling artifact, caused by the fact that the sampling frequency is 1 ms while the frame length is about 16 ms.
The first two theories would make sense to me, and would also explain why our game has hiccups while some lighter games don't: 1) lighter games would not suffer as badly from stolen CPU time, because they don't need that much CPU to begin with; 2) lighter games don't have as many spikes of their own.
However, some other tests seem to go against each of these theories:
1) If frames always got stolen like this, I'd expect similar spikes to appear in other games too. However, testing with another game (from the App Store, also made with Unity), I don't get them (I had an image to show this, but unfortunately I cannot post it).
Note: This game has lots of hiccups while running in the Time Profiler as well, so hiccups don't seem to always mean downward spikes.
2) To test the hypothesis that my app is simply spiking, I wrote a program (again in Unity) that wastes a consistent number of milliseconds per frame (by running a loop until the specified time has passed according to the system clock). Here's what I get in Time Profiler when I make it waste 8 ms per frame:
Screenshot of Time Profiler running my time waster app
As you can see, the downward spikes are still there, even though the app really shouldn't be able to cause spikes by itself. (You can also see the effect of touching here: I didn't touch the device during the first half of the visible graph and touched it continuously during the second.)
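For what it's worth, the busy-wait idea looks like this; the original was written in Unity/C#, so this Swift version is just an equivalent illustration:

import QuartzCore

// Burn CPU until the given number of milliseconds has elapsed, mimicking
// the "time waster" described above.
func wasteMilliseconds(_ ms: Double) {
    let deadline = CACurrentMediaTime() + ms / 1000.0
    while CACurrentMediaTime() < deadline {
        // spin
    }
}

// Call once per frame, e.g. wasteMilliseconds(8)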
3) If this were due to a mismatch between the frame rate and the sampling frequency, I'd expect a lot more oscillation. Surely my app would use 100% of the milliseconds until it finished a frame, then drop to zero?
So I'm pretty confused about what to make of this. I'd appreciate any insight you can provide into this, and if you can tell me how to fix it, all the better!
Best regards,
Tommi Horttana
Have you tried Unity's profiler? Does it show similar results? Note that Unity3D has two profilers on iOS:
editor profiler - Pro only (but there is a 30-day trial)
internal profiler - you have to enable it in the Xcode project's source
Look at http://docs.unity3d.com/Manual/MobileProfiling.html; maybe something there will give you a hint.
If I had to guess, I'd check one of the most common sources of timing hiccups: the Mono garbage collector.
Try running it yourself at a set frequency (like every 250 ms) and see if there is a difference in the pattern:
// Force an immediate Mono garbage-collection pass
System.GC.Collect();

Why is memory usage changing when no code is being executed? (AS3)

Hi there,
As you can see from the profiler graph, over the space of one minute the memory rises about 2 MB and then drops back down, only to rise again to the same point. This is on an almost blank screen, no code is running, and no new objects are being created. I've also noticed that on iOS the CPU usage rises and falls in a similar pattern, from 20% up to 70%.
Thanks for reading.
There can be many reasons. I recently had a similar situation where CPU usage was strangely high.
My debugging methodology was to comment out ALL code other than the boilerplate document-class constructor and slowly reintroduce variables, classes and methods (in blocks rather than one at a time!) until the issue reappeared.
In my particular case it turned out to be a network monitor class that I had set up incorrectly.
