I'm currently profiling a BlackBerry application to find out which parts of it drain the battery. I noticed that when I push one of my MainScreen instances, CPU utilization rises to 100%, and that the method GraphicsInternal.updateDisplayFromBackBuffer(...) accounts for up to 70% of the CPU. My question is: why does this method consume that amount of resources?
My application has many different components: an AR view, a separate SceneKit view, the UI, and underlying algorithms that run continuously.
I want to determine which area could be causing performance issues. Here's what Xcode gives me, under CPU:
Each thread appears to be re-scaled so that it fills the height of its own graph, so this view isn't useful for determining what's actually taking up resources.
And here's what Instruments gives me, under Time Profiler:
Here I can clearly see that some threads dominate, but it's impossible to identify any of these threads.
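For reference, giving dispatch queues and threads explicit labels should at least make them identifiable by name in Instruments; here is a minimal Swift sketch (the labels are made up):

    import Foundation

    // Labeled dispatch queues appear under their label in Instruments' thread/queue list.
    let renderQueue = DispatchQueue(label: "com.example.ar-render")       // hypothetical label
    let algorithmQueue = DispatchQueue(label: "com.example.algorithms")   // hypothetical label

    // Threads created manually can be named so they are distinguishable in a trace.
    let worker = Thread {
        Thread.current.name = "com.example.scenekit-worker"               // hypothetical name
        // ... long-running algorithm work ...
    }
    worker.start()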
FAST trace
SLOW trace
These two traces were captured a couple of minutes apart on an iPhone 6S+ on a more or less static menu. One completes in around 8ms, the other in around 14ms. The game will sit quite happily at 7-8ms render time and then, for no apparent reason, rise up to 14ms frame time for a while.
Bafflingly, individual draw calls take almost twice as long in the 'slow' trace as in the 'fast' trace, though I can see no difference between them. The first draw call in particular is noteworthy: a screen-filling quad with a very simple shader that completes in either 1.x ms or 2.x ms, with (so far as I can tell) identical GL settings in force both times.
I can load both traces into Xcode and hit 'analyse', and the results are totally reproducible: the slow trace is always slow, the fast trace is always fast, and I can't see what's different!
Notes: Yes, there are some redundant GL calls generated by our engine. They're the same in both traces, so they're not the focus of this investigation. And yes, the first two calls are a terrible way to achieve a full-screen fill; I've already talked to the designers :) Again: not the focus here because the question is why are those calls taking twice as long in one trace compared to the other.
The game will sit quite happily at 7-8ms render time and then, for no apparent reason, rise up to 14ms frame time for a while.
I think what you're seeing here is the dynamic energy saving on the device. If you're comfortably hitting your framerate target, then the OS can reduce the clock speed of the CPU/GPU to reduce energy consumption and heat generation.
Unfortunately, this behaviour makes it very difficult to profile performance. I'm not aware of any way to disable this for performance measuring, or to view the current CPU/GPU clock speed (or the number of cores that have been shut down) to confirm the cause of confusing measurements.
I've not looked at your traces, but it's possible that the one you think is slower is actually marginally faster, and comes in just under the threshold where the OS decides to halve the clock speeds to save energy.
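If you want to watch for this from inside the app, here is a rough sketch that logs frame-to-frame intervals with CADisplayLink. It only observes the symptom (the cadence changing); as noted above, it can't confirm the actual clock speeds:

    import UIKit

    // Logs the interval between display refreshes. A workload that jumps from
    // ~8 ms to ~14 ms while nothing changes on screen can show up here when
    // the device scales its clocks down.
    final class FrameTimeLogger {
        private var link: CADisplayLink?
        private var lastTimestamp: CFTimeInterval = 0

        func start() {
            link = CADisplayLink(target: self, selector: #selector(tick(_:)))
            link?.add(to: .main, forMode: .common)
        }

        @objc private func tick(_ link: CADisplayLink) {
            if lastTimestamp > 0 {
                let delta = (link.timestamp - lastTimestamp) * 1000.0
                print(String(format: "frame-to-frame: %.2f ms", delta))
            }
            lastTimestamp = link.timestamp
        }

        func stop() { link?.invalidate() }
    }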
What ranges could be considered Low, Medium, and High in memory usage?
As my app becomes more complex, I notice this number getting higher. I've been trying to use it as an indicator of how efficiently I'm coding, but I've realized I have no baseline to compare it against.
I'm also unsure how to interpret the total: Memory Usage may report 1024 MB for an iPhone/iPad, but obviously all of that memory can't go to the app.
You can get a pretty good overview from this SO question. It won't give you low/medium values, but if you know the limit you can keep your usage below it.
If you are near the limit in some view, override didReceiveMemoryWarning and dispose of resources accordingly.
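A minimal sketch of that override (the thumbnail cache here is just a stand-in for whatever you can rebuild on demand):

    import UIKit

    final class GalleryViewController: UIViewController {
        // Hypothetical cache of decoded images that can be recreated later.
        private var thumbnailCache: [String: UIImage] = [:]

        override func didReceiveMemoryWarning() {
            super.didReceiveMemoryWarning()
            // Throw away anything that can be rebuilt when it is next needed.
            thumbnailCache.removeAll()
        }
    }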
My advice is to always test on a device: the simulator needs a lot of memory simply because of its architecture, and its numbers aren't representative of real devices.
I use Instruments to trace the memory usage of some iOS apps, such as Path, Instagram, and Facebook.
I've found that when a foreground app takes too much memory, "Physical Memory Free" becomes very tight. When free memory reaches a lower bound, maybe 10 MB, the system starts to reduce background apps' memory; for example, Path progressively drops from about 70 MB to 40 MB.
I'm wondering how this happens, because my own app doesn't behave this way while in the background. What should I do to make my app reduce its memory footprint in the background when needed?
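One commonly suggested pattern (a sketch, not necessarily what those apps actually do) is to purge rebuildable caches when the app is backgrounded or a memory warning arrives:

    import UIKit

    // Sketch: a singleton that drops rebuildable caches on backgrounding or memory pressure.
    final class CacheManager: NSObject {
        static let shared = CacheManager()

        // Hypothetical cache of decoded images. NSCache already evicts under pressure,
        // but explicit purging keeps the background footprint small.
        private let imageCache = NSCache<NSString, UIImage>()

        private override init() {
            super.init()
            let center = NotificationCenter.default
            // Free rebuildable data when the app moves to the background...
            center.addObserver(self, selector: #selector(purge),
                               name: UIApplication.didEnterBackgroundNotification, object: nil)
            // ...and when the system signals memory pressure.
            center.addObserver(self, selector: #selector(purge),
                               name: UIApplication.didReceiveMemoryWarningNotification, object: nil)
        }

        @objc private func purge() {
            imageCache.removeAllObjects()
        }
    }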
We have a memory-intensive 3D app which is primarily targeted at iPad 2 and iPhone 4S, but it works on iPod Touch 4G and iPhone 3GS as well. We have found that the smaller memory footprint on the iPod Touch 4G, combined with the retina display, makes this platform more susceptible to out-of-memory errors. iOS5 also seems to have lowered the available memory somewhat.
It's relatively easy for us to lower the resolution of our 3D models based on the device we're running on, but we have to set that resolution before loading, and thus we cannot effectively lower it dynamically based on memory-pressure warnings from the OS.
We've tuned the memory usage based on trial and error, but we've found that devices that haven't been rebooted in a long time (e.g., months) have a lot less usable memory than devices which have been rebooted recently. (Even if you kill off all the running apps.)
I'm wondering what other iOS app developers use as their practical memory limit for iPod Touch 4G apps?
While keeping all the caveats that everyone is offering in mind, my personal general rule of thumb has been that in sensible weather you can expect to have around the following:
512MB device -> 200MB usable (iPhone 4-4S, iPad 2)
256MB device -> 100MB usable (iPhone 3GS, iPad, iPod Touch 3G-4G)
128MB device -> 50MB usable (iPhone 3G, iPod Touch 1G-2G)
And if you want to rigorously withstand insensible weather without otherwise making a point of being flexibly responsive with memory usage, you can halve those numbers, or even third them. But it will be fairly difficult to guarantee sterling reliability if you can't throw anything overboard when conditions become dire. It's more like a sliding scale of how much performance you're willing to throw away for how much reliability at that point.
In environment predictability terms, iOS is a lot more like the PC than a dedicated machine, for better and worse, with the added bonus of a drill sergeant for an OS.
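To turn the rule of thumb above into a number at runtime, here is a sketch that applies the roughly-40%-of-RAM figure (and halves it for headroom) to the installed memory; the factors are just the numbers above, not any official limit:

    import Foundation

    // Rule of thumb from above: roughly 40% of installed RAM is usable in "sensible
    // weather"; halve that if you want headroom for bad conditions.
    func roughMemoryBudgetBytes(conservative: Bool = true) -> UInt64 {
        let installed = ProcessInfo.processInfo.physicalMemory   // bytes of physical RAM
        let usable = installed * 2 / 5                           // ~40%
        return conservative ? usable / 2 : usable
    }

    print("Rough budget: \(roughMemoryBudgetBytes() / 1_048_576) MB")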
Recently I found this awesome tool for finding the maximum memory capacity available on any iOS device. It also shows the memory level at which you receive the low-memory warning.
Here is the link: https://github.com/Split82/iOSMemoryBudgetTest
It's hard to give an actual number because of all the external allocations the OS does on your behalf in UIKit and OpenGL. I try to keep my own allocations to around 30MB, with 50MB sort of my top end. I've pushed it as high as 90MB, but I got jettisoned a lot at that level so it's probably a bad idea unless the task using all that memory is very brief.
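If you want to keep an eye on where you sit relative to those figures at runtime, this is one way to read the app's physical footprint via the Mach task_info call (a sketch; treat the mapping to what the tools report as approximate):

    import Foundation

    // Returns the app's physical memory footprint in bytes, or nil on failure.
    func currentMemoryFootprint() -> UInt64? {
        var info = task_vm_info_data_t()
        var count = mach_msg_type_number_t(
            MemoryLayout<task_vm_info_data_t>.size / MemoryLayout<integer_t>.size)
        let kr = withUnsafeMutablePointer(to: &info) { ptr in
            ptr.withMemoryRebound(to: integer_t.self, capacity: Int(count)) {
                task_info(mach_task_self_, task_flavor_t(TASK_VM_INFO), $0, &count)
            }
        }
        return kr == KERN_SUCCESS ? info.phys_footprint : nil
    }

    // Example: print the footprint in megabytes.
    if let bytes = currentMemoryFootprint() {
        print("Footprint: \(bytes / 1_048_576) MB")
    }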
If you need to hack around your current problem you could just detect the problematic devices up front and turn down your graphics engine's resolution at startup. You can get exact device info or you could check for display scaling (retina) combined with number of processor cores and amount of RAM to determine what quality level to use.
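A sketch of that kind of startup check (the tier names and thresholds are made up for illustration):

    import UIKit

    enum QualityTier { case low, medium, high }

    // Pick a rendering quality tier from coarse hardware signals available at launch.
    func selectQualityTier() -> QualityTier {
        let ramGB = Double(ProcessInfo.processInfo.physicalMemory) / 1_073_741_824
        let cores = ProcessInfo.processInfo.activeProcessorCount
        let isRetina = UIScreen.main.scale >= 2.0

        if ramGB >= 1.0 && cores >= 2 { return .high }
        // Retina screens on low-RAM devices (the iPod Touch 4G case) need the most caution.
        if isRetina && ramGB < 0.5 { return .low }
        return .medium
    }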
I've had great success reducing my memory usage by using mapped files in place of loading data into RAM and you may want to give that a try if you have any large data allocations.
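A sketch of the mapped-file approach using Foundation's Data (the resource name is hypothetical):

    import Foundation

    // Mapping the file lets the OS page the bytes in and out on demand instead of
    // counting the whole blob against your resident memory up front.
    func loadMeshData(at url: URL) throws -> Data {
        return try Data(contentsOf: url, options: .mappedIfSafe)
    }

    // Usage: the returned Data reads like a normal buffer, but its pages are file-backed.
    // let meshURL = Bundle.main.url(forResource: "model", withExtension: "bin")!  // hypothetical resource
    // let mesh = try loadMeshData(at: meshURL)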
Also watch out for views/controls leaking from UIKit, as they consume a lot of memory and can lead to your app being jettisoned at seemingly random times. I had some code which leaked child views from several view controllers; eventually those leaks would chew through my app's memory, even though the app's reported memory usage didn't reflect the problem directly.
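For reference, a sketch of the standard child view controller add/remove sequence, which avoids keeping a child hierarchy alive by accident:

    import UIKit

    extension UIViewController {
        // Standard containment sequence: add the child, then its view, then notify it.
        func embed(_ child: UIViewController, in container: UIView) {
            addChild(child)
            container.addSubview(child.view)
            child.view.frame = container.bounds
            child.didMove(toParent: self)
        }

        // Removing only the view (and forgetting the controller) is an easy way
        // to keep the whole hierarchy alive longer than intended.
        func unembed(_ child: UIViewController) {
            child.willMove(toParent: nil)
            child.view.removeFromSuperview()
            child.removeFromParent()
        }
    }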