I am working on a Core Audio app using Audio Units. Performance is important, with render callbacks servicing tens of thousands of audio samples per second. I already know that the processor isn't perfectly emulated in the Simulator (mach_timebase_info in the sim returns a numerator and denominator that match the values from my laptop's Core 2 Duo chip), so it's reasonable to expect the performance to be different too.
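For reference, here's a minimal sketch of the check I mean (mach_timebase_info is the real Mach API; the printout is just illustrative):

import Darwin

var info = mach_timebase_info_data_t()
mach_timebase_info(&info)
// On a device the numer/denom ratio converts Mach ticks to nanoseconds;
// in the Simulator it reflects the host Mac's clock instead.
print("timebase: \(info.numer)/\(info.denom)")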
Should I expect the Simulator to run slower or faster than an iPad 2?
Does the simulator emulate the dual-core A5, or the older single-core chip from the iPad 1? (The device list offers only iPad, iPhone, and Retina iPhone.)
Does it (horror) just expose whatever chip is in my computer to iOS, meaning my simulated app could have as many cores available as my host machine?
Obviously I do my testing and profiling on the iPad itself. However, for those moments when I'm on a plane, or coding in my lunch break, or my wife is watching Netflix and I can't use the iPad, I'd like to know whether I'm getting optimistic or pessimistic performance from the simulator.
Performance of the Simulator does not relate to the performance of a device; you can never compare them in any meaningful way.
Some parts of your application may be significantly slower on a device, some will be significantly faster.
In my experience the simulator is faster than the device (this varies with your host processor). And since the binary is built for your host's architecture, my guess would be that it directly exposes the host's processors (but I can't confirm it).
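One quick way to test that guess (a minimal sketch; ProcessInfo is the real Foundation API): run this in the Simulator and on the device and compare the numbers.

import Foundation

// In the Simulator this should report the host Mac's cores and RAM;
// on an iPad 2 it should report 2 cores and 512 MB.
let info = ProcessInfo.processInfo
print("active cores: \(info.activeProcessorCount)")
print("physical memory: \(info.physicalMemory / 1_048_576) MB")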
Obviously CPU performance will be different, and the Simulator pretty much never runs out of memory.
Another difference I've noticed is faster disk performance on a device, due to its solid state disk.
So yeah, performance varies, and if it matters to you, you must test on a device.
My app uses CoreML to run a neural network (CoreML uses the CPU for most of the layers). At first performance is very good, but after about 30 seconds it slows down (the fps drops significantly). I profiled it and found that iOS schedules my app on the performance cores at the beginning; after about 30 seconds it stops using the performance cores and starts using the efficiency cores, whose clock frequency is lower. You can see this in the screenshot.
The QoS of my queue is .userInitiated. I also tried .userInteractive, but it doesn't change anything.
I assume this is a core-scheduling feature of iOS, but I cannot find any information about it.
Is there documentation that describes this behavior? Can I make iOS use the performance cores for my app all the time?
I'm using an iPhone XR with iOS 14.7.1.
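For reference, this is roughly how my queue is set up (a minimal sketch; the label and the prediction call are placeholders):

import Dispatch

// QoS is only a scheduling hint; it does not pin work to specific cores.
let inferenceQueue = DispatchQueue(label: "com.example.inference",
                                   qos: .userInitiated)
inferenceQueue.async {
    // run the CoreML prediction here
}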
All over this document, Apple mentions that iOS terminates apps under certain conditions, and the most common reason seems to be freeing up RAM. That causes problems for apps that do not implement state restoration: content the user was working on and stepped away from for a moment can easily be erased. There's even a 16-page thread on the Apple forums where users complain about this.
Does anyone know why iOS actually terminates apps instead of moving the memory they occupy to disk (swap)?
Does termination actually provide a considerable performance improvement compared to the alternatives?
What you are describing is paging, or more accurately, swapping. The iOS version of BSD Unix does not swap memory out to disk, for lots of reasons. Here are a few educated guesses:
It's too power-hungry for a mobile device.
Flash memory can't handle the churn involved in paging. Flash memory has a limited number of lifetime write cycles per storage location, and paging would chew through the life of the flash chip.
As the other poster pointed out, swapping to disk would use up available disk space, which is also limited. That's not a problem when you have a 500 GB drive, but it is a big problem on a device with only 16 GB of flash storage and 1 GB of RAM.
You're not going to get an answer to this question here. Apple doesn't explain the inner workings of iOS, and anything else is going to be guesswork.
Here's my guesswork:
iOS is a heavily resource-constrained environment. Memory is limited, but so is disk space - a 16 GB iPhone has 1 GB of RAM, so "swapping to disk" isn't something that can be applied freely. Where do you stop? And how do you know this isn't already being done, with only a limited amount of swap in place?
The primary goal of iOS has always been to prioritise responsiveness of the foreground app. Anything other than warning, then closing, background apps would probably impact this too much. If there are 15 apps in the background, imagine the processor load of cleanly swapping out the memory of each process.
Because memory swapped out to disk would be much slower to access. It's better to kill the program than to have it run poorly. I think that answers both questions.
Thanks, everybody, for the responses. I had to do some research to answer this question myself, though, since I was looking for a deeper understanding of what leads to "app termination" decisions. I know there are smart people working at Apple, but it always helps me to understand why something is built "this way" rather than just accepting it.
It came down to these two questions:
Why does iOS terminate apps instead of freeing memory by paging out (swapping)?
Does termination provide a considerable performance win?
To understand that, I dug a bit into the history of the iPhone. There was a video accessible on iTunes, but unfortunately the link no longer works. Anyway, the video introduced the very first version of multitasking on the iPhone 3G (or was it the 3GS? I'm not sure which device first supported multitasking).
Nowadays iPhone hardware is quite advanced; current devices are actually more powerful than some desktops from 7-10 years ago, which had long since incorporated swapping. But the first iPhone releases were not nearly so advanced: the iPhone 3G had a 620 MHz ARM CPU and 128 MB of RAM, and the first-generation iPod touch had a 400 MHz ARM. And multitasking was supposed to run on all the devices of that time.
iOS has always prioritised the smoothness of its animations, and looking at that hardware, it would have been challenging to keep the device snappy and responsive while also swapping out background applications' memory, so terminating apps seems very logical and fair. A year or two later, Apple provided APIs to facilitate implementing state restoration.
But current iPhones and iPads do have enough power not to terminate apps and to simply write their memory to disk without any hit to animations or foreground app performance. Why not add that on the latest devices? I assume this is common across the software industry: new features are often prioritised over improvements to existing workflows. Apple has been busy releasing MobileMe, Retina display support, Auto Layout, and iCloud, so I can understand that improvements to already-existing features were sacrificed.
The issue with apps that don't provide state restoration is easily solved by providing state restoration.
Just killing apps when the system runs out of memory is a huge performance gain. Consider that the system usually runs out of memory when you launch another app, and anything done instead of killing old apps would have to be done before launching the new app; that's about the most performance-critical point in time.
And for at least five years you have been told that when your app goes to the background, you should store just enough state to come back to that state if your app is restarted.
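A minimal sketch of that advice (assuming a UIKit app delegate; draftText is a hypothetical piece of in-progress user state):

import UIKit

class AppDelegate: UIResponder, UIApplicationDelegate {
    var draftText = "" // hypothetical state the user would hate to lose

    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        // Restore the snapshot if we were terminated while backgrounded.
        draftText = UserDefaults.standard.string(forKey: "draftText") ?? ""
        return true
    }

    func applicationDidEnterBackground(_ application: UIApplication) {
        // Persist just enough to recreate the user's context if iOS kills us.
        UserDefaults.standard.set(draftText, forKey: "draftText")
    }
}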
I am trying to decide whether I should preload all of my textures on a loading screen in my game, but I don't know how much memory I can use for this. Looking around the web, I found a claim that you can preload all of your textures as long as the total is 80 MB or under. If this is correct, does that mean 80 MB on all iOS devices (iPhone 3GS and up)?
Only the system knows
Ultimately, this question is all about memory. Apple doesn't care what you are doing with the memory, they just care how much you are using.
There is no hard-set limit on how much memory you can use on device X and up. The system (iOS) decides that.
If you are using too much, the system will send you a memory warning. If your memory usage grows, the system will begin to kill background tasks - like music, etc.
If you continue using too much, it will kill your app.
This differs between devices. For example, the 3GS has 256 MB of RAM, the 4 and above have 512 MB, and future devices may have more. Adjust accordingly.
So, test your app, watch for memory warnings, and optimize memory usage!
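A minimal sketch of that workflow (assuming a UIKit view controller; the texture cache is hypothetical):

import UIKit

class GameViewController: UIViewController {
    var textureCache: [String: Any] = [:] // hypothetical preloaded textures

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // The system says we're using too much: drop anything that can be
        // reloaded later, before iOS starts killing background apps (or us).
        textureCache.removeAll()
    }
}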
What coding tricks, compilation flags, or software-architecture considerations can be applied to keep power consumption low in an AIR for iOS application (or to reduce power consumption in an existing application that drains the battery too quickly)?
One of the biggest things you can do is adjust the frame rate based on the app's state.
You can do this by adding handlers inside your App.mxml:
<s:ViewNavigatorApplication xmlns:fx="http://ns.adobe.com/mxml/2009"
                            xmlns:s="library://ns.adobe.com/flex/spark"
                            activate="onActivate()" deactivate="onDeactivate()"/>
Inside your onActivate() and onDeactivate() methods:
import mx.core.FlexGlobals;

private function onActivate():void {
    FlexGlobals.topLevelApplication.frameRate = 24; // full speed in the foreground
}
private function onDeactivate():void {
    FlexGlobals.topLevelApplication.frameRate = 2; // throttle while backgrounded
}
You can also play around with these numbers depending on what your app is currently doing. If you're just displaying text, try lowering your fps further; this should give you the most bang for the buck in power savings.
Generally, high power consumption can be the result of:
intensive network usage
keeping the display awake while the app is idle
unneeded location services usage
continuously high CPU usage
Regarding (Flex/Flash) AIR, I would suggest the following:
First, use the Flex profiler plus the task manager to monitor CPU and memory usage, and try to reduce both as much as possible. Once these are low on Windows/Mac, they should (theoretically) be lower on mobile devices too.
The next step is to use a network monitor and reduce the number and size of network (web service) calls. Try to identify unneeded network activity and eliminate it.
Try to detect any idle state of the app (possible in Flex, not sure about plain Flash) and put the whole app into an idle mode (if you have a fireworks animation running, just call stop()).
I'm not sure about this last one, but using Stage3D (available for mobile as of AIR 3.2) for complex animations will shift work from the CPU to the GPU. Hardware acceleration may reduce execution time, so power consumption may be lower.
If I'm wrong about something, please comment/downvote (as you like), but this is my personal impression.
Update 1
As prompted in the comments, there is not a 100% correlation between CPU usage on a desktop and on a mobile device, but theoretically, at a low level, we should see at least the same CPU usage trend.
My tips:
Profile your App in a first step with the profiler from the Flash Builder
If you have a Mac, profile your app with Instruments from Xcode
And important:
the behaviors of the Simulator, IPA-Interpreter packages, and IPA-Test builds are different:
Simulator - pro forma optimizations
IPA-Interpreter - gives a feel for the performance
IPA-Test - "real" performance behavior
And finally, test the App Store build; it is the fastest packaging mode in terms of performance.
Additionally, we have seen that all of these modes can vary.
I have an application written in C++ that someone else has written in a way that's supposed to maximally take advantage of cpu caches. This application runs on a guest Ubuntu OS that is using paravirtualization. I ran cachegrind and received very low cache miss rates.
Since my OS is virtualized, can I be sure that these values are in fact correct in showing that the cpu cache is being well used for my application?
Cachegrind is a simulator. A real CPU may actually perform differently (e.g. your real CPU may have a different cache hierarchy from Cachegrind's, different cache sizes, a different replacement policy, and so forth). You would need to watch the real CPU's performance counters to know for sure how well your program performs on real hardware with respect to the cache.
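For example (a sketch; the cache parameters and ./myapp are placeholders you would adapt to your real CPU and binary), you can make Cachegrind's simulated hierarchy match your hardware, and then cross-check against real hardware counters where the hypervisor exposes the PMU:

# Make cachegrind simulate your CPU's caches: size,associativity,line size
valgrind --tool=cachegrind --I1=32768,8,64 --D1=32768,8,64 --LL=8388608,16,64 ./myapp

# On real hardware (and only if the virtualized guest exposes the PMU),
# read the actual counters:
perf stat -e cache-references,cache-misses ./myapp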