I am looking for tips on speeding up the Android emulator, especially when it is launched from MonoDroid.
Currently the emulator works, but it runs much more slowly than when we develop and debug with Eclipse.
Are you referring to app startup, or normal execution after app startup?
App startup will be slower (Mono for Android does quite a bit of work during app startup), but once things are running (and the methods have been JITed) performance should be reasonable (considering it's the emulator, perhaps not quite reasonable).
I can see frame rate drops and stuttering in my Flutter application while changing views in the app. I was wondering whether this is normal or not.
Also, as a note, I'm running the app in debug mode.
Hard to tell whether it is normal, a problem with Flutter, or a problem with your code.
It doesn't make much sense to evaluate performance in debug builds, because their performance characteristics are quite different from those of release builds.
If you actually want to evaluate performance characteristics, use a --release or --profile build.
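For example, with the standard Flutter CLI:

    flutter run --profile    # profile mode: near-release performance, profiling enabled
    flutter run --release    # release mode: full optimizations, no debug overhead

Profile mode is usually the right choice for investigating jank, since it keeps the performance tooling usable while running close to release speed.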
I'm scared to ask this question because it doesn't include specifics and doesn't have any code samples, but that's because I've encountered it on three entirely different apps that I've worked on in the past few weeks, and I'm thinking specific code might just cloud the issue.
Scoured the web and found no reference to the phenomenon I'm encountering, so I'm just going to throw this out there and hope someone else has seen the same thing:
The 'problem' is that all the iOS OpenGL apps I've built, to a man, run MUCH FASTER when I'm profiling them in Instruments than when they're running standalone. As in, a frame rate roughly twice as fast (jumping from, e.g., 30fps to 60fps). This is measured both with a code-timing loop and by watching the apps run. Instruments appears to be doing something magical.
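(By 'code-timing loop' I just mean a minimal per-frame measurement along these lines; this is an illustrative sketch, not my actual code:)

    #include <mach/mach_time.h>
    #include <stdio.h>

    /* Called once per rendered frame; converts mach ticks to wall time. */
    static void logFrameRate(void)
    {
        static mach_timebase_info_data_t timebase;
        static uint64_t lastFrame = 0;

        if (timebase.denom == 0)
            mach_timebase_info(&timebase);  /* ticks-to-nanoseconds ratio, fetched once */

        uint64_t now = mach_absolute_time();
        if (lastFrame != 0) {
            uint64_t ns = (now - lastFrame) * timebase.numer / timebase.denom;
            printf("frame: %.2f ms (%.1f fps)\n", ns / 1e6, 1e9 / (double)ns);
        }
        lastFrame = now;
    }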
This is on a device, not the iOS simulator.
If I profile my OpenGL apps on a device (specifically, an iPad 3 running iOS 5.1) via Instruments, the frame rate is just flat-out much, much faster than when running standalone. There appears to be no frame skipping or shenanigans like that. It simply does the same computation at around twice the speed.
Although I'm not including any code samples, just assume I'm doing the normal stuff. OpenGL ES 2.0, with VBOs and VAOs. Multithreading some computationally intensive code areas with dispatch queues/blocks. Nothing exotic or crazy.
I'd just like to know if anyone has experienced anything vaguely similar. If not, I'll just head back to my burrow and continue stabbing myself in the leg with a fork.
Could be that when you profile, a release build is used by default, whereas a debug build is used when you just hit Run.
I am working on a Core Audio app using Audio Units. Performance is important with render callbacks occurring tens of thousands of times per second. I already know that the processor isn't perfectly emulated in the simulator (mach_timebase_info in the sim returns numerators and denominators which match values from my laptop's Core 2 Duo chip), so it's reasonable to expect the performance to be different too.
Should I expect the Simulator to run slower or faster than an iPad 2?
Does the simulator emulate a dual-core A5, or the old single-core chip from the iPad 1? (The Device menu lists only iPad, iPhone, and Retina iPhone.)
Does it (horror) just expose whatever chip is in my computer to iOS, meaning I could have as many cores as my host computer available to my simulated app?
Obviously I do my testing and profiling on the iPad itself. However, for those moments when I'm on a plane, or coding in my lunch break, or my wife is watching Netflix and I can't use the iPad, I'd like to know whether I'm getting optimistic or pessimistic performance from the simulator.
Performance of the Simulator does not relate to the performance of a device; you can never meaningfully compare the two.
Some parts of your application may be significantly slower on a device, some will be significantly faster.
In my experience the simulator is faster than the device (this varies according to your host processor). And since the binary is built for your host processor's architecture, my guess would be that it directly exposes the host's processors (but I can't confirm it).
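If you'd rather check than guess, a small sketch like this (using the standard sysctlbyname API) prints the core count your process actually sees; run it in the Simulator and on a device and compare:

    #include <stdio.h>
    #include <sys/sysctl.h>

    int main(void)
    {
        int ncpu = 0;
        size_t len = sizeof(ncpu);

        /* "hw.ncpu" reports the logical cores visible to this process;
           in the Simulator this should reflect the host Mac, not a device. */
        if (sysctlbyname("hw.ncpu", &ncpu, &len, NULL, 0) == 0)
            printf("logical cores: %d\n", ncpu);
        return 0;
    }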
Obviously CPU performance will be different, and the Simulator pretty much never runs out of memory.
Another difference I've noticed is faster disk performance on a device, due to its solid state disk.
So yeah, performance varies, and if it matters to you, you must test on a device.
I've just kind of been 'winging' it with long tests (for hours) with no crashes, and eyeballing my code for quite a while, and everything looks pretty kosher as far as memory leaks go. But should I be using Instruments... is it mandatory to do this before uploading to the App Store?
I think that using Instruments is not only good practice, it's strongly recommended by the iOS development community as a whole. Even if your app seems to run fine, you still may have leaks in other use cases. Test your app thoroughly with Instruments before pushing to the App Store or you may be in for a lot of users on older generation devices complaining that the app crashes.
Some of the most crucial tools:
Leaks
Allocations
Time Profiler
Another suggestion alongside using Instruments is to compile with the -pedantic flag.
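For instance, building a single file from the command line with that flag (plus the usual warning flags) would look like this; in Xcode the equivalent goes in the build settings:

    clang -Wall -Wextra -pedantic -c MyClass.m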
In addition to what Yuji said, turn on as many warnings as you can in the build settings; by default, most of these are off.
No.
But at least run "Build & Analyze" in Xcode. It reports whatever memory leaks it can find just by analyzing the source code statically. It's basically having the machine eyeball the code for you, and it's infinitely better than doing that yourself. If there are any warnings issued, fix all of them. It's rare for the static analyzer to give false positives.
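To illustrate the kind of mistake it catches, here's a contrived C-level sketch (the same pattern applies to a missing release in Objective-C):

    #include <stdlib.h>
    #include <string.h>

    void greet(const char *name)
    {
        char *copy = malloc(strlen(name) + 1);
        if (copy == NULL)
            return;
        strcpy(copy, name);
        /* ... use copy ... */
        /* missing free(copy): the analyzer reports a potential leak here */
    }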
Also, running your app under Instruments is helpful to see how it really allocates memory. Sometimes it's fun, too.
I would never publish an app without running Instruments' Leaks tool.
I often miss a release somewhere. And even if I read the code 200 times I would not find it without Instruments.
I am creating apps for the iPad and it's driving me crazy.
The memory that is usable by an app changes depending on what other apps were run before it.
There is no reliable set amount of memory that can be used by your app.
E.g., if Safari is run, then even after it closes it still takes up some amount of memory, which affects other apps.
Does anyone know if there is a way to clear the memory before my app runs so I can get the same running environment every time?
I have created several prototype apps to show to other people and it seems like after a few days they always come back to me and tell me that it crashes and to fix it.
When I test it, the reason is always that there is not enough memory (even though there was enough when I was testing earlier). So I need to squeeze every bit of memory out of the app (which usually hurts performance due to heavy loading and releasing) and tell them to restart their iPad if it keeps happening.
I read in a book that apps can generally use at most 40 MB or so; most of the apps that crash are crashing at around 27 MB. I want my remaining 13 MB!!
While you will get a pretty good state after a reboot, what you really should look for is clean memory management and avoiding leaks.
Using the available memory wisely is solely up to the programmer. Don't tell your users to reboot the device, ever. And with every OS update, memory behavior might change anyway.