ActionScript: performance of creating text fields and fills on iOS

I am curious to know what sort of performance I can expect from Air on iOS. I have a class which creates 30 display objects each with several text fields and a fill.
Each display object takes about 0.011 seconds to create on my PC. This rises to 0.056 seconds on an iPad Retina (A7).
From my debugging, it takes around 0.004 seconds to create and format a text field.
By the time I have created all 30 display objects, that 0.056 seconds apiece adds up to 1.68 seconds.
Is this typical?
Can anything be done?
I have traced each and every stage of the class, and every function takes about the same time to execute, so I do not think any one specific stage has an issue.
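For concreteness, each display object is built roughly along these lines, timed with getTimer() (a simplified sketch; field counts, sizes and names are illustrative, not my actual class):

import flash.display.Sprite;
import flash.text.TextField;
import flash.text.TextFormat;
import flash.utils.getTimer;

// Hypothetical micro-benchmark: one "card" = a fill plus several formatted text fields.
var start:int = getTimer();
var card:Sprite = new Sprite();
card.graphics.beginFill(0x336699); // the fill
card.graphics.drawRect(0, 0, 200, 100);
card.graphics.endFill();
for (var i:int = 0; i < 4; i++) { // several text fields
    var tf:TextField = new TextField();
    tf.defaultTextFormat = new TextFormat("_sans", 14); // set the format before the text
    tf.text = "Field " + i;
    tf.y = i * 24;
    card.addChild(tf);
}
addChild(card);
trace("one display object took", getTimer() - start, "ms");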

Welcome to the world of mobile devices. Compared with desktop PCs they all have very little CPU power, and Flash normally renders its content on the CPU.
The larger the area you need to redraw on screen, the worse the performance.
The more objects there are to draw, the worse the performance.
Shapes, strokes, fills, fonts: they are all vector data that Flash needs to rasterize and draw, so they take a heavy toll on the CPU, which also means heavy battery drain. That's why Apple declined to support the Flash plugin on its devices long ago.
Then Adobe announced Stage3D, which (with a certain amount of work) lets you take advantage of GPU rendering. That is faster even on desktop computers, and it literally saves the day for Flash/AIR applications on mobile devices.
Long story short, slow performance is the way things are for native Flash content. If you want better performance and faster applications on mobile devices, you need to move to a GPU-enabled framework such as Gamua's Starling.
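To give an idea of what that move involves, a minimal Starling bootstrap is sketched below (the root class name is illustrative; note that Stage3D also requires renderMode 'direct' in the AIR application descriptor):

import starling.core.Starling;

// GameRoot is a hypothetical starling.display.Sprite subclass containing your scene.
var starling:Starling = new Starling(GameRoot, stage);
starling.start(); // from here on, rendering goes through Stage3D on the GPU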

Related

6S+ GPU traces: cannot account for huge MS difference

FAST trace
SLOW trace
These two traces were captured a couple of minutes apart on an iPhone 6S+ on a more or less static menu. One completes in around 8ms, the other in around 14ms. The game will sit quite happily at 7-8ms render time and then, for no apparent reason, rise up to 14ms frame time for a while.
Bafflingly, individual draw calls take almost twice as long in the 'slow' trace as in the 'fast' trace, though I can see no difference between them. The first draw call in particular is noteworthy: a screen-filling quad with a very simple shader that completes in either 1.x ms or 2.x ms, with (so far as I can tell) identical GL settings in force both times.
I can load both traces into Xcode and hit 'analyse', and the results are totally reproducible: the slow trace is always slow, and the fast trace is always fast, and I can't see what's different!
Notes: Yes, there are some redundant GL calls generated by our engine. They're the same in both traces, so they're not the focus of this investigation. And yes, the first two calls are a terrible way to achieve a full-screen fill; I've already talked to the designers :) Again: not the focus here, because the question is why those calls take twice as long in one trace as in the other.
The game will sit quite happily at 7-8ms render time and then, for no apparent reason, rise up to 14ms frame time for a while.
I think what you're seeing here is the dynamic energy saving on the device. If you're comfortably hitting your framerate target, then the OS can reduce the clock speed of the CPU/GPU to reduce energy consumption and heat generation.
Unfortunately, this behaviour makes it very difficult to profile performance. I'm not aware of any way to disable this for performance measuring, or to view the current CPU/GPU clock speed (or the number of cores that have been shut down) to confirm the cause of confusing measurements.
I've not looked at your traces, but it's possible that the one you think is slower is actually marginally faster, and comes just under the threshold where the OS decides to halve the clock speeds to save energy.

Interpreting downward spikes in Time Profiler

I'd appreciate some help on how I should interpret some results I get from Time Profiler and Activity Monitor. I couldn't find anything on this on the site, probably because the question is rather specific. However, I imagine I'm not the only one not sure what to read into the spikes they get on the Time Profiler.
I'm trying to figure out why my game is having regular hiccups on the iPhone 4. I'm trying to run it at 60 FPS, so I know it's tricky on such an old device, but I know some other games manage that fine. I'm using Unity, but this is a more general question about interpreting Instruments results. I don't have enough reputation to post images, and I can only post two links, so I can't post everything I'd like.
Here is what I get running my game on Time Profiler:
Screenshot of Time Profiler running my game
As far as I understand (but please correct me if I'm wrong), this graph is showing how much CPU my game uses during each sample the Time Profiler takes (I've set the samples to be taken once per millisecond). As you can see, there are frequent downward spikes in that graph, which (based on looking at the game itself as it plays) coincide with the hiccups in the game.
Additionally, the spikes are more common while I touch the device, especially if I move my finger on it continuously (which is what I did while playing our game above). (I couldn't make a comparable non-touching version because my game requires touching, but see below for a comparison.)
What confuses me here is that the spikes are downward: If my code was inefficient, doing too many calculations on some frames, I'd expect to see upward spikes, not downward. So here are the theories I've managed to come up with:
1) The downward spikes represent something else stealing CPU time (like, a background task, or the CPU's speed itself varying, or something). Because less time is available for my processing, I get hiccups, and it also shows as my app using less CPU.
2) My code is in fact inefficient, causing spikes every now and then. Because the processing isn't finished within one frame, it continues into the next, but only needs a little extra time there. That means that on that second frame, it uses less CPU, resulting in a downward spike. (It is my understanding that iOS frames are always of equal length, say, 1/60 s, and so the third frame cannot start early even if we spent just a little extra time on the second.)
3) This is just a sampling problem, caused by the fact that the sampling frequency is 1ms while the frame length is about 16ms.
The first two theories would make sense to me, and would also explain why our game has hiccups while some lighter games don't. 1) Lighter games would not suffer as badly from stolen CPU time, because they don't need that much CPU to begin with. 2) Lighter games don't have as many spikes of their own.
However, some other tests seem to go against each of these theories:
1) If frames always get stolen like this, I'd expect similar spikes to appear in other games too. However, testing with another game (from the App Store, also made with Unity), I don't get them (I had an image to show this, but unfortunately I cannot post it).
Note: This game has lots of hiccups while running in the Time Profiler as well, so hiccups don't seem to always mean downward spikes.
2) To test the hypothesis that my app is simply spiking, I wrote a program (again in Unity) that wastes a fixed number of milliseconds per frame (by running a loop until the specified time has passed according to the system clock). Here's what I get in Time Profiler when I make it waste 8ms per frame:
Screenshot of Time Profiler running my time waster app
As you can see, the downward spikes are still there, even though the app really shouldn't be able to cause spikes. (You can also see the effect of touching here, as I didn't touch it for the first half of the visible graph, and touched it continuously for the second.)
3) If this were due to a mismatch between the frame rate and the sampling rate, I'd expect a lot more oscillation. Surely my app would use 100% of the milliseconds until it finished a frame, then drop to zero?
So I'm pretty confused about what to make of this. I'd appreciate any insight you can provide into this, and if you can tell me how to fix it, all the better!
Best regards,
Tommi Horttana
Have you tried Unity's profiler? Does it show similar results? Note that Unity has two profilers on iOS:
editor profiler - Pro only (but there is a 30-day trial)
internal profiler - you have to enable it in the Xcode project's source
Look at http://docs.unity3d.com/Manual/MobileProfiling.html; maybe something there will give you a hint.
If I had to guess, I'd check one of the most common sources of timing hiccups: the Mono garbage collector.
Try running it yourself at a set frequency (say, every 250ms) and see if there is a difference in the pattern:
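// e.g. scheduled from a MonoBehaviour with InvokeRepeating("CollectGarbage", 0f, 0.25f)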
System.GC.Collect();

performance issues with air app on iphone 4

Currently I am working on an AIR app for iOS and Android, targeting AIR 3.5.
Performance on iPhone 4 / 4S has been acceptable overall, after a lot of optimising: GPU rendering, StageQuality.LOW, avoiding vectors as much as possible, etc. I really put a lot of effort into boosting performance.
Still, every once in a while, the app becomes very slow. There is no precise point in time, action, or combination of actions after which this occurs. Sometimes it doesn't occur for days. But when it does occur, only killing the app and launching it again helps, because the app stays slow otherwise. So I am not talking about minor hiccups that pass on their own.
The problem occurs only on (some) iPhone 4 and 4S devices. Not on iPad 3 or 4, iPhone 5, or any Android device...
Has anyone had similar experiences and pointers as to where a solution might be found?
What happens when GPU memory fills up? Or device memory? Could this be involved?
Please don't expect an Adobe AIR app to have the performance of a native app. I am developing apps with Adobe AIR as well.
From the sound of your development experience, I think it's a memory issue: the performance is not too bad at the beginning, but it gets bad over time (so you have to kill the app). I suggest looking into memory leaks.
Hopefully my experience can help you.
I had a similar problem where, sometime during gameplay, the framerate would drop from 30fps to an unrecoverable 12fps. At first I thought I was running out of GPU memory and falling back on CPU rendering.
Using Adobe Scout, I found that when this occurred, the rendering time was ridiculously high.
After updating to AIR 3.8, I fixed the problem by limiting the number of bitmaps being rendered and held in memory at once. I would only create new background instances for the appropriate levels, flag them for garbage collection when the level ended, wait a few seconds, and then move on to the next level.
What might solve your problem is reducing the number of textures you have in memory at one time, only showing the ones you need. If you want to swap out active textures for new ones, set all the objects holding that texture data to null:
testMovieClip = null;
and remove all listeners from it so that garbage collection will pick it up.
Next, you can force garbage collection with AIR:
System.gc();
Instantiate the new texture you want to render a few frames after calling gc(). Monitor resources with Scout and the iOS companion app to confirm that it's working.
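Putting those steps together, the swap sequence looks roughly like this (a sketch: every class, variable and timing here is hypothetical):

import flash.events.Event;
import flash.events.TimerEvent;
import flash.system.System;
import flash.utils.Timer;

// Drop every reference to the old texture's objects so GC can reclaim them.
oldBackground.removeEventListener(Event.ENTER_FRAME, onBackgroundFrame);
removeChild(oldBackground);
oldBackground = null;

System.gc(); // in AIR this requests an immediate collection

// Give the runtime a few frames before allocating the replacement.
var delay:Timer = new Timer(200, 1); // ~6 frames at 30 fps
delay.addEventListener(TimerEvent.TIMER, function(e:TimerEvent):void {
    currentBackground = new NextLevelBackground(); // hypothetical background class
    addChild(currentBackground);
});
delay.start();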
You could also try to detect when the framerate drops, then set some objects to null and force garbage collection. In my case, if I moved my game to an empty frame for a few seconds while garbage collection ran, the framerate would recover and the game would resume rendering with the GPU.
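A sketch of that detection idea (the threshold and the cleanup hook are hypothetical):

import flash.events.Event;
import flash.system.System;
import flash.utils.getTimer;

var lastFrame:int = getTimer();
addEventListener(Event.ENTER_FRAME, function(e:Event):void {
    var now:int = getTimer();
    if (now - lastFrame > 66) { // >66 ms per frame means we dropped below ~15 fps
        releaseIdleTextures(); // hypothetical: null out unused objects, remove listeners
        System.gc();
    }
    lastFrame = now;
});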
Hope this helps!

Maximum (practical) Memory Use on iPod Touch 4G, iOS 5

We have a memory-intensive 3D app which is primarily targeted at iPad 2 and iPhone 4S, but it works on iPod Touch 4G and iPhone 3GS as well. We have found that the smaller memory of the iPod Touch 4G, combined with its retina display, makes this platform more susceptible to out-of-memory errors. iOS 5 also seems to have lowered the available memory somewhat.
It's relatively easy for us to lower the resolution of 3D models based on the platform we're on, but we have to set that resolution before loading, and thus we cannot effectively lower it dynamically in response to memory-pressure warnings from the OS.
We've tuned the memory usage by trial and error, but we've found that devices that haven't been rebooted in a long time (e.g., months) have a lot less usable memory than devices which have been rebooted recently. (Even if you kill off all the running apps.)
I'm wondering what other iOS app developers use as their practical memory limit for iPod Touch 4G apps?
While keeping all the caveats that everyone is offering in mind, my personal general rule of thumb has been that in sensible weather you can expect to have around the following:
512MB device -> 200MB usable (iPhone 4-4S, iPad 2)
256MB device -> 100MB usable (iPhone 3GS, iPad, iPod Touch 3G-4G)
128MB device -> 50MB usable (iPhone 3G, iPod Touch 1G-2G)
And if you want to rigorously withstand insensible weather without otherwise making a point of being flexibly responsive with memory usage, you can halve those numbers, or even third them. But it will be fairly difficult to guarantee sterling reliability if you can't throw anything overboard when conditions become dire. It's more like a sliding scale of how much performance you're willing to throw away for how much reliability at that point.
In environment predictability terms, iOS is a lot more like the PC than a dedicated machine, for better and worse, with the added bonus of a drill sergeant for an OS.
Recently I found this awesome tool for finding the maximum memory capacity of any iOS device. It also shows the memory level at which the app receives the low-memory warning.
Here is the link: https://github.com/Split82/iOSMemoryBudgetTest
It's hard to give an actual number because of all the external allocations the OS does on your behalf in UIKit and OpenGL. I try to keep my own allocations to around 30MB, with 50MB sort of my top end. I've pushed it as high as 90MB, but I got jettisoned a lot at that level so it's probably a bad idea unless the task using all that memory is very brief.
If you need to hack around your current problem, you could just detect the problematic devices up front and turn down your graphics engine's resolution at startup. You can get exact device info, or you can check for display scaling (retina) combined with the number of processor cores and the amount of RAM to determine what quality level to use.
I've had great success reducing my memory usage by using mapped files in place of loading data into RAM and you may want to give that a try if you have any large data allocations.
Also watch out for views/controls leaking from UIKit as they consume a lot of memory and can lead to being jettisoned at seemingly random times. I had some code which leaked child views from several view controllers. Eventually those leaks would chew up my app, though my app's memory usage didn't reflect the problem directly.

In image processing, what is real time?

In image processing applications, what is considered real time? Is 33 fps real time? Is 20 fps real time? If 33 and 20 fps are considered real time, then are 1 or 2 fps also real time?
Can anyone shed some light on this?
In my experience, it's a pretty vague term. Often, what is meant is that the algorithm will run at the rate of the source (e.g. a camera) supplying the images; however, I would prefer to state this explicitly ("the algorithm can process images at the frame rate of the camera").
Real time image processing = produce output simultaneously with the input.
The input may be 25 fps, but you may choose to process only 1 in every 5 frames (which makes 5 fps processing) and your application is still real time (see the sketch after the examples below).
TV streaming software: all the frames are processed.
A security application fed by CCTV cameras: you may choose to skip some frames to fit within your performance budget.
A 3D game or simulation: fps changes depending on the current scene.
And they are all real time.
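As a concrete sketch of the frame-skipping case (in ActionScript purely for illustration; the handler name is hypothetical, and the feed is assumed to arrive via frame events):

import flash.events.Event;

var frameCount:int = 0;
addEventListener(Event.ENTER_FRAME, function(e:Event):void {
    if (++frameCount % 5 == 0) {
        processFrame(); // hypothetical handler: runs on 1 frame in 5, i.e. 5 fps of a 25 fps feed
    }
});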
Strictly speaking, I would say real-time means that the application is generating images based on user input as it occurs, e.g. a mouse movement which changes the facing of an avatar.
How successful it is at this task - 1 fps, 10 fps, 100 fps, etc - is actually another question.
Real-time describes an approach, not a performance metric.
If, however, you are asking what the slowest fps is that a human still perceives as usable, the answer is about 15, I think.
I think it depends on what the real-time application is. If the app is showing a slideshow with one picture every 3 seconds, and the app can process each picture within those 3 seconds and show it, then it is real-time processing.
If the movie is 29.97 frames per second, and the app can process all 29.97 frames within each second, then it is also real time.
For example, if an app can take the movie from a VCR's or cable box's analog output, compress it into 29.97-frames-per-second video, and send it all to a remote location for another person to watch, then it is real-time processing.
(Hard) Real time is when an outcome has no value when delivered too early or too late.
Any FPS is real time provided that displayed frames represent what should be displayed at the very instant they are displayed.
The notion of real-time display is not really tied to a specific frame rate - it could be defined as the minimum frame rate at which movement is perceived as being continuous. So for slow moving objects in a visual frame (e.g. ships in a harbour, or stars in the night sky) a relatively slow frame rate might suffice, whereas for rapid movement (e.g. a racing car simulator) a much higher frame rate would be needed.
There is also a secondary consideration of latency. A real-time display must have sufficiently low latency in relation to other events (e.g. behaviour of a real-time simulation) that there is no perceptible lag in display updates.
That's not actually an easy question (even without taking into account differences between individuals).
Wikipedia has a good article explaining why. For what it's worth, cinema films run at 24 fps, so if you're happy with that, that's what I'd consider real time.
It depends on what exactly you are trying to do. For some purposes 1 fps, or even 2 spf (seconds per frame), could be considered real time. For others, that's way too slow...
That said, real-time means that it takes as long (or less) to process x frames as it would take to just present those x frames.
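As a quick sketch of that check (numbers and names purely illustrative):

var presentationMsPerFrame:Number = 1000 / 25; // presenting 25 fps gives 40 ms per frame
var processingMsPerFrame:Number = 12;          // measured cost of your pipeline per frame
var isRealTime:Boolean = processingMsPerFrame <= presentationMsPerFrame;
trace(isRealTime); // true: processing keeps up with presentation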
It depends.
automatic aircraft cannon - 1000 fps
monitoring - 10 - 15 fps
authentication - 1 fps
medical devices - 1 fph (one frame per hour)
I guess the term is used with different meanings in different contexts. In industrial image processing, real-time processing is usually the opposite of offline processing. In offline processing applications, you record images (many of them) and process them at a later time. In real-time processing, the system that acquires the images also processes them, at the same time, so the processing has to keep up with the acquisition frame rate.
Real-time means your implementation is fast enough to meet some deadline. The deadline is part of your system's specification. If it's an interactive UI and the users are not too picky, a 15 Hz update can be OK, although it can feel laggy. If you're using it to drive a car along the motorway, 30 Hz is about right. If it's a missile, well, maybe 100 Hz?
