I am having trouble grasping why my app is running at 24fps as opposed to the usual 30fps.
The frame time for both CPU and GPU varies between 6 and 18 ms, and GPU utilization never surpasses 55%. Doesn't this mean that the frame rate should be higher? Even a worst-case 18 ms frame would allow roughly 55fps.
When I use 'Analyze Performance', Xcode tells me:
Your performance is not limited by the OpenGL ES commands issued. Use the Instruments tool to investigate where your application is bottlenecked.
I am a beginner at this, so can someone explain to me how the frame time can be so low, yet the frame rate also so low? (The device is not the issue.)
Edit 4/2/2013
New development:
This frame rate drop only occurs when I run the app from Xcode (I know this because when the app's performance is poor, the sound is out of sync and the accelerometer sensitivity is lowered). When I stop running from Xcode and launch the app directly on my iPod, the frame rate is perfect. Now I am wondering whether there really are performance issues with my app at all. Is there any way that Xcode could be impeding the app's performance by running tests or monitoring the device?
You should set the frame rate on your AVCaptureConnection, and then it should be OK.
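(For what it's worth, a minimal sketch of pinning the capture frame rate, assuming `device` is the session's video AVCaptureDevice; note that on iOS 7 and later the frame duration is configured on the device rather than the connection.)

NSError *error = nil;
if ([device lockForConfiguration:&error]) {
    // Pin capture delivery to a steady 30 fps (min and max frame duration = 1/30 s).
    device.activeVideoMinFrameDuration = CMTimeMake(1, 30);
    device.activeVideoMaxFrameDuration = CMTimeMake(1, 30);
    [device unlockForConfiguration];
}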
Related
I am using Metal to render live video frames plus a custom control (a circular slider) for zooming, which I implemented using the Quartz 2D API. When I run the app in the debugger, I see the fps drop from 30 to sometimes 11, and zooming is not smooth on older devices such as the iPad Mini 2. I then ran the code under Time Profiler and, surprisingly, there is no fps drop there: the app runs smoothly while profiling. How do I know what is causing the fps drop in debug?
It's probably the Metal Validation layer that's active for your debug scheme. It's not surprising that performance is generally worse when debugging (due to the lack of optimizations, asserts being enabled, etc.).
If you want to get similar Metal performance when debugging, you can try disabling Metal Validation in the scheme settings. But then, of course, you lose the actual debugging benefit of having your use of Metal validated.
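(If you want to sanity-check where the time goes while still in debug, one option is the GPU timestamps that MTLCommandBuffer exposes since iOS 10.3. This is only a sketch, with `commandBuffer` standing in for whatever command buffer you commit each frame.)

[commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> cb) {
    // GPU execution time for this frame's command buffer. If this stays small
    // while the on-screen fps drops, the extra cost is on the CPU side
    // (validation, asserts, unoptimized debug code) rather than the GPU.
    CFTimeInterval gpuMs = (cb.GPUEndTime - cb.GPUStartTime) * 1000.0;
    NSLog(@"GPU frame time: %.2f ms", gpuMs);
}];
[commandBuffer commit];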
I'm working on an iOS audio application.
I've noticed that when I do lots of work on the main thread, the CPU usage for the audio thread actually drops. With a little debugging I tracked the strange behaviour to a CADisplayLink timer where I do lots of work to update the UI. When I removed this method, the CPU usage for the audio thread averaged around 10%, but with the CADisplayLink method running, the CPU usage dropped to around 5%.
As an experiment, I removed all my code from the CADisplayLink method and inserted a massive while loop just to slow down the main thread and see what would happen. The CPU usage dropped to around 5%, just as before, so I could confirm that it wasn't my code. A simplified version of the experiment is sketched below.
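(Roughly, inside the view controller's @implementation; the loop bound is arbitrary and just stands in for the real UI work.)

#import <UIKit/UIKit.h>

- (void)viewDidLoad {
    [super viewDidLoad];
    CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                      selector:@selector(burnMainThread:)];
    [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}

- (void)burnMainThread:(CADisplayLink *)link {
    // Busy-work on the main thread every frame; with this running, the audio
    // thread's reported CPU usage fell from ~10% to ~5%.
    volatile double sink = 0;
    long i = 0;
    while (i < 10000000) { sink += (double)i; i++; }
}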
I'm testing on an iPad Pro 10.5" 2nd gen. The above doesn't seem to happen on the simulator.
Does anyone know why I am seeing this strange behaviour?
Cheers!
I just had this question answered over on the Audiobus dev forum.
Looks like it's just CPU scaling. When the device has more to do, it scales up the CPU clock and everything runs faster; since the audio thread's workload is fixed, it then accounts for a smaller percentage of CPU time. Mystery solved :).
Setup:
CADisplayLink on the main thread, configured to fire on every display refresh interval
iOS 10.2
OpenGLES 2.0
iPhone 6
// u64 and high_res_clock_now() are the app's own helpers (presumably wrapping mach_absolute_time()).
- (void)callbackFromCADisplayLink:(CADisplayLink *)dl
{
    u64 tStart = high_res_clock_now();

    // Process input, advance game world, prepare graphics commands...

    // Frame processed
    u64 prePresentElapsed = high_res_clock_now() - tStart;

    [myEAGLContext presentRenderbuffer:GL_RENDERBUFFER];

    // Graphics commands submitted
    u64 postPresentElapsed = high_res_clock_now() - tStart;
}
What I'm finding is:
prePresentElapsed is consistently in the 0.5-2.5ms range.
There are essentially 2 graphics modes:
"Fast mode": where postPresentElapsed is consistently in 1.5-4ms range
"Slow mode": where postPresentElapsed is consistently hovering at 16ms
The system starts in "Fast mode", but degenerates to "Slow mode" seemingly randomly (doesn't appear to be associated with a large frame spike), and then stays in "Slow mode" until the app is put into inactive/background state, then back into active state.
Clearly, it appears presentRenderbuffer is blocking due to downstream effects of vsync.
Questions:
What causes the switch between modes?
How can I reliably stay in "Fast mode"?
iOS is very active in modifying the clock speed of the CPU and GPU. As far as the OS is concerned, the ideal state is for your app to run consistently at 60fps with the lowest possible clock speed (which sounds like what is happening in your slow mode).
When your app launches, the clock speed starts out high. Once things have had a little while to settle down, the OS gets the measure of your app and slows the clock speed down as much as it possibly can without affecting the user experience. It does this to save the user's battery and keep the device cool.
It's quite frustrating, because as far as I'm aware there's no way to disable, control, or even monitor this behaviour, so it makes performance measurement a lot harder. It also means that you're bound to miss the odd frame when the app has an unusually busy one, because the clock speed management won't be able to raise the clock speed until it's too late.
I've been trying to optimize scrolling of my UICollectionView, and the Core Animation profiler has me puzzled...
When the app is idle (no scrolling or interaction in any way) I'm averaging around 59-60 fps, but occasionally it will drop down to 7 or 12 fps.
Is this expected behavior? Because I'm not interacting with the app when this drop happens, I don't visually see anything, but I'm curious whether this is something I should be troubleshooting.
Other times when profiling Core Animation bottlenecks, I've seen the frame rate drop down to 0 fps when idle/not interacting with the app.
The app isn't crashing or freezing, so is this some sort of bug in Instruments? (I'd expect it to be consistently at 0 fps or close to 60 fps when nothing is happening in the app.)
Update:
Here's an example of the FPS graph after running the profiler a few minutes later (I'd tried turning on rasterization for one type of view, but then reverted back to not rasterizing, so although the project was rebuilt, the codebase is the same).
Here I'm getting between 32 and 55 fps when interacting with the app, and dropping down to 0 fps when idle.
From my subjective perspective, I'm not noticing any major difference between these two examples, but from Xcode's perspective I'm seeing two different stories.
Does anyone know what's happening here?
Currently I am working on an AIR app for iOS and Android. AIR 3.5 is targeted.
Performance on the iPhone 4 / 4s has been acceptable overall, after a lot of optimising: GPU rendering, StageQuality.LOW, avoiding vectors as much as possible, etc. I really put a lot of effort into boosting performance.
Still, every once in a while, the app becomes very slow. There is no precise point in time, action, or combination of actions after which this occurs. Sometimes it doesn't occur for days. But when it does, only killing the app and launching it again helps, because the app stays slow after that. So I am not talking about minor hiccups that go away on their own.
The problem occurs only on (some) iPhone 4 and 4s devices. Not on the iPad 3 or 4, the iPhone 5, or any Android device...
Has anyone had similar experiences, and any pointers as to where a solution might be found?
What happens when GPU memory fills up? Or device memory? Could this be involved?
Please don't expect an Adobe AIR app to have the performance of a native app. I am developing with Adobe AIR as well.
From the sound of your experience, I think it's a memory issue, because performance is not too bad at the beginning but gets worse over time (so you have to kill the app). I suggest looking into memory leaks.
Hopefully my experience can help you.
I had a similar problem where, sometime during gameplay, the framerate would drop from 30fps to an unrecoverable 12fps. At first I thought I was running out of GPU memory and it was falling back to rendering with the CPU.
Using Adobe Scout I found that when this occurred, the rendering time was ridiculously high.
After updating to AIR 3.8, I fixed the problem by limiting the number of bitmaps that were being rendered and held in memory at once. I would only create new instances of backgrounds for the appropriate level, flag them for garbage collection when the level ended, wait a few seconds, and then move to the next level.
What might solve your problem is reducing the number of textures you have in memory at one time, only showing the ones you need. If you want to swap out active textures for new ones, set all the objects with that texture data to null:
testMovieClip = null;
and remove all listeners from it so that garbage collection will pick it up.
Next, you can force garbage collection with AIR:
System.gc();
Instantiate the new texture you want to render a few frames after calling gc. Monitor resources with Scout and the iOS companion app to confirm that it's working.
You could also try to detect when the framerate drops, set some objects to null, and then force garbage collection. In my case, if I moved my game to an empty frame for a few seconds while garbage collection ran, the framerate would recover and the game would resume rendering with the GPU.
Hope this helps!