I'm working on an iOS audio application.
I've noticed that when I do lots of work on the main thread, the CPU usage for the audio thread actually drops. With a little debugging I tracked the strange behaviour to a CADisplayLink timer where I do lots of work to update the UI. When I removed this method, the CPU usage for the audio thread averaged around 10%, but with the CADisplayLink method running, the CPU usage dropped to around 5%.
As an experiment, I removed all my code from the CADisplayLink method and inserted a massive while loop just to slow down the main thread and see what would happen. The CPU usage dropped to around 5% just as before, so I could confirm that it wasn't my code.
I'm testing on an iPad Pro 10.5" (2nd generation). The above doesn't seem to happen on the simulator.
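For reference, the throwaway experiment above can be sketched roughly like this (the class name and the 10 ms spin duration are my own choices, not from the original project):

```swift
import Foundation

/// Spin the current thread for roughly `seconds` without sleeping,
/// simulating heavy main-thread work.
func spin(for seconds: TimeInterval) {
    let deadline = Date().addingTimeInterval(seconds)
    while Date() < deadline { /* burn CPU */ }
}

#if canImport(UIKit)
import UIKit

final class BusyLoopTester {
    private var displayLink: CADisplayLink?

    func start() {
        // Fire once per screen refresh, in place of the real UI-update method.
        let link = CADisplayLink(target: self, selector: #selector(tick))
        link.add(to: .main, forMode: .common)
        displayLink = link
    }

    @objc private func tick() {
        // All UI work replaced with a pure busy-wait to load the main thread.
        spin(for: 0.010) // ~10 ms of spinning per frame
    }
}
#endif
```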
Does anyone know why I am seeing this strange behaviour?
Cheers!
I just had this question answered over on the Audiobus dev forum.
Looks like it's just CPU scaling. When the device has more to do, it scales up the CPU power and runs faster. Mystery solved :).
Related
I've been trying to optimize scrolling of my UICollectionView, and the Core Animation profiler has me puzzled...
When the app is idle (no scrolling or interaction in any way) I'm averaging around 59–60 fps, but occasionally it will drop down to 7 or 12 fps:
Is this expected behavior? Because I'm not interacting with the app when this drop happens I don't visually see anything, but I'm curious if this is something I should be troubleshooting.
Other times when profiling Core Animation bottlenecks I've seen fps drop down to 0 when idle / not interacting with the app.
The app isn't crashing or freezing, so is this some sort of bug in Instruments? (I'd expect it to be consistently 0 fps, or close to 60 fps, when nothing is happening in the app.)
Update:
Here's an example of the FPS graph after running the profiler a few minutes later (I'd tried turning on rasterization for a type of view, but then reverted back to not rasterizing, so although the project was rebuilt, the codebase is the same):
Here I'm getting between 32 and 55 fps when interacting with the app, and dropping down to 0 fps when idle.
From my subjective perspective I'm not noticing any major difference between these two examples, but from Xcode's perspective I'm seeing two different stories.
Does anyone know what's happening here?
I've been trying to find this bug for days now with no solution. I'm developing an iOS game using Swift and only UIKit. My app displays a lot of small images (about 70 at a time), and some UIView animations are running repeatedly. After a while my app shows some performance lag (tested on a device). Xcode shows only 30 MB of memory usage and about 97% CPU time used. Using Instruments didn't really help (I'm not using a lot of memory anyway). How can I track this bug down? This seems so weird to me.
The problem is that UIKit is not the best solution for this kind of graphics, because it renders through the CPU, not the GPU. That is why the application is lagging.
The other reason it shows only 30 MB of memory used is that this figure does not include the memory used for uncompressed images. When you display an image on screen, or use UIViews with drawRect:, it really takes a lot of memory.
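As a rough rule of thumb (my own back-of-the-envelope figure, not from the answer above): a decoded bitmap costs about four bytes per pixel, so file size on disk tells you very little about in-memory cost.

```swift
/// Rough decompressed size of a bitmap in memory:
/// width × height × 4 bytes (RGBA8888).
func decompressedBytes(width: Int, height: Int) -> Int {
    return width * height * 4
}

// A 1024×1024 image takes 4,194,304 bytes (~4 MB) once decoded,
// regardless of how small its PNG/JPEG file is on disk.
```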
Problem in a nutshell
I have been building an iOS application in recent weeks and have run into some trouble. The application plays an animation by manipulating and then drawing an image raster multiple times per second. The image is drawn by assigning it to a UIView's CALayer, like so: self.layer.contents = (id)pimage.CGImage; The calculation and rendering are separated into two CADisplayLinks.
This animation technique achieves satisfactory performance on the iOS 6.1 simulator, but when it is built for the physical device (an iPhone 4S running iOS 6.1.3) it experiences a significant slowdown. The slowdown is so bad that it actually makes the application unusable.
Suspected Issues
I have read, in this question, Difference of memory organization between iOS device and iPhone simulator, that the simulator is allowed to use far more memory than the actual device. However, while observing my app's memory usage in Instruments, I noticed that total memory usage never exceeds 3 MB. So I'm unsure if that is actually the problem, but it's probably worth pointing out.
According to this question, Does the iOS-Simulator use multiple cores?, the iOS simulator runs on an Intel chip while my actual device uses an Apple A5 chip. I suspect that this may also be the cause of the slowdown.
I am considering rewriting the animation in OpenGL, but I'd first like to try to improve the existing code before taking any drastic steps.
Any help in identifying what the problem is would be greatly appreciated.
Update
Thanks to all those who offered suggestions.
I discovered while profiling that the main bottleneck was actually clearing the image raster for the next frame. I decided to rewrite the rendering of the animations in OpenGL. It didn't take as long as anticipated, and the app now achieves a pretty good level of performance and is a little bit simpler.
This is a classic problem. The simulator is using the resources of your high-powered workstation/laptop.
Unfortunately the only solution is to go back and optimize your code, especially the display code.
Typically, you want to separate the drawing time from the computation time, which it sounds like you are doing, but make sure you don't compute on the main thread:
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0ul);
dispatch_async(queue, ^{
    // Do the computation off the main thread here.
});
You can use Instruments while running on the device, so the Core Graphics instrument is available to see what is using all the time and to point to the offending code. Unfortunately, you probably already know what it is, and it's just going to come down to optimizations.
The slowdown is most likely related to blitting the images. I assume you are using a series of still images that get swapped in the display link callback. I believe that if you can use CALayers that get added to your primary view/layer (while removing the old one), and which already contain CGImageRefs, you can then use CGContextDrawImage() to blit the image in the layer's drawInContext method. Set the context to use copy, not blend, so it just replaces the old bits.
You can use a dispatch queue to create the CALayer subclasses containing an image on a secondary thread; the drawing, of course, happens on the main queue. You can use some throttling to maintain a queue of 10 or so CALayers, replenishing them as they are consumed.
If this doesn't do it, then OpenGL may help, but again, none of this avoids moving bits between the processor and the GPU (since you are using stacks of images, not just animating one).
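The throttling idea above could be sketched like this. `Item` stands in for a CALayer subclass that already holds its CGImageRef; the pool keeps roughly `capacity` pre-built items ready, refilling on a background queue as the main queue consumes them. (The type and method names are mine, not from the answer.)

```swift
import Foundation

/// A small pool of pre-built items, refilled in the background.
final class PreparedPool<Item> {
    private var items: [Item] = []
    private let lock = NSLock()
    private let capacity: Int
    private let make: () -> Item
    private let fillQueue = DispatchQueue(label: "pool.fill", qos: .userInitiated)

    init(capacity: Int, make: @escaping () -> Item) {
        self.capacity = capacity
        self.make = make
        refill()
    }

    /// Called from the main queue each frame; returns a ready-made item,
    /// building one synchronously only if the pool has run dry.
    func next() -> Item {
        lock.lock()
        let item = items.isEmpty ? make() : items.removeFirst()
        lock.unlock()
        refill() // top the pool back up in the background
        return item
    }

    private func refill() {
        fillQueue.async {
            while true {
                self.lock.lock()
                let full = self.items.count >= self.capacity
                if !full { self.items.append(self.make()) }
                self.lock.unlock()
                if full { break }
            }
        }
    }
}
```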
I am having some trouble grasping why my app's performance is at 24 fps as opposed to the usual 30 fps.
The frame time for both the CPU and GPU varies between 6 and 18 ms, and GPU utilization never surpasses 55%. Doesn't this mean that the frame rate should be higher?
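Just to make the arithmetic behind that expectation explicit (my own calculation):

```swift
/// Theoretical frames per second for a given frame time in milliseconds.
func theoreticalFPS(frameTimeMs: Double) -> Double {
    return 1000.0 / frameTimeMs
}

// Even the worst measured frame time of 18 ms would still allow ~55 fps,
// so a steady 24 fps must come from something other than raw frame cost.
```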
When I use 'Analyze Performance', Xcode tells me:
Your performance is not limited by the OpenGL ES commands issued. Use the Instruments tool to investigate where your application is bottlenecked.
I am a beginner at this, so can someone explain to me how the frame time can be so low, and yet the frame rate low as well? (The device is not the issue.)
Edit 4/2/2013
New development:
This frame rate drop only occurs when I run the app from Xcode (I know this because when the app's performance is poor, the sound is not in sync and accelerometer sensitivity is lowered). When I stop running from Xcode and launch the app directly on my iPod, the frame rate is perfect. Now I am wondering if there really are performance issues with my app at all. Is there any way that Xcode could be impeding the app's performance by running tests or monitoring the device?
You should set the frame rate on your AVCaptureConnection, and then it should be OK.
Currently I am working on an AIR app for iOS and Android. AIR 3.5 is targeted.
Performance on iPhone 4/4S has been acceptable overall, after a lot of optimising: GPU rendering, StageQuality.LOW, avoiding vectors as much as possible, etc. I really put a lot of effort into boosting performance.
Still, every once in a while, the app becomes very slow. There is no precise point in time, action, or combination of actions after which this occurs. Sometimes it doesn't occur for days. But when it does occur, only killing the app and launching it again helps, because the app stays slow after that. So I am not talking about minor hiccups that resolve themselves.
The problem occurs only on (some) iPhone 4 and 4S devices. Not on iPad 3 or 4, iPhone 5, or any Android device...
Has anyone had similar experiences and pointers as to where a solution might be found?
What happens when GPU memory fills up? Or device memory? Could this be involved?
Please don't expect Adobe AIR to have the performance of native apps. I am developing an app with Adobe AIR as well.
From the sound of your experience, I think it's a memory issue: the performance is not too bad at the beginning, but it gets bad over time (so you have to kill the app). I suggest looking into memory leaks.
Hopefully my experience can help you.
I had a similar problem where sometime during gameplay the framerate would drop from 30 fps to an unrecoverable 12 fps. At first I thought I was running out of GPU memory and it was falling back on rendering with the CPU.
Using Adobe Scout, I found that when this occurred, the rendering time was ridiculously high.
After updating to AIR 3.8, I fixed the problem by limiting the number of bitmaps that were rendered and held in memory at once. I would only create new instances of backgrounds for the appropriate levels, flag them for garbage collection when the level ended, wait a few seconds, and then move to the next level.
What might solve your problem is reducing the number of textures you have in memory at one time, only showing the ones you need. If you want to swap out active textures for new ones, set all the objects holding that texture data to null:
testMovieClip = null;
and remove all listeners from it so that garbage collection will pick it up.
Next, you can force garbage collection with AIR:
System.gc();
Instantiate the new texture you want to render a few frames after calling gc(). Monitor resources with Scout and the iOS companion app to confirm that it's working.
You could also try to detect when the framerate drops, set some objects to null, and then force garbage collection. In my case, if I moved my game to an empty frame for a few seconds with garbage collection, the framerate would recover and the game would resume rendering with the GPU.
Hope this helps!