I am writing an iOS app in which many CALayers (hundreds) are displayed on screen at the same time. Each interface update (at a rate of about 50-60 Hz) involves removing/adding some CALayers from/to the layer hierarchy and changing the "hidden" state of other CALayers. All updates are made inside a common CATransaction with all implicit animations switched off.
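In simplified form, one update pass looks something like this (an illustrative sketch with made-up layer names, not the actual code):

    // QuartzCore assumed imported; layer names are placeholders.
    [CATransaction begin];
    [CATransaction setDisableActions:YES];      // switch off all implicit animations

    [containerLayer addSublayer:newLayer];      // some layers are added...
    [obsoleteLayer removeFromSuperlayer];       // ...some are removed...
    otherLayer.hidden = !otherLayer.hidden;     // ...and others just toggle "hidden"

    [CATransaction commit];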
Frequently removing/adding CALayers from/to the hierarchy doesn't cause any graphics performance problems. But frequently changing the "hidden" state of CALayers degrades the FPS, so graphics updates stop being smooth and stable. It seems the GPU is always late in processing the updated screen.
Does anyone know the reason for this effect? The official documentation says nothing about such side effects of hiding/showing layers. Adding/removing and hiding/showing layers would seem to have similar consequences: when a layer is hidden it simply isn't processed at all, which should be equivalent to its being removed (except that maybe it is still cached somewhere on the GPU?). Can anyone explain this, or has anyone faced the same problem?
Related
I'm creating an app with up to 40 UIViews, where every view holds a drawing of a stick that can appear in several positions (rotated to a 30-degree angle, a 45-degree angle, etc.). The background of each view is transparent. These views can intersect with each other, so I need the UIViews to be transparent so that a user can see the drawings of both the overlapped and the overlapping view. Does all this transparency across 40 UIViews seriously affect the performance of the application? And how can I track how much memory and CPU my app currently uses?
I recommend watching WWDC 2012 Session 238 - iOS App Performance: Graphics and Animations, which covers these questions.
As a broad answer:
The iPhone will probably handle your 40-view requirement fine, but it's impossible to know for sure without trying it out, and without more context (are they being animated? Are they scrolling?).
More views create more performance work, because all of the views need to be packaged up and shipped off to be rendered (by backboardd, I think).
Transparency will hurt application performance. I believe the core reason is that transparent views need to be drawn into an off-screen buffer rather than simply being painted over existing content (something like that).
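Where a view doesn't actually need to show what's behind it, marking it opaque lets the compositor skip blending entirely. A minimal sketch (only applicable to views where the overlap isn't needed; stickView is a placeholder name):

    // An opaque view with a non-transparent background can be composited
    // without blending against the content underneath it.
    stickView.opaque = YES;
    stickView.backgroundColor = [UIColor whiteColor];   // not clearColor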
Use Instruments for Profiling
Profile your GPU usage using the OpenGL ES Driver instrument (look at 'Device Utilization')
Measure CPU usage using Time Profiler
Measure FPS and check for common performance problems using the Core Animation instrument
I wouldn't bother thinking about this until you actually see performance issues. If you do, I can't recommend that WWDC session enough: it covers things like which strategy to take when optimizing performance (e.g. moving work to the GPU as long as it can handle more), the basics of profiling, and tips and tricks based on the implementation details of iOS.
I am making a game, and it involves a sandstorm. I decided that the basic concept would be that I would make an image that looks roughly like a sandstorm, and then decorate it with some particles/whatever else it takes.
I ran into an issue at step one. I threw together a simple image for testing purposes:
I added that to my game, and the FPS dropped by 60%. I was surprised by the effect one image had, but I wasn't too worried about it. I cut the resolution of the image in half, and again, lots of lag.
Is SpriteKit/iOS really that bad at handling moderately sized images with alpha? I read on another question that the simulator is bad at rendering, but that can't be the entire problem.
Is there any hope for getting this to render without slicing my performance? The particles work well, everything else runs at 60fps just fine, but the addition of this image is apparently a severe drain on resources.
EDIT: I tested my game out on my phone, and I got no lag. So apparently, the simulator is just really bad at rendering after all. At the same time, I am curious as to how to speed up performance, as there is clearly some kind of lag going on.
I'm no expert on SpriteKit, but I had similar experiences with plain Core Animation and layering.
The issue is that an image with alpha, even in its "opaque" parts, triggers a redraw of all the sublayers underneath it every time it moves. First check whether this is actually the problem, then try one of these and see if it improves things:
An SKCropNode could prevent the elements underneath from being rendered.
Tile the image so only the border has alpha
Snapshot the layers underneath.
Reduce the number of nodes being rendered; hide the ones that are "under the sandstorm" (see the sketch below).
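For that last point, here is a minimal sketch of hiding occluded nodes (the node names, the "obstacle" name, and the frame-containment test are all made up for illustration; SpriteKit has no built-in occlusion check):

    // Hide nodes whose frames are completely covered by the sandstorm sprite,
    // so SpriteKit doesn't spend time blending content that can't be seen.
    [worldNode enumerateChildNodesWithName:@"obstacle" usingBlock:^(SKNode *node, BOOL *stop) {
        node.hidden = CGRectContainsRect(sandstormSprite.frame, node.frame);
    }];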
Also, you should be using real devices to test the performance of your game; you cannot rely on the simulator for that.
I am adding multiple UIImageViews to a UIView to perform operations such as dragging, pinching and zooming images. I have added gesture recognizers to all the UIImageViews. Since I'm adding multiple images (UIImageViews), this has brought down the performance of my app. Does anyone have a better solution for this? Thanks.
Adding many image views should not, generally, cause enough of a problem that your app would slow down. To illustrate the point with an absurd example, I added 250 (!) image views, each with three gesture recognizers, and it works fine on an iPad 3, including animating the images into their final resting place/size/rotation.
Two observations:
Are you doing anything computationally intensive with your image views? For example:
Simply adding shadows with Quartz 2D has a huge performance impact because shadow rendering is quite computationally expensive. In the unlikely event that you're using layer shadows, you can try using shouldRasterize, which can mitigate the problem but not solve it (there's a sketch after this list). There are other (kludgy) techniques for doing computationally efficient shadows if that's the problem.
Another surprisingly computationally intensive case is when your images are (for example) PNGs with transparency, or when you have reduced the alpha/opacity of your views.
What is the resolution/size of the images being loaded? If the images are very large, the image view will render them according to its contentMode, but it can be very slow if you're taking large images and scaling them down on the fly. You should use screen-resolution images if possible.
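If layer shadows do turn out to be involved, the usual mitigation is an explicit shadowPath plus rasterization. A sketch (imageView is a placeholder; whether this helps depends on your content):

    // An explicit path saves Core Animation from deriving the shadow shape
    // from the layer's (possibly transparent) contents, and rasterization
    // caches the rendered result between frames.
    imageView.layer.shadowOpacity = 0.5;
    imageView.layer.shadowOffset = CGSizeMake(0, 2);
    imageView.layer.shadowPath = [UIBezierPath bezierPathWithRect:imageView.bounds].CGPath;
    imageView.layer.shouldRasterize = YES;
    imageView.layer.rasterizationScale = [UIScreen mainScreen].scale;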
These are just a few examples of things that seem innocuous but are really quite computationally expensive. If you're doing any Quartz embellishments on your image views, I'd suggest temporarily paring them back and seeing if you notice any changes.
In terms of diagnosing the performance problems yourself, I'd suggest watching the following two WWDC videos:
WWDC 2012 - #211 - Building Concurrent User Interfaces on iOS includes a fairly pragmatic demonstration of Instruments to identify the source of performance problems. This video is clearly focused on one particular solution (the moving of computationally expensive processes into the background and implementing a concurrent UI), which may or may not apply in this case, but I like the Instruments demonstration.
WWDC 2012 - #235 - iOS App Performance: Responsiveness is a more focused discussion of how to measure responsiveness in apps and of techniques to address problems. I don't find its Instruments tutorial to be quite as good as the prior video's, but it does go into more detail.
Hopefully this can get you going. If you are still stumped, you should share some relevant code showing how the views are being added/configured and what the gestures are doing. Perhaps you can also clarify the nature of the performance problem (e.g. is it in the initial rendering, or a low frame rate while the gestures take place, etc.).
I have a custom view (inherited from UIView) in my app. The custom view overrides
- (void) drawRect:(CGRect) rect
The problem is that drawRect: takes many times longer on the iPad 3 than on the iPad 2 (about 0.1 seconds on the iPad 3 versus 0.003 seconds on the iPad 2). That's about 30 times slower.
Basically, I am using some pre-created layers and drawing them in drawRect:. The last call
CGContextDrawLayerAtPoint(context, CGPointZero, m_currentLayer);
takes most of the time (about 95% of total time in drawRect:)
What might be slowing things down so much, and how can I fix the cause?
UPDATE:
There are no threads directly involved. I do call setNeedsDisplay in one thread and drawRect: gets called from another, but that's it. The same goes for locks (no locks are used).
The view gets redrawn in response to touches (it's a coloring book app). On the iPad 2 I get a reasonable delay between a touch and the screen update. I want to achieve the same on the iPad 3.
So, the iPad 3 is definitely slower in a lot of areas. I have a theory about this. Marco Arment noted that the method renderInContext: is ridiculously slow on the new iPad. I also found this to be the case when trying to create a magnifying glass for a custom text view. In the end I had to forgo renderInContext: in favor of custom Core Graphics drawing.
I've also been having problems hitting the dreaded wait_fences errors with my Core Graphics drawing here: Only on new iPad 3: wait_fences: failed to receive reply: 10004003.
This is what I've figured out so far. The iPad 3 obviously has 4 times the pixels to drive. This can cause problems in two places:
First, the CPU. All Core Graphics drawing is done by the CPU. In the case of rotation events, if the CPU takes too long to draw, it hits the wait_fences error, which I believe is simply a call that tells the device to wait a little longer before actually performing the rotation, hence the delay.
Second, transferring images to the GPU. The GPU obviously handles the Retina resolution just fine (see Infinity Blade 2). But when Core Graphics draws, it draws its images directly into the GPU buffers to avoid a memcpy. However, either the GPU buffers haven't changed since the iPad 2 or they just weren't made large enough, because it's remarkably easy to overload them. When that happens, I believe the CPU writes the images to standard memory and then copies them to the GPU when the GPU buffers can handle it. This, I think, is what causes the performance problems. That extra copy is time-consuming with so many pixels and slows things down considerably.
To avoid that memcpy, I recommend several things:
Only draw what you need, and avoid drawing anything offscreen at all costs. If you're drawing a large view but only displaying part of it (subviews covering it, for example), try to find a way to draw only what is visible.
If you have to draw a large view, consider breaking it up into parts, either as subviews or sublayers (probably sublayers in your case), and only redraw what you need. Take the Notability app, for example: when you zoom in, you can literally watch it redraw one square at a time. Or in Safari you can watch it update tiles as you scroll. Unfortunately, I haven't had to do this myself, so I'm uncertain of the methodology (one possible approach is sketched after this list).
Try to keep your drawing simple. I had an awesome-looking custom Core Text view that had to redraw on every character entered. Very slow. I changed the background to simple white (in Core Graphics) and it sped up nicely. Even better would be not to redraw the background at all.
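One possible way to do the tiling mentioned above is a CATiledLayer-backed view; this is only a sketch under the assumption that your content can be drawn tile by tile, not a claim about how Notability or Safari actually do it:

    #import <UIKit/UIKit.h>
    #import <QuartzCore/QuartzCore.h>

    // A view backed by a CATiledLayer. Core Animation requests each tile
    // separately, so only the tiles that become visible or are invalidated
    // get redrawn.
    @interface TiledDrawingView : UIView
    @end

    @implementation TiledDrawingView

    + (Class)layerClass {
        return [CATiledLayer class];
    }

    - (void)drawRect:(CGRect)rect {
        // 'rect' covers a single tile here; draw only what intersects it.
        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextSetFillColorWithColor(context, [UIColor whiteColor].CGColor);
        CGContextFillRect(context, rect);
        // ... draw the part of your content that falls inside 'rect' ...
    }

    @end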
I would like to point out that my theory is conjecture. Apple doesn't really explain exactly what they do; my theory is just based on what they have said, how the iPad responds, and my own experimentation.
UPDATE
So Apple has now released the 2012 WWDC Developer videos. They have two videos that may help you (requires developer account):
iOS App Performance: Responsiveness
iOS App Performance: Graphics and Animation
One thing they talk about that I think may help you is the method setNeedsDisplayInRect:(CGRect)rect. Using this method instead of the plain setNeedsDisplay, and making sure that your drawRect: method only draws the rect given to it, can greatly help performance. Personally, I use CGContextClipToRect(context, rect); to clip my drawing to the rect provided.
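A rough sketch of that pattern (changedRect and drawContentInRect: are placeholders for whatever your own model and drawing code provide):

    // Elsewhere, invalidate only the region that actually changed:
    //     [self setNeedsDisplayInRect:changedRect];

    - (void)drawRect:(CGRect)rect {
        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextClipToRect(context, rect);   // nothing outside 'rect' is touched
        [self drawContentInRect:rect];        // hypothetical helper: draw only what intersects 'rect'
    }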
As an example, I have a separate class I use to draw text directly into my views using Core Text. My UIView subclass keeps a reference to this object and uses it to draw its text rather than using a UILabel. I used to refresh the entire view (setNeedsDisplay) when the text changed. Now I have my Core Text object calculate the changed CGRect and use setNeedsDisplayInRect: to update only the portion of the view that contains the text. This really helped my performance when scrolling.
I ended up using the approach described in Kurt Revis's answer to a similar question.
I minimized the number of layers used, added a UIImageView, and set its image to a UIImage wrapping my CGImageRef. Please read the mentioned answer for more details about the approach.
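In rough outline it looks like this (the bitmap context setup is assumed to already exist; only the UIImage wrapping and assignment reflect the approach described above):

    // Draw into a bitmap context as before, then hand the resulting CGImage
    // to a UIImageView instead of drawing it yourself in drawRect:.
    CGImageRef cgImage = CGBitmapContextCreateImage(bitmapContext);   // bitmapContext assumed to exist
    UIImage *image = [UIImage imageWithCGImage:cgImage
                                         scale:[UIScreen mainScreen].scale
                                   orientation:UIImageOrientationUp];
    imageView.image = image;        // UIImageView already in the view hierarchy
    CGImageRelease(cgImage);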
In the end my application became even simpler than before and works at almost identical speed on the iPad 2 and iPad 3.
I am wondering if what I'm attempting is just a bad idea. I'm currently working in MonoTouch. Is it possible to draw a screen-sized buffer (on my iPhone 4 it's about 320x460) onto a UIView of equal size fast enough that animated changes to that buffer look smooth to the end user (I need it to be around 20 ms per draw)?
I've attempted many different implementations. The best one so far seems to be using an in-memory CGLayer and calling context.DrawLayer() to apply it to the view inside Draw(). But even that takes 30-40 ms per DrawLayer call.
I'm writing my own tile-image control, and aside from performance, the idea is working well. I just can't figure out how to get the buffer onto the UIView fast enough.
Any ideas?
I've been dealing with custom views a lot lately, and I've had a bunch of performance problems, too.
All of these performance issues could be solved by determining which elements need to be redrawn and, more importantly, which elements do not.
Then split the contents of the layer into individual sublayers and only redraw them when necessary. The good thing is that animations and so on are very smooth for those individual layers (their content is just a simple bitmap and does not change until you tell it to).
The only limitation I've come across is that you cannot use CG blend modes (e.g. multiply) for the sublayers; as far as I know that is not possible. You can only use those blend modes inside the CG code used to draw the contents of the sublayers, but after that they are all composited in "normal" mode.
It really depends on what you are drawing.
If you are just drawing a solid filled color, that should not be a problem. The question is how much of the surface you are changing, and how you are changing it.
Again, it depends on what you are drawing and whether you could offload some of the work to the GPU. For example, if you have static parts of your interface that will remain the same, or parts that are animated/updated independently, you could use a different layer for those areas and let the GPU composite them.
Layers have the advantage that they are composited by the GPU and backed by their own bitmaps. Once you draw into the surface of a layer, the OS will cache the result on the GPU and composite all of your layers at the same time.
Then you can determine which parts of your application actually need to be redrawn and only redraw those sections on each frame.
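A rough sketch of that kind of layer split (the names are made up for illustration, and self.view assumes a view controller; the actual structure depends on your interface):

    // Static background: drawn once, then cached and composited by the GPU.
    CALayer *backgroundLayer = [CALayer layer];
    backgroundLayer.frame = self.view.bounds;
    backgroundLayer.contents = (__bridge id)staticBackgroundImage.CGImage;  // assumed pre-rendered UIImage
    [self.view.layer addSublayer:backgroundLayer];

    // Dynamic content gets its own layer, so invalidating it never touches the background.
    CALayer *contentLayer = [CALayer layer];
    contentLayer.frame = self.view.bounds;
    contentLayer.delegate = self;             // something that implements drawLayer:inContext:
    [self.view.layer addSublayer:contentLayer];

    // Later, when only the dynamic content changes:
    [contentLayer setNeedsDisplay];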
But again, it really will depend a lot on what you are trying to do.