In Apple's Core Animation documentation, it says there are two rendering paths involved. From what I know, CALayer caches the bitmap data of a UIView's content. There are two ways of providing content for a CALayer. One is implementing drawRect: (or other drawing methods); the other is assigning a bitmap to the layer's contents property.
Here I'm wondering: what happens behind the scenes if neither of the above two things is done? I believe there is a private drawing path UIView uses in this situation. What does this private drawing path consist of? How does it work?
The crux of CALayer is that it's GPU-backed. In modern graphics and animation, you want to minimize the number of times your bitmap data crosses between the CPU and the GPU. These operations are costly.
CALayer always uses a private drawing path, whether you use setContents: or drawRect:. In fact, the underlying plumbing of CALayer handles both in essentially the same way. When you call setContents:, CALayer takes the image you gave it and uploads it to the GPU via OpenGL (nowadays probably Metal) calls. When you implement drawRect:, CALayer gives you a context to draw into, and then does the same thing with the resulting bitmap data.
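For illustration, a minimal sketch of the two public paths (the class name is made up; in practice you would pick one path, since implementing drawRect: makes the system generate the backing bitmap itself, replacing anything you assigned to contents):

#import <UIKit/UIKit.h>

@interface TwoPathsView : UIView
@end

@implementation TwoPathsView

// Path 1: hand the layer a ready-made bitmap.
- (void)showImage:(UIImage *)image {
    self.layer.contents = (__bridge id)image.CGImage;
}

// Path 2: let the system create the backing bitmap and hand you a
// context; the finished bitmap is then uploaded to the GPU the same way.
- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(ctx, [UIColor redColor].CGColor);
    CGContextFillRect(ctx, rect);
}

@end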
If you don't set contents or implement drawRect:, you can still do things like set the layer's background color, border, corner radius, etc. These are rendered by CALayer's GPU-based private drawing path.
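For example (a minimal sketch; assume it runs somewhere like a view controller's viewDidLoad):

// No contents bitmap and no drawRect:, yet the layer still renders,
// composited entirely by the GPU-based path.
CALayer *badge = [CALayer layer];
badge.frame = CGRectMake(20, 20, 60, 60);
badge.backgroundColor = [UIColor redColor].CGColor;
badge.cornerRadius = 30;   // a circle
badge.borderWidth = 2;
badge.borderColor = [UIColor whiteColor].CGColor;
[self.view.layer addSublayer:badge];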
CALayer does not draw any of its content using drawRect:. Only view-based drawing techniques like Core Graphics use drawRect:, and the drawback of that approach is that the drawing happens on the CPU, on the main thread, which makes it expensive. So instead, Core Animation manipulates a cached bitmap of your app's content directly in the graphics hardware, which is far more optimised. You update or provide the initial contents of a Core Animation layer through one of its delegate methods (displayLayer: or drawLayer:inContext:) or via the contents property, as you mentioned. All layer objects in Core Animation are derived from CALayer.
CALayer is simply a layer object belonging to Core Animation, which in itself is simply a support system for UIView and its subclasses. Core Animation is not a drawing technology in the sense that it cannot create primitive shapes the way Quartz, OpenGL ES, and Metal can. Instead, Core Animation lets you manipulate an already existing view, and it does this by caching the bitmap data of a UIView and sending it off to the graphics hardware to be manipulated. We say Core Animation is a support system, and all its work relies on layer objects, of which CALayer is the main type. It can only do this, of course, if a view has a layer, and a view does not strictly need a layer to exist. However, in iOS all views come with a layer attached by default; we say views in iOS are "layer-backed". In macOS, you need to explicitly add Core Animation support to views.
The actual drawing of the contents of a CALayer happens in a few ways. The first is setting the contents property of the CALayer, as you mentioned, by giving it a CGImageRef. The second is implementing the delegate method displayLayer: (or overriding display in a subclass), which creates a bitmap and assigns it to the layer's contents property. The third is implementing the delegate method drawLayer:inContext: (or overriding drawInContext: in a subclass); here Core Animation creates a bitmap, creates a graphics context to draw into that bitmap, and then calls your method to fill the bitmap.
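A minimal sketch of the second and third routes via the delegate (the class and image names are hypothetical; note that if both methods are implemented, only displayLayer: is called):

@interface LayerContentProvider : NSObject <CALayerDelegate>
@end

@implementation LayerContentProvider

// Route 2: build the bitmap yourself and assign it to contents.
- (void)displayLayer:(CALayer *)layer {
    UIImage *image = [UIImage imageNamed:@"cached"]; // hypothetical asset
    layer.contents = (__bridge id)image.CGImage;
}

// Route 3: Core Animation creates the bitmap and the context;
// you only issue the drawing calls.
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx {
    CGContextSetFillColorWithColor(ctx, [UIColor blueColor].CGColor);
    CGContextFillEllipseInRect(ctx, layer.bounds);
}

@end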
In iOS we do not usually worry about how the content of our views' layers is rendered. Since all views are layer-backed, iOS manages rendering these views in the most efficient way possible using the methods I've just described. This is an optimisation to save you time, and it makes layers very easy to use. You'll usually worry about implementing these delegate methods or subclassing if you are developing for macOS, where views are not always layer-backed. You might also care if you decide not to use the default CALayer in iOS, for example if you change a view's layer from CALayer to CAMetalLayer, or perhaps if you are looking for a performance optimisation, though that applies only in a small number of cases.
There are three ways to provide content to a layer.
- Assign an image object directly to the layer object’s contents property.
- Assign a delegate object to the layer and let the delegate draw the layer’s content.
- Define a layer subclass and override one of its drawing methods to provide the layer contents yourself.
If we don't implement drawRect:, set the contents property, or subclass the layer, the default is the second way of providing content for a layer-backed view's layer: the layer captures the content of the view and renders it.
Related
I have a pretty basic question: How do you choose between using UIView and CAShapeLayer when you want to draw shapes (I'm not talking about text fields, switches, or other controls, just drawing)?
My understanding is that UIView (as part of UIKit) uses a normal CALayer under the hood to draw its content. If this is correct, then CAShapeLayer (or CALayer in general) would be the exact same thing, only without the extras UIKit gives you.
Then, when does using a UIView make sense, and when does using CAShapeLayer make sense?
Is CAShapeLayer faster? Is UIKit more optimised for gesture recognition or user interaction in general?
To give you more context, here's what I was trying to do when this question came up:
I want these red circles to rotate around the center circle. However, the user should be able to tap on the red circles while they are rotating.
Here, I see two main options (there may be more) to add one of those red circles:
- Create a UIView, manipulate its layer's cornerRadius, and rotate it using CGAffineTransform.
- Create a CAShapeLayer with a UIBezierPath and rotate it using CATransform3D.
The only problem I have here is the user interaction. As it's constantly moving (rotating), I'd have to access the correct frame. I can do that using the presentation layer (which I think UIView also uses under the hood).
At this point, I'm not sure whether to use UIViews or CAShapeLayers. Also, I'm not sure if animating it this way is the correct way in this case. There may be better options that will also erase the question about which one to use.
Thanks for your thoughts about this.
Quoting from Apple's docs:
Layers are not a replacement for your app’s views—that is, you cannot create a visual interface based solely on layer objects. Layers provide infrastructure for your views. Specifically, layers make it easier and more efficient to draw and animate the contents of views and maintain high frame rates while doing so. However, there are many things that layers do not do. Layers do not handle events, draw content, participate in the responder chain, or do many other things. For this reason, every app must still have one or more views to handle those kinds of interactions.

In iOS, every view is backed by a corresponding layer object but in OS X you must decide which views should have layers. In OS X v10.8 and later, it probably makes sense to add layers to all of your views. However, you are not required to do so and can still disable layers in cases where the overhead is unwarranted and unneeded. Layers do increase your app’s memory overhead somewhat but their benefits often outweigh the disadvantage, so it is always best to test the performance of your app before disabling layer support.

When you enable layer support for a view, you create what is referred to as a layer-backed view. In a layer-backed view, the system is responsible for creating the underlying layer object and for keeping that layer in sync with the view. All iOS views are layer-backed and most views in OS X are as well. However, in OS X, you can also create a layer-hosting view, which is a view where you supply the layer object yourself. For a layer-hosting view, AppKit takes a hands-off approach with managing the layer and does not modify it in response to view changes.
Read: https://developer.apple.com/library/content/documentation/Cocoa/Conceptual/CoreAnimation_guide/CoreAnimationBasics/CoreAnimationBasics.html#//apple_ref/doc/uid/TP40004514-CH2-SW3
Now, time for some Q&A.
Question 1:
My understanding is that UIView (as part of UIKit) uses a normal CALayer under the hood to draw its content. If this is correct, then CAShapeLayer (or CALayer in general) would be the exact same thing, only without the extras UIKit gives you.
Again quoting the Apple docs:
Layer-backed views create an instance of the CALayer class by default, and in most cases you might not need a different type of layer object. However, Core Animation provides different layer classes, each of which provides specialized capabilities that you might find useful. Choosing a different layer class might enable you to improve performance or support a specific type of content in a simple way.
Clearly, when you want to draw various shapes, using CAShapeLayer is beneficial in terms of performance compared to a plain CALayer.
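For instance, one of the red circles from the question could be sketched like this (containerView is a hypothetical host view):

CAShapeLayer *circle = [CAShapeLayer layer];
circle.path = [UIBezierPath bezierPathWithOvalInRect:CGRectMake(0, 0, 40, 40)].CGPath;
circle.fillColor = [UIColor redColor].CGColor;  // no drawRect: involved
[containerView.layer addSublayer:circle];       // containerView is assumed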
Question 2:
Is CAShapeLayer faster? Is UIKit more optimised for gesture recognition or user interaction in general?
As quoted above, layers do not handle events, draw content, participate in the responder chain, or do many other things. So no layer can recognize user interaction, no matter whether it's a CALayer or a CAShapeLayer.
Question 3:
At this point, I'm not sure whether to use UIViews or CAShapeLayers
As you specified in your question, you want the red circles to receive user interaction, and since we now know that CAShapeLayer/CALayer will not respond to user interaction, it's pretty clear that you have to use UIView rather than a bare layer.
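If you do go the UIView route and rotate the circles with a Core Animation transform, one common trick (a sketch, not the only way) is to hit-test against the presentation layer in the container view, since the model layer's frame does not reflect the in-flight animation:

// circleViews is a hypothetical array of the rotating circle views.
- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event {
    for (UIView *circle in self.circleViews) {
        // The presentation layer reflects the animated, on-screen
        // position; fall back to the model layer if no animation runs.
        CALayer *presented = circle.layer.presentationLayer ?: circle.layer;
        CGPoint p = [self.layer convertPoint:point toLayer:presented];
        if ([presented containsPoint:p]) {
            return circle;
        }
    }
    return [super hitTest:point withEvent:event];
}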
So I just discovered QuartzCore, and I am now considering replacing a UIImageView containing a bitmap with a UIView subclass that does things like
CGContextFillEllipseInRect(contextRef, rect);
They would look exactly the same: the bitmap is just a little filled circle.
The very view I'm replacing is playing an important role in my app: it's being dragged around a lot.
Question: performance-wise, should I bother? I can imagine that the vector circle is being recalculated all the time, while the bitmap is just buffered. Or that the vector is easier to digest than a bitmap.
Can anyone advise?
thanks ahead
All UIViews on iOS are layer-backed, so drawRect: will only be called once, and you will draw into the CALayer backing the view. You can make it draw again by calling setNeedsDisplay. When you drag the view around, the view renders from the layer backing. A UIImageView is also layer-backed, so the end result is two layer-backed views. The one place where you may see a difference is in low-memory situations when the view is not visible (though I am not sure).
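For reference, a minimal version of the view the question describes (the class name is made up):

@interface DotView : UIView
@end

@implementation DotView

- (void)drawRect:(CGRect)rect {
    // Runs once up front (and again only after setNeedsDisplay);
    // the result is cached in the view's backing CALayer, so
    // dragging the view around just re-composites that bitmap.
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(ctx, [UIColor redColor].CGColor);
    CGContextFillEllipseInRect(ctx, self.bounds);
}

@end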
I always used CoreGraphics and CoreAnimation, and I understand how each of them works on its own, but not those edge cases where one has to talk to the other. I also understand that UIView is a nice wrapper for CALayer, where CALayer does all the heavy lifting of rendering and the UIView adds the touch-based responsiveness.
But all the questions I have seen thus far attack the problem from one side or the other, not the interplay between them, especially between CoreGraphics and CALayer.
Anyway, my question is ...
How does CoreGraphics relate to CALayer?
My understanding is that a CALayer wraps the CoreGraphics methods to draw itself, but does it once and can live with that snapshot of itself until invalidated. But how do these drawing methods interplay with the sublayers of a layer? Are they exclusive?
For example, what happens when I have a UIView that has subviews, and I override the drawRect: method? How does that affect the drawing of its sublayers?
Is it even a good idea to intermix the two inside the same function?
Also, I'm asking only about iOS; I understand that Mac is a different beast (and also has those fancy CIFilters, bastards!).
Prior Research
Here are some related questions I've researched beforehand:
confusion regarding quartz2d, core graphics, core animation, core images. This question asks about the differences between them, and the chosen answer indeed delivers, but it answers for each individual library as if the others didn't exist.
To Drawrect or not to Drawrect. Another great question, but it addresses only the subject of drawing with CoreGraphics vs. handing the problem to UIKit; still, the chosen answer delivers part of the puzzle.
Animating Pie Slices with Custom CALayer. Must be one of the most valuable tutorials I've seen on this subject; it's the only one that has guided me through drawing a CALayer.
What is different between CoreGraphics and CoreAnimation. I'm absolutely disappointed at how quickly the asker accepted the answer; I feel there's a whole lot more going on here.
Various WWDC videos, but I haven't seen one that explains the overall picture in detail. If anyone replies with a WWDC video that does, I'll consider that a valid answer.
I'll try to answer your question at a conceptual, 20,000ft level. I will try to disclaim my points where I'm over-generalizing, but I'll attempt to hit the common case.
Perhaps the easiest way to think about it is this: in the GPU's memory you have textures which, for the purposes of this discussion, are bitmap images. A CALayer might have a texture associated with it, or it might not. These cases correspond to a layer with a -drawRect: method and a layer that exists solely to contain sublayers, respectively. Conceptually, each layer that has a texture associated with it has a different texture all its own (there are details and optimizations that make this not strictly/universally true, but in the general, abstract case, it can help to think of it this way).

With that in mind, a superlayer's -drawRect: method has no effect on any of its sublayers' -drawRect: methods, and (again, in the general case) a sublayer's -drawRect: method has no effect on its superlayer's -drawRect: method. Each draws into its own texture (also called a "backing store"), and then, based on the layer tree and the associated geometries and transforms, the GPU composites all these textures together into what you see on the screen.

When one of the layers is invalidated, directly or indirectly (via -setNeedsDisplayInRect:), then when CA goes to display the next frame on screen, the invalid layers will be redrawn by virtue of having their -drawRect: methods called. That updates the associated texture, and once all the invalid layers' textures are updated, the GPU composites them, generating the final bitmap that you see on-screen.
So to answer your first question: In the general case, no, there is no interplay between the -drawRect: methods of distinct CALayers.
As to your second question: For the purposes of this discussion you can think of UIViews as being the same as CALayers. The interrelationship with respect to drawing and textures is largely unchanged from that of non-UIView CALayers.
To your third question: Is it a good idea to intermix UIViews and CALayers? Every UIView has a CALayer backing it (all views in UIKit are layer-backed, which is not normally the case on OS X). So at some level they're "intermixed" whether you want them to be or not. It is perfectly fine to add CALayer sublayers to the layer that backs a UIView, although that layer will not have all the added functionality that UIView brings to the party. If the layer's purpose is just to generate an image for display, then that's fine. If you want the sub-layer/view to be a first-class participant in touch handling, or to be positioned/sized using Auto Layout, then it will need to be a UIView. It's probably best to think of a UIView as a CALayer with a bunch of extra functionality added to it. If you need that functionality, use a UIView. If you don't, use a CALayer.
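To make that last point concrete, a small sketch (hostView and glowImage are assumed to exist):

// The decorative part can be a bare CALayer: it only needs to
// display, not handle touches or take part in Auto Layout.
CALayer *glow = [CALayer layer];
glow.frame = hostView.bounds;                    // hostView is assumed
glow.contents = (__bridge id)glowImage.CGImage;  // glowImage is assumed
[hostView.layer addSublayer:glow];

// The tappable part needs to be a UIView (here a UIButton) so it
// participates in touch handling and the responder chain.
UIButton *button = [UIButton buttonWithType:UIButtonTypeSystem];
button.frame = CGRectMake(0, 0, 44, 44);
[hostView addSubview:button];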
In sum: CoreGraphics is an API for drawing 2D graphics. One common use of CG (but not the only one) is to draw bitmap images. CoreAnimation is an API providing an organized framework for manipulating bitmaps on-screen. You could meaningfully use CoreAnimation without ever calling a CoreGraphics drawing primitive, for example, if all your textures were backed by images that were compiled into your application at build time.
Hopefully this is helpful. Leave comments if you need clarification, and I'll edit to oblige.
I'm using CALayers to display a couple images. I do so by creating the layer and setting its contents property to a CGImageRef. I do not set a delegate on my CALayer.
The layer displays fine, but when another layer moves on top of the first layer, the lower layer's contents are "erased." I'm assuming the CALayer is calling the default delegate and drawing nothing. How do I make my CALayer persist its contents?
Thanks.
The lower layer should not be erased by adding a new layer on top. My guess is that the lower layer is being covered (and thus obscured) by the layer you've added. Try making the new layer smaller than the original layer as a test.
Note that if you call certain methods like setNeedsDisplay on a layer, it WILL cause the layer to discard its contents.
Do you have any code that might be forcing the layer to redraw? (Like calling setNeedsDisplay, as mentioned above.) That would cause the symptom you are seeing.
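For completeness, a sketch of the setup being described (image, parentLayer, and otherLayer are assumed); with no delegate and no drawing override, the contents you assign should persist unless something triggers a redisplay:

CALayer *photoLayer = [CALayer layer];
photoLayer.frame = CGRectMake(0, 0, 200, 200);
photoLayer.contents = (__bridge id)image.CGImage;  // image is assumed
[parentLayer addSublayer:photoLayer];              // parentLayer is assumed

// Fine: a sibling on top merely obscures photoLayer.
[parentLayer addSublayer:otherLayer];              // otherLayer is assumed

// Not fine: with no delegate and no drawInContext: override, this
// asks the layer to regenerate its contents, wiping what you set.
// [photoLayer setNeedsDisplay];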
I have this app in which one can draw basic shapes like rectangles, ellipses, circles, text, etc.
I also allow free-form drawing, stored as a set of points, on the canvas.
Also, a user can resize and move these objects around by operating on the selection handles that appear when an object is selected.
In addition the user should be able to zoom and pan the canvas.
I need some inputs on how to efficiently implement this drawing functionality.
I have the following things in mind:
Use UIView's invalidation mechanism (setNeedsDisplayInRect:) and drawRect:
Have a UIView for the main canvas and, for each inserted object, invalidate the corresponding rect and redraw all the objects that intersect that rect in the UIView's drawRect: method.
Have a UIView and use CALayer?
Everyone keeps mentioning CALayer. I don't have much of an idea about it, so before I venture into this I wanted quick input on whether this route is worth taking.
For example: https://developer.apple.com/library/ios/#qa/qa1708/_index.html
Have a UIImageView as the canvas, and when drawing each object, draw it into an offscreen CGContext: create a new context using UIGraphicsBeginImageContext, draw the shape, extract the image out of this context, and use that as the source of the UIImageView's image property. But here, how do I invalidate only a part of the UIImageView so that only that area gets refreshed?
Could you please suggest what is the best approach?
Is there any other efficient way to get this done?
Thanks.
Using a UIImage is more efficient for rendering multiple objects, but using a CALayer is more efficient when moving and modifying a single object, because you don't have to redraw the other objects. So I think the best approach is to use a UIImage for general drawing and a CALayer for the shape that is being modified (sketched after the list). In other words:
- use a CALayer to draw the shape being added or modified, but don't draw it into the UIImage
- use a UIImage to draw the other shapes
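A rough sketch of that division of labour (canvasImageView, canvasSize, finishedShapePath, and activePath are all hypothetical):

// Committed shapes live flattened in a UIImage; the shape being
// edited lives in its own CAShapeLayer on top.

// 1. When a shape is committed, flatten it into the canvas image.
UIGraphicsBeginImageContextWithOptions(canvasSize, NO, 0);
[canvasImageView.image drawAtPoint:CGPointZero];   // existing content
[[UIColor blackColor] setFill];
[finishedShapePath fill];                          // the shape just committed
canvasImageView.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// 2. The shape currently being moved/resized is a layer, so
// updating it never forces the flattened image to be redrawn.
CAShapeLayer *activeShape = [CAShapeLayer layer];
activeShape.path = activePath.CGPath;
activeShape.fillColor = [UIColor blueColor].CGColor;
[canvasImageView.layer addSublayer:activeShape];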
OpenGL is still the most efficient solution of all, but don't bother with it unless you have a great many objects to draw.
If you want to draw polygons, you'll have to use the Quartz framework and base your drawing methods on CALayer. It doesn't really matter which view you put your CALayers in, UIImageView or UIView; I'd say UIView, since you won't be needing UIImageView's properties or methods for drawing.