What are the differences between a UIView and a CALayer? - ios

Both have most of the same attributes, both support different kinds of animations, and both represent different data.
What are the differences between a UIView and a CALayer?

On iOS, every UIView is backed by a Core Animation CALayer, so you are dealing with CALayers when using a UIView, even though you may not realize it. Unlike NSViews on the Mac, which evolved before Core Animation existed, UIViews are intended to be lightweight wrappers around these CALayers.
As I describe in the similar question "When to use CALayer on the Mac/iPhone?", working directly with CALayers doesn't give you significant performance advantages over UIViews. One of the reasons you might want to build a user interface element with CALayers instead of UIViews is that it can be very easily ported to the Mac. UIViews are very different from NSViews, but CALayers are almost identical on the two platforms. This is why the Core Plot framework lays out its graphs using CALayers instead of other UI elements.
One thing UIViews provide over CALayers is built-in support for user interaction. They handle hit-testing on touches and other related actions that you would need to build yourself if managing a hierarchy of CALayers. It's not that hard to implement this yourself, but it is extra code you'd need to write when building a CALayer-only interface.
You will often need to access the underlying layers for a UIView when performing more complex animations than the base UIView class allows. UIView's animation capabilities have grown as the iOS SDK has matured, but there are still a few things that are best done by interacting with the underlying CALayer.
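For example, moving a view along a curved path is one of those cases where you have to reach down to the layer. A minimal sketch (the view and the path values are purely illustrative):

    import UIKit

    let dot = UIView(frame: CGRect(x: 0, y: 0, width: 20, height: 20))

    // A curved path that plain UIView animations cannot follow directly.
    let path = UIBezierPath()
    path.move(to: CGPoint(x: 20, y: 200))
    path.addQuadCurve(to: CGPoint(x: 300, y: 200), controlPoint: CGPoint(x: 160, y: 20))

    // Animate the backing layer's position along that path with Core Animation.
    let follow = CAKeyframeAnimation(keyPath: "position")
    follow.path = path.cgPath
    follow.duration = 1.5
    dot.layer.add(follow, forKey: "followPath")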

From the Ray Wenderlich blog (Tutorial)
CALayers are simply classes representing a rectangle on the screen
with visual content. “But wait a darn minute,” you may say, “that’s
what UIViews are for!” That’s true, but there’s a trick to that:
every UIView contains a root layer that it draws to!
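To see that relationship in code, here is a small illustrative sketch (GradientView is a made-up name): the layer a view draws to is exposed as its layer property, and a subclass can even choose which CALayer subclass backs it:

    import UIKit

    class GradientView: UIView {
        // Back this view with a CAGradientLayer instead of a plain CALayer.
        override class var layerClass: AnyClass { CAGradientLayer.self }
    }

    let view = GradientView()
    print(type(of: view.layer))   // CAGradientLayer
    view.layer.borderWidth = 1    // visual state like this lives on the layer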

Simply speaking, UIView inherits from UIResponder and handles events from users; it contains a CALayer, which inherits from NSObject and focuses mainly on rendering, animation, etc.

UIView is a container for CALayers and is part of UIKit.
CALayer is where the contents are drawn, using Core Graphics.
If you are building custom-control-like features, it is often better to use a single view containing several layers for accurate native rendering, since CALayers are more lightweight than UIViews.
To create a common skeleton for Mac and iOS, design your app around CALayers, since they are available on both platforms.
UIView provides features like touch handling through UIResponder methods such as -(void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event and -(void)touchesBegan:withEvent:, along with other UIKit features.
Working with CALayers requires Core Graphics knowledge. For any simple view rendering, UIView is enough.
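A short sketch tying these points together (RingControl is a hypothetical name, assuming a fixed frame at init time): the UIView owns a CAShapeLayer that does the drawing, while the view itself receives the touches through its UIResponder overrides:

    import UIKit

    class RingControl: UIView {
        private let ring = CAShapeLayer()

        override init(frame: CGRect) {
            super.init(frame: frame)
            // The layer does the rendering, driven by Core Graphics types.
            ring.path = UIBezierPath(ovalIn: bounds.insetBy(dx: 4, dy: 4)).cgPath
            ring.fillColor = nil
            ring.strokeColor = UIColor.blue.cgColor
            ring.lineWidth = 4
            layer.addSublayer(ring)
        }

        required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

        // The view, as a UIResponder, handles the touch events.
        override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
            ring.strokeColor = UIColor.red.cgColor
        }

        override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
            ring.strokeColor = UIColor.blue.cgColor
        }
    }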

UIView: Views have more complex hierarchy layouts. They can receive user interactions like taps, pinches, clicks, and more. Working with UIViews happens on the main thread, which means it uses CPU power.
CALayer: Layers, on the other hand, have a simpler hierarchy. That means they are faster to resolve and quicker to draw on the screen. There is no responder chain overhead, unlike with views. Layers are drawn directly on the GPU; this happens on a separate thread without burdening the CPU.
For more details: https://medium.com/#fassko/uiview-vs-calayer-b55d932ff1f5

The big difference is that UIView is designed for Cocoa Touch on mobile devices. It adds event handling that CALayer does not provide.

Related

In iOS, what's the difference between UIView Animations and Core Animations?

For instance, I believe CGAffineTransform is Core Animation. I know how to rotate, scale, and move an object in Core Animation, but is there a way to do it with UIView animations?
Apple describes in its docs:
"In iOS, animations are used extensively to reposition views, change
their size, remove them from view hierarchies, and hide them."
View Programming Guide - Apple
The article continues with a fairly strong emphasis on using Core Animation rather than UIView animations for more detailed animations.
"In places where you want to perform more sophisticated animations, or
animations not supported by the UIView class, you can use Core
Animation and the view’s underlying layer to create the animation.
Because view and layer objects are intricately linked together,
changes to a view’s layer affect the view itself."
Same Doc - View Programming Guide - Apple
So basically put, you manipulate the view's layer to change the view. I don't think it wise to manipulate the view directly when Core Animation exists for this purpose. So my own answer is no, you can't (and probably shouldn't) do to the UIView with UIView animations what you can do to the layer with Core Animation.
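As a concrete illustration (the image asset name is hypothetical), a continuous 360° spin is awkward to express as a UIView animation, because a CGAffineTransform rotation of 2π is a no-op, yet it is straightforward when you animate the view's underlying layer with Core Animation:

    import UIKit

    let wheel = UIImageView(image: UIImage(named: "wheel"))  // "wheel" is a made-up asset name

    // Rotate the backing layer a full turn, forever.
    let spin = CABasicAnimation(keyPath: "transform.rotation.z")
    spin.fromValue = 0
    spin.toValue = 2 * CGFloat.pi
    spin.duration = 1.0
    spin.repeatCount = .infinity
    wheel.layer.add(spin, forKey: "spin")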
More info:
Core Animation Programming Guide by Apple
I hope this helps answer your question, and that the resources above give you further insight.

Swift: Drawing a UIBezierPath based on touch in a UIView

I've been looking at this thread as I'm trying to implement the same thing. However, I see that the Canvas class is implemented as a subclass of UIImageView. I'm trying to do the same thing except in a UIView. How will using a UIView rather than UIImageView affect the implementation of this solution? I see self.image used a couple times, but I don't know how I'd change that since I don't think that is available in a generic UIView.
Yes, you can implement this as a UIView subclass. Your model should hold the locations of the touch events (or the paths constructed from those locations), and then the drawRect of the view can render these paths. Or you can create CAShapeLayer objects associated with those paths. Both approaches work fine.
Note, there is some merit to the approach of making snapshots (saved as UIImage objects) that you either show in a UIImageView or manually draw in drawRect of your UIView subclass. As your drawings get more and more complicated, you'll start to suffer performance issues if your drawRect has to redraw all of the path segments upon every touch (it can become thousands of points surprisingly quickly, because there are a lot of touches associated with a single screen gesture).
IMHO, the other answer you reference goes too far by making a new snapshot upon every touchesMoved. A full-resolution image for a Retina iPad or iPhone 6 Plus is a large snapshot to create upon every touch event. I personally adopt a hybrid approach: my drawRect or CAShapeLayer renders the current path associated with the current gesture (the collection of touchesMoved events between touchesBegan and touchesEnded), and only when the gesture finishes does it create a new snapshot.
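Here is a rough sketch of that hybrid approach, with illustrative names and assuming single-touch drawing: draw(_:) only renders the saved snapshot plus the in-progress path, and the snapshot is regenerated once per gesture rather than once per touchesMoved:

    import UIKit

    class CanvasView: UIView {
        private var snapshot: UIImage?          // everything drawn in earlier gestures
        private var currentPath: UIBezierPath?  // the gesture in progress

        override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
            guard let point = touches.first?.location(in: self) else { return }
            currentPath = UIBezierPath()
            currentPath?.lineWidth = 3
            currentPath?.move(to: point)
        }

        override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
            guard let point = touches.first?.location(in: self) else { return }
            currentPath?.addLine(to: point)
            setNeedsDisplay()                   // redraw snapshot + current path only
        }

        override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
            // Flatten the old snapshot and the finished path into one new image.
            snapshot = UIGraphicsImageRenderer(bounds: bounds).image { _ in
                snapshot?.draw(in: bounds)
                UIColor.black.setStroke()
                currentPath?.stroke()
            }
            currentPath = nil
            setNeedsDisplay()
        }

        override func draw(_ rect: CGRect) {
            snapshot?.draw(in: bounds)
            UIColor.black.setStroke()
            currentPath?.stroke()
        }
    }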
In the answer to that question, self.image is drawn into the drawing context first, then drawing is applied on top, then finally the image is updated to be the old image with new content drawn on top.
Since you just want to add a UIBezierPath, I'd just create a CAShapeLayer into which you place your bezier path, and place it on top of your view's backing layer (self.view.layer). There's no need to do anything with drawRect.
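A minimal sketch of that layer-based alternative (names are illustrative, single-touch assumed): touch locations extend a UIBezierPath, and a CAShapeLayer added on top of the view's backing layer renders it, with no drawRect at all:

    import UIKit

    class PathView: UIView {
        private let shapeLayer = CAShapeLayer()
        private let path = UIBezierPath()

        override init(frame: CGRect) {
            super.init(frame: frame)
            shapeLayer.strokeColor = UIColor.black.cgColor
            shapeLayer.fillColor = nil
            shapeLayer.lineWidth = 3
            layer.addSublayer(shapeLayer)   // sits on top of the view's backing layer
        }

        required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

        override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
            guard let point = touches.first?.location(in: self) else { return }
            path.move(to: point)
        }

        override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
            guard let point = touches.first?.location(in: self) else { return }
            path.addLine(to: point)
            shapeLayer.path = path.cgPath   // the shape layer redraws itself
        }
    }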

CALayer & drawRect

In View and Window Architecture is stated, quote:
Views work in conjunction with Core Animation layers to handle the rendering and animating of a view’s content. Every view in UIKit is backed by a layer object (usually an instance of the CALayer class), which manages the backing store for the view and handles view-related animations.
Further on, in "The View Drawing Cycle" section, it states:
The UIView class uses an on-demand drawing model for presenting content. When a view first appears on the screen, the system asks it to draw its content. The system captures a snapshot of this content and uses that snapshot as the view’s visual representation.
Does that mean that the content drawn in a view during its drawRect method call is captured in a snapshot and saved in its backing Core Animation layer?
If not, where do this content snapshot "reside"?
If not, does that mean that CALayer is used to render "static" content, content that doesn't change very often, and drawRect is used to render content that changes often, for example in a game app?
p.s.
The questions are not related to any particular code implementation.
I just want to understand the ios view-layer architecture.
Does that mean that the content drawn in a view during its drawRect method call is captured in a snapshot and saved in its backing Core Animation layer?
Yes. Everything uses layers under the hood. UIView's -drawRect will capture what you draw and (as far as I know) set the content on a sublayer of the view. It might even do this on the main layer object. That's where the 'snapshot' is saved.
If not, does that mean that CALayer is used to render "static" content,
content that doesn't change very often, and drawRect is used to render
content that changes often, for example in a game app?
How often the content changes doesn't really affect the choice. There is not much difference in using drawRect vs. manually creating CALayers. It depends on how you want to organize the sub-elements in your views, or if you want to create low level reusable layer objects without the details of UIView (e.g. CATextLayer). If you have various different sub-elements to draw then you may split them into different layers with their own custom drawing code. If you just have one simple piece of content to draw, you can do that all in a single drawRect implementation.
Having said this, you do need to be aware that each layer will end up being a separate GPU "element", so there can be performance benefits to reducing the number of layers you have, or using the shouldRasterize+rasterizationScale properties of a parent layer. This will take a snapshot of an entire layer hierarchy and create a single rasterized image to be rendered instead of n separate ones.
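For illustration, enabling that rasterization is just two properties on the parent layer; the only subtlety is matching rasterizationScale to the screen scale, otherwise the cached bitmap looks blurry on Retina displays:

    import UIKit

    let card = UIView()   // imagine this view has many sublayers/subviews

    // Flatten the whole layer tree into a single cached bitmap.
    card.layer.shouldRasterize = true
    card.layer.rasterizationScale = UIScreen.main.scale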
Does that mean that the content drawn in a view during its drawRect method call is captured in a snapshot and saved in its backing Core Animation layer?
Two words: "implementation details"
If not, does that mean that CALayer is used to render "static" content, content that doesn't change very often, and drawRect is used to render content that changes often, for example in a game app?
Not exactly. Layers are very good at animating content (as hinted by the framework name, Core Animation). drawRect is good for advanced drawing but can be too slow to redraw every frame (obviously depending on what you are drawing).
I didn't see any mention of the Core Animation Programming Guide in your question. It is a good place to learn more about the layer part of views.
Every UIView has an underlying CALayer (which is what actually gets rendered on the screen).
A CALayer is just a bitmap (it holds pixels). When you call setNeedsDisplay on your view, the CALayer gets marked for redrawing. At the end of the run loop, after events are processed, a CGContextRef gets created and your drawRect method gets called. You then draw into the created context, which is copied into the bitmap and ultimately composited with other layers to be displayed on the screen.
So yes, the "snapshot" is stored in the CALayer. This is just an optimization so that the layers don't have to redraw themselves unless they're flagged to be redrawn (with setNeedsDisplay).
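A small sketch of that cycle (DialView is a made-up example): a property change only flags the layer as dirty via setNeedsDisplay, and the system later calls draw(_:) once, with a context already targeting the layer's bitmap:

    import UIKit

    class DialView: UIView {
        var value: CGFloat = 0 {
            didSet { setNeedsDisplay() }   // just mark the layer dirty; no drawing here
        }

        override func draw(_ rect: CGRect) {
            // Called by the system with a CGContext set up for the layer's backing store.
            let path = UIBezierPath(arcCenter: CGPoint(x: bounds.midX, y: bounds.midY),
                                    radius: bounds.width / 2 - 4,
                                    startAngle: -.pi / 2,
                                    endAngle: -.pi / 2 + value * 2 * .pi,
                                    clockwise: true)
            path.lineWidth = 4
            UIColor.blue.setStroke()
            path.stroke()
        }
    }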

A CATiledLayer-enabled UIView with drawRect-defined subviews crashes due to abnormal memory usage

We have an out-of-memory crash on the iPad 3 which we traced to the following scenario:
A UIView which uses a CATiledLayer and draws content (say, a PDF) has subviews with their own drawRect methods (which, for example, highlight search results). This makes Core Animation consume tons of memory (100+ MBs in the VM Tracker instrument), and can easily lead to a crash. While this issue exists on all devices, only on the iPad's Retina display does the cache size grow too large.
This can be reproduced with Apple's PhotoScroller example: subclass UIView, uncomment drawRect, and add an instance to the TilingView. The app will crash on iPad 3. Commenting out drawRect resolves the memory issue.
Now, we can drop the subviews and do the drawing in the top-most UIView. However, working with subviews is convenient (since we're representing different, independent layers on top of the PDF). Two questions:
What is a good work-around? Preferably one that allows us to continue working with multiple views.
Why is this happening, exactly? I guess the cache mechanism is working overtime, but it would be great to understand the technical details behind it.
Thanks!
EDIT:
I want to elaborate on Kai's answer. The problem was indeed unrelated to CATiledLayer; it was caused by the use of UIViews that implement drawRect.
In the case of PhotoScroller, I created a UIView the size of the image (2000x2000 and more), which creates a huge backing store if drawRect is present.
In the case of our app, the overlay views are full-screen (~11 MB each on iPad 3) and we have about 5 of them per page. We keep up to three pages in memory while scrolling, and that means more than 150 MB of extra memory. Not fun.
So the solution is to optimize drawRect away, or use less such views. Back to the drawing board it is :-)
To 2.: Whenever you implement drawRect in a UIView subclass and have lots of instances of that class, your memory usage will grow dramatically. The reason is that a lot of optimization tricks in UIKit's view/subview handling (e.g. when zooming or scrolling) don't work with such objects, because the framework doesn't know what you're doing or what you're drawing.
So - independent of retina or not - avoid implementing drawRect, especially when having many objects or many layers of subviews.
To 1.: I didn't exactly get what you are trying to do, but I implemented a PDF viewer that is also able to show additional content on top of the PDF. I did it all with normal UIView hierarchies, images, etc., and I fear that's the only reliable workaround you'll get.
My experience:
Never add subviews to a UIView that's backed by a CATiledLayer
Never add sublayers to a CATiledLayer
Unfortunately, that seems to be the only practical answer - Apple's implementation goes horribly wrong in many different ways (not just performance - the rendering itself starts to exhibit visual artifact bugs, some of Apple's rendering code goes weird, etc).
In practice, I always do this:
UIView : view
+-- UIView w/ CATiledLayer : tiledLayerView
+-- UIView : subViewsView
...and safely add views and subviews to "subViewsView". Works fine.
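In code, that hierarchy might look roughly like this (class names are illustrative, layout omitted); the tiled content and the overlay views are kept as siblings, so nothing is ever added inside the CATiledLayer-backed view itself:

    import UIKit

    class TiledContentView: UIView {
        // Back this view with a CATiledLayer; draw(_:) would render individual tiles.
        override class var layerClass: AnyClass { CATiledLayer.self }
    }

    class PageView: UIView {
        let tiledLayerView = TiledContentView()
        let subViewsView = UIView()      // highlights, annotations, etc. go in here

        override init(frame: CGRect) {
            super.init(frame: frame)
            addSubview(tiledLayerView)   // sibling 1: tiled PDF/page content
            addSubview(subViewsView)     // sibling 2: container for overlay views
        }

        required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }
    }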

Resources