In Quartz2D, can I draw any shapes without using drawRect:(CGRect)rect method?
Yes. If you want to draw to a bitmap (as one example) and produce a CGImage, you can certainly create a CGBitmapContext and then use Core Graphics as usual with that as your context.
If you want to draw to the display, do your work from within drawRect:, using the supplied graphics context.
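For example, here is a minimal sketch of that offscreen approach, drawing into a CGBitmapContext and extracting a CGImage; the size, color and shape are arbitrary choices for illustration:

#import <CoreGraphics/CoreGraphics.h>

// Draw a filled ellipse into an offscreen bitmap and return it as a CGImage.
// The caller owns the returned image (Create rule) and must CGImageRelease() it.
static CGImageRef CreateCircleImage(void) {
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, 200, 200, 8, 0, colorSpace,
                                             (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);

    CGContextSetRGBFillColor(ctx, 1.0, 0.0, 0.0, 1.0);             // arbitrary red fill
    CGContextFillEllipseInRect(ctx, CGRectMake(20, 20, 160, 160)); // arbitrary shape

    CGImageRef image = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    return image;
}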
Related
In Apple's Core Animation documentation, it says there are two rendering paths involved. From what I know, CALayer caches the bitmap data of a UIView's content. There are two ways of providing the content of a CALayer: one is implementing drawRect: or another CALayer drawing method, the other is setting a bitmap as the contents property of the CALayer.
Here I'm wondering: what happens behind the scenes if neither of the above two things is done? I believe there is a private drawing path UIView uses in this situation. What does this private drawing path consist of? How does it work?
The crux of CALayer is that it's GPU-backed. In modern graphics and animation, you want to minimize the number of times your bitmap data crosses between the CPU and the GPU. These operations are costly.
CALayer always uses a private drawing path, whether you use setContents: or drawRect:. In fact, the underlying plumbing of CALayer handles both of these in essentially the same way. When you call setContents:, CALayer takes the image you gave it and uploads it to the GPU via OpenGL (nowadays probably Metal) calls. When you implement drawRect:, the CALayer gives you a context into which you can draw, and then it does the same thing with the resulting bitmap data.
If you don't set contents or implement drawRect, you can still do things like set the layer's background color, border, corner radius, etc. This is being rendered by CALayer's GPU-based private drawing path.
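For completeness, a minimal sketch of the contents route; the image name is just a placeholder and QuartzCore is assumed to be imported:

// Hand the layer a pre-rendered bitmap; Core Animation uploads it to the GPU.
// Requires #import <QuartzCore/QuartzCore.h> for the kCAGravity constants.
UIImage *image = [UIImage imageNamed:@"background"];   // placeholder asset name
self.view.layer.contents = (__bridge id)image.CGImage;
self.view.layer.contentsGravity = kCAGravityResizeAspectFill;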
CALayer does not draw any of its content using drawRect:. Only view-based drawing techniques like Core Graphics use drawRect:, and the problem with that way of doing things is that it happens on the CPU, on the main thread, which makes it an expensive process. So instead, Core Animation manipulates a cached bitmap of your app's content directly in the graphics hardware, which is far more optimised. You update or provide the initial contents of a Core Animation layer through one of its delegate methods (displayLayer: or drawLayer:inContext:) or through the contents property, as you have mentioned. All layer objects in Core Animation are derived from CALayer.
CALayer is simply a layer object belonging to Core Animation, which in itself is simply a support system for UIView and its subclasses. Core Animation is not a drawing technology in the sense that it cannot create primitive shapes the way Quartz, OpenGL ES and Metal can. Instead, Core Animation lets you manipulate an already existing view, and it does this by caching the bitmap data of a UIView and sending it off to the graphics hardware to be manipulated. We say Core Animation is a support system, and all its work relies on layer objects, of which CALayer is the main type. It can only do this, of course, if a view has a layer, and a view does not strictly need a layer to exist. However, in iOS all views come with a layer attached by default; we say views in iOS are "layer backed". On macOS, you need to explicitly add Core Animation (layer) support to your views.
The actual drawing of the contents of a CALayer happens in a few ways. The first is setting the contents property of the CALayer, as you have mentioned, by giving it a CGImageRef. The second is implementing or overriding (in a subclass) the CALayer delegate method displayLayer:, which creates a bitmap and sets it as the layer's contents property. The third is implementing or overriding (in a subclass) the CALayer delegate method drawLayer:inContext:, in which case CALayer creates a bitmap, creates a graphics context to draw into that bitmap, and then calls your delegate method to fill it.
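As a rough sketch of that third, delegate-based route (assuming some object, such as a view controller, has been made the layer's delegate):

// Provide the layer's content via the CALayerDelegate drawing callback.
// CALayer creates the backing bitmap and the context; we just fill it.
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx {
    CGContextSetRGBFillColor(ctx, 0.0, 0.5, 1.0, 1.0);   // arbitrary color
    CGContextFillRect(ctx, layer.bounds);
}

// Wiring it up elsewhere (myLayer is a hypothetical plain CALayer):
// myLayer.delegate = self;
// [myLayer setNeedsDisplay];   // triggers drawLayer:inContext: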
In iOS we do not usually worry about how the contents of our views' layers are rendered. Since all views are layer backed, iOS manages rendering them in the most efficient way possible using the methods I've just described. This is an optimisation that saves you time and makes layers very easy to use. You'll usually only worry about overriding these delegate methods or subclassing layers if you are developing for macOS, where views are not always layer backed. You might also care about this if you decide not to use the default CALayer in iOS, for example if you change a view's layer class from CALayer to CAMetalLayer, or, in a small number of cases, if you are after a performance optimisation.
There are three ways to provide content to a layer.
- Assign an image object directly to the layer object’s contents property.
- Assign a delegate object to the layer and let the delegate draw the layer’s content.
- Define a layer subclass and override one of its drawing methods to provide the layer contents yourself.
If we don't implement drawRect:, set the contents property, or subclass the layer, the layer-backed view's layer falls back on the default, which is the second way of providing content: the layer captures the content of the view and renders it.
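For the third option in particular (a layer subclass), a minimal hedged sketch; the class name and fill are arbitrary:

#import <QuartzCore/QuartzCore.h>

// A CALayer subclass that provides its own content by overriding drawInContext:.
@interface StripeLayer : CALayer
@end

@implementation StripeLayer
- (void)drawInContext:(CGContextRef)ctx {
    CGContextSetRGBFillColor(ctx, 0.9, 0.9, 0.2, 1.0);  // arbitrary color
    CGContextFillRect(ctx, self.bounds);
}
@end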
I have rendered some basic shapes using Quartz 2D. I came across two methods of drawing a line. The first is to get a context using UIGraphicsGetCurrentContext() and then draw the line using CGContextAddLineToPoint.
The other way is to define a UIBezierPath object and draw using its addLineToPoint: method:
[bezierPath addLineToPoint:CGPointMake(10, 10)];
And then I have to add the bezierPath to the context using CGContextAddPath.
So I wanted to know the difference between these two approaches, as both are used just to draw a line. Is there a performance difference between the two? Also, which method is better under which circumstances?
UIBezierPath is an object from UIKit that lets you build a path mixing particular curves (with control points) as well as straight lines.
Because it is a UIKit object you don't use CGContextAddLineToPoint with it; in the end you add the whole path to the context with CGContextAddPath.
With CGContextAddLineToPoint, by contrast, you draw directly on the context.
So my suggestion is to use this latter approach unless you have particular reasons (like complex curves with several control points); otherwise, use UIBezierPath.
The CGContextAdd... functions are part of the lower-level C Quartz 2D API that bridges between CGContext and CGPathRef instances. You use a CGContext when you want to draw something, while CGPathRef is the structure that manages geometric shapes.
On the other hand, UIBezierPath is an Objective-C UIKit class that wraps CGPath and also provides some bridging to CGContext functions, for example through its setFill, setStroke, fill and stroke methods.
It is totally safe to mix the two approaches, though it's true that CGContext gives you more tools. Performance and memory are better with direct CGContext calls, but the difference is absolutely negligible in 99% of cases.
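To make the comparison concrete, here is a sketch of both approaches drawing a short line inside drawRect:; the coordinates, colors and line widths are arbitrary:

- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();

    // 1. Direct Core Graphics calls on the context.
    CGContextSetLineWidth(ctx, 2.0);
    CGContextSetStrokeColorWithColor(ctx, [UIColor redColor].CGColor);
    CGContextMoveToPoint(ctx, 0, 0);
    CGContextAddLineToPoint(ctx, 10, 10);
    CGContextStrokePath(ctx);

    // 2. The same kind of line built as a UIBezierPath, then stroked.
    UIBezierPath *bezierPath = [UIBezierPath bezierPath];
    [bezierPath moveToPoint:CGPointMake(20, 0)];
    [bezierPath addLineToPoint:CGPointMake(30, 10)];
    bezierPath.lineWidth = 2.0;
    [[UIColor blueColor] setStroke];
    [bezierPath stroke];   // or: CGContextAddPath(ctx, bezierPath.CGPath); CGContextStrokePath(ctx);
}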
I want to pre-render some graphics into CGLayer for fast drawing in future.
I found that CGLayerCreateWithContext requires a CGContext parameter. That can easily be obtained in the drawRect: method, but I need to create a CGLayer outside of drawRect:. Where should I get a CGContext?
Should I simply create temporary CGBitmapContext and use it?
UPDATE:
I need to create the CGLayer outside of drawRect: because I want to initialize it before it is rendered. It is possible to initialize it once on the first drawRect: call, but that's not a beautiful solution for me.
There is no reason to do it outside of drawRect: and in fact there are some benefits to doing it inside. For example, if you change the size of the view the layer will still get made with the correct size (assuming it is based on your view's graphics context and not just an arbitrary size). This is a common practice, and I don't think there will be a benefit to creating it outside. The bulk of the CPU cycles will be spent in CGContextDrawLayer anyway.
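For example, a sketch of creating the layer lazily on the first drawRect: pass; _cachedLayer is a hypothetical CGLayerRef ivar and the pre-rendered content is arbitrary:

// Build the CGLayer from the view's own context the first time we draw,
// then just stamp it on every subsequent pass.
- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();

    if (_cachedLayer == NULL) {
        _cachedLayer = CGLayerCreateWithContext(ctx, self.bounds.size, NULL);
        CGContextRef layerCtx = CGLayerGetContext(_cachedLayer);
        // Pre-render the expensive content once.
        CGContextSetRGBFillColor(layerCtx, 0.0, 0.7, 0.2, 1.0);
        CGContextFillEllipseInRect(layerCtx, self.bounds);
    }

    CGContextDrawLayerAtPoint(ctx, CGPointZero, _cachedLayer);
}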
You can create it with a function like the one below; you render your content in the render block.
typedef void (^render_block_t)(CGContextRef);

// Creates an offscreen CGLayer and lets the caller draw into it via the block.
// The returned layer follows the Create rule, so the caller should CGLayerRelease() it.
- (CGLayerRef)rendLayer:(render_block_t)block {
    // The temporary image context only serves to give CGLayerCreateWithContext
    // a context to base the layer on; the layer outlives it.
    UIGraphicsBeginImageContext(CGSizeMake(100, 100));
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGLayerRef cgLayer = CGLayerCreateWithContext(context, CGSizeMake(100, 100), NULL);

    block(CGLayerGetContext(cgLayer));   // let the caller render into the layer's context

    UIGraphicsEndImageContext();
    return cgLayer;
}
I wrote it a few days ago; I use it to draw some UIImages from multiple threads.
You can download the code at https://github.com/PengHao/GLImageView/
The file path is GLImageView/GLImageView/ImagesView.m
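For reference, a hedged usage sketch of that helper; the fill and sizes are arbitrary, and since the returned CGLayerRef follows the Create rule it is released when no longer needed:

// Build the layer once, rendering a shape into it inside the block...
CGLayerRef layer = [self rendLayer:^(CGContextRef layerCtx) {
    CGContextSetRGBFillColor(layerCtx, 1.0, 0.5, 0.0, 1.0);
    CGContextFillRect(layerCtx, CGRectMake(0, 0, 100, 100));
}];

// ...then stamp it from drawRect: (or wherever a context is available):
// CGContextDrawLayerAtPoint(UIGraphicsGetCurrentContext(), CGPointZero, layer);
// CGLayerRelease(layer);   // when done with it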
I have this app in which one can draw basic shapes like rectangles, ellipses, circles, text, etc.
I also allow free form drawing, which is stored as set-of-points, on the canvas.
Also a user can resize and move around these objects by operating on the selection handles that appear when an object is selected.
In addition the user should be able to zoom and pan the canvas.
I need some inputs on how to efficiently implement this drawing functionality.
I have following things in mind -
Use UIView's setNeedsDisplayInRect: and drawRect:
Have a UIView for the main canvas and, for each inserted object, invalidate the corresponding rect and redraw all the objects that intersect it in the UIView's drawRect:.
Have a UIView and use CALayer?
Everyone keeps mentioning CALayer, but I don't have much of an idea about it. Before I venture into it I wanted quick input on whether this route is worth taking,
e.g. https://developer.apple.com/library/ios/#qa/qa1708/_index.html
Have a UIImageView as the canvas and, when drawing each object, do this:
i) Draw the object into an offscreen CGContext: basically, create a new context using UIGraphicsBeginImageContext, draw the shape, extract the image out of that context and use it as the source of the UIImageView's image property. But here, how do I invalidate only a part of the UIImageView so that only that area gets refreshed?
Could you please suggest what is the best approach?
Is there any other efficient way to get this done?
Thanks.
Using a UIImage is more efficient for rendering multiple objects, but using a CALayer is more efficient when moving and modifying a single object because you don't have to touch the other objects. So I think the best approach is to use a UIImage for general drawing and a CALayer for the shape that is being modified. In other words:
use a CALayer to draw the shape being added or modified, but don't draw it on the UIImage
use a UIImage to draw the other shapes
OpenGL would still be the most efficient solution of all, but don't bother with it unless you have a very large number of objects to draw.
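A rough sketch of the "commit finished shapes into a UIImage" half of that idea; self.canvasImage, the path parameter and the stroke color are illustrative placeholders:

// Flatten a newly finished shape into the cached canvas image, then
// invalidate only the area that changed.
- (void)commitShapeWithPath:(UIBezierPath *)path {
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0);
    [self.canvasImage drawInRect:self.bounds];   // previously committed content
    [[UIColor blackColor] setStroke];
    [path stroke];                               // the newly finished shape
    self.canvasImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    [self setNeedsDisplayInRect:CGRectIntegral(path.bounds)];
}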
If you want to draw polygons, you'll have to use the Quartz framework and base your drawing methods on CALayer. It doesn't really matter which view you put your CALayers in, UIImageView or UIView; I'd say UIView, since you won't be needing UIImageView's properties or methods for drawing.
How do I implement "regular" drawing (as would normally be done in a drawRect: method) on top of an OpenGL animation running in the background? My app is based on the default Xcode OpenGL game app template. The GLKViewController does not have a drawRect: method, and when I add one, it never gets called. I tried to implement drawing code in the drawInRect: method (which does exist), but I get runtime errors.
So to summarize: I'd like to draw stuff (lines, paths, whatever) NOT using OpenGL, but using regular quartz primitives and display this on top of an existing 3d rendering.
To make sure drawRect is being called, you should probably go the other route: Create a standard Cocoa Touch project, alter the + (Class)layerClass method of the main view to return [CAEAGLLayer class], then start drawing with that. Note that the CAEAGLLayer documentation specifically warns against doing what you want to do:
Avoid drawing other layers on top of the CAEAGLLayer object. If you must draw other, non OpenGL content, you might find the performance cost acceptable if you place transparent 2D content on top of the GL content and also make sure that the OpenGL content is opaque and not transformed.
Check out the GLPaint project for a simple OpenGL ES project showing the layerClass override (in PaintingView.m). They use layoutSubviews and touchesBegan/Moved/Ended to do the drawing.
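A minimal sketch of the layerClass override that answer refers to; the view subclass name is arbitrary:

#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

// A UIView subclass backed by an OpenGL ES layer instead of a plain CALayer.
// Any non-OpenGL (Quartz) content would then have to be composited above it,
// which is exactly what Apple's note warns can be costly.
@interface PaintingGLView : UIView
@end

@implementation PaintingGLView
+ (Class)layerClass {
    return [CAEAGLLayer class];
}
@end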