I've written an iOS app in which I'm using CGLayer quite successfully. While researching ways to squeeze a bit more performance out of this app, I came across this blog post: http://iosptl.com/posts/cglayer-no-longer-recommended/ in which the author very broadly states that CGLayer is to never be used. An individual post alone is not cause for concern, but I've also found people referring to this post as something to abide by.
No real specifics are offered. For instance, the author states that "sometimes it's faster, sometimes it's slower". This makes me wonder if the concern is that, in general, programmers will not use this object correctly.
I suppose this question is for the seasoned Cocoa/Cocoa Touch developers. Is there any merit to this? Is CGLayer indeed something to avoid and if so, are there specific, measurable reasons as to why?
To begin my answer, I would conclude that it is entirely your own design decision whether or not to use CGLayer in your app.
The real point is that if you are drawing something on-screen, CGLayer will quite possibly buy you nothing on the iOS platform. On iOS, the basic screen-composition block is a CALayer. A CALayer uses a Quartz (CG) graphics context to draw itself on screen, and that might even be a context created from a CGLayer. Since CALayer is itself hardware accelerated, it will try to cache its graphics content on the graphics card and reuse it, which is exactly what we used a CGLayer for previously.
Also, where offscreen rendering is concerned, a CALayer can do that when shouldRasterize is set to YES, and under some other circumstances. Keep in mind, though, that offscreen composition is yet another task performed by the CPU before the rendered content is handed over to the GPU. So again, there is no clear winner.
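For instance, a minimal sketch of that rasterization hint (`view` stands in for any UIView you already have on screen):

    // Ask Core Animation to flatten this layer's subtree into one cached offscreen bitmap.
    view.layer.shouldRasterize = YES;
    view.layer.rasterizationScale = [UIScreen mainScreen].scale; // match the screen scale to avoid blur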
CGLayer is particularly handy when creating a CG context that won't be drawn on screen, like a PDF context.
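A minimal sketch of that case (the output path and page geometry are illustrative): the artwork is drawn once into a CGLayer and then stamped repeatedly into a PDF context, where the cached layer still pays off.

    #import <UIKit/UIKit.h>

    void WritePDFWithLayerStamps(NSString *path) {
        CGRect pageRect = CGRectMake(0, 0, 612, 792); // US Letter in points
        CGContextRef pdf = CGPDFContextCreateWithURL(
            (__bridge CFURLRef)[NSURL fileURLWithPath:path], &pageRect, NULL);
        CGPDFContextBeginPage(pdf, NULL);

        // Draw the repeated artwork once, into a layer tied to the PDF context...
        CGLayerRef stamp = CGLayerCreateWithContext(pdf, CGSizeMake(50, 50), NULL);
        CGContextRef layerCtx = CGLayerGetContext(stamp);
        CGContextSetFillColorWithColor(layerCtx, [UIColor redColor].CGColor);
        CGContextFillEllipseInRect(layerCtx, CGRectMake(5, 5, 40, 40));

        // ...then stamp it as many times as needed; the content is reused, not redrawn.
        for (int i = 0; i < 10; i++) {
            CGContextDrawLayerAtPoint(pdf, CGPointMake(20 + i * 55.0, 100), stamp);
        }

        CGLayerRelease(stamp);
        CGPDFContextEndPage(pdf);
        CGPDFContextClose(pdf);
        CGContextRelease(pdf);
    }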
I'm not sure why the Apple development team has asked us to avoid CGLayer completely. There might be some underlying architectural flaw, but it is undocumented to date. However, until we are sure about that, and while we have existing apps designed around CGLayer, I don't find any specific reason to abandon it completely.
For me the most relevant point is what the author wrote in one of his follow up comments:
As I understand it from the Core Graphics team, they basically haven’t touched it [CGLayer] since before the iPhone came out. It was one of those things that sounded really good, but didn’t work out in practice. But it’s not actually broken, so there’s no reason to deprecate it. And as I mentioned, if you have awesome CGLayer code, I don’t see any reason to replace it. CGLayer isn’t bad. It’s just not maintained like other parts of CG.
It would be helpful if Apple's Quartz 2D Programming Guide (updated 2014) didn't contain the following prominent comment box:
Note: Bitmap graphics contexts are sometimes used for drawing offscreen. Before you decide to use a bitmap graphics context for this purpose, see Core Graphics Layer Drawing. CGLayer objects (CGLayerRef) are optimized for offscreen drawing because, whenever possible, Quartz caches layers on the video card.
No. Ignore the "Never" cited in the blog, unless you never profile the impact CGLayer has on your app.
Consider CGLayers as a potential optimization for your program. CGLayers have the potential to affect your program's performance and resource consumption in positive and negative ways (a tradeoff in many cases). In the abstract, it's much like a cache (which has its own costs). Alternative caching mechanisms have their own associated costs, and CGLayer may or may not be the best caching implementation for your program.
At the moment I can draw a route on a map.
On the map I can zoom and I can pan. If the route is very big, it gets really slow.
Therefore I want to do it with OpenGL.
From the map I can convert coordinateToPixel and get the current zoom.
I thought it would be best to base the translation and zoom on that for the transformation matrix.
I have never worked with OpenGL before. I have been reading up for the last few hours, but most of what I find is either outdated or goes into things I don't care about for now, like shaders.
Can someone provide me with resources for simple stuff like on the image?
I have never worked with OpenGL before.
You are asking a lot, and I do mean a lot, of work from yourself if you want to switch from using native iOS drawing methods to using an advanced real-time rendering system that you don't even know yet.
I agree with Brad Larson that you are going to go much further and faster by leveraging the tools in iOS for your purpose. However, that does not mean you can't improve performance while using them.
I have found that when using Core Graphics for complicated drawing, you can dramatically reduce the time it takes to render a drawing by doing it on a background thread. And learning and using Grand Central Dispatch will cost you far less time than doing all of this in OpenGL would.
I learned to use dispatch queues for the single purpose of making drawing go faster. The technique is simple: render in the background, then hand the result to the main thread for display. Since you already have your drawing code figured out, you won't have to do much extra work to take this extra step, and I think you will be impressed with the performance.
I saw an improvement of at least 5 - 10 times in drawing speed when I implemented Core Graphics drawing with dispatch queues. They are really awesome.
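A minimal sketch of the technique (`routeImageView` is an illustrative UIImageView property, and the drawing calls are placeholders for your own route rendering):

    - (void)redrawRouteWithSize:(CGSize)size {
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
            // Image contexts have been safe off the main thread since iOS 4.
            UIGraphicsBeginImageContextWithOptions(size, NO, 0); // scale 0 = device scale
            CGContextRef ctx = UIGraphicsGetCurrentContext();

            // ... your existing, expensive Core Graphics drawing goes here ...
            CGContextSetLineWidth(ctx, 2.0);
            CGContextStrokeEllipseInRect(ctx, CGRectMake(10, 10, 100, 100));

            UIImage *rendered = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();

            dispatch_async(dispatch_get_main_queue(), ^{
                self.routeImageView.image = rendered; // touch UIKit only on the main thread
            });
        });
    }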
Is it even possible to continue rendering and updating while resizing the window so that it doesn't stretch?
This is pretty deeply baked into XNA's Game class's behaviour. I'm dealing with this exact problem now, but I don't have a good solution yet. EDIT: I have now found a solution, but it doesn't answer the bit about scaling, so I've posted it as a question/answer pair over here.
You could possibly dive in with reflection and disconnect the events that pause the game's timer when you start resizing (and unpause it when you stop). I haven't tried this yet; I'm a bit loath to do it without understanding why those events exist in the first place.
(Personally I am thinking of having my game subscribe to the resize start/end events as well, and then pumping Update myself on an appropriate timer until XNA comes back. I wasn't going to worry about the scaling of the display.)
One way to work around this problem is to replace the Game class entirely. The XNA WinForms Sample provides a suitable replacement - although you have to implement your own Draw/Update loop and timing. I've just tested this in an old level editor and it works just as you want when resized.
It does slow down quite a bit when you make the window larger, though, as it constantly re-allocates the backbuffer to make it bigger. You could replace that behaviour so that it over-allocates the backbuffer and doesn't reallocate so often.
The underlying problem has something to do with win32, and is described in some detail in this thread on GameDev.net. But it doesn't really provide a satisfying solution either.
It might be interesting to note that the WinForms sample draws on its OnPaint method (and you get a loop by constantly calling Invalidate). Whereas XNA's built-in Game class subscribes to Application.Idle.
I can already hear the wrenching guts of a thousand iOS developers.
No, I am not a noob.
Why is -drawRect faster for UITableView performance than having multiple views?
I understand that compositing operations take place on the GPU. But compositing is a one-time operation; once the layers are committed to memory, it is no different from a cached buffer that, from the point of view of the GPU, gets translated in and out of view. Compare this to using Core Graphics in drawRect, which employs an unknown number of operations on the CPU to produce pixels that end up getting cached in CALayers anyway. What's the difference if it all ends up cached and flattened anyway?
Also, if you're handling cell reuse properly, you shouldn't need to regenerate views on each call to -cellForRowAtIndexPath. In fact, there may be a performance benefit to having the state data (font, font size, text color, attributes, etc.) cached by UIView/CALayer objects rather than having it constantly recreated during -drawRect.
Why the craze for drawRect? Can someone give me pointers?
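For reference, here is the reuse pattern I mean, as a minimal sketch (the `self.items` data source is illustrative):

    - (UITableViewCell *)tableView:(UITableView *)tableView
             cellForRowAtIndexPath:(NSIndexPath *)indexPath {
        static NSString *reuseID = @"Cell";
        UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:reuseID];
        if (cell == nil) {
            // Subviews (and their fonts, colors, attributes) are built once per pooled cell...
            cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
                                          reuseIdentifier:reuseID];
        }
        // ...and only the data changes on each call.
        cell.textLabel.text = self.items[indexPath.row];
        return cell;
    }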
When you talk about optimization, you need to give specific situations, conditions, and limitations, because optimization is all about micro-management. Otherwise, the discussion is meaningless.
What's the basis of your "faster"? How did you measure it? What are the numbers?
For example, a no-op or a very simple -drawRect: can be faster, but that doesn't mean it always is.
I don't know the internal design of CA either, so here are my guesses.
In case of static content
It's weird that your drawing code is being called constantly, because CALayer caches the drawing result and won't draw again until you send a setNeedsDisplay message. If you don't update the cell's content, it's just the same as a single bitmap layer, and should be faster than multiple composited layers because it doesn't pay the composition cost. If you're using only a small number of cells, few enough that they can all exist in the reuse pool at the same time, they never need to be updated at all. And as RAM gets larger in recent models, that becomes more and more likely.
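A minimal sketch of that caching behaviour (BadgeView is a hypothetical class): -drawRect: runs once, and again only after an explicit setNeedsDisplay.

    #import <UIKit/UIKit.h>

    @interface BadgeView : UIView
    @property (nonatomic, copy) NSString *text;
    @end

    @implementation BadgeView
    - (void)setText:(NSString *)text {
        _text = [text copy];
        [self setNeedsDisplay]; // the only trigger for another -drawRect: pass
    }
    - (void)drawRect:(CGRect)rect {
        [[UIColor blackColor] set];
        [_text drawInRect:rect withFont:[UIFont systemFontOfSize:14]];
    }
    @end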
In case of dynamic content
If the content is being updated constantly, that means you're actually updating it yourself, so your layer-composited version would presumably be updated constantly as well. That means it is being composited again on every frame, and it can get slower the more complex and large it is. If it's complex and large and has a lot of overlapping areas, it could be slower still. I would guess CA draws everything strictly if it can't determine which areas are safe to ignore, unlike your own code, where you can choose exactly what to draw and what to skip.
In case the actual drawing is done on the CPU
Even if you configure your view as a pure composition of many layers, each sublayer still has to be drawn eventually, and the drawing of its content is not guaranteed to happen on the GPU. For example, I believe CATextLayer draws itself on the CPU (because drawing text with polygons on current mobile GPUs doesn't make sense from a performance perspective), and so do some filtering effects. In those cases, the overall cost would be similar, plus you pay the compositing cost on top.
In case of a well-balanced load between CPU and GPU
If your GPU is very busy under heavy load because there are too many layers or direct OpenGL drawings, your CPU may be idle. If your CG drawing can be done within that idle CPU time, it could be faster than piling more load onto the GPU.
None of these is your case?
If your case is none of the situations I listed above, I would really like to see and verify the CG code that draws faster than CA composition. I wish you would attach some source code.
Well, your program could easily end up moving and converting a lot of pixel data if it goes back and forth between GPU- and CPU-based renderers.
As well, many layers can consume a lot of memory.
I'm only seeing half the conversation here, so I might have misunderstood. Based on my recent experiences optimizing CALayer rendering, and investigating the ways Apple does(n't) optimize stuff you'd expect to be optimized...
What's the difference if it all ends up cached and flattened anyway?
Apple ends up creating a separate GPU element per layer. If you have lots of layers, you have lots of GPU elements. If you have one drawRect, you only have one element. Apple often does NOT flatten those, even where they could (and possibly "should").
In many cases, "lots of elements" is no issue. But if they get to be large ... or there are enough of them ... or they're bad sizes for OpenGL ... AND (see below) they get stored on the CPU instead of on the GPU, then things start to get nasty. NB: in my experience:
"enough": 40+ in memory
"large": 100x100 points (200x200 retina pixels)
Apple's code for GPU elements / buffers is well optimized in MOST places, but in a few places it's very POORLY optimized. The performance drop is like going off a cliff.
Also, if you're handling cell reuse properly, you shouldn't need to
regenerate views on each call to -cellForRowAtIndexPath
You say "properly", except ... IIRC Apple's docs tell people not to do it that way, they go for a simpler approach (IMHO: weak docs), and instead re-populate all the subviews on every call. At which point ... how much are you saving?
FINALLY:
...doesn't all this change with iOS 6, where the cost of creating a UIView is greatly reduced? (I haven't profiled it yet, just been hearing about it from other devs)
Sometimes the term Graphics Context is a little bit abstract. Are they actually system resources, and specifically resources of the graphics card, just as a file handle is a system resource of the hard drive or any other permanent storage device?
Just as a file handle has state, such as whether it is open read-only or read/write and the current position for the next read operation, a Graphics Context has state such as the current stroke color, stroke width, and other relevant data. (Update: and in write mode, we can seek to any point in a 200 MB file and change data there, just as we have the canvas of the Graphics Context and can draw things on top of it.)
So Graphics Contexts are actually global, system-wide resources. They are not part of the application singleton or anything, just as a file or file handle is not (necessarily) part of the application singleton.
And if there is no powerful graphics card (or the graphics card has already run out of resources), then the operating system can simulate a Graphics Context using low-level graphics routines on bitmaps, instead of letting the graphics card handle it.
Is this actually how a Graphics Context works, on iOS and on most other common OSes in general?
I think it's best not to think of a Graphics Context in terms of a specific system resource. As far as I know, the graphics context doesn't correspond to any specific resource any more than any class 'object' does, besides memory of course. Really, the Graphics Context is designed to provide a 'canvas' for the Core Graphics functions to operate on. The truth is, Apple doesn't give us the specific details of how a graphics context works internally. But there are several things we do know about it:
The graphics context is basically a 'state' more than anything else. It holds information such as stroke/fill color, line width, etc. for a particular set of drawing routines.
It doesn't process on the GPU. Instead it does all of its drawing on the CPU and 'passes' the resulting image (some form of bitmap) to the GPU for display/animation (actually it renders the image directly into the GPU's buffers). This is why the renderInContext method isn't working so well on the new iPad 3: renderInContext gives you the image first, which involves rendering and copying the image. If you then wish to display it, it must be passed back to Core Graphics, which writes the image back out. On the iPad 3, this involves a lot of memory (depending on the size of the view) and can easily overflow buffers.
The graphics context given to the drawRect method of a UIView is designed to be as efficient as possible. This is why you can't draw anything in a view outside of its context, nor can you create your own context for a view to draw in. The actual drawing is handled in the run loop, which is why we flag a UIView as needing to be drawn with [view setNeedsDisplay].
The graphics contexts for UIViews are drawn on the main thread and, yes, again processed on the CPU. This does mean overly complex drawings can tie up your main application, but nowadays, with multi-core processors, that's not so much of a problem.
You can create your own graphics context, but only to draw to an image. This is exactly the same thing a UIView context does, except that it's meant to be used by you rather than drawn to the screen or animated. Since iOS 4, you can process these image contexts on other threads (besides the main thread).
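As a concrete sketch of such an image context, here is the lower-level bitmap-context route; the size and pixel format are arbitrary assumptions for illustration:

    // Create a 256x256, RGBA8, premultiplied-alpha bitmap context in memory.
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, 256, 256, 8, 256 * 4, space,
                                             (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(space);

    // Any CPU-side Quartz drawing works here.
    CGContextSetRGBFillColor(ctx, 0.0, 0.0, 1.0, 1.0);
    CGContextFillRect(ctx, CGRectMake(0, 0, 256, 256));

    // Snapshot the pixels; assign to layer.contents or wrap in a UIImage.
    CGImageRef image = CGBitmapContextCreateImage(ctx);
    CGImageRelease(image);
    CGContextRelease(ctx);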
If you're looking to do GPU drawing on iOS, I believe the only way is to use OpenGL. On MacOS, I think you can actually enable Quartz (Core Graphics, same thing) drawing on the GPU using QuartzGL. But it may not be worth the effort; see this article: Mac QuartzGL (2D drawing on the graphics card) performance
Update
As you can see in the comments below, the current arrangement Apple has for Quartz drawing is probably the best, especially since views are drawn directly into the GPU's buffers. There is a temptation to think that anything visual should be processed on the GPU, but the truth is, GPUs aren't designed for vector drawing. They're designed to handle massive transforms, lighting, texture mapping, etc. By using the CPU to process vector drawing and leaving everything else to the GPU, Apple has split the graphics processing appropriately. Moreover, you're not losing any efficiency in the data transfer between the CPU and GPU, since Quartz draws directly into the GPU's buffer (which avoids that onerous memcpy).
Sometimes the term Graphics Context is a little bit abstract.
Yes, intentionally so. Quartz is meant to be an abstraction, a general-purpose drawing system. It may or may not perform some optimizations with the graphics hardware, internally, but you don't get to have much visibility into that. And the kinds of optimizations it makes may change over time and with different kinds of graphics hardware.
Are they actually system resources, and specifically resources of the graphics card?
No, absolutely not. Quartz is a software renderer -- it works even when there is no graphics hardware present, and can draw to things like PDFs where the graphics hardware wouldn't be of any use.
Internally, Quartz (and its interfaces with the rest of the OS) may have a few "fast paths" that take advantage of the GPU in some situations. But that's by no means the common case.
Just as a file handle has state, such as whether it is open read-only or read/write and the current position for the next read operation, a Graphics Context has state such as the current stroke color, stroke width, and other relevant data.
This is correct.
So Graphics Contexts are actually global, system-wide resources.
No. Quartz is just a library that runs code within your app. If you make a new CGContext, only your app is affected -- exactly the same way as if your code created a new instance of one of your own classes.
And if there is no powerful graphics card (or the graphics card has already run out of resources), then the operating system can simulate a Graphics Context using low-level graphics routines on bitmaps, instead of letting the graphics card handle it.
You have the two cases flipped. In general Quartz is working in software, with bitmaps. In a few cases, it may use the GPU to get those bitmaps on the screen faster, if everything is lined up exactly right.
I've been looking at a lot of iOS user interfaces that have been customized. I wonder, is it better to customize the UI using images or using libraries like CoreGraphics and Quartz, or is it on a per case basis, as in I use libs for some elements and images for others?
It is very hard to guess your particular situation, but I can say that iOS gives us plenty of leverage to build any custom interface. I would use:
images for complicated graphic elements: buttons, icons, arrows, etc.
images + stretching to get complicated backgrounds/elements (see the sketch after this list)
custom drawing for anything made of lines, ellipses, squares, linear and/or circular gradients, simple image preprocessing, etc.
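A minimal sketch of the stretching item (the asset name and `myButton` are illustrative):

    // The cap insets protect the 10pt corners; the middle stretches to any size.
    UIImage *bg = [[UIImage imageNamed:@"button_bg"]
                   resizableImageWithCapInsets:UIEdgeInsetsMake(10, 10, 10, 10)];
    [myButton setBackgroundImage:bg forState:UIControlStateNormal];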
The key idea is to find the balance between memory usage and processing time. Note: in my experience, interfaces based on images created by a professional designer look awesome.
Case-by-case basis. Images can be drawn more quickly but use more memory; custom drawing, whether via Core Graphics or Quartz, uses less memory but takes more time.
Case by case. If you want a lot of complex graphics that aren't lines and don't change much, use images. If you just need lines/gradients, or if you want things to move and morph, you'll need to use quartz.
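For the gradient case, a minimal sketch of what the Quartz route looks like (the colors are illustrative, and this would live in a UIView subclass):

    // Draw a vertical linear gradient in Core Graphics instead of shipping an image.
    - (void)drawRect:(CGRect)rect {
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
        CGFloat colors[8] = { 0.9, 0.9, 0.95, 1.0,    // top RGBA
                              0.6, 0.6, 0.70, 1.0 };  // bottom RGBA
        CGGradientRef gradient =
            CGGradientCreateWithColorComponents(space, colors, NULL, 2);

        CGContextDrawLinearGradient(ctx, gradient,
                                    CGPointMake(CGRectGetMidX(rect), CGRectGetMinY(rect)),
                                    CGPointMake(CGRectGetMidX(rect), CGRectGetMaxY(rect)),
                                    0);
        CGGradientRelease(gradient);
        CGColorSpaceRelease(space);
    }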
It depends on you, as well. Would you rather write code for quartz for an hour and debug it, or would you rather spend an hour in photoshop? How fast are you at PS? Do you already know Quartz?
It depends on a lot of things, so "case-by-case".
Determine the complexity of each approach (this is nontrivial). Icons are a good example of where to use an image, while large gradients are a good use for drawing. Drawing can take some time and experience to get right compared to graphic assets, but you can reuse that implementation later, and in many cases it uses less memory (images can also use less memory, depending on what you're drawing). Complex static images can take time to render if drawn, so there are a number of things to consider in order to achieve the best balance. Using the gradient vs. image example, quality and time are also factors: resizing/scaling a simple image can take a lot of CPU or produce artifacts that a rendered gradient would not have. Much of it comes down to experience, knowing the implementations you use well, and a lot of sampling/profiling to determine what is simple, what is complex, what consumes a lot of memory, and so on.