CALayer vs CGContext: which is the better design approach? (iOS)

I have been doing some experimenting with iOS drawing. As a practical exercise I wrote a BarChart component. The following is the class diagram (well, I wasn't allowed to upload images, so let me describe it in words): I have an NGBarChartView, which inherits from UIView and declares two protocols, NGBarChartViewDataSource and NGBarChartViewDelegate. The code is at https://github.com/mraghuram/NELGPieChart/blob/master/NELGPieChart/NGBarChartView.m
To draw the bar chart, I have created each bar as a separate CAShapeLayer. The reason I did this is twofold: first, I could just create a UIBezierPath and attach it to a CAShapeLayer object, and second, I can easily track whether a bar item is touched by using the [CALayer hitTest:] method. The component works pretty well. However, I am not comfortable with the approach I have taken to draw the bars. Hence this note. I need expert opinion on the following:
1. By using CAShapeLayer and creating bar items, I am not really using UIGraphicsContext at all. Is this a good design?
2. My approach creates several CALayers inside a UIView. Is there a performance-based limit to the number of CALayers you can create in a UIView?
3. If a good alternative is to use the CGContext* methods, what's the right way to identify whether a particular path has been touched?
4. From an animation point of view, such as the bar blinking when you tap on it, is the layer design or the CGContext design better?
Help is very much appreciated. BTW, you are free to look at my code and comment. I will gladly accept any suggestions for improvement.
Best,
Murali

IMO, drawing shapes of any kind generally needs heavy processing power, while compositing a cached bitmap on the GPU is much cheaper than drawing everything again. So in many cases we cache all drawing into a bitmap, and on iOS, CALayer is in charge of that.
However, if your bitmaps exceed the video memory limit, Quartz cannot composite all the layers at once. Consequently, Quartz has to draw a single frame over multiple passes, which requires reloading some textures into the GPU and can impact performance. I am not sure about this on iPhone, because its VRAM is known to be integrated with system RAM, but it's still true that more work is needed even in that case. If system memory also becomes insufficient, the system can purge existing bitmaps and ask you to redraw them later.
CAShapeLayer will do all of the CGContext work (I believe that's what you meant) for you. You can do it yourself if you feel the need for lower-level optimization.
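As a minimal sketch of that shape-layer approach (the helper name `makeBarLayer` is illustrative, not from the question's code):

```swift
import UIKit

// Hypothetical helper that builds one bar of the chart as a CAShapeLayer.
// CAShapeLayer rasterizes and caches the path for you; no explicit
// CGContext drawing is needed.
func makeBarLayer(rect: CGRect, color: UIColor) -> CAShapeLayer {
    let layer = CAShapeLayer()
    layer.frame = rect
    // The path is expressed in the layer's own coordinate space.
    layer.path = UIBezierPath(rect: CGRect(origin: .zero, size: rect.size)).cgPath
    layer.fillColor = color.cgColor
    return layer
}
```

Each bar layer can then be added with `view.layer.addSublayer(_:)`, and Core Animation keeps its rendered bitmap around for cheap recomposition.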
Yes, obviously everything has a limit from a performance point of view. If you're using hundreds of layers with large alpha-blended graphics, it will cause performance problems. Generally, though, this doesn't happen, because layer composition is accelerated by the GPU. If your graph bars are not too numerous and are basically opaque, you'll be fine.
All you have to know is that once drawings are composited, there is no way to decompose them again, because composition is itself a kind of lossy optimization. So you have only two options: (1) redraw all the graphics whenever a mutation is required, or (2) keep a cached bitmap of each display element (such as a single bar) and composite them as needed. The latter is exactly what CALayers do.
The layer-based approach is absolutely better. Any kind of freeform shape drawing (even when done on the GPU) needs a lot more processing power than simple bitmap composition (which becomes just two textured triangles for the GPU), provided, of course, that your layers don't exceed the video memory limit.
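To illustrate the animation point, here is a hedged sketch of hit-testing the tapped bar layer and making it blink with an opacity animation (the gesture wiring and the `handleTap` name are assumptions, not taken from the question's code):

```swift
import UIKit

// Hypothetical tap handler on the chart view: find the tapped bar layer
// and blink it. CALayer's hitTest(_:) expects the point in the receiver's
// superlayer coordinate space, so convert the touch point accordingly.
func handleTap(at point: CGPoint, in view: UIView) {
    guard let tapped = view.layer.hitTest(point), tapped !== view.layer else { return }
    let blink = CABasicAnimation(keyPath: "opacity")
    blink.fromValue = 1.0
    blink.toValue = 0.3
    blink.duration = 0.15
    blink.autoreverses = true
    tapped.add(blink, forKey: "blink")
}
```

With the CGContext approach you would instead have to track each bar's path yourself and test touches with `CGPath.contains(_:)` before redrawing the whole view.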
I hope this helps.

Related

Monogame - how to have draw layer while on SpriteSortMode.Texture

I have a problem in which in my game I have to use SpriteSortMode.Texture because I have a lot of objects with few textures, so I cannot afford to use SpriteSortMode.BackToFront.
The thing is this means I cannot draw by layers, unless I do SpriteBatch.Begin with the exact same settings, which is what I'm currently doing.
I only have 3 draw layers I need - a Tileset surface, Objects like rocks or characters on the surface, and UI.
Other solutions I've found are using texture quads (which supposedly also improves tileset drawing performance) and going 3D with an orthographic view, which I haven't researched yet.
I'm hoping there's a better way to make this work.
Why would having a lot of objects with few textures mean you have to use SpriteSortMode.Texture?
"This can improve performance when drawing non-overlapping sprites of uniform depth." says the MSDN page, and this is clearly not what you are doing.
Just use the default SpriteSortMode.Deferred and draw things back to front in order.

Swift: Fastest way to draw a rectangle

I'm making a game that pretty much entirely consists of solid-colored rectangles. Currently, I'm using SKSpriteNodes for all of the rectangles in it, since they need to be animated and touched, but while creating a few hundred of them, I found them to cause a lot of lag.
All I need is a rectangle I can draw with a solid color, no textures or anything, but for some reason, these cause a lot of lag.
If possible, I want to avoid OpenGL as I tried it before and it took me months to do a single thing. I just feel like there must be a fast way that I can't find. Any help will be appreciated!
The simplicity of your skin (i.e. just a rectangle rather than a more complex image) is unlikely the cause of your performance problems. Consider experiences like this one animating 10k particles. Start with Instruments. That's how you find where your bottlenecks are. At these scales (100s), it is almost always an O(n^2) algorithm that has snuck into your system, not the act of drawing.

Is there are way to draw a pixel in SpriteKit?

I am trying to put together a ray tracing routine, and the width/shade of every pixel in a single ray will change over the length of the line. Is there a way in SpriteKit to draw a single pixel on the screen, or should I be doing this using UIImage?
SpriteKit isn't a pixel drawing API. It's more of a "moving pictures around on the screen" API. There are a couple of cases, though, where it makes sense to do custom drawing, and SpriteKit (as of iOS 8 and OS X 10.10) has a few facilities for this.
If you want to create custom art to apply to sprites, with that art being mostly static (that is, not needing to be redrawn frequently as SpriteKit animates the scene), just use the drawing API of your choice — Core Graphics, OpenGL, emacs, whatever — to create an image. Then you can stick that image in an SKTexture object to apply to sprites in your scene.
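For instance, a sketch of that first option using Core Graphics via the newer `UIGraphicsImageRenderer` (the size and shape here are arbitrary placeholders):

```swift
import SpriteKit
import UIKit

// Draw custom art once with Core Graphics, then wrap it in an SKTexture
// so SpriteKit can composite it like any other sprite image.
let size = CGSize(width: 64, height: 64)
let image = UIGraphicsImageRenderer(size: size).image { ctx in
    UIColor.red.setFill()
    ctx.cgContext.fillEllipse(in: CGRect(origin: .zero, size: size))
}
let texture = SKTexture(image: image)
let sprite = SKSpriteNode(texture: texture)  // ready to add to a scene
```

Because the texture is created once and reused, this fits the "mostly static art" case the answer describes.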
To directly munge the bits inside a texture, use SKMutableTexture. This doesn't really provide drawing facilities — you just get to work with the raw image data. (And while you could layer drawing tools on top of that — for example, by creating a CG image context from the data and rewriting the data with CG results — that'd slow you down too much to keep up with the animation framerate.)
If you need high-framerate, per-pixel drawing, your best bet is to do that entirely on the GPU. Use SKShader for that.
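A minimal SKShader sketch for that GPU path (the GLSL body is illustrative; `v_tex_coord` and `u_time` are variables SpriteKit supplies to custom shaders):

```swift
import SpriteKit

// Fragment shader source, run per pixel of the node it is attached to.
let source = """
void main() {
    // Shade each pixel by its texture coordinate, pulsing over time.
    gl_FragColor = vec4(v_tex_coord.x, v_tex_coord.y, abs(sin(u_time)), 1.0);
}
"""
let shader = SKShader(source: source)
let node = SKSpriteNode(color: .white, size: CGSize(width: 100, height: 100))
node.shader = shader  // SpriteKit now draws this node with the custom shader
```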
If your application is redrawing every frame, then you can use an offscreen buffer to dynamically update pixels, aka SKMutableTexture. I'm not sure how slow it will be.
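A hedged sketch of that offscreen-buffer idea with SKMutableTexture, assuming a 4-bytes-per-pixel RGBA layout (performance untested, as the answer notes):

```swift
import SpriteKit

// Create a mutable texture and poke raw RGBA bytes into it.
let width = 128
let texture = SKMutableTexture(size: CGSize(width: width, height: width))
texture.modifyPixelData { pointer, length in
    guard let base = pointer?.assumingMemoryBound(to: UInt8.self) else { return }
    // Set pixel (x, y) to opaque white; 4 bytes per pixel, `width` per row.
    let (x, y) = (10, 20)
    let offset = (y * width + x) * 4
    if offset + 3 < length {
        base[offset] = 255; base[offset + 1] = 255
        base[offset + 2] = 255; base[offset + 3] = 255
    }
}
let node = SKSpriteNode(texture: texture)  // displays the mutable texture
```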

OpenGL ES 2 2D layered drawing

I am rewriting an iPad drawing application with OpenGL ES2 instead of Core Graphics.
I have already written a subclass of GLKView that can draw line segments, and I can just drag a GLKView into my storyboard and set its custom class. So far, drawing works, but I also want to implement layers like in Photoshop and GIMP.
I thought of creating multiple GLKViews for each layer and letting UIKit handle the blending and reordering, but that won't allow blend modes and may not perform well.
So far, I think doing everything in one GLKView is the best solution. I guess I will have to use some kind of buffer as a layer. My app should also be able to handle undo/redo, so maybe I will have to use textures to store previous data.
However, I am new to OpenGL, so my question is:
How should I implement layers?
Since the question is very broad, here is a broad and general answer that should give you some starting points for more detailed research.
Probably a good way would be to manage the individual layers as individual textures. With the use of framebuffer objects (FBOs) you can easily render directly into a texture for drawing inside the layers. Each texture would (more or less) persistently store the image of a single layer. For combining the layers you would then render each of the layer textures one over the other (in the appropriate order, whatever that may be) using a simple textured quad and with the blending functions you need.
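In OpenGL ES 2 terms, the render-to-texture setup for one layer might look roughly like this (error handling and completeness checks omitted; the function names are the standard GL C API as exposed to Swift):

```swift
import OpenGLES

// Create a texture that will hold one layer's pixels, plus an FBO so we
// can render brush strokes directly into that texture.
func makeLayerTarget(width: GLsizei, height: GLsizei) -> (texture: GLuint, fbo: GLuint) {
    var texture: GLuint = 0
    glGenTextures(1, &texture)
    glBindTexture(GLenum(GL_TEXTURE_2D), texture)
    glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA, width, height, 0,
                 GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), nil)
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GL_LINEAR)

    var fbo: GLuint = 0
    glGenFramebuffers(1, &fbo)
    glBindFramebuffer(GLenum(GL_FRAMEBUFFER), fbo)
    // Attach the texture as the FBO's color buffer: drawing while this FBO
    // is bound now writes into the layer texture.
    glFramebufferTexture2D(GLenum(GL_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0),
                           GLenum(GL_TEXTURE_2D), texture, 0)
    return (texture, fbo)
}
```

To composite, bind the default framebuffer again and draw each layer texture as a quad, bottom layer first, with the blend function you need (e.g. `glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA)` for premultiplied alpha).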

Cocos2d: GPU having to process the non visible CCSprite quads

Given one texture sheet is it better to have one or multiple CCSpriteBatchNodes? Or does this not affect at all the GPU computational cost in processing the non visible CCSprite quads?
I am thinking about performance, referring to this question and the answer I got. Basically, it suggests that I should use more than one CCSpriteBatchNode even if I have only one file. I don't understand whether the sentence "Too many batched sprites still affects performance negatively even if they are not visible or outside the screen" also applies when I have two CCSpriteBatchNodes instead of one. In other words, does that sentence refer to this: "The GPU is responsible for cancelling draws of quads that are not visible due to being entirely outside the screen. It still needs to process those quads."? And if so, it should mean that it doesn't really matter how many CCSpriteBatchNode instances I have using the same texture sheet, right?
How can I optimize this? I mean, how can I avoid the GPU having to process the non visible quads?
Would you be able to answer to at least the questions in bold?
First case: too many nodes (or sprites) in the scene, with many of them outside the screen/visible area. In this case, for each sprite the GPU has to check whether it is outside the visible area or not. Too many sprite nodes means too much load on the GPU.
Adding more CCSpriteBatchNodes should not affect performance, because the sprite-sheet bitmap is loaded into GPU memory and an array of coordinates is kept by the application for drawing the individual sprites. So whether you put 2 images in 2 different CCSpriteBatchNodes or 2 images in 1, it will be the same for both the CPU and the GPU.
How to optimize?
The best way would be to remove the invisible nodes/sprites from the parent. But it depends on your application.
FPS certainly drops for two reasons:
fillrate - when a lot of sprites overlap each other (and additionally when we render a high-res texture into a small sprite)
redundant state changes - in this case the heaviest are shader and texture switches
You can render sprites outside of the screen in a single batch, and this doesn't drop performance significantly. Note that rendering a sprite with zero opacity (or a fully transparent texture) takes the same time as a non-transparent sprite.
First of all, this really sounds like a case of premature optimization. Do a test with the number of sprites you expect to be on screen, and some added, others removed. Do you get 60 fps on the oldest supported device? If yes, good, no need to optimize. If no, tweak the code design to see what actually makes a difference.
I mean, how can I avoid the GPU having to process the non visible quads?
You can't, unless you're going to rewrite how cocos2d handles drawing of sprites/batched sprites.
it doesn't really matter how many CCSpriteBatchNode instances I have using the same texture sheet, right?
Each additional sprite batch node adds a draw call. Given how many sprites they can batch into a single draw call, the benefit far outweighs the drawbacks. Whether you have one, two or three sprite batch nodes makes absolutely no difference.
