Swift: Fastest way to draw a rectangle - ios

I'm making a game that consists almost entirely of solid-colored rectangles. Currently I'm using SKSpriteNodes for all of the rectangles, since they need to be animated and touched, but after creating a few hundred of them I found they cause a lot of lag.
All I need is a rectangle I can draw in a solid color, no textures or anything, yet for some reason even these simple sprites lag badly.
If possible, I want to avoid OpenGL as I tried it before and it took me months to do a single thing. I just feel like there must be a fast way that I can't find. Any help will be appreciated!

The simplicity of your skin (i.e. just a rectangle rather than a more complex image) is unlikely to be the cause of your performance problems. Consider experiences like this one, animating 10k particles. Start with Instruments; that's how you find where your bottlenecks are. At these scales (hundreds of nodes), it is almost always an O(n^2) algorithm that has snuck into your system, not the act of drawing.
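As an illustration of the kind of O(n^2) pattern Instruments often uncovers (the types and function names here are hypothetical, not from the question's code): comparing every rectangle against every other rectangle each frame scales quadratically, while bucketing them into a coarse spatial grid keeps each lookup local. A minimal sketch in plain Swift:

```swift
import Foundation

struct Box { var x, y, w, h: Double }

func intersects(_ a: Box, _ b: Box) -> Bool {
    return a.x < b.x + b.w && b.x < a.x + a.w &&
           a.y < b.y + b.h && b.y < a.y + a.h
}

// Naive approach: every box against every other, O(n^2) per frame.
func collidingPairsNaive(_ boxes: [Box]) -> Set<[Int]> {
    var pairs = Set<[Int]>()
    for i in 0..<boxes.count {
        for j in (i + 1)..<boxes.count where intersects(boxes[i], boxes[j]) {
            pairs.insert([i, j])
        }
    }
    return pairs
}

// Grid-bucketed approach: only boxes sharing a cell are compared.
func collidingPairsGrid(_ boxes: [Box], cell: Double) -> Set<[Int]> {
    var buckets: [Int64: [Int]] = [:]   // cell key -> box indices
    for (i, b) in boxes.enumerated() {
        let x0 = Int(floor(b.x / cell)), x1 = Int(floor((b.x + b.w) / cell))
        let y0 = Int(floor(b.y / cell)), y1 = Int(floor((b.y + b.h) / cell))
        for cx in x0...x1 {
            for cy in y0...y1 {
                let key = Int64(cx) &* 1_000_003 &+ Int64(cy)
                buckets[key, default: []].append(i)
            }
        }
    }
    var pairs = Set<[Int]>()
    for indices in buckets.values {
        for a in 0..<indices.count {
            for b in (a + 1)..<indices.count {
                let i = min(indices[a], indices[b])
                let j = max(indices[a], indices[b])
                if intersects(boxes[i], boxes[j]) { pairs.insert([i, j]) }
            }
        }
    }
    return pairs
}
```

Both functions return the same pairs; the grid version just avoids touching distant boxes, which is the usual cure when the profiler shows the frame time dominated by pairwise checks rather than drawing.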

Related

OpenCV - Detect rough, hand-drawn circles with obstructions

I've been trying to extract hand-drawn circles from a document for a while now but every attempt I make doesn't have the level of consistency I need.
Process Album
The problem I keep coming up against is when 2 "circles" are too close they become a single contour, ruining my attempt to detect if a contour is curved. I'm sure there must be a better way to extract these circles, but their imperfection and inconsistency are really stumping me.
I've tried many other ways to single out the curves, the most accurate of which is:
Rather than use dilation to bridge the gap between the segmented contours, find the endpoints and attempt to continue the curve until it hits another contour.
Problem: I can't effectively find the turning points of the contour, otherwise this would be my preferred method.
I apologize if this question is deemed "too specific", but I feel like Computer Vision stuff like this can always be applied elsewhere.
Thanks ahead of time for any and all help, I'm about at the end of my rope here.
EDIT: I've just realized the album wasn't working correctly, I think it should be fixed now though.
It looks like a very challenging problem, so it is very likely that the things I am going to suggest won't work very well in practice.
To ease the problem, I would first try to remove as much of the other content from the image as possible.
If the template of the document is always the same, it might be worth trying to remove the horizontal and vertical lines along with the grayed areas. For example, given the empty template, subtract it from the document that you are processing. It might also be possible to get rid of the text. This would leave an image containing only parts of the hand-drawn circles.
On such an image, detecting circles or ellipses with a Hough transform might give some results (although the shapes may be far from true circles or ellipses).

SKLabelNodes drop fps

I have a little game based on SpriteKit.
In this game I use lots of nodes with letters (or combinations of letters) on them that the user can move around to build words.
Those nodes are basically SKSpriteNodes with SKLabelNodes on them.
When I have a considerably large number of nodes, the draw count increases and FPS drops dramatically.
Obviously, when I remove SKLabelNodes, Draw count stays low. But I still need those letters.
The question is, what is the best way to add those letters without dropping FPS?
There are three ways to do this, each a different blend of compromises.
The first would be the easiest: use shouldRasterize on the existing labels. Unfortunately, this doesn't seem to exist for labels in Sprite Kit. DOH!
Use bitmapped textures as letters on objects, actually as sprites, the thing that Sprite Kit handles best. This will involve using a bitmap font generator, such as the excellent BMGlyph as pointed out by Whirlwind in the comments.
This will not be easy, because the coding part is a little more labour intensive, but you should get the absolute best performance this way.
You can still swap letters, too, but you will need to think of them as subsections of the texture rather than as letters. An array or dictionary mapping each letter to its position in the texture will be both performant and easy to use, but labour intensive to set up, much more so than SKLabelNode.
Or you could go wild and create textures in code: put an SKLabelNode on a throwaway node, "render" or "draw" that to a texture, and then use those textures for each letter on your objects/sprites. Similar to how BMGlyph works, but MUCH more time consuming and much less flexible.
BMGlyph is the best combination of speed and ease of use, and it has some pretty fancy effects for creating nice looking text, too.
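A minimal sketch of the render-to-texture idea from the third option, assuming you have access to the scene's SKView (the cache and helper function are illustrative, not part of any API):

```swift
import SpriteKit

// Illustrative glyph cache: rasterize each letter once, reuse everywhere.
var glyphCache: [Character: SKTexture] = [:]

func glyphTexture(for letter: Character, in view: SKView) -> SKTexture? {
    if let cached = glyphCache[letter] { return cached }
    let label = SKLabelNode(fontNamed: "AvenirNext-Bold")
    label.text = String(letter)
    label.fontSize = 64
    // SKView.texture(from:) renders the node tree into a texture once.
    guard let texture = view.texture(from: label) else { return nil }
    glyphCache[letter] = texture
    return texture
}

// Usage: plain sprites now carry the letters, with no per-frame label cost.
// if let texture = glyphTexture(for: "A", in: skView) {
//     scene.addChild(SKSpriteNode(texture: texture))
// }
```

This trades BMGlyph's offline bitmap-font tooling for a runtime one-off rasterization; sprites built from the cached textures batch like any other sprite.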

SKPhysicsBody slowing down program

I have a random maze generator that starts by building small mazes and then progresses into massive levels. The "C"s are collectables, the "T"s are tiles, and the "P" is the player's starting position. I've included a sample tile map below.
The performance issue does not appear with a small 6x12 pattern like this one; it shows up when I've got a 20x20 pattern, for example.
Each character is a tile, and each tile has its own SKPhysicsBody. The tiles are not square, they are complex polygons, and the tiles don't quite touch each other.
The "C"s need to be removable one at a time, and the "T"s are permanent for the level and don't move. Also, the maze only shows a 6x4 section of tiles at a time and moves the background so the view stays centered on the player.
I've tried making the T's and C's rectangles, which drastically improves performance (though it's still slower than desired), but the user won't care for this; the shape of the tiles is just too different.
Are there any performance tricks you pros can muster up to fix this?
TTTTTT
TCTTCT
TCCCCT
TTCTCT
TCCTCT
TTCCTT
TTTCTT
TTCCCT
TCCTCT
TCTTCT
TTCCCT
TTPTTT
The tiles are not square, they are complex polygons
I think this is your problem. Also, if your bodies are dynamic, making them static will drastically improve performance. You can also try pooling. And be aware that performance on the simulator is drastically lower than on a real device.
What kind of collision method are you using?
SpriteKit provides several ways to define the shape of an SKPhysicsBody. A rectangle or a circle gives the best performance:
myPhysicsBody = SKPhysicsBody(rectangleOfSize: mySprite.size)
You can also define more complex shapes, like a triangle, which perform worse.
Using the texture (SpriteKit will use all non-transparent pixels to detect the shape by itself) has the worst performance:
myPhysicsBody = SKPhysicsBody(texture: mySprite.texture, size: mySprite.size)
Activating usesPreciseCollisionDetection will also have a negative impact on performance.
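Putting both answers' advice together, a hedged sketch of a tile setup (the sprite and its size are illustrative; note that current Swift spells the initializer rectangleOf:, while older SDKs used rectangleOfSize: as in the snippet above):

```swift
import SpriteKit

// Hypothetical maze tile; names and dimensions are illustrative.
let tile = SKSpriteNode(color: .brown, size: CGSize(width: 32, height: 32))

// Cheapest body shape: a rectangle. Avoid per-tile polygon or
// texture-based bodies when you have hundreds of tiles.
tile.physicsBody = SKPhysicsBody(rectangleOf: tile.size)

// Walls never move, so let the dynamics solver skip them entirely.
tile.physicsBody?.isDynamic = false

// Precise (swept) collision detection is expensive; leave it off for tiles.
tile.physicsBody?.usesPreciseCollisionDetection = false
```

Static rectangle bodies plus pooling of off-screen tiles is usually enough to bring a 20x20 map back to frame rate; the visual shape of the sprite can stay as complex as you like, since only the physics shape is simplified.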

Cocos2d: GPU having to process the non visible CCSprite quads

Given one texture sheet is it better to have one or multiple CCSpriteBatchNodes? Or does this not affect at all the GPU computational cost in processing the non visible CCSprite quads?
I am thinking about performance, referring to this question and the answer I got. Basically it suggests that I should use more than one CCSpriteBatchNode even if I have only one file. I don't understand whether the sentence "Too many batched sprites still affects performance negatively even if they are not visible or outside the screen" still applies when I have two CCSpriteBatchNodes instead of one. In other words, does that sentence refer to this: "The GPU is responsible for cancelling draws of quads that are not visible due to being entirely outside the screen. It still needs to process those quads."? And if so, does that mean it doesn't really matter how many CCSpriteBatchNode instances I have using the same texture sheet?
How can I optimize this? I mean, how can I avoid the GPU having to process the non visible quads?
Would you be able to answer to at least the questions in bold?
First case: there are too many nodes (or sprites) in the scene and many of them are outside the screen/visible area. In this case, for each sprite the GPU has to check whether it is outside the visible area or not. Too many sprite nodes means too much load on the GPU.
Adding more CCSpriteBatchNodes should not affect performance. The sprite-sheet bitmap is loaded into GPU memory, and an array of coordinates is kept by the application for drawing the individual sprites. So whether you put 2 images in 2 different CCSpriteBatchNodes or 2 images in 1, it is the same for both the CPU and the GPU.
How to optimize?
The best way would be to remove the invisible nodes/sprites from the parent. But it depends on your application.
FPS drops certainly because of two reasons:
fillrate - when a lot of sprites overlap each others (and additionally if we render high-res texture into small sprite)
redundant state changes - in this case the heaviest are shader and texture switches
You can render sprites outside of the screen in a single batch and this doesn't drop performance significantly. Note that rendering a sprite with zero opacity (or a fully transparent texture) takes the same time as a non-transparent one.
First of all, this really sounds like a case of premature optimization. Do a test with the number of sprites you expect to be on screen, and some added, others removed. Do you get 60 fps on the oldest supported device? If yes, good, no need to optimize. If no, tweak the code design to see what actually makes a difference.
I mean, how can I avoid the GPU having to process the non visible quads?
You can't, unless you're going to rewrite how cocos2d handles drawing of sprites/batched sprites.
it doesn't really matter how many CCSpriteBatchNode instances I have using the same texture sheet, right?
Each additional sprite batch node adds a draw call. Given how many sprites they can batch into a single draw call, the benefit far outweighs the drawbacks. Whether you have one, two or three sprite batch nodes makes absolutely no difference.

CALayer vs CGContext, which is a better design approach?

I have been doing some experimenting with iOS drawing. As a practical exercise I wrote a BarChart component. The following is the class diagram (well, I wasn't allowed to upload images, so let me describe it in words): I have an NGBarChartView, which inherits from UIView and has two protocols, NGBarChartViewDataSource and NGBarChartViewDelegate. The code is at https://github.com/mraghuram/NELGPieChart/blob/master/NELGPieChart/NGBarChartView.m
To draw the bar chart, I have created each bar as a separate CAShapeLayer. The reason I did this is twofold: first, I could just create a UIBezierPath and attach it to a CAShapeLayer object, and second, I can easily track whether a bar item is touched by using the [layer hitTest] method. The component works pretty well. However, I am not comfortable with the approach I have taken to draw the bars, hence this note. I need expert opinion on the following:
1. By using CAShapeLayer and creating bar items, I am really not using the UIGraphicsContext. Is this a good design?
2. My approach creates several CALayers inside a UIView. Is there a limit, based on performance, to the number of CALayers you can create in a UIView?
3. If a good alternative is to use CGContext* methods, then what's the right way to identify whether a particular path has been touched?
4. From an animation point of view, such as the bar blinking when you tap on it, is the layer design or the CGContext design better?
Help is very much appreciated. BTW, you are free to look at my code and comment. I will gladly accept any suggestions to improve.
Best,
Murali
IMO, drawing shapes generally needs heavy processing power, and compositing a cached bitmap on the GPU is much cheaper than drawing everything again. So in many cases we cache all drawing into a bitmap, and on iOS, CALayer is in charge of that.
That said, if your bitmaps exceed the video-memory limit, Quartz cannot composite all the layers at once, so it has to draw a single frame over multiple phases, which means reloading some textures into the GPU. This can impact performance. I am not sure about this, because the iPhone's VRAM is known to be integrated with system RAM, but it still means more work even in that case. If even system memory becomes insufficient, the system can purge existing bitmaps and ask you to redraw them later.
CAShapeLayer does all of the CGContext work (I believe you meant this) for you. You can do it yourself if you feel the need for lower-level optimization.
Yes, obviously everything has a performance limit. If you're using hundreds of layers with large alpha-blended graphics, it will cause performance problems. Generally, though, this doesn't happen, because layer composition is accelerated by the GPU. If your graph lines are not too many, and they're basically opaque, you'll be fine.
All you have to know is that once graphics are composited, there is no way to decompose them, because composition itself is a sort of lossy optimization. So you have only two options: (1) redraw all graphics whenever a mutation is required, or (2) keep a cached bitmap of each display element (like a graph line) and composite them as needed. The latter is just what CALayers are doing.
The layer-based approach is absolutely better. Any kind of free-shape drawing (even when done on the GPU) needs a lot more processing power than simple bitmap composition (which becomes just two textured triangles), unless, of course, your layers exceed the video-memory limit.
I hope this helps.
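A minimal Swift sketch of the layer-per-bar approach with touch identification (the asker's code is Objective-C; the class, property names, and layout constants here are illustrative, not the actual NGBarChartView API). Since each bar's path is built in the view's coordinate space, a tap can be matched by asking each layer's path whether it contains the point:

```swift
import UIKit

final class TinyBarChartView: UIView {
    private var barLayers: [CAShapeLayer] = []

    func setBars(heights: [CGFloat]) {
        barLayers.forEach { $0.removeFromSuperlayer() }
        barLayers = heights.enumerated().map { index, height in
            let bar = CAShapeLayer()
            // Paths are built in the view's coordinate space, so the
            // layers keep their default zero-origin frames.
            let rect = CGRect(x: CGFloat(index) * 40 + 8,
                              y: bounds.height - height,
                              width: 24, height: height)
            bar.path = UIBezierPath(rect: rect).cgPath
            bar.fillColor = UIColor.systemBlue.cgColor
            layer.addSublayer(bar)
            return bar
        }
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: self) else { return }
        if let index = barLayers.firstIndex(where: { $0.path?.contains(point) == true }) {
            // React to the tap, e.g. flash the bar with a CABasicAnimation.
            print("Tapped bar \(index)")
        }
    }
}
```

Because each bar is its own layer, the blink animation from question 4 becomes a cheap opacity or fillColor animation on that single layer, with no redraw of the rest of the chart.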
