Best practice for interactive 2D programming in iOS

I want to create a diagram application in which I can create shapes, and every shape can be moved around the canvas.
What is the best way to implement this? So far I know of only two ways:
1. Use a single UIView and draw all the shapes into it; when touch events arrive, redraw everything.
2. Create a UIView for each shape, so that every UIView can respond to UIEvents independently.
Is there any other good way? The first seems too complicated, and the second seems to have poor performance.

Either will work, but each has pros and cons. Specifically:
Single UIView: This approach would require you to create a CALayer for each shape and then do your own hit-testing and finger-dragging when the shape is moved. This approach will perform much better if you have many shapes (be sure to use an indexed lookup to do hit testing rather than an O(N) search) since CALayers are lightweight.
Do not take the approach of drawing the shapes in their current locations in a single UIView via a single call to drawRect:. This will perform extremely poorly, especially while you move the shapes during a drag, and, as you indicate, it is very complicated to implement well.
One UIView Per Shape: This approach is very easy to program, as you don't have to do the hit-testing and the touch gets sent to the shape being touched. This approach will perform well if you have a few shapes (fewer than about 30, in my experience). If you have a large number of shapes, you start to see issues with frame rate.
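As a rough sketch of the single-UIView, CALayer-per-shape approach (CanvasView and its methods are made-up names for illustration, and the hit test here is the naive O(N) scan; swap in an indexed lookup if you have many shapes):

    // CanvasView.m - hypothetical canvas: one CAShapeLayer per shape, no drawRect:.
    #import <UIKit/UIKit.h>
    #import <QuartzCore/QuartzCore.h>

    @interface CanvasView : UIView
    @property (nonatomic, strong) CALayer *draggedLayer;   // shape currently under the finger
    @end

    @implementation CanvasView

    // Add one lightweight layer per shape instead of redrawing the whole view.
    - (CAShapeLayer *)addShapeWithPath:(UIBezierPath *)path at:(CGPoint)position {
        CAShapeLayer *shape = [CAShapeLayer layer];
        shape.path = path.CGPath;
        shape.fillColor = [UIColor redColor].CGColor;
        shape.bounds = path.bounds;
        shape.position = position;
        [self.layer addSublayer:shape];
        return shape;
    }

    // Find the topmost sublayer whose bounds contain the touch point.
    - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
        CGPoint p = [[touches anyObject] locationInView:self];
        for (CALayer *sublayer in [self.layer.sublayers reverseObjectEnumerator]) {
            if ([sublayer containsPoint:[self.layer convertPoint:p toLayer:sublayer]]) {
                self.draggedLayer = sublayer;
                break;
            }
        }
    }

    // Move the hit layer with the finger; disable implicit animations so it tracks exactly.
    - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
        if (!self.draggedLayer) return;
        [CATransaction begin];
        [CATransaction setDisableActions:YES];
        self.draggedLayer.position = [[touches anyObject] locationInView:self];
        [CATransaction commit];
    }

    - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
        self.draggedLayer = nil;
    }

    @end

Note that containsPoint: only tests the layer's bounding box; for exact hits on a non-rectangular shape you can additionally test the shape's path with CGPathContainsPoint.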

Related

Is there a way to create a CGPath matching outline of a SKSpriteNode?

My goal is to create a CGPath that matches the outline of a SKSpriteNode.
This would be useful in creating glows/outlines of SKSpriteNodes as well as a path for physics.
One thought I have had is to use CIImage, but I have not really worked with it much at all, so I don't know whether there is a way to access or modify images at the pixel level.
Then maybe I would be able to port something like this to Objective-C:
http://www.sakri.net/blog/2009/05/28/detecting-edge-pixels-with-marching-squares-algorithm/
I am also very open to other approaches that would automate this process, as opposed to me creating shape paths by hand for every sprite I make for physics or outline/glow effects.
What you're looking for is called a contour tracing algorithm. Moore neighbor tracing is popular and works well for images and tilemaps. But do check out the alternatives because they may better fit your purposes.
AFAIK marching squares and contour tracing are closely related, if not the same (class of) algorithms.
An implementation for tilemaps (to create physics shapes from tiles) is included in Kobold Kit. The body of the algorithm is in the traceContours method of KKTilemapLayerContourTracer.m.
It looks more complex than it really is; on the other hand, it takes a while to wrap your head around it because it is a "walking" algorithm, meaning the results of prior steps are used in the current step to make decisions.
The KK implementation also includes a few minor fixes specifically for tilemaps (i.e. two or more horizontally or vertically connected tiles become a single line instead of dividing the line into tile-sized segments). It was also created with a custom point array structure, and when I ported it to SK I decided it would be easier to keep that and only convert the point arrays to CGPath objects at the end.
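To give an idea of that last step, converting a traced point array into a CGPath and using it for physics might look roughly like this (the points/pointCount parameters stand in for whatever your contour tracer outputs; note that bodyWithPolygonFromPath: expects a convex, non-self-intersecting path, so a raw traced outline may need simplifying first):

    #import <SpriteKit/SpriteKit.h>

    // Build a closed CGPath from the traced contour points.
    CGPathRef PathFromContourPoints(const CGPoint *points, size_t pointCount) {
        CGMutablePathRef path = CGPathCreateMutable();
        if (pointCount == 0) return path;
        CGPathMoveToPoint(path, NULL, points[0].x, points[0].y);
        for (size_t i = 1; i < pointCount; i++) {
            CGPathAddLineToPoint(path, NULL, points[i].x, points[i].y);
        }
        CGPathCloseSubpath(path);
        return path; // caller releases with CGPathRelease()
    }

    // Usage: attach the traced outline to a sprite as its physics body,
    // or hand the same path to an SKShapeNode for an outline/glow effect.
    // sprite.physicsBody = [SKPhysicsBody bodyWithPolygonFromPath:outlinePath];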
You can make certain optimizations if you can safely assume that the shape you're trying to trace does not touch the borders and that no tiles are connected only diagonally. All of this becomes clearer once you're actually implementing the algorithm for your own purposes.
But as far as a ready-made, fits-all-purposes solution goes: there ain't none.

How to speed up drawing a grid of UIViews?

Currently I instantiate a 2-D matrix of UIViews. Each UIView's drawRect: is overridden to draw one of 2-3 shapes.
As the grid scales larger, I am noticing excessive time spent in the drawRect: of each subview. Since I only have 2-3 shapes, I would like to speed up the rendering of the matrix by drawing each of the 2-3 unique UIViews only once, and then somehow instantiating a copy of the appropriate pre-drawn UIView in the matrix.
I have considered capturing the UIView as a UIImage, making a copy of the UIImage, and instantiating this copy. I am wondering, though, whether the overhead of this process makes it not appreciably faster than drawRect:.
Can someone point me to a best practice for speed optimization by reusing UIViews in a 2-D matrix?
The quick fix for this issue is to enable the shouldRasterize property on the layer of any UIView that does not require further updates.
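Roughly, and assuming cellView is one of the grid's subviews, that amounts to:

    // Cache the view's rendered output as a bitmap so drawRect: is not re-run
    // every time the grid is composited; set the scale or Retina screens will blur.
    cellView.layer.shouldRasterize = YES;
    cellView.layer.rasterizationScale = [UIScreen mainScreen].scale;

    // If a cell's contents do change later, just invalidate it as usual:
    // [cellView setNeedsDisplay];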

Which is the fastest way to add a rectangle to the front window

We can draw a rectangle on a UIImage. We can also add a subview with a background color or border. I guess there are other ways to do it, too.
Has anyone tried to analyze them?
Which is the fastest way?
Thanks!
I would say that drawing rectangles using the Quartz engine and UIImage is more CPU intensive than using UIView. If your scene is heavy and dynamic, Quartz is the best way of doing drawings because you can update your drawings.
Using UIView is not CPU intensive, but it will have a heavy memory footprint if you want to draw a lot of rectangles.
So, if you want to draw just one or two rectangles for GUI design, I'd say go with UIViews. But if you are trying to do some complex drawing involving more shapes, go with Quartz.
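For reference, the two approaches being compared look roughly like this (RectangleView and AddRectangle are placeholder names):

    #import <UIKit/UIKit.h>

    // 1. UIView approach: one cheap, GPU-composited layer per rectangle.
    static UIView *AddRectangle(UIView *container) {
        UIView *box = [[UIView alloc] initWithFrame:CGRectMake(20, 20, 100, 60)];
        box.backgroundColor = [UIColor blueColor];
        box.layer.borderColor = [UIColor blackColor].CGColor;
        box.layer.borderWidth = 1.0;
        [container addSubview:box];
        return box;
    }

    // 2. Quartz approach: draw the rectangle yourself in a custom view's drawRect:.
    @interface RectangleView : UIView
    @end

    @implementation RectangleView
    - (void)drawRect:(CGRect)rect {
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        CGContextSetFillColorWithColor(ctx, [UIColor blueColor].CGColor);
        CGContextFillRect(ctx, CGRectMake(20, 20, 100, 60));
    }
    @end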

Design advice for OpenGL ES 2 / iOS GLKit

I'd like to build an app using the new GLKit framework, and I'm in need of some design advice. I'd like to create an app that will present up to a couple thousand "bricks" (objects with very simple geometry). Most will have identical texture, but up to a couple hundred will have unique texture. I'd like the bricks to appear every few seconds, move into place and then stay put (in world coords). I'd like to simulate a camera whose position and orientation are controlled by user gestures.
The advice I need is about how to organize the code. I'd like my model to be a collection of bricks that have a lot more than graphical data associated with them:
Does it make sense to associate a view-like object with each brick to handle geometry, texture, etc.?
Should every brick have its own vertex buffer?
Should each have its own GLKBaseEffect?
I'm looking for help organizing which object should do what during setup, and then during rendering.
I hope I can stay close to the typical MVC pattern, with my GLKViewController observing model state changes, controlling eye coordinates based on gestures, and so on.
Would be much obliged if you could give some advice or steer me toward a good example. Thanks in advance!
With respect to the models, I think an approach analogous to the relationship between UIImage and UIImageView is appropriate. So every type of brick has a single vertex buffer, GLKBaseEffect, texture and whatever else. Each brick may then appear multiple times, just as multiple UIImageViews may use the same UIImage. In terms of keeping multiple reference frames, it's actually a really good idea to build a hierarchy essentially equivalent to UIView: each node holds a transform relative to its parent, and one sort of node is able to display a model.
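As a hedged sketch of that analogy (all of these type names are made up for illustration): one shared object per brick type owns the GL resources, and each placed brick stores only a transform plus a reference to its type, the way many UIImageViews share one UIImage.

    #import <GLKit/GLKit.h>

    // Shared per-type resources: the "UIImage" side of the analogy.
    @interface BrickType : NSObject
    @property (nonatomic, assign) GLuint vertexBuffer;    // one VBO per brick type
    @property (nonatomic, strong) GLKBaseEffect *effect;  // holds the (possibly shared) texture
    @end

    @implementation BrickType
    @end

    // A placed brick: the "UIImageView" side, just a transform plus a type reference.
    @interface Brick : NSObject
    @property (nonatomic, strong) BrickType *type;
    @property (nonatomic, assign) GLKMatrix4 modelMatrix; // position/orientation in world space
    @end

    @implementation Brick
    @end

When rendering, you bind a type's vertex buffer once and then draw every brick of that type, updating only the effect's modelview matrix per brick.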
From the GLKit documentation, I think the best way to keep the sort of camera you want (and indeed the object locations) is to store it directly as a GLKMatrix4 or a GLKQuaternion — so you don't derive the matrix or quaternion (plus location) from some other description of the camera, rather the matrix or quaternion directly is the storage for the camera.
Both of those classes have methods built in to apply rotations, and GLKMatrix4 can directly handle translations. So you can directly map the relevant gestures to those functions.
The only slightly non-obvious thing I can think of when dealing with the camera in that way is that you want to send the inverse to OpenGL rather than the thing itself. Supposing you use a matrix, the reasoning is that if you wanted to draw an object at that location you'd load the matrix directly then draw the object. When you draw an object at the same location as the camera you want it to end up being drawn at the origin. So the matrix you have to load for the camera is the inverse of the matrix you'd load to draw at that location because you want the two multiplied together to be the identity matrix.
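A sketch of that camera handling, assuming a GLKBaseEffect-based renderer (the gesture-handling method names and scaling factors are placeholders):

    #import <GLKit/GLKit.h>

    @interface CameraController : NSObject
    // The camera's own transform, stored directly as suggested above.
    @property (nonatomic, assign) GLKMatrix4 cameraMatrix;
    @end

    @implementation CameraController

    - (instancetype)init {
        if ((self = [super init])) {
            _cameraMatrix = GLKMatrix4Identity;
        }
        return self;
    }

    // Map a pan gesture to a translation of the camera itself.
    - (void)panBy:(CGPoint)delta {
        self.cameraMatrix = GLKMatrix4Translate(self.cameraMatrix,
                                                delta.x * 0.01f, 0.0f, delta.y * 0.01f);
    }

    // Map a rotation gesture to a rotation about the camera's up axis.
    - (void)rotateBy:(float)radians {
        self.cameraMatrix = GLKMatrix4Rotate(self.cameraMatrix, radians, 0.0f, 1.0f, 0.0f);
    }

    // What actually goes to OpenGL is the inverse of the camera's transform,
    // combined with the model matrix of whatever is about to be drawn.
    - (void)applyToEffect:(GLKBaseEffect *)effect modelMatrix:(GLKMatrix4)modelMatrix {
        bool invertible = false;
        GLKMatrix4 viewMatrix = GLKMatrix4Invert(self.cameraMatrix, &invertible);
        effect.transform.modelviewMatrix = GLKMatrix4Multiply(viewMatrix, modelMatrix);
    }

    @end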
I'm not sure how complicated the models for your bricks are but you could hit a performance bottleneck if they're simple and all moving completely independently. The general rule when dealing with OpenGL is that the more geometry you can submit at once, the faster everything goes. So, for example, an entirely static world like that in most games is much easier to draw efficiently than one where everything can move independently. If you're drawing six-sided cubes and moving them all independently then you may see worse performance than you might expect.
If you have any bricks that move in concert then it is more efficient to draw them as a single piece of geometry. If you have any bricks that definitely aren't visible then don't even try to draw them. As of iOS 5, GL_EXT_occlusion_query_boolean is available, which is a way to pass some geometry to OpenGL and ask if any of it is visible. You can use that in realtime scenes by building a hierarchical structure describing your data (which you'll already have if you've directly followed the UIView analogy), calculating or storing some bounding geometry for each view and doing the draw only if the occlusion query suggests that at least some of the bounding geometry would be visible. By following that sort of logic you can often discard large swathes of your geometry long before submitting it.

CALayer vs CGContext, which is a better design approach?

I have been doing some experimenting with iOS drawing. As a practical exercise I wrote a BarChart component. The following is the class diagram (well, I wasn't allowed to upload images, so let me describe it in words): I have an NGBarChartView, which inherits from UIView and has two protocols, NGBarChartViewDataSource and NGBarChartViewDelegate. The code is at https://github.com/mraghuram/NELGPieChart/blob/master/NELGPieChart/NGBarChartView.m
To draw the bar chart, I have created each bar item as a separate CAShapeLayer. The reason I did this is twofold: first, I could just create a UIBezierPath and attach it to a CAShapeLayer object, and second, I can easily track whether a bar item has been touched by using the [layer hitTest:] method. The component works pretty well. However, I am not comfortable with the approach I have taken to draw the bar charts, hence this note. I need expert opinion on the following:
1. By using CAShapeLayer and creating BarItems, I am really not using the UIGraphicsContext. Is this a good design?
2. My approach will create several CALayers inside a UIView. Is there a limit, based on performance, to the number of CALayers you can create in a UIView?
3. If a good alternative is to use CGContext* methods, then what's the right way to identify whether a particular path has been touched?
4. From an animation point of view, such as the bar blinking when you tap on it, is the layer design or the CGContext design better?
Help is very much appreciated. BTW, you are free to look at my code and comment. I will gladly accept any suggestions to improve.
Best,
Murali
IMO, generally, any kind of shape drawing needs heavy processing power, and compositing a cached bitmap with the GPU is much cheaper than drawing everything again. So in many cases we cache all drawings into a bitmap, and on iOS, CALayer is in charge of that.
Anyway, if your bitmaps exceed the video memory limit, Quartz cannot composite all the layers at once. Consequently, Quartz has to draw a single frame over multiple passes, which means reloading some textures into the GPU, and this can impact performance. I am not sure about this, because iPhone VRAM is known to be integrated with system RAM, but it is still true that more work is needed even in that case. If even system memory becomes insufficient, the system can purge existing bitmaps and ask you to redraw them later.
1. CAShapeLayer will do all of the CGContext (I believe you meant this) work for you. You can do it yourself if you feel the need for more low-level optimization.
2. Yes. Obviously, everything has a limit from a performance point of view. If you're using hundreds of layers with large alpha-blended graphics, it will cause performance problems. Generally, though, that doesn't happen, because layer composition is accelerated by the GPU. If your graph lines are not too many, and they're basically opaque, you'll be fine.
3. All you have to know is that once graphics drawings are composited, there is no way to decompose them back, because composition itself is a sort of optimization through lossy compression. So you have only two options: (1) redraw all the graphics whenever a mutation is required, or (2) keep a cached bitmap of each display element (like a graph line) and composite them as needed. This is just what CALayers are doing.
4. Absolutely, the layer-based approach is far better. Any kind of free-form shape drawing (even when it's done on the GPU) needs a lot more processing power than simple bitmap composition (which becomes two textured triangles) by the GPU. Of course, this holds only as long as your layers don't exceed the video memory limit.
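For the bar-blink example from your question, the layer version is just an opacity animation on whichever CAShapeLayer your hit test returned (barLayer here is a placeholder for that layer):

    #import <QuartzCore/QuartzCore.h>

    // Blink the tapped bar by animating its layer's opacity; the chart itself
    // is never redrawn, only the layer's cached bitmap is recomposited.
    CABasicAnimation *blink = [CABasicAnimation animationWithKeyPath:@"opacity"];
    blink.fromValue = @1.0;
    blink.toValue = @0.3;
    blink.duration = 0.15;
    blink.autoreverses = YES;
    blink.repeatCount = 1;
    [barLayer addAnimation:blink forKey:@"blink"];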
I hope this helps.
