When to switch from drawRect: to OpenGL/Metal?

I've been building a 2D game and I've been drawing the playing area inside a UIView subclass, where I override drawRect: and draw the game with a lot of UIBezierPath objects. I'm not very experienced with iOS, and I've been wondering if this is the right way to do it.
So I guess my question is: how much is enough? When should I stop using UIBezierPath and start using OpenGL or Metal? Can I use these to draw inside a UIView, or do they take total control of the screen?

There is no hard rule for when you should stop using UIBezierPath. You need to ask yourself at the start which tool you will need to make your application work the way you want. The Core Graphics approach you are using is very simple compared to OpenGL or Metal, but the performance is not as good and you are more limited in what you can achieve in drawing. In general, use the simplest approach that works for you.
OpenGL and Metal are bound to a view and do not take control of your whole window (the screen, in iOS). You can also still add subviews to those views without breaking any drawing or functionality, so even a full-screen OpenGL game can have a plain UIButton to, for instance, pause the game or make your main character take that big sword, slay the dragon, and save the princess.
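As a rough illustration of that point (a minimal sketch, not taken from the question; the view controller, gameView, and pauseTapped names are invented), a Metal-backed MTKView can live inside an ordinary view hierarchy and host a plain UIButton on top of it:

#import <UIKit/UIKit.h>
#import <Metal/Metal.h>
#import <MetalKit/MetalKit.h>

// Minimal sketch: a Metal-backed view embedded in a normal view controller,
// with an ordinary UIKit button layered on top of it.
@interface GameViewController : UIViewController
@end

@implementation GameViewController

- (void)viewDidLoad {
    [super viewDidLoad];

    // The MTKView occupies only the frame you give it, not the whole screen.
    MTKView *gameView = [[MTKView alloc] initWithFrame:self.view.bounds
                                                device:MTLCreateSystemDefaultDevice()];
    gameView.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;
    [self.view addSubview:gameView];

    // Plain UIKit controls can sit above the Metal content.
    UIButton *pauseButton = [UIButton buttonWithType:UIButtonTypeSystem];
    [pauseButton setTitle:@"Pause" forState:UIControlStateNormal];
    pauseButton.frame = CGRectMake(20, 40, 80, 44);
    [pauseButton addTarget:self
                    action:@selector(pauseTapped)
          forControlEvents:UIControlEventTouchUpInside];
    [self.view addSubview:pauseButton];
}

- (void)pauseTapped {
    // Pause the game loop here.
}

@end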

Related

Need to make a Gantt chart like control in iOS, to draw or to subview?

I'm about ready to begin to create a Gantt chart like control in iOS for my app. I need to show a timeline of events. Basically a bunch of rectangles, some lines/arcs for some decoration, possibly a touch point or two to edit attributes. It will basically be a "full screen" control on my phone.
I see two basic paths to implement this custom UIView subclass:
Simply implement drawRect: and go to town using CoreGraphics calls. Most likely split over a bunch of private methods that do all the drawing work. Possibly cache some information as needed, to help with any sub region hit detection.
Rather than "draw" the graphics, add a bunch of UIViews as children using addSubview: and manipulating their layer properties to get them to show the different graphic pieces, and bounds\frame to get them positioned appropriately. And then just let "drawing" take care of itself.
Is one path better than the other? I may end up trying both in the long run just to see, but I figured I'd seek the wisdom of those who've gone before first.
My guess is that the quicker solution would be to go the drawRect: route, and the subview approach would require more code, but maybe be more robust (easier hit detection, animation support, automatic clipping management, etc). I do want to be able to implement pinch to zoom and the like, long term.
UPDATE
I went with the UICollectionView approach. Which got me selection and scrolling for free (after some surprises). I've been pretty pleased with the results so far.
Going with Core Graphics is going to force you to write many more lines of code than building with UIViews, although it is more performant and lighter on memory. However, you're likely going to need a more robust solution for managing all of that content. A UICollectionView seems like an appropriate way to map your data onto a view with a custom UICollectionViewCell subclass. This will be much quicker to develop than rolling your own and comes with great flexibility through UICollectionViewLayout subclasses. Pinch-to-zoom isn't supported out of the box, but there are ways to do it. This approach is also better for memory than using a bunch of plain UIViews because of cell reuse, but reloading can become slow with a few hundred items that all have different sizes to be calculated.
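A hedged sketch of that direction (the cell class, reuse identifier, and helper below are placeholders, not from the question): each timeline bar becomes a reusable cell, and a custom UICollectionViewLayout would translate your event data into cell frames.

#import <UIKit/UIKit.h>

// One bar of the timeline, as a reusable cell. The styling is only a placeholder.
@interface GanttBarCell : UICollectionViewCell
@end

@implementation GanttBarCell
- (instancetype)initWithFrame:(CGRect)frame {
    if ((self = [super initWithFrame:frame])) {
        self.contentView.backgroundColor = [UIColor colorWithRed:0.3 green:0.6 blue:0.9 alpha:1.0];
        self.contentView.layer.cornerRadius = 3.0;
    }
    return self;
}
@end

// The flow layout below is only a stand-in; a real Gantt chart would use a
// custom UICollectionViewLayout subclass that maps (start, duration) to cell frames.
static UICollectionView *MakeGanttView(CGRect frame) {
    UICollectionViewFlowLayout *layout = [[UICollectionViewFlowLayout alloc] init];
    UICollectionView *view = [[UICollectionView alloc] initWithFrame:frame
                                              collectionViewLayout:layout];
    [view registerClass:[GanttBarCell class] forCellWithReuseIdentifier:@"GanttBar"];
    return view;
}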
When it comes to performance, a well-written drawRect: is preferred, especially when you would potentially have to render many, many rects. With views, an entire layout system goes to work, and it gets much worse with Auto Layout, which can kill your performance. I recently converted our calendar views from view-based to Core Graphics-based rendering for performance reasons.
In every other aspect, working with views is much preferred, of course: Interface Builder, easy gesture recognizer setup, object orientation, and so on. You could still create logical classes for each element and have each one draw itself into the current context (best to pass a context reference and draw on that), but it's still not as straightforward.
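As a rough sketch of that idea (the ChartElement protocol and class names are invented for illustration), each logical element renders itself into whatever context the owning view passes down:

#import <UIKit/UIKit.h>

// Each chart element draws itself into the context it is handed, so the
// owning view's drawRect: just iterates over its elements.
@protocol ChartElement <NSObject>
- (void)drawInContext:(CGContextRef)context;
@end

@interface TaskBar : NSObject <ChartElement>
@property (nonatomic) CGRect frame;
@end

@implementation TaskBar
- (void)drawInContext:(CGContextRef)context {
    CGContextSetFillColorWithColor(context, [UIColor blueColor].CGColor);
    CGContextFillRect(context, self.frame);
}
@end

@interface ChartView : UIView
@property (nonatomic, strong) NSArray<id<ChartElement>> *elements;
@end

@implementation ChartView
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    for (id<ChartElement> element in self.elements) {
        [element drawInContext:context];
    }
}
@end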
On newer devices, view drawing performance is actually quite high. But if you want to support the iPhone 4 and 4S, or the iPad 3, which lack quite a lot in GPU performance, then depending on your chart's potential size you might have to go the Core Graphics way.
Now, you mention pinch-to-zoom. That is painful no matter what. If you write your drawRect: well, you could eventually work your way up to tiling and build on that.
If you plan on letting the user move parts of the chart around I would definitely suggest going with the views.
FYI, you will be able to handle pinch to zoom with drawRect just fine.
What would push me toward UIViews in this case is the need to support dragging parts of the chart, animating transitions in the chart, and tapping on elements in the chart (though that wouldn't be too hard with drawRect:). Also, if you have elements that need heavy CPU time to render, UIViews will perform better when you redraw sub-portions of the chart, since each element's rendering is cached to a layer and you only need to redraw the pieces you care about, not the entire chart.
If your chart will be VERY big AND you want to use drawRect:, you will probably want to look at using CATiledLayer for your backing layer so that you don't have the entire layer in memory. This can add challenges if you only want to render the requested tiles and not the entire area.
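A minimal sketch of a CATiledLayer-backed view, under the assumption that your drawing code can be limited to the tile rect it is handed (the class name and tile size are arbitrary examples):

#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

// A view backed by CATiledLayer only asks you to draw the tiles that are
// actually visible, so a very large chart never has to exist in memory at once.
@interface TiledChartView : UIView
@end

@implementation TiledChartView

+ (Class)layerClass {
    return [CATiledLayer class];
}

- (instancetype)initWithFrame:(CGRect)frame {
    if ((self = [super initWithFrame:frame])) {
        CATiledLayer *layer = (CATiledLayer *)self.layer;
        layer.tileSize = CGSizeMake(256, 256);   // arbitrary example tile size
    }
    return self;
}

// drawRect: is called once per tile; rect tells you which tile to render.
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(context, [UIColor lightGrayColor].CGColor);
    CGContextFillRect(context, rect);
    // Draw only the chart elements that intersect `rect` here.
}

@end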

Multiple instances of one UIImageView on-screen

I am making a game where bullets are involved. It's a machine gun, so there will be more than one bullet on the screen at the same time. How do I write the code for the properties and actions of one bullet and apply it to all of them, like multiple instances of the one bullet?
The key concept in making a game is the sprite, i.e., a lightweight object that has a graphical representation and that you can move around (managing collisions, etc.).
You could try to implement sprites on top of CALayers, using Core Animation, or you might decide to use a game framework like Cocos2D.
For the first approach, have a look at this short tutorial. It could also help you if you want to implement your sprites using UIImageViews, although keep in mind that CALayers are lightweight and UIViews are not, so if you plan to have many of them that could make a difference.
As to the question of replicating the bullet, the key suggestion is to use some form of caching so that you do not end up duplicating the same image in memory multiple times. A very basic caching mechanism is built into the UIImage class if you use the imageNamed: convenience constructor.
Again, if you plan to make a full-fledged game with good performance (say 40-60 fps), the best suggestion is using Cocos2D, which will offer you all the power of OpenGL graphics wrapped in a simple interface.
Have you tried subclassing UIImageView? That way you can have a createBullet function that creates an instance of the subclass and adds it to the screen, and the subclass can contain methods and properties for animating, etc.
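A hedged sketch along those lines (the Bullet class, image name, and firing method are all made up for the example; imageNamed: provides the basic caching mentioned above, so every bullet shares one bitmap):

#import <UIKit/UIKit.h>

// One lightweight class describes a bullet; every on-screen bullet is just
// another instance. +imageNamed: caches the bitmap, so ten bullets share it.
@interface Bullet : UIImageView
- (void)fireFromPoint:(CGPoint)start toPoint:(CGPoint)end;
@end

@implementation Bullet

- (instancetype)init {
    if ((self = [super initWithImage:[UIImage imageNamed:@"bullet"]])) {
        // Per-bullet state (damage, speed, ...) would live here.
    }
    return self;
}

- (void)fireFromPoint:(CGPoint)start toPoint:(CGPoint)end {
    self.center = start;
    [UIView animateWithDuration:0.3
                     animations:^{ self.center = end; }
                     completion:^(BOOL finished) { [self removeFromSuperview]; }];
}

@end

// Usage: create as many as you need and add them to the game view.
// Bullet *bullet = [[Bullet alloc] init];
// [gameView addSubview:bullet];
// [bullet fireFromPoint:muzzlePoint toPoint:targetPoint];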

CGContextDrawLayerAtPoint is slow on iPad 3

I have a custom view (inherited from UIView) in my app. The custom view overrides
- (void) drawRect:(CGRect) rect
The problem is that drawRect: takes many times longer to execute on the iPad 3 than on the iPad 2 (about 0.1 seconds on the iPad 3 versus 0.003 seconds on the iPad 2). It's about 30 times slower.
Basically, I am using some pre-created layers and draw them in the drawRect:. The last call
CGContextDrawLayerAtPoint(context, CGPointZero, m_currentLayer);
takes most of the time (about 95% of the total time spent in drawRect:).
What might be slowing things so much and how should I fix the cause?
UPDATE:
There are no threads directly involved. I do call setNeedsDisplay: in one thread and drawRect: gets called from another but that's it. The same goes for locks (there are no locks used).
The view gets redrawn in response to touches (it's a coloring book app). On iPad 2 I get reasonable delay between a touch and an update of the screen. I want to achieve the same on iPad 3.
So, the iPad 3 is definitely slower in a lot of areas. I have a theory about this. Marco Arment noted that the method renderInContext is ridiculously slow on the new iPad. I also found this to be the case when trying to create a magnifying glass for a custom text view. In the end I had to forego renderInContext for custom Core Graphics drawing.
I've also been having problems with the dreaded wait_fences errors in my Core Graphics drawing here: Only on new iPad 3: wait_fences: failed to receive reply: 10004003.
This is what I've figured out so far. The iPad 3 obviously has 4 times the pixels to drive. This can cause problems in two places:
First, the CPU. All Core Graphics drawing is done by the CPU. In the case of rotation events, if the CPU takes too long to draw, it hits the wait_fences error, which I believe is simply a call that tells the device to wait a little longer before actually performing the rotation, hence the delay.
Second, transferring images to the GPU. The GPU obviously handles the Retina resolution just fine (see Infinity Blade 2). But when Core Graphics draws, it draws its images directly into the GPU buffers to avoid a memcpy. However, either the GPU buffers haven't changed since the iPad 2 or they just aren't large enough, because it's remarkably easy to overload them. When that happens, I believe the CPU writes the images to standard memory and then copies them to the GPU when the GPU buffers can handle it. This, I think, is what causes the performance problems: that extra copy is time consuming with so many pixels and slows things down considerably.
To avoid memcpy I recommend several things:
Only draw what you need. Avoid drawing anything offscreen at all costs. If you're drawing a large view but only display part of it (because subviews cover it, for example), try to find a way to draw only what is visible.
If you have to draw a large view, consider breaking it up into parts, either as subviews or as sublayers (probably sublayers in your case), and only redraw what you need; a rough sublayer sketch follows this list. Take the Notability app, for example: when you zoom in, you can literally watch it redraw one square at a time. Or in Safari you can watch it update squares as you scroll. Unfortunately, I haven't had to do this myself, so I'm uncertain of the exact methodology.
Try to keep your drawings simple. I had an awesome-looking custom Core Text view that had to redraw on every character entered. Very slow. I changed the background to plain white (in Core Graphics) and it sped up nicely. Even better would be to not redraw the background at all.
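Here is the rough sublayer sketch referred to in the list above (class and method names are illustrative only): each section of the chart gets its own layer, so invalidating one section leaves the cached bitmaps of the others untouched.

#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

// Each section of the content gets its own layer; invalidating one section only
// re-runs that layer's drawing while the others keep their cached bitmaps.
@interface ContentSectionLayer : CALayer
@end

@implementation ContentSectionLayer
- (void)drawInContext:(CGContextRef)context {
    // Draw just the content that falls inside this layer's bounds.
}
@end

@interface SectionedView : UIView
@end

@implementation SectionedView
- (void)buildSectionsWithCount:(NSUInteger)count {
    CGFloat sectionWidth = self.bounds.size.width / count;
    for (NSUInteger i = 0; i < count; i++) {
        ContentSectionLayer *section = [ContentSectionLayer layer];
        section.frame = CGRectMake(i * sectionWidth, 0, sectionWidth, self.bounds.size.height);
        [self.layer addSublayer:section];
        [section setNeedsDisplay];   // triggers drawInContext: for this section only
    }
}
@end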
I would like to point out that my theory is conjecture. Apple doesn't really explain what exactly they do. My theory is just based on what they have said and how the iPad responds as well as my own experimentation.
UPDATE
So Apple has now released the 2012 WWDC Developer videos. They have two videos that may help you (requires developer account):
iOS App Performance: Responsiveness
iOS App Performance: Graphics and Animation
One thing they talk about that I think may help you is the method setNeedsDisplayInRect:(CGRect)rect. Using it instead of the plain setNeedsDisplay, and making sure that your drawRect: only draws the rect it is given, can greatly help performance. Personally, I use CGContextClipToRect(context, rect); to clip my drawing to the rect provided.
As an example, I have a separate class I use to draw text directly into my views using Core Text. My UIView subclass keeps a reference to this object and uses it to draw its text rather than using a UILabel. I used to refresh the entire view (setNeedsDisplay) when the text changed. Now I have my Core Text object calculate the changed CGRect and use setNeedsDisplayInRect: to update only the portion of the view that contains the text. This really helped my scrolling performance.
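A small sketch of that pattern (the view class and the notification method are placeholders): invalidate only the changed rect, then clip drawing to the rect that drawRect: receives.

#import <UIKit/UIKit.h>

@interface PartialRedrawView : UIView
@end

@implementation PartialRedrawView

// Invalidate only the region that actually changed (hypothetical callback).
- (void)contentDidChangeInRect:(CGRect)changedRect {
    [self setNeedsDisplayInRect:changedRect];
}

// drawRect: receives the invalidated rect; clip to it so nothing outside
// that region is touched, then draw only what intersects it.
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextClipToRect(context, rect);
    // ... draw only the content that intersects `rect` ...
}

@end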
I ended up using the approach described in Kurt Revis's answer to a similar question.
I minimized the number of layers used, added a UIImageView, and set its image to a UIImage wrapping my CGImageRef. Please read the mentioned answer for more details about the approach.
In the end my application became even simpler than before and runs at almost identical speed on the iPad 2 and the iPad 3.
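For reference, the core of that approach looks roughly like this (a sketch, not the linked answer's actual code; the helper and variable names are placeholders for whatever your own drawing code produces):

#import <UIKit/UIKit.h>

// Instead of redrawing layers in drawRect:, render once into a CGImageRef
// (e.g. from a bitmap context) and hand it to a plain UIImageView.
static void ShowRenderedImage(UIImageView *imageView, CGImageRef renderedImage, CGFloat scale) {
    UIImage *wrapped = [UIImage imageWithCGImage:renderedImage
                                           scale:scale
                                     orientation:UIImageOrientationUp];
    imageView.image = wrapped;   // UIKit uploads and composites this for you
}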

On iPad, to keep a bitmap of the current main view, how is Core Graphics compared with cocos2D and OpenGL ES?

I heard that for a single-view app it is best to keep the current main view's content in a bitmap: we paint extra things onto this bitmap, and when it is time to do drawRect: (when called by iOS), we just display the bitmap.
Is it true that Core Graphics is quite capable of handling this? Since both Cocos2D and OpenGL ES are said to be very fast at handling graphics, would using one of them in the situation above make things smoother, or is it not that different from using Core Graphics?
(Or maybe it depends on the type of application, such as a simple drawing program versus a game that refreshes at 30 or 60 fps?)
It depends on whether you need to draw every frame and have your drawings change.
Check out the AccelerometerGraph sample code provided by Apple. This is one way of using Core Animation layers and Core Graphics (Quartz drawing) together to do continuous drawings across many frames without redrawing what has already been drawn.
Cocos2D is less for drawing and more for sprite-based animations, where your sprites are pre-rendered images that you have imported (PNG, etc.). It's very fast at handling many images simultaneously and offers batch nodes for consolidating all images into a single OpenGL call.
Quartz (Core Graphics) by itself is usually not the best option for continuous drawing; it is better for generating static images once and then moving them around, if necessary.
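A minimal sketch of the "draw once into a bitmap, then just display it" idea from the question (everything here is illustrative; the drawing calls stand in for your own Quartz code):

#import <UIKit/UIKit.h>

// Render the expensive Quartz drawing a single time into a UIImage, then
// display or move that image around without ever redrawing it.
static UIImage *RenderBackdrop(CGSize size) {
    UIGraphicsBeginImageContextWithOptions(size, YES /* opaque */, 0 /* screen scale */);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Example drawing; replace with your own Quartz calls.
    CGContextSetFillColorWithColor(context, [UIColor whiteColor].CGColor);
    CGContextFillRect(context, CGRectMake(0, 0, size.width, size.height));
    CGContextSetStrokeColorWithColor(context, [UIColor blackColor].CGColor);
    CGContextStrokeEllipseInRect(context, CGRectInset(CGRectMake(0, 0, size.width, size.height), 20, 20));

    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}

// Usage: draw it once, keep the UIImage, and either blit it in drawRect: or
// put it into a UIImageView / CALayer contents and let Core Animation move it.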

The most effective way of sprite(frame) animation in Monotouch

I would like to know the best (highest performance / FPS) way of doing sprite (frame) animations in a MonoTouch project.
I'm doing this very effectively on Android by decoding the whole sprite image, clipping it to the desired sizes, and displaying frames at the desired pace.
So, what's the most effective way of doing this in MonoTouch, in terms of FPS?
Thanks in advance.
Have you looked at using MonoGame?
The SpriteBatch class works well for us. It is OpenTK/OpenGL underneath, so you could repurpose it for your needs if you don't want to use full MonoGame.
It depends on what demands you have for your sprites.
If your requirements fall within the scope of what Core Animation can do, you can just put your image into a UIView and use Core Animation to animate the view (a rough sketch is shown after this answer).
If you need more control, MonoGame provides a more comprehensive set of primitives for dealing with sprites.
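As a rough illustration of the Core Animation route mentioned above (shown in Objective-C; MonoTouch exposes the same CALayer and CAKeyframeAnimation APIs, and the frame image names are hypothetical):

#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

// Flip through pre-rendered frames by keyframe-animating the layer's contents.
// The GPU just swaps textures; nothing is redrawn on the CPU each frame.
static void RunFrameAnimationOnLayer(CALayer *layer) {
    NSMutableArray *frames = [NSMutableArray array];
    for (NSUInteger i = 1; i <= 8; i++) {
        NSString *name = [NSString stringWithFormat:@"walk_%lu", (unsigned long)i];
        UIImage *frame = [UIImage imageNamed:name];   // hypothetical frame images
        if (frame) [frames addObject:(__bridge id)frame.CGImage];
    }

    CAKeyframeAnimation *animation = [CAKeyframeAnimation animationWithKeyPath:@"contents"];
    animation.values = frames;
    animation.duration = 0.8;                       // 8 frames at ~10 fps
    animation.calculationMode = kCAAnimationDiscrete;
    animation.repeatCount = HUGE_VALF;
    [layer addAnimation:animation forKey:@"spriteFrames"];
}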
