iPaint from WWDC 2012 (iOS)

In Session 506 of Apple's WWDC 2012 they showed a painting app built for high-performance drawing (so the frame rate never drops below 30 fps).
I tried to replicate the code but got stuck at several points.
What I am looking for is a basic drawing app (lines, squares, circles, Bézier paths) that performs well even after hundreds of lines have been drawn.
The basic approach is to save the drawn lines (or circles, Bézier paths, etc.) to an image after a certain number of them have been drawn, and then only render the new drawings, therefore not having to redraw all the already-drawn lines.
But somehow I never get better performance. How should I implement this? Do I need multiple layers? And how do I arrange that not all layers in a view are redrawn, but only a certain sublayer?
If someone could provide me with a short example of a few lines drawn on a layer, then saving that layer to an image, and then drawing on top of that, I would really appreciate it.
Thank you for any help in recreating the iPaint application, which is unfortunately not available for download from Apple.

That is only half of the puzzle. The other half is to refresh only the minimum possible area of the view (via setNeedsDisplayInRect:). However, I have been through many different ways of drawing via Core Graphics. The caching is fine, but I don't use it anymore. I set the update rectangle as above, and then test each path before I stroke it (testing is fast, stroking is slow). If it is inside the update box, I stroke it; otherwise I ignore it.
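As a minimal sketch of that test-before-stroke idea, assuming the view keeps its strokes in an NSArray property named paths (my name, not from the original code):

```objc
// In a UIView subclass; "paths" holds one UIBezierPath per stroke.
- (void)drawRect:(CGRect)rect {
    [[UIColor blackColor] setStroke];
    for (UIBezierPath *path in self.paths) {
        // Pad the bounds by the line width so thick strokes aren't clipped.
        CGRect strokeBounds = CGRectInset(path.bounds,
                                          -path.lineWidth, -path.lineWidth);
        // Testing is fast; stroking is slow, so skip paths outside the box.
        if (CGRectIntersectsRect(strokeBounds, rect)) {
            [path stroke];
        }
    }
}
```

When a new stroke is added, you would then call setNeedsDisplayInRect: with just that stroke's padded bounds.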

I did not look at that session, but a traditional Quartz speedup has been to use CGLayers (not CALayers). You can think of a CGLayer as a cached drawing which may or may not be a bitmap (the system decides how best to cache it). If you have a backing bitmap context, you can use that as your "image" and draw the CGLayers into that (and then discard the layers) as you see fit. Read up on CGLayer (it's in the Quartz documentation) and then see if this is what they talked about in that session.
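Here is a rough sketch of that flow; the function and the backingContext parameter are my own names, not something from the session:

```objc
// Flatten recent strokes into a persistent bitmap context that serves as
// the accumulated "image". backingContext is assumed to be a bitmap
// context you created earlier and keep around.
static void FlattenRecentStrokes(CGContextRef backingContext, CGSize size) {
    CGLayerRef strokeLayer = CGLayerCreateWithContext(backingContext, size, NULL);
    CGContextRef layerCtx = CGLayerGetContext(strokeLayer);

    // Draw the recent strokes into the layer's context.
    CGContextSetLineWidth(layerCtx, 2.0);
    CGContextSetRGBStrokeColor(layerCtx, 0.0, 0.0, 0.0, 1.0);
    CGContextMoveToPoint(layerCtx, 10.0, 10.0);
    CGContextAddLineToPoint(layerCtx, 200.0, 150.0);
    CGContextStrokePath(layerCtx);

    // Composite the cached layer into the backing bitmap, then discard it.
    CGContextDrawLayerAtPoint(backingContext, CGPointZero, strokeLayer);
    CGLayerRelease(strokeLayer);
}
```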

Related

How can I draw an image with many tiny modifications?

I am drawing many audio meters on a view and finding that drawRect: cannot keep up with the rate at which the audio changes. In practice only a very small part of the image changes at a time, so I really only want to draw the incremental changes.
I have created a CGLayer and when the data changes I use CGContextBeginPath, CGContextMoveToPoint, CGContextAddLineToPoint and CGContextStrokePath to draw in the CGLayer.
In drawRect in the view I use CGContextDrawLayerAtPoint to display the layer.
When the data changes I draw just the difference by drawing a line over the top in the CGLayer. I had assumed it was like Photoshop, and the new data just draws over the old, but I now believe that all the lines I have ever drawn remain present in the layer. Is that correct?
If so is there a way to remove lines from a CGLayer?
What exactly do you mean by 'audio meter'? Show some snapshots of your intended designs, and show us some code...
These are my suggestions:
1) Yes, the new data just draws on top of the CGLayer's existing contents unless you release it with CGLayerRelease(layer).
2) CGContextStrokePath is an expensive operation. You may want to create a generic line stroke once and store it in a UIImage, then reuse that UIImage every time your data changes.
3) Simplest solution: use UIProgressView if you just want to show audio levels.
I now believe that all the lines I have ever drawn remain present in the layer. Is that correct?
Yes.
If so is there a way to remove lines from a CGLayer?
No. There is not. You would create a new layer. Generally, you create a layer for what is drawn repeatedly.
Your drawing might be simplified by drawing rects rather than paths.
For some audio meters, dividing the meter into multiple pieces may help (you could use a CGLayer here). Similarly, you may be able to just draw rectangles selectively and/or clip drawing, images, and/or layers.
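For illustration, here is a minimal sketch of a rect-based meter in a UIView subclass, assuming a level property between 0 and 1 (my invention, not from the question):

```objc
- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGFloat filledHeight = self.bounds.size.height * self.level;
    CGRect filled = CGRectMake(0.0, self.bounds.size.height - filledHeight,
                               self.bounds.size.width, filledHeight);
    CGContextSetFillColorWithColor(ctx, [UIColor greenColor].CGColor);
    // A plain rect fill is far cheaper than stroking a path, and clipping
    // it to the dirty rect means untouched pixels are never redrawn.
    CGContextFillRect(ctx, CGRectIntersection(filled, rect));
}

- (void)setLevel:(CGFloat)level {
    // Invalidate only the strip between the old and the new level.
    CGFloat oldHeight = self.bounds.size.height * _level;
    CGFloat newHeight = self.bounds.size.height * level;
    _level = level;
    CGFloat top = self.bounds.size.height - MAX(oldHeight, newHeight);
    [self setNeedsDisplayInRect:CGRectMake(0.0, top, self.bounds.size.width,
                                           fabs(newHeight - oldHeight))];
}
```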

What is the best way to use layers and partial rendering in iOS for speed

I'm working on a graphing application which I wrote using Core Graphics. I have a buffer which accumulates data, and I render it on the screen. It's super slow and I want to avoid going to OpenGL if possible. According to the profiler, drawing my graph data is what's killing me (it consists of a number of points which are converted to a path, followed by calls to CGContextAddPath and CGContextDrawPath).
This is what I want to do; my question is how best to implement it using layers / views / etc.
I have a grid and some text. I want this to be rendered in a CALayer (or some other layer/view?) and only update when required (the graph is rescaled).
Only a portion of the data needs to be refreshed. I want to take the previous screen buffer, erase a rectangle's worth of data (or cover it with a white box) and then draw only the portion of the graphs that have changed.
I then want to merge the background layer with the foreground graphs to generate the composite image. This requires the graph layer to have a transparent background so as not to obscure the grid.
I've looked at using CALayer as a sublayer, but it doesn't seem to provide a simple way to draw a line. CAShapeLayer seems a bit better, but it looks like it can only draw a single line. I want the grid to be composed of multiple lines.
What's the best approach and combination of objects to allow me to do this?
Thanks,
Reza
I'd have a CGLayerRef that is used for drawing the path into. For each new point I'd draw just the new segment. When the graph reached full width, I'd create a new CGLayerRef and start drawing the new line segments into that.
What happens to the previous layer as it's drawn over by the new layer depends on how your graph is displayed, but you could clear the section which is now underneath the new layer (using CGContextSetBlendMode(context, kCGBlendModeClear);) or you could choose to blend them together in some other way.
Drawing the layers each time you make a change to the lines they contain is relatively cheap compared to drawing all of the line segments themselves.
Technically, there would also be CALayers used to manage the drawing of the CGLayerRefs to the screen (via the delegate method drawLayer:inContext:), but all of the line drawing is done in the CGLayerRef's context, and then the CGLayerRef is drawn as a whole into the CALayer's context (CGContextDrawLayerInRect(context, frame, backingCGLayer);).
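A rough sketch of that arrangement, where backingCGLayer, lastPoint, and graphLayer are assumed properties rather than real API:

```objc
// CALayer delegate: just blit the cached CGLayer to the screen. The
// expensive path drawing already happened in the CGLayerRef's own context.
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)context {
    CGContextDrawLayerInRect(context, layer.bounds, self.backingCGLayer);
}

// For each new data point, draw only the newest segment into the CGLayer.
- (void)appendPoint:(CGPoint)point {
    CGContextRef layerCtx = CGLayerGetContext(self.backingCGLayer);
    CGContextMoveToPoint(layerCtx, self.lastPoint.x, self.lastPoint.y);
    CGContextAddLineToPoint(layerCtx, point.x, point.y);
    CGContextStrokePath(layerCtx);
    self.lastPoint = point;
    [self.graphLayer setNeedsDisplay];
}
```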

iOS FloodFill: UIImage vs Core Graphics

I'm considering building an app that would make heavy use of a flood fill / paint bucket feature. The images I'd be coloring are like coloring book pages: white background, black borders. I'm debating which is better: using UIImage (by manipulating pixel data) or drawing the images with Core Graphics and changing the fill color on touch.
With UIImage, I'm unable to account for retina images properly; it destroys the image when I write the context into a new UIImage, but I can probably figure it out. I'm open to tips, though...
With Core Graphics, I have no idea how to determine which shape to fill when a user touches an area, or how to actually fill that area. I've searched but haven't turned up anything useful.
Overall, I believe the optimal solution is Core Graphics, since it'll be lighter overall and I won't have to keep several copies of the same image for different sizes.
Thoughts? Go easy on me! It's my first app and first SO question ;)
I'd suggest using Core Graphics.
Instead of images, define the shapes using CGPath or UIBezierPath, and use Core Graphics to stroke and/or fill them. Filling shapes is then as easy as switching the drawing mode from just stroking to stroking and filling.
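As a minimal sketch of the touch-to-fill part, assuming a hypothetical Shape model object that holds a UIBezierPath plus a fill color, and assumed shapes and currentColor properties on the view:

```objc
// In a UIView subclass. Hit-test each region's path against the touch.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint point = [[touches anyObject] locationInView:self];
    for (Shape *shape in self.shapes) {
        if ([shape.path containsPoint:point]) {
            shape.fillColor = self.currentColor;   // "flood fill" the region
            [self setNeedsDisplayInRect:shape.path.bounds];
            break;
        }
    }
}

- (void)drawRect:(CGRect)rect {
    for (Shape *shape in self.shapes) {
        [shape.fillColor setFill];
        [shape.path fill];
        [[UIColor blackColor] setStroke];
        [shape.path stroke];
    }
}
```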
Creating even more complex shapes is made much easier with the "PaintCode" app (which lets you draw and creates the path code for you).
As your first app, I would suggest something with a little less custom graphics fiddling, though.

Freehand painting and erasing using UIBezierPath and Core Graphics

I have been trying a lot but have not found a solution yet. I have to implement painting and erasing on iOS, and I successfully implemented the painting logic using UIBezierPath. The problem is erasing: I implemented it with the same logic as painting, using kCGBlendModeClear, but I can't redraw on the erased area, because on each pass through drawRect: I have to stroke both the painting and the erasing paths. So is there any way to subtract the erasing path from the drawing path to get the resulting path, and then stroke that? I am very new to Core Graphics and am looking forward to your replies and comments, or any other logic to implement the same. I can't use the background color as an eraser because my background is textured.
You don't need to stroke the path every time; in fact, doing so is a huge performance hit. I guarantee that if you try it on an iPad 3 you will be met with a nearly unresponsive screen after a few strokes. You only need to add and stroke each path once. After that, it will be stored as pixel data. So don't keep track of your strokes: just add them, stroke them, and get rid of them. Also look into using a CGLayer (you can draw to that outside the main loop, and only render it to your rect in the main loop, so it saves lots of time).
These are the steps that I use, and I am doing the exact same thing (I use a CGPath instead of UIBezierPath, but the idea is the same):
1) In touches began, store the touch point and set the context to either erase or draw, depending on what the user has selected.
2) In touches moved, if the point is a certain arbitrary distance away from the last point, then move to the last point (CGContextMoveToPoint) and draw a line to the new point (CGContextAddLineToPoint) in my CGLayer. Calculate the rectangle that was changed (i.e. contains the two points) and call setNeedsDisplayInRect: with that rectangle.
3) In drawRect:, render the CGLayer into the current window context (UIGraphicsGetCurrentContext()). A sketch of these steps follows below.
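Here is a minimal sketch of steps 1-3, assuming the view owns a CGLayerRef property named drawingLayer and tracks lastPoint, an erasing flag, and a lineWidth (all hypothetical names):

```objc
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint point = [[touches anyObject] locationInView:self];
    // Ignore points too close to the previous one.
    if (hypot(point.x - self.lastPoint.x, point.y - self.lastPoint.y) < 2.0)
        return;

    CGContextRef ctx = CGLayerGetContext(self.drawingLayer);
    // Erasing clears pixels in the layer; painting strokes normally.
    CGContextSetBlendMode(ctx, self.erasing ? kCGBlendModeClear
                                            : kCGBlendModeNormal);
    CGContextMoveToPoint(ctx, self.lastPoint.x, self.lastPoint.y);
    CGContextAddLineToPoint(ctx, point.x, point.y);
    CGContextStrokePath(ctx);

    // Invalidate only the rectangle containing the two points.
    CGRect dirty = CGRectUnion(CGRectMake(self.lastPoint.x, self.lastPoint.y, 1, 1),
                               CGRectMake(point.x, point.y, 1, 1));
    dirty = CGRectInset(dirty, -self.lineWidth, -self.lineWidth);
    self.lastPoint = point;
    [self setNeedsDisplayInRect:dirty];
}

- (void)drawRect:(CGRect)rect {
    CGContextDrawLayerAtPoint(UIGraphicsGetCurrentContext(),
                              CGPointZero, self.drawingLayer);
}
```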
On an iPad 3 (the one that everyone has the most trouble with, due to its enormous pixel count) this process takes between 0.05 ms and 0.15 ms per render (depending on how fast you swipe). There is one caveat, though: if you don't take the proper precautions, the entire frame rectangle will be redrawn even if you only use setNeedsDisplayInRect:. My hacky way to combat this (thanks to the dev forums) is described in my self-answered question here. Otherwise, if your view takes a long time to draw the entire frame (mine took an unacceptable 150 ms), you will get a short stutter under certain conditions while the view buffer gets recreated.
EDIT: With the new info from your comments, it seems that the answer to this question will benefit you -> Use a CoreGraphic Stroke as Alpha Mask in iPhone App
Hi, here is code for painting, erasing, undo, redo, and saving as a picture. You can check the sample code and implement this in your project.
Here

-[CALayer setNeedsDisplayInRect:] causes the whole layer to be redrawn

I'm subclassing CALayer to provide my own drawing method. For optimization I call -[MyLayer setNeedsDisplayInRect:] instead of -[MyLayer setNeedsDisplay]. In the drawing method I get the rect which should be redrawn via CGContextGetClipBoundingBox().
If I use this layer as the base layer of a UIView, everything works as expected. The problem arises as soon as I use my custom layer as a sublayer of another CALayer. Then CGContextGetClipBoundingBox() always returns the full bounds of that layer.
Any ideas?
[EDIT]
It seems that there is no guarantee that the content of the CALayer is cached and only the dirty part gets redrawn. I did a small test and stored the rect that needs display as a separate property. The result was that only this part was visible on the screen.
I'll now render to an image context and keep that image as a cache. In the draw method I'll only display the cached image.
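A sketch of that caching approach in a CALayer subclass, where cachedImage and drawContentInContext: are placeholders for your own cache property and drawing code:

```objc
- (void)updateCacheInRect:(CGRect)dirtyRect {
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0);
    [self.cachedImage drawAtPoint:CGPointZero];  // start from the old pixels
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextClipToRect(ctx, dirtyRect);         // redraw only the dirty part
    [self drawContentInContext:ctx];
    self.cachedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // Display the cache directly; no drawInContext: round trip needed.
    self.contents = (__bridge id)self.cachedImage.CGImage;
}
```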
Apple's documentation is unfortunately ambiguous, as the docs on -setNeedsDisplayInRect: do not indicate how the method behaves in practice. Based on my own experience, this technote sets it straight:
Note that, because of the way that iPhone/iPod touch/iPad updates its screen, the entire view will be redrawn if you call -setNeedsDisplayInRect: or -setNeedsDisplay:.
That being said, there are a number of things you can look into if you think that you are hitting a wall due to redundant drawing.
If drawing images, the biggest performance improvement you can make is to use images of the same dimensions at which you draw them. If they're not, try caching your image by rendering it to an offscreen bitmap context and bringing it back later on.
Check out the shouldRasterize property on CALayer. This can be a godsend if you are trying to manipulate a layer whose sublayers potentially constitute a complex layer hierarchy. Be sure to check out how you're doing in Instruments by ticking the Color Hits Green and Misses Red box in the Core Animation instrument. If you see a lot of red, chances are using shouldRasterize is hurting more than it's helping.
Even better than shouldRasterize is to flatten your layer hierarchy yourself, as then you can avoid the extra overhead that shouldRasterize incurs when flattening your layer hierarchy in real time. Of course this is not always possible, but don't be afraid to try :)
If you're drawing images, try experimenting with your blending mode. If you happen to be drawing opaque images, there's no need to use normal source-over compositing (which uses both read and write bandwidth). Try kCGBlendModeCopy, which allows you to eliminate the read-bandwidth overhead (see the short sketch after this list).
Check out CGLayerRef, which allows you to cache Core Graphics output across various calls to your drawing methods. My experience is that, unless you're doing some hardcore pixel pushing, this ends up being more costly than just redrawing. See this for an interesting read.
Above all, Instruments is your friend. Check out a couple of videos from past WWDCs (2012, 2011, and 2010); they all have great info about how to fine-tune performance.
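As a small sketch of the copy-blend-mode tip mentioned above, assuming an opaque UIImage stored in a hypothetical opaqueImage property:

```objc
- (void)drawRect:(CGRect)rect {
    // kCGBlendModeCopy writes source pixels straight to the destination,
    // skipping the read-modify-write that normal source-over blending does.
    // Only safe when the image is fully opaque.
    [self.opaqueImage drawInRect:self.bounds
                       blendMode:kCGBlendModeCopy
                           alpha:1.0];
}
```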
Please feel free to ask any further questions if something I've said makes little sense.
