I'm working on yet another drawing app with a canvas that is many times bigger than the screen.
I need some advice/direction on how to do that.
Basically, what I want is to scroll around this big canvas, drawing only in the visible region.
I was thinking of two approaches:
1. Have 64x64 (or whatever) "tiles" to draw on, and then on scroll just load new tiles.
2. Record all user strokes (points), calculate on scroll which of them fall in the visible region, and draw only those, using a screen-sized canvas (see the sketch below).
If this matters, I'm using cocos2d for the prototype.
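For what it's worth, a minimal sketch of the second approach might look like this. Stroke is a hypothetical model class holding NSValue-wrapped points and a precomputed bounds; none of these names come from the original post.

    // Sketch only: redraw just the strokes that intersect the visible region,
    // translating from canvas coordinates to screen coordinates.
    - (void)drawStrokesInVisibleRect:(CGRect)visibleRect context:(CGContextRef)ctx {
        for (Stroke *stroke in self.strokes) {                 // hypothetical model
            if (!CGRectIntersectsRect(stroke.bounds, visibleRect)) continue;
            CGContextBeginPath(ctx);
            for (NSUInteger i = 0; i < stroke.points.count; i++) {
                CGPoint p = [stroke.points[i] CGPointValue];
                p.x -= CGRectGetMinX(visibleRect);             // canvas -> screen space
                p.y -= CGRectGetMinY(visibleRect);
                if (i == 0) CGContextMoveToPoint(ctx, p.x, p.y);
                else        CGContextAddLineToPoint(ctx, p.x, p.y);
            }
            CGContextStrokePath(ctx);
        }
    }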
Forget the 2000x2000 limitation; I have an open source project that draws 18000 x 18000 NASA images.
I suggest you break this task into two parts. First, scrolling: as CodaFi suggested, when you scroll you will provide CATiledLayers. Each of those will be a CGImageRef that you create - a sub-image of your really huge canvas. You can then easily support zooming in and out.
The second part is interacting with the user to draw or otherwise affect the canvas. When the user stops scrolling, you create an opaque UIView subclass, which you add as a subview to your main view, overlaying the view hosting the CATiledLayers. At the moment you need to show this view, you populate it with the proper information so it can draw that portion of your larger canvas correctly (say, a circle at such-and-such a point in such-and-such a color, etc.).
You would do your drawing in the drawRect: method of this overlay view. So as the user takes actions that change the view, you call setNeedsDisplayInRect: as needed to force iOS to call your drawRect:.
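A bare-bones sketch of such an overlay, assuming made-up names (DrawingOverlayView, CanvasModel, drawObjectsInRect:intoContext:) for the pieces the answer leaves abstract:

    @interface DrawingOverlayView : UIView
    @property (nonatomic, strong) CanvasModel *canvasModel;  // hypothetical model object
    @property (nonatomic, assign) CGRect visibleCanvasRect;  // portion of the big canvas shown
    @end

    @implementation DrawingOverlayView
    - (void)drawRect:(CGRect)rect {
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        // Draw only the part of the large canvas this overlay currently covers.
        [self.canvasModel drawObjectsInRect:self.visibleCanvasRect intoContext:ctx];
    }
    @end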
When the user decides to scroll, you need to update your large canvas model with whatever changes the user has made, then remove the opaque overlay and let the CATiledLayers draw the proper portions of the large image. This transition is probably the trickiest part of the process to get through without visual glitches.
Suppose you have a large array of object definitions used for your canvas. When you need to create a CGImageRef for a tile, you scan through it looking for overlap between each object's frame and the tile's frame, and draw only those items that are required for that tile.
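The scan itself can be as simple as a frame-intersection test. A sketch, where CanvasObject, self.objects, and drawInContext: are assumptions standing in for your own model:

    - (void)renderTile:(CGRect)tileFrame intoContext:(CGContextRef)ctx {
        for (CanvasObject *object in self.objects) {   // hypothetical object list
            // Only objects whose frames overlap the tile need to be drawn.
            if (CGRectIntersectsRect(object.frame, tileFrame)) {
                [object drawInContext:ctx];
            }
        }
    }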
Many mobile devices don't support textures over 2048x2048, so I would recommend:
- make your big surface out of large 2048x2048 tiles
- draw only the visible part of the currently visible tile to the screen
- you will need to draw up to four tiles per frame when the user has scrolled to a corner where four tiles meet, but make sure you don't draw anything extra when only one tile is visible (see the sketch below)
This is probably the most efficient way. 64x64 tiles are really too small and will be inefficient, since there will be a large repeated overhead for the "draw tile" calls.
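A sketch of the visibility math, assuming 2048x2048 tiles, a visibleRect in canvas coordinates, and a drawTileAtRow:column: method of your own:

    static const CGFloat kTileSize = 2048.0;

    NSInteger firstCol = (NSInteger)floor(CGRectGetMinX(visibleRect) / kTileSize);
    NSInteger lastCol  = (NSInteger)floor((CGRectGetMaxX(visibleRect) - 1) / kTileSize);
    NSInteger firstRow = (NSInteger)floor(CGRectGetMinY(visibleRect) / kTileSize);
    NSInteger lastRow  = (NSInteger)floor((CGRectGetMaxY(visibleRect) - 1) / kTileSize);

    // This loop visits one tile in the common case and at most four
    // when the view straddles a corner where tiles meet.
    for (NSInteger row = firstRow; row <= lastRow; row++) {
        for (NSInteger col = firstCol; col <= lastCol; col++) {
            [self drawTileAtRow:row column:col];   // hypothetical draw call
        }
    }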
There is a tiling example in Apple's ScrollViewSuite. It doesn't have anything to do with the drawing part, but it might give you some ideas about how to manage the tile side of things.
You can use CATiledLayer.
See WWDC 2010 session 104.
For cocos2d, though, it might not work.
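In a plain UIKit view (outside cocos2d), the standard setup is simply to back the view with a CATiledLayer; a minimal sketch:

    #import <QuartzCore/QuartzCore.h>

    @interface TiledCanvasView : UIView
    @end

    @implementation TiledCanvasView
    + (Class)layerClass {
        return [CATiledLayer class];   // the view is now backed by a tiled layer
    }
    - (void)drawRect:(CGRect)rect {
        // Called once per tile; rect tells you which portion of the
        // large canvas to draw, so draw only that region here.
    }
    @end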
I am drawing many audio meters on a view and finding that drawRect: cannot keep up with the speed of the audio changes. In practice, only a very small part of the image changes at a time, so I really only want to draw the incremental changes.
I have created a CGLayer, and when the data changes I use CGContextBeginPath, CGContextMoveToPoint, CGContextAddLineToPoint and CGContextStrokePath to draw into the CGLayer.
In the view's drawRect: I use CGContextDrawLayerAtPoint to display the layer.
When the data changes, I draw just the difference by drawing a line over the top in the CGLayer. I had assumed it worked like Photoshop, with the new data simply drawing over the old, but I now believe that all the lines I have ever drawn remain present in the layer. Is that correct?
If so, is there a way to remove lines from a CGLayer?
What exactly do you mean by 'audio meter'? Show some snapshots of your intended designs. Show us some code...
These are my suggestions:
1) Yes, the new data just draws on top of the CGLayer unless you release it with CGLayerRelease(layer).
2) CGContextStrokePath is an expensive operation. You may want to create a generic line stroke, store it in a UIImage, and then reuse the UIImage every time your data changes (see the sketch after this list).
3) Simplest solution: use UIProgressView if you just want to show audio levels.
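A sketch of suggestion 2, where meterSize and meterOrigin are assumptions: render the stroke once into a cached UIImage, then blit the cached image whenever the data changes.

    // One-time setup: stroke into an offscreen image context.
    UIGraphicsBeginImageContextWithOptions(meterSize, NO, 0.0);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetLineWidth(ctx, 2.0);
    CGContextMoveToPoint(ctx, 0.0, meterSize.height / 2.0);
    CGContextAddLineToPoint(ctx, meterSize.width, meterSize.height / 2.0);
    CGContextStrokePath(ctx);                        // the expensive call happens once
    UIImage *strokeImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Every time the data changes, reuse the cached image instead of restroking:
    [strokeImage drawAtPoint:meterOrigin];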
"I now believe that all the lines I have ever drawn remain present in the layer. Is that correct?"
Yes.
"If so, is there a way to remove lines from a CGLayer?"
No, there is not. You would create a new layer; generally, you create a layer for content that is drawn repeatedly.
Your drawing may also be simplified by drawing rects rather than paths.
For some audio meters, dividing the meter into multiple pieces may help (you could use a CGLayer here). Similarly, you may be able to just draw rectangles selectively and/or clip drawing, images, and/or layers.
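Since a CGLayer can't be selectively erased, "clearing" one means replacing it. A sketch, where meterLayer, windowCtx, width, and height are assumptions:

    // Throw away the old layer and start a fresh, empty one.
    CGLayerRelease(meterLayer);
    meterLayer = CGLayerCreateWithContext(windowCtx, CGSizeMake(width, height), NULL);

    // Redraw only the content that should survive into the new layer:
    CGContextRef layerCtx = CGLayerGetContext(meterLayer);
    // ...CGContextMoveToPoint / CGContextAddLineToPoint / CGContextStrokePath here...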
I'm working on a graphing application which I wrote using Core Graphics. I have a buffer which accumulates data, and I render it on the screen. It's super slow, and I want to avoid going to OpenGL if possible. According to the profiler, drawing my graph data is what's killing me (it consists of a number of points which are converted to a path, followed by the calls AddPath and DrawPath).
This is what I want to do; my question is how best to implement it using layers/views/etc.
I have a grid and some text. I want these to be rendered in a CALayer (or some other layer/view?) and only updated when required (when the graph is rescaled).
Only a portion of the data needs to be refreshed. I want to take the previous screen buffer, erase a rectangle's worth of data (or cover it with a white box) and then draw only the portion of the graphs that have changed.
I then want to merge the background layer with the foreground graphs to generate the composite image. This requires the graph layer to have a transparent background so as not to obscure the grid.
I've looked at using CALayer as a sublayer, but it doesn't seem to provide a simple way to draw a line. CAShapeLayer seems a bit better, but it looks like it can only draw a single line, and I want the grid to be composed of multiple lines.
What's the best approach and combination of objects to allow me to do this?
Thanks,
Reza
I'd have a CGLayerRef that is used for drawing the path into. For each new point, I'd draw just the new segment. When the graph got to full width, I'd create a new CGLayerRef and start drawing the new line segments into that.
What happens to the previous layer as it's drawn over by the new layer depends on how your graph is displayed, but you could clear the section which is now underneath the new layer (using CGContextSetBlendMode(context, kCGBlendModeClear);) or you could choose to blend them together in some other way.
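Clearing that section is just a blend-mode switch around a fill; a sketch, where overlapRect is an assumption:

    CGContextSaveGState(ctx);
    CGContextSetBlendMode(ctx, kCGBlendModeClear);
    CGContextFillRect(ctx, overlapRect);   // punches the old pixels out to transparent
    CGContextRestoreGState(ctx);           // restores the normal blend mode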
Drawing the layers each time you make a change to the lines they contain is relatively cheap compared to drawing all of the line segments themselves.
Technically, there would also be CALayers used to manage drawing the CGLayerRefs to the screen (via the delegate method drawLayer:inContext:), but all of the line drawing is done in the CGLayerRef's context, and then the CGLayerRef is drawn as a whole into the CALayer's context (CGContextDrawLayerInRect(context, frame, backingCGLayer);).
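That delegate hookup, roughly (backingCGLayer is an ivar, as in the snippet above):

    // The CALayer's delegate forwards drawing to the prebuilt CGLayerRef.
    - (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)context {
        // All the individual segments were already stroked into backingCGLayer;
        // here the finished layer is composited in a single call.
        CGContextDrawLayerInRect(context, layer.bounds, backingCGLayer);
    }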
I currently have a freehand drawing iPad app that adds lines to a mutable path via quad curves in the touches methods, then calls setNeedsDisplayInRect: on the new area.
The problem is that when the drawing (path) gets rather large, it takes longer to redraw and begins to bog down. In addition, whenever the user changes the brush size or color, it applies this to overlapping parts of the previously drawn path on redraw.
To counter this, I call renderInContext: on a background thread in touchesEnded, merge the result with another UIImage in an image view behind the draw view, and then clear the draw view.
This also helps because when the user hits save, the drawing is usually already rendered in a single UIImage, ready to go.
This works fine on other devices, but on the iPad 3's retina display the performance is really awful, and the app tends to crash whenever the user lifts his finger multiple times while drawing quickly.
I am seeking any type of advice on best practice for handling this type of situation. Aside from adding additional views to render into in the background, to prevent the main and background threads from accessing the same view at the same time - which sounds rather hack-ish - I feel like I'm beating a dead horse.
In my current app, I made a working implementation that performs fine on the iPad 2 as well as the 3, regardless of path length or number of paths. It seems that the graphics card is better at drawing lots of small paths than a few large paths, and either one is faster than rendering an image into a context. So, even if the user is drawing continuously, I break the path into many smaller paths and add those to an array. This approach gives me one advantage and one disadvantage.
Advantage: The ability to zoom and redraw the image crisply
Disadvantage: Can't do pixel perfect erasing
As far as multiple colors go, I made a subclass of UIBezierPath that includes a color property. Since colors are serializable via NSCoding, they are easily saveable. In addition, I have a "stroke" object which holds all of the paths the user created in one continuous stroke; this way I can handle undo/redo correctly. A sketch of the subclass is below.
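A minimal sketch of that colored-path subclass (the class and property names are mine, not the answerer's; UIBezierPath and UIColor both conform to NSCoding):

    @interface ColoredBezierPath : UIBezierPath
    @property (nonatomic, strong) UIColor *strokeColor;   // color travels with the path
    @end

    @implementation ColoredBezierPath
    - (void)encodeWithCoder:(NSCoder *)coder {
        [super encodeWithCoder:coder];
        [coder encodeObject:self.strokeColor forKey:@"strokeColor"];
    }
    - (instancetype)initWithCoder:(NSCoder *)coder {
        if ((self = [super initWithCoder:coder])) {
            _strokeColor = [coder decodeObjectForKey:@"strokeColor"];
        }
        return self;
    }
    @end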
Hope this info helps.
I have been trying a lot, but I haven't found a solution yet. I have to implement painting and erasing on iOS, and I successfully implemented the painting logic using UIBezierPath. The problem is erasing: I implemented it with the same logic as painting, using kCGBlendModeClear, but I can't redraw on the erased area, because on each pass through drawRect: I have to stroke both the painting and the erasing paths.
So, is there any way to subtract the erasing path from the drawing path to get the resultant path, and then stroke that? I am very new to Core Graphics and am looking forward to your replies and comments, or to any other logic that achieves the same thing. I can't use an eraser in the background color, because my background is textured.
You don't need to stroke the path every time; in fact, doing so is a huge performance hit. I guarantee that if you try it on an iPad 3, you will be met with a nearly unresponsive screen after a few strokes. You only need to add and stroke the path once. After that, it will be stored as pixel data. So don't keep track of your strokes: just add them, stroke them, and get rid of them. Also look into using a CGLayer (you can draw to it outside the main loop, and only render it into your rect in the main loop, which saves a lot of time).
These are the steps I use, and I am doing the exact same thing (I use a CGPath instead of a UIBezierPath, but the idea is the same):
1) In touchesBegan, store the touch point and set the context to either erase or draw, depending on what the user has selected.
2) In touchesMoved, if the point is a certain arbitrary distance away from the last point, move to the last point (CGContextMoveToPoint) and draw a line to the new point (CGContextAddLineToPoint) in my CGLayer. Then calculate the rectangle that was changed (i.e. the one containing the two points) and call setNeedsDisplayInRect: with that rectangle.
3) In drawRect:, render the CGLayer into the current window context (UIGraphicsGetCurrentContext()).
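Steps 2 and 3 might look roughly like this. drawingLayer, lastPoint, kMinDistance, and brushWidth are assumptions, and the stroke color/width are presumed to be configured in touchesBegan:

    - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
        CGPoint point = [[touches anyObject] locationInView:self];
        if (hypot(point.x - lastPoint.x, point.y - lastPoint.y) < kMinDistance) return;

        // Stroke just the new segment into the offscreen CGLayer.
        CGContextRef layerCtx = CGLayerGetContext(drawingLayer);
        CGContextMoveToPoint(layerCtx, lastPoint.x, lastPoint.y);
        CGContextAddLineToPoint(layerCtx, point.x, point.y);
        CGContextStrokePath(layerCtx);

        // Invalidate only the rectangle containing the two points,
        // padded by the brush width.
        CGRect dirty = CGRectMake(MIN(lastPoint.x, point.x), MIN(lastPoint.y, point.y),
                                  fabs(point.x - lastPoint.x), fabs(point.y - lastPoint.y));
        [self setNeedsDisplayInRect:CGRectInset(dirty, -brushWidth, -brushWidth)];
        lastPoint = point;
    }

    - (void)drawRect:(CGRect)rect {
        // Blit the prebuilt layer; the expensive stroking already happened.
        CGContextDrawLayerAtPoint(UIGraphicsGetCurrentContext(), CGPointZero, drawingLayer);
    }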
On an iPad 3 (the one that everyone has the most trouble with, due to its enormous pixel count) this process takes between 0.05 ms and 0.15 ms per render (depending on how fast you swipe). There is one caveat, though: if you don't take the proper precautions, the entire frame rectangle will be redrawn even if you only use setNeedsDisplayInRect:. My hacky way to combat this (thanks to the dev forums) is described in my self-answered question here. Otherwise, if your view takes a long time to draw the entire frame (mine took an unacceptable 150 ms), you will get a short stutter under certain conditions while the view buffer gets recreated.
EDIT: With the new info from your comments, it seems that the answer to this question will benefit you: Use a CoreGraphic Stroke as Alpha Mask in iPhone App.
Hi, here is code for painting, erasing, undo, redo, and saving the drawing as a picture. You can check the sample code here and implement it in your project.
Blending and offscreen rendering are both expensive in Core Animation.
You can see them in the Core Animation instrument in Instruments, using its debug options.
Here is my case:
I display 50x50 PNG images in UIImageViews, and I want to round the images with a 6-point corner radius. The first method is to set the UIImageView layer's cornerRadius and masksToBounds, which causes offscreen rendering. The second method is to make PNG image copies with transparent corners, which causes blending (because of the alpha channel).
I've tried both, but I can't see a significant performance difference. However, I still want to know which is worse in theory, and the best practices if any exist.
Thanks a lot!
Well, short answer: the blending has to occur either way to correctly display the transparent corner pixels. However, this should typically only be an issue if you want the resulting view to also animate in some way (and remember, scrolling is the most common type of animation). Also, I'm able to recreate situations where cornerRadius will cause rendering errors on older devices (an iPhone 3G in my case) when my views become complex. For situations where you do need performant animations, here are the recommendations I follow.
First, if you only need the resources with a single curve for the rounded corners (different scales are fine, as long as the desired curvature is the same), save them that way to avoid the extra calculation of cornerRadius at runtime.
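If the assets can't be saved pre-rounded, rounding them once at load time still avoids the per-frame cornerRadius cost. A sketch, using the 6-point radius from the question (image and imageView are assumptions):

    // Round the corners once, then use the result in a plain UIImageView.
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    CGRect bounds = (CGRect){CGPointZero, image.size};
    [[UIBezierPath bezierPathWithRoundedRect:bounds cornerRadius:6.0] addClip];
    [image drawInRect:bounds];
    UIImage *rounded = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    imageView.image = rounded;   // no layer mask, so no offscreen pass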
Second, don't use transparency anywhere you don't need it (e.g. when the background is actually a solid color), and always specify the correct value for the opaque property to help the system calculate the drawing more efficiently.
Third, find ways to minimize the size of transparent views. For example, for a large bordered view with transparent elements (e.g. rounded corners), consider splitting the view into 3 (top, middle, bottom) or 7 (4 corners, top middle, middle, bottom middle) parts, keeping the transparent portions as small as possible and marking the rectangular portions as opaque, with solid backgrounds.
Fourth, in situations where you're drawing lots of text in scroll views (e.g. a highly customized UITableViewCell), consider using the drawRect: method to render these portions more efficiently, and continue using subviews for image elements, in order to split the render time for the overall view between pre-drawing (subviews) and just-in-time drawing (drawRect:). Obviously, experimentation (measuring frames per second while scrolling) may show that violating this rule of thumb is optimal for your particular views.
Finally, make sure you have plenty of time to experiment with the profiling tools (especially Core Animation); that is key. I find it's easiest to see improvements using the slowest device you want to target, and the results look great on newer devices.
After watching the WWDC videos and doing some experiments with Xcode and Instruments, I can say that blending is better than offscreen rendering. Blending means that the system needs some additional time to calculate the color of pixels on transparent layers; the more transparent layers you have (and the bigger those layers are), the more time blending takes.
Offscreen rendering means that the system will make more than one rendering pass. In the first pass, the system renders without visualization, just to calculate the bounds and shape of the area which should be rendered; in subsequent passes it does the regular rendering (depending on the calculated shape), including blending if required.
Also, for offscreen rendering the system creates a separate graphics context and destroys it after rendering.
So you should avoid offscreen rendering; it's better to replace it with blending.