Upgrading from Core Graphics - iOS

I've written my first iOS app, Amaziograph, which uses Core Graphics.
My app is a drawing app, and it draws a lot of lines (up to 30 lines one by one, at different locations, plus some shadow to simulate brush blur; they need to appear as if they were all drawn at the same time) with CG, which I find to be slow. In fact, when I switch to Retina and try drawing just a single line with my finger, I have to wait a second or so before it appears.
I realised that Core Graphics no longer meets my app's requirements, as I'd like to take advantage of the Retina display and add some Photoshop-style brushes.
My question is: is there a graphics library that is faster and more powerful than Core Graphics, but with a simple interface? All I need is to draw simple lines with size, opacity and softness, and possibly some more advanced brushes. I'm thinking of OpenGL after seeing Apple's GLPaint sample app, but it seems a bit complicated to me with all those framebuffers, contexts and so on. I'm looking for something with an ideology similar to CG's, so that rewriting my code wouldn't take much time. Also, right now I do all my drawing into UIImage views, so it would be nice to be able to draw on top of UIImages directly.
Here is an extract of the code I'm using to draw right now:
//...begin image context >> draw the previous image in >> set stroke style >>
CGContextBeginPath(currentContext);
CGContextMoveToPoint(currentContext, lastPoint.x, lastPoint.y - offset);
CGContextAddLineToPoint(currentContext, currentPoint.x, currentPoint.y - offset);
CGContextStrokePath(currentContext);
//...send to a UIImage and end image context...
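
For reference, the elided begin/end steps around that extract typically look something like this; a minimal sketch, where incrementalImage, brushSize and brushColor are assumed names, not from the original code:

UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0); // scale 0 = device scale (Retina-aware)
CGContextRef currentContext = UIGraphicsGetCurrentContext();
[incrementalImage drawInRect:self.bounds];                  // draw the previous image in
CGContextSetLineWidth(currentContext, brushSize);           // set stroke style
CGContextSetLineCap(currentContext, kCGLineCapRound);
CGContextSetStrokeColorWithColor(currentContext, brushColor.CGColor);
CGContextBeginPath(currentContext);
CGContextMoveToPoint(currentContext, lastPoint.x, lastPoint.y - offset);
CGContextAddLineToPoint(currentContext, currentPoint.x, currentPoint.y - offset);
CGContextStrokePath(currentContext);
incrementalImage = UIGraphicsGetImageFromCurrentImageContext(); // send to a UIImage
UIGraphicsEndImageContext();                                    // end image context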

You are not going to find another graphics library with better performance than Core Graphics on the iOS platform. Most likely your application can be significantly optimised; there are many tricks you can use. You might be interested in the WWDC 2012 video of session 506, "Optimizing 2D Graphics and Animation Performance":
http://developer.apple.com/videos/wwdc/2012/
They demonstrate a paint application built with Core Graphics that runs at full frame rate.
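One of the standard tricks from that session is to invalidate only the small rectangle around the newest stroke segment instead of the whole view. A minimal sketch of that idea (a generic touchesMoved: pattern, not code from the session; brushSize and shadowBlur are assumed stroke metrics):

// Invalidate only the bounding box of the new segment, padded by the brush
// radius and blur, so drawRect: is asked to redraw a tiny area.
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    CGPoint current = [touch locationInView:self];
    CGPoint previous = [touch previousLocationInView:self];
    CGFloat pad = brushSize / 2 + shadowBlur;
    CGRect dirty = CGRectMake(MIN(previous.x, current.x) - pad,
                              MIN(previous.y, current.y) - pad,
                              fabs(current.x - previous.x) + pad * 2,
                              fabs(current.y - previous.y) + pad * 2);
    [self setNeedsDisplayInRect:dirty]; // redraw just this region, not the whole view
}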

Related

iOS FloodFill : UIImage vs Core Graphics

I'm considering building an app that would make heavy use of a flood fill / paint bucket feature. The images I'd be coloring are simple, like coloring-book pages: white background, black borders. I'm debating which is better to use: UIImage (by manipulating pixel data) or drawing the images with Core Graphics and changing the fill color on touch.
With UIImage, I'm unable to account for Retina images properly; it destroys the image when I write the context into a new UIImage, but I can probably figure that out. I'm open to tips, though...
With Core Graphics, I have no idea how to determine which shape to fill when a user touches an area, and then how to actually fill that area. I've looked, but my searches haven't turned up anything useful.
Overall, I believe the optimal solution is Core Graphics, since it'll be lighter overall and I won't have to keep several copies of the same image at different sizes.
Thoughts? Go easy on me! It's my first app and first SO question ;)
I'd suggest using Core Graphics.
Instead of images, define the shapes using CGPath or UIBezierPath, and use Core Graphics to stroke and/or fill them. Filling a shape is then as easy as switching the drawing mode from just stroking to stroking and filling.
Creating even more complex shapes is made much easier with the PaintCode app (which lets you draw shapes and generates the path code for you).
For a first app, though, I would suggest something with a little less custom graphics fiddling.
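To illustrate the touch-to-fill part of that approach, here is a minimal sketch using UIBezierPath hit testing; Shape is an assumed model class holding a path, a fill color and a filled flag:

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint point = [[touches anyObject] locationInView:self];
    for (Shape *shape in self.shapes) {
        if ([shape.path containsPoint:point]) {   // UIBezierPath hit test
            shape.filled = YES;
            [self setNeedsDisplayInRect:shape.path.bounds];
            break;
        }
    }
}

- (void)drawRect:(CGRect)rect
{
    for (Shape *shape in self.shapes) {
        if (shape.filled) {
            [shape.fillColor setFill];
            [shape.path fill];                    // fill the touched shape
        }
        [[UIColor blackColor] setStroke];
        [shape.path stroke];                      // borders stay stroked in black
    }
}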

Replicating UIView drawRect in OpenGL ES

My iOS application draws into a bitmap (the same size as my view) using Core Graphics. I want to push updated regions of the bitmap to the screen. (I've been using the standard UIView drawRect: method, but I have some good reasons to switch to OpenGL.)
I just want to replicate the same behavior as UIView/CALayer drawRect: but in an OpenGL view. Essentially I would like to update dirty rectangles on my OpenGL view, nothing more.
So far I've been able to create an OpenGL ES 1.1 view and push my entire bitmap to the screen using a single quad (a texture on a vertex array) for each update of my bitmap. Of course, this is pretty inefficient, since I only need to refresh the dirty rectangle, not the whole view.
What would be the most efficient way to do this in OpenGL ES? Should I use a lattice of quads and update the textures of the quads that intersect my dirty rectangle? (If I were to use that method, should I use VBOs?) Is there a better way?
FYI (just in case), I won't need rotation, but I will need to scale the entire OpenGL view.
UPDATE:
This method does indeed work. However, there's a bug in iOS 5.x on Retina display devices that produces an artifact when using single buffering. The problem has been fixed in iOS 6; I don't yet have a workaround for 5.x.
You could simply update part of the texture using glTexSubImage2D and redraw your standard full-screen quad, but with the scissor rect set (via glScissor) to the "dirty" part. The GL will then not draw any fragments outside that rect.
For this to work, you must of course use single buffering.
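A minimal sketch of that per-update path, assuming a backingTexture that mirrors the Core Graphics bitmap, a tightly packed dirtyPixels buffer, and a drawFullScreenQuad() helper (all three names are mine, not from the answer):

// Upload only the dirty pixels into the existing texture...
glBindTexture(GL_TEXTURE_2D, backingTexture);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexSubImage2D(GL_TEXTURE_2D, 0,
                dirty.origin.x, dirty.origin.y,       // offset within the texture
                dirty.size.width, dirty.size.height,  // size of the dirty region
                GL_RGBA, GL_UNSIGNED_BYTE, dirtyPixels);

// ...then redraw the full-screen quad with scissoring enabled, so no
// fragments outside the dirty rect are touched. Note that glScissor uses
// window coordinates with the origin at the bottom-left.
glEnable(GL_SCISSOR_TEST);
glScissor(dirty.origin.x,
          viewHeightInPixels - (dirty.origin.y + dirty.size.height),
          dirty.size.width, dirty.size.height);
drawFullScreenQuad();
glDisable(GL_SCISSOR_TEST);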

iOS iPaint from WWDC 2012

In session 506 of Apple's WWDC 2012, they showed a painting app built for high-performance drawing (so the frame rate never drops below 30).
I tried to replicate the code but got stuck at multiple points.
What I am looking for is a basic drawing app (lines, squares, circles, Bézier paths) that performs well even after hundreds of lines have been drawn.
The basic approach is to save the drawn lines (or circles, Bézier paths, etc.) to an image after a certain number of them have been drawn, and then only refresh the new drawing, so the already-drawn lines don't have to be redrawn every time.
But somehow I never get better performance. How do I need to implement this? Do I need multiple layers? And how do I arrange that not all layers in a view are redrawn, but only a certain sublayer?
If someone could provide me with a short example of drawing a few lines on a layer, then saving that layer to an image, and then drawing on top of that, I would really appreciate it.
Thank you for any help recreating the iPaint application, which is unfortunately not available for download from Apple.
That is only half of the puzzle. The other half is to refresh only the minimum possible area of the view (via setNeedsDisplayInRect:). However, I have been through many different ways of drawing via Core Graphics. The caching is fine, but I don't use it anymore. I set the update rectangle as above, and then test each path before I stroke it (testing is fast, stroking is slow). If it is inside the update box, I stroke it; otherwise I ignore it.
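That per-path test can be as cheap as a bounding-box comparison; a minimal sketch, assuming the strokes are kept as an array of CGPathRefs (the storage is my assumption):

- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    for (id pathObject in self.paths) {
        CGPathRef path = (__bridge CGPathRef)pathObject;
        // Cheap test: skip any path whose bounds miss the update rect.
        if (!CGRectIntersectsRect(CGPathGetBoundingBox(path), rect)) continue;
        CGContextAddPath(ctx, path);
        CGContextStrokePath(ctx); // expensive, so done only when necessary
    }
}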
I did not look at that session, but a traditional Quartz speedup has been to use CGLayers (not CALayers). You can think of a CGLayer as a cached drawing which may or may not be a bitmap (the system decides how best to cache it). If you have a backing bitmap context, you can use that as your "image" and draw the CGLayers into it (and then discard the layers) as you see fit. Read up on CGLayer (it's in the Quartz documentation) and then see if this was what they talked about in that session.
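For reference, the basic CGLayer pattern looks roughly like this; a sketch only, with backingContext standing in for whatever bitmap context you keep as your "image":

// Draw strokes into the layer's own context, then composite the layer
// into the backing bitmap and discard it.
CGLayerRef layer = CGLayerCreateWithContext(backingContext, self.bounds.size, NULL);
CGContextRef layerCtx = CGLayerGetContext(layer);
CGContextMoveToPoint(layerCtx, lastPoint.x, lastPoint.y);
CGContextAddLineToPoint(layerCtx, currentPoint.x, currentPoint.y);
CGContextStrokePath(layerCtx);
CGContextDrawLayerAtPoint(backingContext, CGPointZero, layer);
CGLayerRelease(layer);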

Drawing a shadow using Core Graphics vs. CALayer

As far as I know, we can use Core Graphics calls such as CGContextSetShadowWithColor to draw a shadow. However, we can also use CALayer to show a shadow.
Question:
What are the differences between the two? Are there any rules for deciding when to use Core Graphics to draw a shadow and when to use CALayer for the job?
I would have to say that using Core Animation is always preferred over Core Graphics, since it's higher-level and abstracts away the low-level details of drawing the shadow. (It may also allow Apple to optimize the shadow drawing without affecting your code.)
However, there are times when you are overriding drawRect: anyway and have a very specific use for the shadow rather than shadowing the whole view's layer. You might want to use Core Graphics shadows there.
One last note: Core Animation gradients render much faster, take my word for it. I used them on a UITableViewCell and the scroll performance increased significantly, as opposed to using Core Graphics gradients. That comes at a price, though: they look a bit worse.
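For comparison, here are the two approaches side by side; a minimal sketch with placeholder values:

// Core Animation: shadow on the whole view's layer. Setting shadowPath
// spares Core Animation from sampling the layer's alpha to find the outline.
view.layer.shadowColor = [UIColor blackColor].CGColor;
view.layer.shadowOffset = CGSizeMake(0, 2);
view.layer.shadowRadius = 4;
view.layer.shadowOpacity = 0.5;
view.layer.shadowPath = [UIBezierPath bezierPathWithRect:view.bounds].CGPath;

// Core Graphics: shadow applied only to what is drawn next in the context,
// e.g. inside drawRect: when a single element should cast a shadow.
CGContextSetShadowWithColor(ctx, CGSizeMake(0, 2), 4, [UIColor blackColor].CGColor);
CGContextFillEllipseInRect(ctx, CGRectMake(20, 20, 100, 100));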

Using OpenGL on top of an MKMapView?

I've got some data that I would like to render on top of an MKMapView using OpenGL. Currently I can sort of achieve this by placing a transparent OpenGL layer on top of the MKMapView and drawing to it using OpenGL commands.
However, the problem becomes syncing the drawing of the OpenGL layer with the drawing that MKMapView does. I can partly get around this by drawing on touch events; this works well until you "flick" the map, which causes a continuous series of draws for the animation that I don't detect.
Another idea was to use an MKOverlayView and hope that OpenGL drawing could be done with it, but I'm not sure exactly how to approach that.
I would recommend evaluating BA3's "Altus" mapping engine. It is built entirely in OpenGL, so in the worst case you may simply render into the same context. However, it would probably be better if you could take advantage of its support for geo-located raster, vector and marker elements.
Full disclosure: I'm friends with the authors, but have no financial interest.
