Resizing a GLKView - iOS

When a GLKView is resized, some behind-the-scenes work takes place on the buffers and context of that GLKView. While that work is in progress, drawing to the GLKView does not produce correct results.
In my scenario, I have a GLKView with setNeedsDisplay enabled, so any time I need to update its contents on screen, I just call -setNeedsDisplay on that GLKView. I'm using the GLKView to draw images, so if I need to draw an image with a different size, I also need to change the size of the GLKView.
The problem: when I change the size of the GLKView and call setNeedsDisplay on that view, the result on screen is not correct. This is because the GLKView has not finished the behind-the-scenes operations triggered by the size change before it tries to draw the new image.
I found a workaround: calling performSelector:@selector(setNeedsDisplay) withObject:nil afterDelay:0 on the GLKView instead of calling setNeedsDisplay directly. This effectively defers the redraw to the next run-loop pass, so the behind-the-scenes OpenGL operations have completed before setNeedsDisplay runs. Although this works OK, I am wondering if there is a better solution. For example, is there an OpenGL call that makes the thread wait for all OpenGL operations to be completed before continuing?

The solution was to reset the CIContext object after the GLKView has been resized.
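A minimal sketch of that fix, assuming the images are rendered through a CIContext built from the GLKView's EAGLContext (the property and method names here are illustrative, not from the original post):

// Illustrative sketch: after changing the GLKView's size, rebuild the CIContext
// against the same EAGLContext so it picks up the new drawable size, then redraw.
- (void)resizeGLKViewToSize:(CGSize)newSize
{
    CGRect frame = self.glkView.frame;          // self.glkView is assumed
    frame.size = newSize;
    self.glkView.frame = frame;

    // Recreate the Core Image context used for drawing into this view.
    self.ciContext = [CIContext contextWithEAGLContext:self.glkView.context];

    [self.glkView setNeedsDisplay];
}

This makes the deferred-setNeedsDisplay trick unnecessary, at the cost of recreating the CIContext on each resize.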

Related

Best way to handle autoresizing of UIView with custom drawing

I'm working on a custom view that has some specific Core Graphics drawing. I want to handle the view's autoresizing as efficiently as possible.
If I have a vertical line drawn in a UIView and the view's width stretches, the line's width will stretch with it. I want to keep the original width, so I redraw each time in -layoutSubviews:
- (void)drawRect:(CGRect)rect
{
    [super drawRect:rect];
    // ONLY drawing code ...
}

- (void)layoutSubviews
{
    [super layoutSubviews];
    [self setNeedsDisplay];
}
This works fine, however I don't think this is an efficient approach - unless CGContext drawing is blazingly fast.
So is it really fast? Or is there a better way to handle the view's autoresizing? (CALayer does not support autoresizing on iOS.)
UPDATE:
This is going to be a reusable view, and its task is to draw a visual representation of data supplied by the dataSource. So in practice there could really be a lot of drawing. If it is impossible to optimize this any further, then there's nothing I can do... but I seriously doubt I'm taking the right approach.
It really depends on what you mean by "fast" but in your case the answer is probably "No, CoreGraphics drawing isn't going to give you fantastic performance."
Whenever you draw in drawRect (even if you use CoreGraphics to do it) you're essentially drawing into a bitmap, which backs your view. The bitmap is eventually sent over to the lower level graphics system, but it's a fundamentally slower process than (say) drawing into an OpenGL context.
When you have a view drawing with drawRect it's usually a good idea to imagine that every call to drawRect "creates" a bitmap, so you should minimize the number of times you need to call drawRect.
If all you want is a single vertical line, I would suggest making a simple view with a width of one point, configured to layout in the center of your root view and to stretch vertically. You can color that view by giving it a background color, and it does not need to implement drawRect.
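As a sketch of that suggestion (containerView and the line color are placeholders):

// A 1-point-wide plain view: stays horizontally centered and stretches
// vertically with its superview, no drawRect needed.
UIView *lineView = [[UIView alloc] initWithFrame:CGRectMake(CGRectGetMidX(containerView.bounds) - 0.5,
                                                            0.0,
                                                            1.0,
                                                            CGRectGetHeight(containerView.bounds))];
lineView.backgroundColor = [UIColor blackColor];
lineView.autoresizingMask = UIViewAutoresizingFlexibleLeftMargin
                          | UIViewAutoresizingFlexibleRightMargin
                          | UIViewAutoresizingFlexibleHeight;
[containerView addSubview:lineView];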
Using views is usually not recommended, and drawing directly is actually preferred, especially when the scene is complex.
If you see that your drawing code is taking a considerable toll, the next step is to minimize drawing: either invalidate only portions of the view rather than the whole thing (setNeedsDisplayInRect:), or use tiling to draw only the portions that are needed.
For instance, when a view is resized, if you only need to draw in the areas where the view has changed, you can track the difference in size between the current and previous layout and invalidate only the regions that have changed. Edit: It seems iOS does not allow partial view drawing, so you may need to move your drawing to a CALayer and use that as the view's layer.
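A rough sketch of that region-diff idea (subject to the caveat in the edit above), assuming an ivar _previousBounds that tracks the last layout and a view that only grows in width:

- (void)layoutSubviews
{
    [super layoutSubviews];

    // Invalidate only the strip that a width increase has exposed.
    if (CGRectGetWidth(self.bounds) > CGRectGetWidth(_previousBounds)) {
        CGRect newlyExposed = CGRectMake(CGRectGetWidth(_previousBounds),
                                         0.0,
                                         CGRectGetWidth(self.bounds) - CGRectGetWidth(_previousBounds),
                                         CGRectGetHeight(self.bounds));
        [self setNeedsDisplayInRect:newlyExposed];
    }
    _previousBounds = self.bounds;
}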
CATiledLayer can also give a possible solution, where you can cache and preload tiles and draw required tiles asynchronously and concurrently.
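A bare-bones CATiledLayer-backed view looks roughly like this (the class name and tile size are illustrative):

#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

@interface TiledDrawingView : UIView
@end

@implementation TiledDrawingView

// Back the view with a CATiledLayer so drawing is split into tiles that are
// rendered lazily, on background threads.
+ (Class)layerClass
{
    return [CATiledLayer class];
}

- (instancetype)initWithFrame:(CGRect)frame
{
    self = [super initWithFrame:frame];
    if (self) {
        ((CATiledLayer *)self.layer).tileSize = CGSizeMake(256.0, 256.0); // illustrative
    }
    return self;
}

- (void)drawRect:(CGRect)rect
{
    // Called once per tile (possibly concurrently); draw only the content
    // that intersects `rect`.
}

@end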
But before you take drastic measures, test your code in difficult conditions and see if it is performant enough. Invalidating only the updated regions can help, but it is not always straightforward to limit drawing to a given rectangle. Tiling adds even more difficulty, as the tiling mechanism requires learning, and elements are drawn on background threads, so concurrency issues also come into play.
Here is an interesting video on the subject of optimizing 2D drawing from Apple WWDC 2012:
https://developer.apple.com/videos/wwdc/2012/?include=506#506

Replicating UIView drawRect in OpenGL ES

My iOS application draws into a bitmap (same size as my view) using Core Graphics. I want to push updated regions of the bitmap to the screen. (I've used the standard UIView drawRect method but I have some good reasons to switch to OpenGL).
I just want to replicate the same behavior as UIView/CALayer drawRect but in an OpenGL view. Essentially I would like to update dirty rectangles on my OpenGL view. Nothing more.
So far I've been able to create an OpenGL ES 1.1 view and push my entire bitmap on screen using a single quad (texture on a vertex array) for each update of my bitmap. Of course, this is pretty inefficient since I only need to refresh the dirty rectangle, not the whole view.
What would be the most efficient way to do that in OpenGL ES? Should I use a lattice of quads and update the texture of the quads that intersect with my dirty rectangle? (If I were to use that method, should I use VBO?) Is there a better way to do that?
FYI (just in case), I won't need rotation but will need to scale the entire OpenGL view.
UPDATE:
This method indeed works. However, there is a bug in iOS 5.x on Retina display devices that produces an artifact when using single buffering. The problem has been fixed in iOS 6. I don't yet have a workaround.
You could simply update part of the texture using glTexSubImage2D and redraw your standard full-screen quad, but with the scissor rect set (glScissor) to the "dirty" part. The GL will then not draw any fragments outside this rect.
For this to work, you must of course use single buffering.
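A rough sketch of that approach in OpenGL ES 1.1, assuming textureID, dirtyRect (in pixels, bottom-left origin) and dirtyPixels already exist; note that ES 1.1 has no GL_UNPACK_ROW_LENGTH, so dirtyPixels must be a tightly packed copy of just the dirty region:

// Upload only the dirty pixels into the existing texture ...
glBindTexture(GL_TEXTURE_2D, textureID);
glTexSubImage2D(GL_TEXTURE_2D, 0,
                (GLint)dirtyRect.origin.x, (GLint)dirtyRect.origin.y,
                (GLsizei)dirtyRect.size.width, (GLsizei)dirtyRect.size.height,
                GL_RGBA, GL_UNSIGNED_BYTE, dirtyPixels);

// ... then limit rasterization to the same rectangle while redrawing the quad.
glEnable(GL_SCISSOR_TEST);
glScissor((GLint)dirtyRect.origin.x, (GLint)dirtyRect.origin.y,
          (GLsizei)dirtyRect.size.width, (GLsizei)dirtyRect.size.height);

// ... draw the usual full-screen textured quad here ...

glDisable(GL_SCISSOR_TEST);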

Is it possible to persist the contents of a CALayer between drawInContext calls?

Are there any built-in abilities to maintain the contents of a CALayer between drawLayer:inContext: calls? Right now I am copying the layer to a buffer and redrawing an image from the buffer every time I'm called back in drawLayer:inContext:, but I'm wondering if CALayer has a way to do this automatically...
I don't believe so. drawInContext: will clear the underlying buffer so that you can draw into it. However, if you forego the drawInContext: or drawRect: methods, you can set your layer.contents to a CGImage and that will be retained.
I personally do this for almost all of my routines. I override - (void)setFrame:(CGRect)frame to check whether the frame size has changed. If it has changed, I redraw the image using my normal drawing routines, but into the context created by UIGraphicsBeginImageContextWithOptions(size, _opaque, 0);. I can then grab that image into the cache with cachedImage = UIGraphicsGetImageFromCurrentImageContext(); and set layer.contents to its CGImage. I use this to help cache my drawings, especially on the new iPad, which is slow at many drawing routines that the iPad 2 doesn't even blink at.
Other advantages to this method: You can share cached images between views if you set up a separate, shared cache. This can really help your memory footprint if you manage your cache well. (Tip: I use NSStringFromCGSize as a dictionary key for shared images). Also, you can actually spin off your drawing routines on a different thread, and then set your layer contents when it's done. This prevents your drawing routines from blocking the main thread (the current image may be stretched in this case though until the new image is set).
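A condensed sketch of that caching pattern (drawContentInCurrentContext is a placeholder for the normal drawing routine):

// Redraw into an off-screen image context only when the size changes, cache the
// result, and hand the CGImage to the backing layer.
- (void)setFrame:(CGRect)frame
{
    BOOL sizeChanged = !CGSizeEqualToSize(frame.size, self.frame.size);
    [super setFrame:frame];

    if (sizeChanged) {
        UIGraphicsBeginImageContextWithOptions(frame.size, self.opaque, 0);
        [self drawContentInCurrentContext];   // placeholder: your usual drawing code
        UIImage *cachedImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        self.layer.contents = (__bridge id)cachedImage.CGImage;
    }
}

The cached UIImage could also go into a shared cache keyed by NSStringFromCGSize(frame.size), as the tip above suggests.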

On iPhone and iPad, can we draw anything without using drawRect?

It seems that the standard way to draw dots, lines, circles, and Bezier paths is to draw them inside drawRect. We don't call drawRect directly; we just let iOS call it, and we can use [self setNeedsDisplay] to tell iOS to call drawRect when it can...
It also seems that we cannot rely on
[self setClearsContextBeforeDrawing: NO];
to not clear the background of the view before calling drawRect. Some details are in this question: UIView: how to do non-destructive drawing?
What about drawing directly on the screen, without putting that code in drawRect? For example, in ViewController.m, have some code that directly draws dots, lines, and circles on the screen. Is that possible?
Without having to drop into OpenGL, the closest you can get to working around the erasure is to convert the context to an image using something like CGBitmapContextCreateImage. From there, you can retain the image in memory (or write it to disk if necessary), and then when you redraw the view, you first draw this original image into the context and then overlay it with new content.
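A sketch of that idea, assuming ivars _backingContext (a CGBitmapContext you create once and keep drawing into) and _snapshot (the retained CGImage):

// Accumulate drawing in a persistent bitmap context, snapshot it, and have
// drawRect: simply repaint the snapshot, so nothing is lost when the view's
// context is cleared.
- (void)appendNewContent
{
    // Draw the incremental content into the persistent bitmap context.
    CGContextSetFillColorWithColor(_backingContext, [UIColor redColor].CGColor);
    CGContextFillEllipseInRect(_backingContext, CGRectMake(20.0, 20.0, 40.0, 40.0)); // example content

    // Snapshot the accumulated drawing.
    if (_snapshot) CGImageRelease(_snapshot);
    _snapshot = CGBitmapContextCreateImage(_backingContext);

    [self setNeedsDisplay];
}

- (void)drawRect:(CGRect)rect
{
    if (_snapshot) {
        // CGContextDrawImage uses a flipped coordinate system relative to UIKit,
        // so a vertical flip transform may be needed depending on how the
        // backing context was created.
        CGContextDrawImage(UIGraphicsGetCurrentContext(), self.bounds, _snapshot);
    }
}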

"Regular" drawing on top of OpenGL layer

How do I implement "regular" drawing (as would normally be done in a drawRect method) on top of an OpenGL animation running in the background? My app is based on the default Xcode OpenGL game app template. The GLKViewController does not have a drawRect method, and when I add one, it never gets called. I tried to implement drawing code in the drawInRect method (which does exist), but I get runtime errors.
So to summarize: I'd like to draw stuff (lines, paths, whatever) NOT using OpenGL, but using regular quartz primitives and display this on top of an existing 3d rendering.
To make sure drawRect is being called, you should probably go the other route: Create a standard Cocoa Touch project, alter the + (Class)layerClass method of the main view to return [CAEAGLLayer class], then start drawing with that. Note that the CAEAGLLayer documentation specifically warns against doing what you want to do:
Avoid drawing other layers on top of the CAEAGLLayer object. If you must draw other, non OpenGL content, you might find the performance cost acceptable if you place transparent 2D content on top of the GL content and also make sure that the OpenGL content is opaque and not transformed.
Check out the GLPaint project for a simple OpenGL ES project showing the layerClass override (in PaintingView.m). They use layoutSubviews and touchesBegan/Moved/Ended to do the drawing.
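The layerClass override mentioned above is essentially just this, inside the UIView subclass (with QuartzCore imported):

// Back this UIView subclass with a CAEAGLLayer so OpenGL ES can render into it.
+ (Class)layerClass
{
    return [CAEAGLLayer class];
}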
