How do I implement "regular" drawing (as would normally be done in a drawRect: method) on top of an OpenGL animation running in the background? My app is based on the default Xcode OpenGL game app template. The GLKViewController does not have a drawRect: method, and when I add one, it never gets called. I tried putting drawing code in the drawInRect method (which does exist), but I get runtime errors.
So to summarize: I'd like to draw stuff (lines, paths, whatever) NOT using OpenGL, but using regular Quartz primitives, and display this on top of an existing 3D rendering.
To make sure drawRect is being called, you should probably go the other route: Create a standard Cocoa Touch project, alter the + (Class)layerClass method of the main view to return [CAEAGLLayer class], then start drawing with that. Note that the CAEAGLLayer documentation specifically warns against doing what you want to do:
Avoid drawing other layers on top of the CAEAGLLayer object. If you must draw other, non OpenGL content, you might find the performance cost acceptable if you place transparent 2D content on top of the GL content and also make sure that the OpenGL content is opaque and not transformed.
Check out the GLPaint project for a simple OpenGL ES project showing the layerClass override (in PaintingView.m). They use layoutSubviews and touchesBegan/Moved/Ended to do the drawing.
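For what it's worth, if you do stay with the GLKViewController template and only need transparent 2D content over opaque GL content (the case the quote above calls acceptable), a rough, untested sketch of that overlay approach might look like this. OverlayView is just an illustrative name, not part of the template:

#import <UIKit/UIKit.h>

// A transparent view whose drawRect: does the Quartz drawing; it is added as a
// subview on top of the GLKView so the GL animation shows through underneath.
@interface OverlayView : UIView
@end

@implementation OverlayView
- (instancetype)initWithFrame:(CGRect)frame {
    if ((self = [super initWithFrame:frame])) {
        self.opaque = NO;                              // let the GL content show through
        self.backgroundColor = [UIColor clearColor];
    }
    return self;
}

- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetStrokeColorWithColor(ctx, [UIColor redColor].CGColor);
    CGContextSetLineWidth(ctx, 2.0);
    CGContextMoveToPoint(ctx, 10.0, 10.0);
    CGContextAddLineToPoint(ctx, 200.0, 200.0);
    CGContextStrokePath(ctx);
}
@end

// In the GLKViewController (e.g. in viewDidLoad):
// OverlayView *overlay = [[OverlayView alloc] initWithFrame:self.view.bounds];
// [self.view addSubview:overlay];
// Call [overlay setNeedsDisplay] whenever the 2D content changes.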
Related
Apple's Core Animation documentation says there are two rendering paths involved. From what I know, CALayer caches the bitmap data of a UIView's content. There are two ways of providing the content of a CALayer: one is implementing drawRect: (or other CALayer drawing methods), the other is setting a bitmap as the layer's contents property.
I'm wondering what happens behind the scenes if neither of these two things is done. I believe there is a private drawing path UIView uses in this situation. What does this private drawing path consist of? How does it work?
The crux of CALayer is that it's GPU-backed. In modern graphics and animation, you want to minimize the number of times your bitmap data crosses between the CPU and the GPU. These operations are costly.
CALayer always uses a private drawing path, whether you use setContents: or drawRect:. In fact, the underlying plumbing of CALayer handles both of these in essentially the same way. When you call setContents:, CALayer takes the image you gave it and uploads it to the GPU via OpenGL (these days, probably Metal) calls. When you implement drawRect:, CALayer gives you a context into which you can draw, and then does the same thing with the resulting bitmap data.
If you don't set contents or implement drawRect, you can still do things like set the layer's background color, border, corner radius, etc. This is being rendered by CALayer's GPU-based private drawing path.
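A rough illustration of those two content paths (the view, layer, and image names here are placeholders, not from the question):

// 1. Hand the layer a ready-made bitmap.
UIImage *image = [UIImage imageNamed:@"photo"];            // hypothetical asset name
myView.layer.contents = (__bridge id)image.CGImage;

// 2. Or implement drawRect: in a UIView subclass and draw into the context
//    that UIKit hands you; the resulting bitmap is then uploaded for you.
// - (void)drawRect:(CGRect)rect {
//     CGContextRef ctx = UIGraphicsGetCurrentContext();
//     CGContextSetFillColorWithColor(ctx, [UIColor blueColor].CGColor);
//     CGContextFillEllipseInRect(ctx, self.bounds);
// }
// Either way, the bitmap ends up on the GPU and is composited there.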
CALayer does not draw any of its content using drawRect:. Only view-based drawing techniques like Core Graphics use drawRect:, and the problem with that way of doing things is that it happens on the CPU, on the main thread, which makes it expensive. Instead, Core Animation manipulates a cached bitmap of your app's content directly in the graphics hardware, which is far more optimised. You update or provide the initial contents of a Core Animation layer through one of its delegate methods (displayLayer: or drawLayer:inContext:) or via the contents property, as you have mentioned. All layer objects in Core Animation are derived from CALayer.
CALayer is simply a layer object belonging to Core Animation, which in itself is simply a support system for UIView and its subclasses. Core Animation is not a drawing technology in the sense that it cannot create primitive shapes the way Quartz, OpenGL ES, and Metal can. Instead, Core Animation lets you manipulate an already existing view, and it does this by caching the bitmap data of a UIView and sending it off to the graphics hardware to be manipulated. We say Core Animation is a support system, and all of its work relies on layer objects, of which CALayer is the main type. It can only do this, of course, if a view has a layer, and a view does not strictly need a layer to exist. In iOS, however, all views come with a layer attached by default; we say views in iOS are "layer backed". On macOS, you need to add Core Animation support to views yourself.
The actual drawing of the contents of a CALayer happens in a few ways. The first is setting the contents property of the CALayer, as you mentioned, by giving it a CGImageRef. The second is implementing (or overriding in a subclass) the CALayer delegate method displayLayer:, which creates a bitmap and sets it as the layer's contents property. The third is implementing (or overriding in a subclass) the CALayer delegate method drawLayer:inContext:, where Core Animation creates a bitmap, creates a graphics context to draw into that bitmap, and then calls your delegate method to fill it.
In iOS we do not usually worry about how the content of our views' layers is rendered. Since all views are layer backed, iOS manages how to render them in the most efficient way possible using the methods I've just described. This is an optimisation to save you time, and it makes layers very easy to use. You're more likely to worry about overriding these delegate methods, or subclassing, if you are developing for macOS, where views are not always layer backed. You might also care if you decide not to use the default CALayer in iOS, for example by swapping a view's layer class from CALayer to CAMetalLayer, or, in a small number of cases, if you are looking for a performance optimisation.
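As a small, illustrative sketch of the delegate route (the names are mine, and the view controller is assumed to declare <CALayerDelegate>):

- (void)viewDidLoad {
    [super viewDidLoad];
    CALayer *layer = [CALayer layer];
    layer.frame = CGRectMake(0.0, 0.0, 200.0, 200.0);
    layer.delegate = self;                 // the delegate supplies the layer's bitmap
    [self.view.layer addSublayer:layer];
    [layer setNeedsDisplay];               // asks Core Animation to call drawLayer:inContext:
}

- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx {
    CGContextSetFillColorWithColor(ctx, [UIColor blueColor].CGColor);
    CGContextFillEllipseInRect(ctx, layer.bounds);
}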
There are three ways to provide content to a layer.
- Assign an image object directly to the layer object’s contents property.
- Assign a delegate object to the layer and let the delegate draw the layer’s content.
- Define a layer subclass and override one of its drawing methods to provide the layer contents yourself.
If we don't implement drawRect:, set the contents property, or subclass the layer, it will use the default way, which is the second one, to provide content to a layer-backed view's layer: the layer captures the content of the view and renders it.
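And for completeness, a hedged sketch of the third option, a CALayer subclass that provides its own content (CheckerLayer is just an illustrative name):

#import <UIKit/UIKit.h>

// Subclass route: the layer draws itself when it is asked to display.
@interface CheckerLayer : CALayer
@end

@implementation CheckerLayer
- (void)drawInContext:(CGContextRef)ctx {
    CGContextSetFillColorWithColor(ctx, [UIColor darkGrayColor].CGColor);
    CGContextFillRect(ctx, self.bounds);
}
@end

// Usage: add an instance as a sublayer and call setNeedsDisplay on it once.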
My iOS application draws into a bitmap (same size as my view) using Core Graphics. I want to push updated regions of the bitmap to the screen. (I've used the standard UIView drawRect method but I have some good reasons to switch to OpenGL).
I just want to replicate the same behavior as UIView/CALayer drawRect but in an OpenGL view. Essentially I would like to update dirty rectangles on my OpenGL view. Nothing more.
So far I've been able to create an OpenGL ES 1.1 view and push my entire bitmap on screen using a single quad (texture on a vertex array) for each update of my bitmap. Of course, this is pretty inefficient since I only need to refresh the dirty rectangle, not the whole view.
What would be the most efficient way to do that in OpenGL ES? Should I use a lattice of quads and update the texture of the quads that intersect with my dirty rectangle? (If I were to use that method, should I use VBO?) Is there a better way to do that?
FYI (just in case), I won't need rotation but will need to scale the entire OpenGL view.
UPDATE:
This method indeed works. However, there's a bug in iOS 5.x on Retina-display devices that produces an artifact when using single buffering. The problem has been fixed in iOS 6. I don't yet have a workaround.
You could simply update a part of the texture using glTexSubImage2D and redraw your standard full-screen quad, but with the scissor rect (glScissor) set to the "dirty" part. GL will then not draw any fragments outside that rect.
For this to work, you must of course use single buffering.
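A rough sketch of what that looks like in OpenGL ES 1.1 (textureName, dirty, and dirtyPixels are assumed to exist; dirtyPixels is assumed to hold just the dirty sub-rectangle, tightly packed RGBA):

GLint x = (GLint)dirty.origin.x;
GLint y = (GLint)dirty.origin.y;
GLsizei w = (GLsizei)dirty.size.width;
GLsizei h = (GLsizei)dirty.size.height;

// Upload only the changed block of pixels into the existing texture.
glBindTexture(GL_TEXTURE_2D, textureName);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexSubImage2D(GL_TEXTURE_2D, 0, x, y, w, h,
                GL_RGBA, GL_UNSIGNED_BYTE, dirtyPixels);

// Restrict rasterization to the dirty rectangle, then redraw the full-screen quad.
// Note that glScissor uses GL window coordinates (origin at the bottom-left),
// so the y value may need flipping relative to UIKit coordinates.
glEnable(GL_SCISSOR_TEST);
glScissor(x, y, w, h);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDisable(GL_SCISSOR_TEST);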
I have a drawing app and I would like for my users to be able to use particle effects as part of their drawing. Basically, the point of the app is to perform custom drawing and save to Camera Roll or share over the World Wide Web.
I encountered the CAEmitterLayer class recently, which I reckon would be a simple and effective way to add particle effects.
I have been able to draw the particles onscreen in the app using the CAEmitterLayer implementation. So rendering onscreen works fine.
When I go about rendering the contents of the drawing using
UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
// The instance drawingView has a CAEmitterLayer instance in its layer/view hierarchy
[drawingView.layer renderInContext:context];
// Note: I have also tried using the layer's presentationLayer and still nada
...
// Get the image from the current image context here for saving to Camera Roll or sharing
...

...the particles are never rendered in the image.
What I think is happening
The CAEmitterLayer is in a constant state of "animating" the particles. That's why when I attempt to render the layer (I have also tried rendering the layer's presentationLayer and modelLayer), the animations are never committed, and so the offscreen image render does not contain the particles.
Question
Has anyone rendered the contents of a CAEmitterLayer offscreen? If so, how did you do it?
Alternate Question
Does anyone know of any particle-effect libraries that don't use OpenGL and aren't Cocos2D?
-[CALayer renderInContext:] is useful in a few simple cases, but will not work as expected in more complicated situations. You will need to find some other way to do your drawing.
The documentation for -[CALayer renderInContext:] says:
The Mac OS X v10.5 implementation of this method does not support the entire Core Animation composition model. QCCompositionLayer, CAOpenGLLayer, and QTMovieLayer layers are not rendered. Additionally, layers that use 3D transforms are not rendered, nor are layers that specify backgroundFilters, filters, compositingFilter, or a mask values. Future versions of Mac OS X may add support for rendering these layers and properties.
(These limitations apply to iOS, too.)
The header CALayer.h also says:
* WARNING: currently this method does not implement the full
* CoreAnimation composition model, use with caution. */
I was able to get my CAEmitterLayer rendered as an image correctly in its current animation state with
Swift
func drawViewHierarchyInRect(_ rect: CGRect,
afterScreenUpdates afterUpdates: Bool) -> Bool
Objective-C
- (BOOL)drawViewHierarchyInRect:(CGRect)rect
afterScreenUpdates:(BOOL)afterUpdates
within a current image context created with

UIGraphicsBeginImageContextWithOptions(size, false, 0)

and with afterScreenUpdates set to true / YES.
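Put together in Objective-C (using the drawingView name from the question), that ends up as something like:

UIGraphicsBeginImageContextWithOptions(drawingView.bounds.size, NO, 0);
// afterScreenUpdates:YES is what makes the emitter's current state get captured.
[drawingView drawViewHierarchyInRect:drawingView.bounds afterScreenUpdates:YES];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();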
Good luck with that one :D
According to Apple's OpenGL ES Programming Guide, "If [a] framebuffer is intended to be displayed to the user, use a special Core Animation-aware renderbuffer."
The text goes on to say that to make this Core Animation aware renderbuffer, one needs to "Subclass UIView to create an OpenGL ES view for [the] iOS application [and] Override the layerClass" by using this code:
+ (Class) layerClass
{
return [CAEAGLLayer class];
}
However, if one examines Apple's GLCameraRipple example which displays OpenGL to the end user, the layerClass never appears to be overridden. A text search on layerClass or CAEAGLLayer reveals they are missing.
If you look for other approaches to display directly to users, Apple gives two other OpenGL approaches, but both seem to imply that they are not for displaying directly to users but rather are for off-screen rendering. (i.e. "If the framebuffer is used to perform offscreen image processing, attach a renderbuffer. If the framebuffer image is used as an input to a later rendering step, attach a texture.")
Is there another way to display OpenGL content than using a Core Animation-aware renderbuffer, or is Apple somehow overriding the layer class so the OpenGL content becomes Core Animation aware in another way?
The reason you don't see a subclassed UIView with a CAEAGLLayer backing it in the GLCameraRipple example is because it uses a GLKView. GLKView is a class introduced in iOS 5.0 as part of GLKit, and it wraps some common code, such as the explicit override to use a CAEAGLLayer and the setup of its matching renderbuffer.
This is still being done, it's just abstracted away from you. For displaying OpenGL ES content to the screen, you still need to go through a CAEAGLLayer one way or another.
Offscreen rendering is a different animal, because there you aren't attaching to a layer for display, so there's no layer needed. If you want to render to a texture, attach a texture as a target for your FBO, and that's it.
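For illustration only, a minimal ES 2.0-style sketch of that offscreen case (width and height are assumed to be defined; ES 1.1 would use the OES-suffixed equivalents):

GLuint fbo, texture;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

// Allocate an empty texture the size of the offscreen target.
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// Attach the texture as the color target; anything rendered into fbo lands in texture.
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, texture, 0);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    NSLog(@"Offscreen framebuffer is incomplete");
}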
It seems that the standard way to draw dots, lines, circles, and Bézier paths is to draw them inside drawRect:. We don't call drawRect: directly; we just let iOS call it, and we can use [self setNeedsDisplay] to tell iOS to call drawRect: when it can...
It also seems that we cannot rely on
[self setClearsContextBeforeDrawing: NO];
to not clear the background of the view before calling drawRect. Some details are in this question: UIView: how to do non-destructive drawing?
What about drawing directly on the screen, without putting that code in drawRect:? For example, could code in ViewController.m directly draw dots, lines, and circles on the screen? Is that possible?
Without dropping into OpenGL, the closest you can get to working around the erasure is to capture the context as an image using something like CGBitmapContextCreateImage. From there, you can keep the image in memory (or write it to disk if necessary), and when you redraw the view, you first draw this saved image into the context and then overlay it with the new content.
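A rough sketch of that retain-and-redraw idea (canvasImage is an assumed UIImage property on the view, not something from the original answer): all drawing is appended into an offscreen image context, and drawRect: just blits the cached image, so nothing is ever erased.

- (void)appendStrokeFrom:(CGPoint)a to:(CGPoint)b {
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0);
    [self.canvasImage drawInRect:self.bounds];          // lay down the previous content first
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetStrokeColorWithColor(ctx, [UIColor blackColor].CGColor);
    CGContextSetLineWidth(ctx, 3.0);
    CGContextMoveToPoint(ctx, a.x, a.y);
    CGContextAddLineToPoint(ctx, b.x, b.y);
    CGContextStrokePath(ctx);
    self.canvasImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    [self setNeedsDisplay];
}

- (void)drawRect:(CGRect)rect {
    [self.canvasImage drawInRect:self.bounds];          // previous strokes are never lost
}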