CAEmitterLayer not rendering when -renderInContext: of superlayer is called - ios

I have a drawing app and I would like for my users to be able to use particle effects as part of their drawing. Basically, the point of the app is to perform custom drawing and save to Camera Roll or share over the World Wide Web.
I encountered the CAEmitterLayer class recently, which I reckon would be a simple and effective way to add particle effects.
I have been able to draw the particles onscreen in the app using the CAEmitterLayer implementation. So rendering onscreen works fine.
When I go about rendering the contents of the drawing using
UIGraphicsBeginImageContext(self.bounds.size);
CGContextRef context = UIGraphicsGetCurrentContext();
// The instance drawingView has a CAEmitterLayer instance in its layer/view hierarchy
[drawingView.layer renderInContext:context];
//Note: I have also tried using the layer.presentationLayer and still nada
....
//Get the image from the current image context here for saving to Camera Roll or sharing
....
the particles are never rendered in the image.
What I think is happening
The CAEmitterLayer is in a constant state of "animating" the particles. That's why when I attempt to render the layer (I have also tried rendering the layer's presentationLayer and modelLayer), the animations are never committed, and so the offscreen image render does not contain the particles.
Question
Has anyone rendered the contents of a CAEmitterLayer offscreen? If so, how did you do it?
Alternate Question
Does anyone know of any particle effect system libraries that don't use OpenGL and aren't Cocos2D?

-[CALayer renderInContext:] is useful in a few simple cases, but will not work as expected in more complicated situations. You will need to find some other way to do your drawing.
The documentation for -[CALayer renderInContext:] says:
The Mac OS X v10.5 implementation of this method does not
support the entire Core Animation composition model.
QCCompositionLayer, CAOpenGLLayer, and QTMovieLayer layers are not
rendered. Additionally, layers that use 3D transforms are not
rendered, nor are layers that specify backgroundFilters, filters,
compositingFilter, or a mask values. Future versions of Mac OS X may
add support for rendering these layers and properties.
(These limitations apply to iOS, too.)
The header CALayer.h also says:
* WARNING: currently this method does not implement the full
* CoreAnimation composition model, use with caution. */

I was able to get my CAEmitterLayer rendered as an image correctly in its current animation state with
Swift
func drawViewHierarchyInRect(_ rect: CGRect,
afterScreenUpdates afterUpdates: Bool) -> Bool
Objective-C
- (BOOL)drawViewHierarchyInRect:(CGRect)rect
afterScreenUpdates:(BOOL)afterUpdates
within a current image context
UIGraphicsBeginImageContextWithOptions(size, false, 0)
with afterScreenUpdates set to true / YES.
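Putting it together, here is a minimal Swift sketch of that flow (drawingView is an assumed name for the view hosting the emitter layer; drawHierarchy(in:afterScreenUpdates:) is the current Swift spelling of the method above):

import UIKit

// Sketch: snapshot a view whose layer tree contains a CAEmitterLayer.
func snapshot(of drawingView: UIView) -> UIImage? {
    // opaque = false, scale = 0 uses the screen scale
    UIGraphicsBeginImageContextWithOptions(drawingView.bounds.size, false, 0)
    defer { UIGraphicsEndImageContext() }
    // afterScreenUpdates: true is what captures the emitter's current state
    drawingView.drawHierarchy(in: drawingView.bounds, afterScreenUpdates: true)
    return UIGraphicsGetImageFromCurrentImageContext()
}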
Good luck with that one :D

Related

How do drawRect and CGGraphicsContext work?

I am working with some stuff in Core Graphics and I am looking for some additional clarification regarding a couple of topics.
drawRect:
I have an understanding of this and know it is where all of the drawing aspects of a UIView go, but am just unclear as to what is happening behind the scenes. What happens when I create a UIView, fill out drawRect, and then set another object's UIView to be that custom view? When is drawRect being called?
CGGraphicsContext:
I know what the purpose of this is and understand the concept, but I can't see exactly how it is working.
For example:
CGContextSaveGState(context);              // push the current graphics state
CGContextAddRect(context, rect);           // add the rect to the current path
CGContextClip(context);                    // clip subsequent drawing to that path
CGContextDrawLinearGradient(context, gradient, startPoint, endPoint, 0);  // draw the gradient inside the clip
CGContextRestoreGState(context);           // pop the state: the clip is gone again
The code above is in my app and works correctly. The thing that confuses me is how it is working. The idea of saving/restoring a context makes sense, but it appears like I am literally saving a context, using that exact same context to make changes, then restoring the same context once again. It just seems like I am saving a context and then writing on top of that context, only to restore it. How is it getting saved to a point where, when you restore it, it is a different instance of the context than what was just used to make changes? You use the same reference to the variable context in every situation.
Lastly, I would appreciate any resources for practice projects or examples on using Core Graphics. I am looking to improve my skill in the matter since I obviously don't have much at the current time.
What happens when I create a UIView, fill out drawRect, and then set another object's UIView to be that custom view? When is drawRect being called?
Adding a view to a 'live' view graph marks the view's frame as in need of display. The main run loop then creates and coalesces invalid rects and soon returns to invoke drawing. It does not draw immediately upon invalidation. This is a good thing because resizing, for example, would result in significant overdrawing -- redundant work which would kill many apps' drawing performance. When drawing, a context is created to render to -- which ultimately outputs to its destination.
Graphics Contexts are abstractions which are free to work optimally for their destination -- a destination could be a device/screen, bitmap, PDF, etc.. However, a context handle (CGContextRef) itself refers to a destination and holds a set of parameters regarding its state (these parameters are all documented here). These parameter sets operate like stacks: Push = CGContextSaveGState, Pop = CGContextRestoreGState. Although the context pointer isn't changing, the stack of parameter sets is changing.
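As a quick illustration of that stack behaviour, here is a minimal Swift sketch (the colors and rectangles are made up for the example):

import UIKit

// Parameters set after a save are discarded by the matching restore.
UIGraphicsBeginImageContextWithOptions(CGSize(width: 100, height: 100), false, 0)
if let cg = UIGraphicsGetCurrentContext() {
    cg.setFillColor(UIColor.black.cgColor)                 // current fill: black
    cg.saveGState()                                        // push the current parameter set
    cg.setFillColor(UIColor.red.cgColor)
    cg.fill(CGRect(x: 0, y: 0, width: 50, height: 100))    // drawn red
    cg.restoreGState()                                     // pop: fill is black again
    cg.fill(CGRect(x: 50, y: 0, width: 50, height: 100))   // drawn black
}
UIGraphicsEndImageContext()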
As far as resources, see Programming with Quartz. It's 8 years old now, and was originally written for OS X -- but that ultimately doesn't matter a whole lot because the fundamentals of the drawing system and APIs really haven't evolved significantly since then -- and that is what you intend to focus on. The APIs have been extended, so it would be good to review which APIs were introduced since 10.4 and see what problems they solve, but it's secretly a good thing for you because it helps maintain focus on the fundamental operation of the drawing system. Note that some functionality was excluded from iOS (often due to floating-point performance and memory constraints, I figure), so not every example is usable on iOS, but I know of no better guide.
Tip: Your drawing code can be easily reused on OS X and iOS if you use Quartz rather than AppKit/UIKit. Plus, the Quartz APIs have a lower update frequency (i.e. the APIs tend to be longer lived).
-drawRect: gets called at some point after you (e.g. your view controller) have called the view's method -setNeedsDisplay or -setNeedsDisplayInRect:.
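For instance, a minimal Swift sketch (the class and property are made up for illustration):

import UIKit

class ProgressBarView: UIView {
    var progress: CGFloat = 0 {
        didSet { setNeedsDisplay() }  // marks the view dirty; drawing happens later in the run loop
    }

    override func draw(_ rect: CGRect) {  // the Swift spelling of -drawRect:
        guard let cg = UIGraphicsGetCurrentContext() else { return }
        cg.setFillColor(UIColor.blue.cgColor)
        cg.fill(CGRect(x: 0, y: 0, width: bounds.width * progress, height: bounds.height))
    }
}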
Saving the graphics state pushes the current graphics state onto a stack. The graphics state contains fill and stroke setting, the current transformation matrix etc. See Apple's documentation for details.
The Quartz 2D Programming Guide doesn't contain many examples but is otherwise quite thorough.
With Quartz / Core Graphics, the context is literally a set of current parameters used to draw the next drawing command on top of the previous drawing.
Saving the state lets you save all those parameters for later drawing commands that will reuse them.
Then you can set up a different set of parameters for some drawing commands.
Restoring the state gets you back to where you were.
I recommend the book
Programming with Quartz
2D and PDF Graphics in Mac OS X
Though a bit dated in some ways, it will really teach you how Quartz / Core Graphics flows.
OK, this is a very deep topic to talk about. I'll explain a few things to my understanding and try to keep it simple. If I'm mistaken, I hope someone can correct me.
First of all, there is the concept of onscreen drawing and offscreen drawing. Onscreen drawing takes place on the GPU, whereas offscreen drawing takes place on the CPU, and the result is then handed to the GPU to display on the screen. That's where drawRect() comes into play (drawRect is only one way of doing offscreen drawing, by the way). This is why in the drawRect template method (which you will see when you make a subclass of UIView) there is a comment by Apple saying
"Only override drawRect: if you perform custom drawing. An empty implementation adversely affects performance during animation"
The reason is that whenever there is a drawRect method, iOS has to ask the CPU to take care of the drawing that happens in drawRect and then hand the result over to the GPU. (Don't get the idea that this is a bad thing :) ). So this is what happens in drawRect at an abstract level.
Now to the question of why you save and restore the same context over and over. Have you tried reading the description of the save/restore context methods in Apple's documentation? If you have, you'd notice that it lists all the graphical state that is affected by them. OK, how does this help?
Consider something like this. Let's say you're drawing in a rectangle where you have to limit the next part of the drawing to the right half of it and use shadows, antialiasing, etc. You can save your context before drawing the right side, set whatever properties you want, and once you finish, simply restore the context and continue with all the settings you had before without explicitly setting them again. It's good practice as well when you do complex drawings, as otherwise you can get weird outcomes you might not expect. Something like this below:
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);
    [self drawLeftPart:context];     // - 1
    [self drawRightPart:context];    // - 2
    [self someOtherDrawing:context]; // - 3
    CGContextRestoreGState(context);
}

- (void)drawLeftPart:(CGContextRef)context
{
    CGContextSaveGState(context);
    // do your drawing
    CGContextRestoreGState(context);
}

- (void)drawRightPart:(CGContextRef)context
{
    CGContextSaveGState(context);
    // do your drawing
    CGContextRestoreGState(context);
}

- (void)someOtherDrawing:(CGContextRef)context
{
    CGContextSaveGState(context);
    // do your drawing
    CGContextRestoreGState(context);
}
Now whatever properties you set in part 1 won't affect the drawing in parts 2 and 3, and so forth.
Hope this helps,

Replicating UIView drawRect in OpenGL ES

My iOS application draws into a bitmap (same size as my view) using Core Graphics. I want to push updated regions of the bitmap to the screen. (I've used the standard UIView drawRect method but I have some good reasons to switch to OpenGL).
I just want to replicate the same behavior as UIView/CALayer drawRect but in an OpenGL view. Essentially I would like to update dirty rectangles on my OpenGL view. Nothing more.
So far I've been able to create an OpenGL ES 1.1 view and push my entire bitmap on screen using a single quad (texture on a vertex array) for each update of my bitmap. Of course, this is pretty inefficient since I only need to refresh the dirty rectangle, not the whole view.
What would be the most efficient way to do that in OpenGL ES? Should I use a lattice of quads and update the texture of the quads that intersect with my dirty rectangle? (If I were to use that method, should I use VBO?) Is there a better way to do that?
FYI (just in case), I won't need rotation but will need to scale the entire OpenGL view.
UPDATE:
This method indeed works. However, there's a bug in iOS 5.x on Retina display devices that produces an artifact when using single buffering. The problem has been fixed in iOS 6. I don't yet have a workaround.
You could simply update part of the texture using glTexSubImage2D and redraw your standard full-screen quad, but with the scissor rect set (glScissor) to the "dirty" part. The GL will then not draw any fragments outside this rect.
For this to work, you must of course use single buffering.
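A hedged Swift sketch of that idea (the texture id, dirty rect, and pixel pointer are assumptions; pixels is expected to be a tightly packed sub-image covering just the dirty rect, and the full-screen quad draw is whatever you already have):

import CoreGraphics
import OpenGLES

func pushDirtyRect(textureID: GLuint, dirty: CGRect, viewHeightInPixels: GLint,
                   pixels: UnsafeRawPointer) {
    // Replace only the dirty sub-region of the existing texture.
    glBindTexture(GLenum(GL_TEXTURE_2D), textureID)
    glTexSubImage2D(GLenum(GL_TEXTURE_2D), 0,
                    GLint(dirty.origin.x), GLint(dirty.origin.y),
                    GLsizei(dirty.width), GLsizei(dirty.height),
                    GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), pixels)

    // Limit rasterization to the dirty rect; note glScissor uses a bottom-left origin.
    glEnable(GLenum(GL_SCISSOR_TEST))
    glScissor(GLint(dirty.origin.x),
              viewHeightInPixels - GLint(dirty.origin.y) - GLint(dirty.height),
              GLsizei(dirty.width), GLsizei(dirty.height))

    // ... draw the usual full-screen textured quad here ...

    glDisable(GLenum(GL_SCISSOR_TEST))
}

As noted, this only makes sense with a retained/single-buffered drawable, since otherwise the pixels outside the scissor rect come from an older frame.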

How do you get a UIImage from an AVCaptureVideoPreviewLayer?

I have already tried this solution CGImage (or UIImage) from a CALayer
However I do not get anything.
Like the question says, I am trying to get a UIImage from the preview layer of the camera. I know I can either capture a still image or use the output sample buffer, but my session preset is set to photo, so either of these two approaches is slow and will give me a big image.
So what I thought could work is to get the image directly from the preview layer, since this has exactly the size I need and the operations have already been applied to it. I just don't know how to get this layer to draw into my context so that I can get it as a UIImage.
Perhaps another solution would be to use OpenGL to get this layer directly as a texture?
Any help will be appreciated, thanks.
Quoting Apple from this Technical Q&A:
A: Starting from iOS 7, the UIView class provides a method -drawViewHierarchyInRect:afterScreenUpdates:, which lets you render a snapshot of the complete view hierarchy as visible onscreen into a bitmap context. On iOS 6 and earlier, how to capture a view's drawing contents depends on the underlying drawing technique. This new method -drawViewHierarchyInRect:afterScreenUpdates: enables you to capture the contents of the receiver view and its subviews to an image regardless of the drawing techniques (for example UIKit, Quartz, OpenGL ES, SpriteKit, AV Foundation, etc) in which the views are rendered.
In my experience with AVFoundation it's not like that: if you use that method on a view that hosts a preview layer, you will only obtain the content of the view without the image of the preview layer. Using -snapshotViewAfterScreenUpdates: will return a UIView that hosts a special layer, and if you try to make an image from that view, you won't see anything.
The only solutions I know of are AVCaptureVideoDataOutput and AVCaptureStillImageOutput. Each one has its own limitations: the first one can't work simultaneously with an AVCaptureMovieFileOutput acquisition, and the latter makes the shutter noise.
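For what it's worth, here is a hedged Swift sketch of the AVCaptureVideoDataOutput route (session setup, the delegate queue, pixel format and orientation handling are all omitted; the callback shown is the standard AVCaptureVideoDataOutputSampleBufferDelegate method, while the class name is made up):

import AVFoundation
import CoreImage
import UIKit

class PreviewFrameGrabber: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let ciContext = CIContext()
    var latestImage: UIImage?

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else { return }
        latestImage = UIImage(cgImage: cgImage)   // the same frames the preview layer is showing
    }
}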

Is there another way to display OpenGL content than using a Core Animation aware renderbuffer?

According to Apple's OpenGL ES Programming Guide, "If [a] framebuffer is intended to be displayed to the user, use a special Core Animation-aware renderbuffer."
The text goes on to say that to make this Core Animation aware renderbuffer, one needs to "Subclass UIView to create an OpenGL ES view for [the] iOS application [and] Override the layerClass" by using this code:
+ (Class) layerClass
{
return [CAEAGLLayer class];
}
However, if one examines Apple's GLCameraRipple example which displays OpenGL to the end user, the layerClass never appears to be overridden. A text search on layerClass or CAEAGLLayer reveals they are missing.
If you look for other approaches to display directly to users, Apple gives two other OpenGL approaches, but both seem to imply that they are not for displaying directly to users but rather are for off-screen rendering. (i.e. "If the framebuffer is used to perform offscreen image processing, attach a renderbuffer. If the framebuffer image is used as an input to a later rendering step, attach a texture.")
Is there another way to display OpenGL content than using a Core Animation aware renderbuffer - or is Apple somehow overriding the layer class so the OpenGL content is becoming Core Animation aware in another way?
The reason you don't see a subclassed UIView with a CAEAGLLayer backing it in the GLCameraRipple example is because it uses a GLKView. GLKView is a class introduced in iOS 5.0 as part of GLKit, and it wraps some common code, such as the explicit override to use a CAEAGLLayer and the setup of its matching renderbuffer.
This is still being done, it's just abstracted away from you. For displaying OpenGL ES content to the screen, you still need to go through a CAEAGLLayer one way or another.
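In Swift, the explicit version of what GLKView wraps looks roughly like this sketch (the render loop, EAGLContext, and renderbuffer setup are elided):

import UIKit
import QuartzCore

class EAGLBackedView: UIView {
    // Make the backing layer a CAEAGLLayer so GL content can be composited by Core Animation.
    override class var layerClass: AnyClass {
        return CAEAGLLayer.self
    }
    // ... create an EAGLContext, call renderbufferStorage(_:from:) with this layer,
    // attach the renderbuffer to a framebuffer, and presentRenderbuffer(_:) each frame ...
}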
Offscreen rendering is a different animal, because there you aren't attaching to a layer for display, so there's no layer needed. If you want to render to a texture, attach a texture as a target for your FBO, and that's it.

"Regular" drawing on top of OpenGL layer

How do I implement "regular" drawing (as would normally be done in a drawRect method) on top of an OpenGL animation running in the background? My app is based on the default Xcode OpenGL game app template. The GLKViewController does not have a drawRect method, and when I add one, it never gets called. I tried to implement drawing code in the drawInRect method (which does exist) but I get runtime errors.
So to summarize: I'd like to draw stuff (lines, paths, whatever) NOT using OpenGL, but using regular quartz primitives and display this on top of an existing 3d rendering.
To make sure drawRect is being called, you should probably go the other route: Create a standard Cocoa Touch project, alter the + (Class)layerClass method of the main view to return [CAEAGLLayer class], then start drawing with that. Note that the CAEAGLLayer documentation specifically warns against doing what you want to do:
Avoid drawing other layers on top of the CAEAGLLayer object. If you must draw other, non OpenGL content, you might find the performance cost acceptable if you place transparent 2D content on top of the GL content and also make sure that the OpenGL content is opaque and not transformed.
Check out the GLPaint project for a simple OpenGL ES project showing the layerClass override (in PaintingView.m). They use layoutSubviews and touchesBegan/Moved/Ended to do the drawing.
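Along the lines of that CAEAGLLayer advice, here is a hedged Swift sketch of the usual arrangement (view names are made up): an opaque GL view at the back, with a transparent, untransformed UIView drawing Quartz content on top.

import UIKit

class OverlayView: UIView {
    override init(frame: CGRect) {
        super.init(frame: frame)
        isOpaque = false                 // transparent 2D content over the GL view
        backgroundColor = .clear
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    override func draw(_ rect: CGRect) {
        guard let cg = UIGraphicsGetCurrentContext() else { return }
        cg.setStrokeColor(UIColor.white.cgColor)
        cg.setLineWidth(2)
        cg.strokeEllipse(in: bounds.insetBy(dx: 10, dy: 10))
    }
}

// In the view controller: keep the GL view opaque and add the overlay as a sibling above it,
// e.g. glView.isOpaque = true; view.addSubview(overlay) with overlay.frame == glView.frame.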
