I've set up a custom iOS UIView that hosts several layers, one of which is a CAMetalLayer. I want to paint that same content into an image via a CGContext, so I tried to extract the layer's contents in order to build a CGImage, but I couldn't find a good path.
First, I tried to asynchronously extract the contents of the framebuffer with an addCompletedHandler() callback, but the async nature of the callback didn't play nicely with CGContext.
I've also tried extracting the contents of the CAMetalLayer through the layer.contents property, but apparently the contents are of type CAImageQueue, which is neither documented nor exposed through the public API.
I've also tried rendering the layer directly to the CGContext like so:
layer.render(in: cgContext)
but that didn't produce any results either.
Ultimately, all I'd need is to retrieve the bytes that make up the layer's texture; from those I could build my own CGImage from scratch.
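For what it's worth, a minimal sketch of that byte-level route might look like the following, assuming the CAMetalLayer was configured with framebufferOnly = false and a .bgra8Unorm pixel format, that the drawable's texture is CPU-readable (on some configurations you'd first need to blit it into a shared texture), and that the texture is grabbed only after the command buffer completes (e.g. after waitUntilCompleted() or inside an addCompletedHandler callback):
import Metal
import CoreGraphics

func makeCGImage(from texture: MTLTexture) -> CGImage? {
    let width = texture.width
    let height = texture.height
    let bytesPerRow = width * 4
    var pixels = [UInt8](repeating: 0, count: bytesPerRow * height)

    return pixels.withUnsafeMutableBytes { buffer -> CGImage? in
        guard let base = buffer.baseAddress else { return nil }

        // Copy the raw BGRA pixels out of the texture into CPU memory.
        texture.getBytes(base,
                         bytesPerRow: bytesPerRow,
                         from: MTLRegionMake2D(0, 0, width, height),
                         mipmapLevel: 0)

        // .bgra8Unorm corresponds to 32-bit little-endian with alpha first.
        let bitmapInfo = CGBitmapInfo.byteOrder32Little.rawValue |
                         CGImageAlphaInfo.premultipliedFirst.rawValue
        guard let context = CGContext(data: base,
                                      width: width,
                                      height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: bytesPerRow,
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: bitmapInfo) else { return nil }

        // makeImage() copies the pixels, so the buffer can be released afterwards.
        return context.makeImage()
    }
}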
Related
If I understand it correctly, setting CALayer.shouldRasterize = YES generates a bitmap with the contents of the layer and its whole hierarchy of sublayers.
The documentation says:
When the value of this property is YES, the layer is rendered as a bitmap in its local coordinate space and then composited to the destination with any other content.
Is there a way to access this bitmap's data? I would like to use it to generate a CIImage and apply a CIFilter, without having to draw the whole layer into a separate bitmap context using the layer's -renderInContext: method.
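For reference, the renderInContext:-based fallback mentioned above looks roughly like this sketch (the blur filter and its parameters are just placeholders):
import UIKit
import CoreImage

func filteredImage(from layer: CALayer) -> CIImage? {
    // Rasterize the layer tree into a bitmap via renderInContext:.
    let renderer = UIGraphicsImageRenderer(size: layer.bounds.size)
    let bitmap = renderer.image { context in
        layer.render(in: context.cgContext)
    }

    // Wrap the bitmap in a CIImage and run it through a CIFilter.
    guard let input = CIImage(image: bitmap),
          let filter = CIFilter(name: "CIGaussianBlur") else { return nil }
    filter.setValue(input, forKey: kCIInputImageKey)
    filter.setValue(4.0, forKey: kCIInputRadiusKey)
    return filter.outputImage
}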
I use Core Graphics to draw in a UIView, and cache the contents in a CGLayer.
One of its functions needs to duplicate a sub-area of the CGLayer and move it to a new location within the same layer. As an old trick, I used to do this by drawing the layer into its own context.
However, the behavior of this trick is "undefined" according to the documentation, and it stopped working in iOS 12.
Is there an alternative way to do this efficiently? (I have tried drawing the sub-area into a CGImage and then drawing the resulting image back into the layer, but that approach seems rather slow and not very memory efficient.)
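For reference, a sketch of that CGImage round-trip might look like the following, assuming cgLayer is the cached CGLayer and sourceRect/destRect are in the layer's coordinate space (y-flipping between image space and context space is glossed over here):
import CoreGraphics

func copySubArea(of cgLayer: CGLayer, from sourceRect: CGRect, to destRect: CGRect) {
    let size = cgLayer.size
    guard let layerContext = cgLayer.context,
          let bitmap = CGContext(data: nil,
                                 width: Int(size.width),
                                 height: Int(size.height),
                                 bitsPerComponent: 8,
                                 bytesPerRow: 0,
                                 space: CGColorSpaceCreateDeviceRGB(),
                                 bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return }

    // Rasterize the whole layer once, then crop out the sub-area.
    bitmap.draw(cgLayer, at: .zero)
    guard let full = bitmap.makeImage(),
          let crop = full.cropping(to: sourceRect) else { return }

    // Draw the cropped image back into the same layer at its new position.
    layerContext.draw(crop, in: destRect)
}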
I have a question about the underlying implementation of the Core Image system. I'm layering some CIImages on top of each other; not that many, about 5 or 6 of them. To save memory and improve performance, they all have their transparent pixels cropped. They are then drawn at offsets, so I'm using a CIAffineTransform filter to position them:
CIFilter* moveFilter = [CIFilter filterWithName:@"CIAffineTransform"];
My question is: does moveFilter.outputImage REALLY generate a new image, or does it just record "render settings" that are later used to draw the actual image?
(If it's the former, that would mean I'm effectively rendering the image twice, which would be a huge flaw in the Core Image API and hard to believe Apple designed it that way.)
Filters do not generate anything. outputImage does not generate anything. CIImage does not generate anything. All you are doing is constructing a chain of filters.
Rendering to a bitmap doesn't happen until you explicitly ask for it. You do that in one of two ways (there's a sketch after the list):
Call CIContext createCGImage:fromRect:.
Actually draw a CIImage-based UIImage into a graphics context.
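A minimal sketch of that lazy behavior, using a few placeholder sprite images and offsets: each transformed(by:)/composited(over:) call (the Swift spelling of CIAffineTransform plus source-over compositing) only extends the recipe, and pixels are produced exactly once, by the createCGImage call at the end.
import CoreImage
import CoreGraphics

let ciContext = CIContext()   // reuse one context; creating it is expensive

func composite(_ sprites: [(image: CIImage, offset: CGPoint)],
               canvas: CGRect) -> CGImage? {
    var recipe = CIImage(color: .clear).cropped(to: canvas)
    for sprite in sprites {
        // Nothing is rendered here; the transform and the compositing
        // are just recorded in the filter graph.
        let moved = sprite.image.transformed(
            by: CGAffineTransform(translationX: sprite.offset.x,
                                  y: sprite.offset.y))
        recipe = moved.composited(over: recipe)
    }
    // Rendering to a bitmap happens exactly once, here.
    return ciContext.createCGImage(recipe, from: canvas)
}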
I have an image I want to mask on the fly. The mask is basically shaped like a part-circle and changes in volume from time to time. Therefore I need to create an in-memory image, draw the mask circle into it, and do the masking on the original image as described in How to Mask an UIImageView.
The thing is, I have no idea how to create an in-memory image that I can use for masking and apply basic drawing operations to.
If your mask is only a semi-circle, it might be easier to create a clipping path with CGContext* calls and draw your image as a CGImage with the clipping path applied. See the documentation for CGContextClip() for details.
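A minimal sketch of that clipping-path approach, assuming a half-circle mask and that the drawing happens inside drawRect: / draw(_:) (the angles and rects are placeholders):
import CoreGraphics

func drawMasked(_ image: CGImage, in context: CGContext, rect: CGRect) {
    context.saveGState()

    // Build the part-circle path and clip to it (CGContextClip).
    let center = CGPoint(x: rect.midX, y: rect.midY)
    context.beginPath()
    context.move(to: center)
    context.addArc(center: center,
                   radius: rect.width / 2,
                   startAngle: 0,
                   endAngle: .pi,          // vary this as the "volume" changes
                   clockwise: false)
    context.closePath()
    context.clip()

    // Everything drawn from here on is masked by the path.
    context.draw(image, in: rect)
    context.restoreGState()
}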
Are there any built-in abilities to maintain the contents of a CALayer between drawLayer:inContext: calls? Right now I am copying the layer to a buffer and redrawing an image from the buffer every time I'm called back in drawLayer:inContext:, but I'm wondering if CALayer has a way to do this automatically...
I don't believe so. drawInContext: clears the underlying buffer so that you can draw into it. However, if you forgo the drawInContext:/drawRect: methods, you can set your layer.contents to a CGImage, and that will be retained.
I personally do this for almost all of my drawing routines. I override - (void)setFrame:(CGRect)frame to check whether the frame size has changed. If it has, I redraw the image using my normal drawing routines, but into the context created by UIGraphicsBeginImageContextWithOptions(size, _opaque, 0);. I then grab that image with cachedImage = UIGraphicsGetImageFromCurrentImageContext(); and set layer.contents to its CGImage. I use this to cache my drawings, especially on the new iPad, which is slow on many drawing routines that the iPad 2 doesn't even blink at.
Other advantages to this method: you can share cached images between views if you set up a separate, shared cache, which can really help your memory footprint if you manage the cache well. (Tip: I use NSStringFromCGSize as a dictionary key for shared images.) Also, you can spin your drawing routines off onto a background thread and set the layer contents when they finish. This keeps your drawing routines from blocking the main thread (though the current image may appear stretched until the new image is set).
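A Swift sketch of that caching pattern, where drawContent(in:) stands in for the view's normal Core Graphics drawing routine:
import UIKit

class CachedDrawingView: UIView {
    private var cachedImage: UIImage?

    override var frame: CGRect {
        didSet {
            // Only redraw when the size actually changes.
            guard frame.size != oldValue.size, frame.size != .zero else { return }
            redrawCache()
        }
    }

    private func redrawCache() {
        UIGraphicsBeginImageContextWithOptions(bounds.size, isOpaque, 0)
        defer { UIGraphicsEndImageContext() }
        guard let context = UIGraphicsGetCurrentContext() else { return }

        drawContent(in: context)                 // normal drawing code
        cachedImage = UIGraphicsGetImageFromCurrentImageContext()

        // The layer retains the CGImage, so nothing is cleared between frames.
        layer.contents = cachedImage?.cgImage
    }

    private func drawContent(in context: CGContext) {
        // ... expensive Core Graphics drawing goes here ...
    }
}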