Create a CGContextRef for an existing CGImageRef - iOS

Suppose I have an existing CGImageRef. I would like to create a CGContextRef for that existing image and draw something into it.
Considering that the CGImageRef has an underlying data array in memory, my guess is that it should be possible to create a new context using CGBitmapContextCreateWithData, passing the CGImageRef's underlying memory as the data array.
Unfortunately, I don't know how to get the data array from a CGImageRef (it might not even be possible).
Please note that I would prefer not to create a new context, draw the old image into it, and then draw the new content on top.
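For what it's worth, here is a minimal sketch of the closest workable route, assuming one copy of the pixel data is acceptable: a CGImageRef's backing store is immutable, so the usual approach is to copy its bytes out through the image's data provider and wrap a mutable bitmap context around that copy. `existingImage` below is a hypothetical CGImageRef, and the sketch assumes a 32-bit RGBA image.

// Sketch only: copy the CGImage's bytes and wrap a mutable bitmap context around them.
// A robust version would inspect the image's actual pixel format first.
CGDataProviderRef provider = CGImageGetDataProvider(existingImage);
CFDataRef immutableData = CGDataProviderCopyData(provider);            // this is already a copy
CFMutableDataRef pixels = CFDataCreateMutableCopy(NULL, 0, immutableData);
CFRelease(immutableData);

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreateWithData(CFDataGetMutableBytePtr(pixels),
                                                     CGImageGetWidth(existingImage),
                                                     CGImageGetHeight(existingImage),
                                                     CGImageGetBitsPerComponent(existingImage),
                                                     CGImageGetBytesPerRow(existingImage),
                                                     colorSpace,
                                                     CGImageGetBitmapInfo(existingImage),
                                                     NULL, NULL);
CGColorSpaceRelease(colorSpace);

// Draw the new content on top of the copied pixels...
CGContextStrokeRect(context, CGRectMake(10, 10, 40, 40));

// ...and mint a new image when done. Release context, pixels, and updatedImage yourself.
CGImageRef updatedImage = CGBitmapContextCreateImage(context);

This avoids redrawing the old image through Core Graphics, but it cannot avoid the one copy: the original CGImage's storage stays untouched.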

Related

CGImage from CAMetalLayer

I've set up a custom iOS UIView that displays a list of layers, one of which is a CAMetalLayer. I was trying to paint that same content to an image in a CGContext. I tried to extract the layer's contents in order to build a CGImage, but couldn't find a good path.
First, I've tried to asynchronously extract the contents of the framebuffer with a addCompletedHandler() callback, but the async nature of the callback didn't behave nicely with CGContext.
I've also tried extracting the contents of the CAMetalLayer via its layer.contents property, but apparently the contents' type is CAImageQueue, which is neither documented nor exposed through the API.
I've also tried rendering the layer directly to the CGContext like so:
layer.render(in: cgContext)
but that didn't produce any results either.
Ultimately, all I'd need is to retrieve the bytes making up the layer's texture; from those I could build my own CGImage from scratch.
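A sketch of that last idea, assuming the layer was created with framebufferOnly = NO (otherwise the drawable's texture can't be read back) and that `texture` is the MTLTexture of the last presented drawable in the default BGRA8 format:

#import <Metal/Metal.h>

// Sketch: copy the texture's pixels to CPU memory and wrap them in a CGImage.
static CGImageRef CreateImageFromMetalTexture(id<MTLTexture> texture) {
    NSUInteger width = texture.width, height = texture.height;
    NSUInteger bytesPerRow = width * 4;                          // assumes BGRA8
    NSMutableData *pixels = [NSMutableData dataWithLength:bytesPerRow * height];
    [texture getBytes:pixels.mutableBytes
          bytesPerRow:bytesPerRow
           fromRegion:MTLRegionMake2D(0, 0, width, height)
          mipmapLevel:0];

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)pixels);
    CGImageRef image = CGImageCreate(width, height, 8, 32, bytesPerRow, colorSpace,
                                     kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little,
                                     provider, NULL, false, kCGRenderingIntentDefault);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return image;   // caller is responsible for CGImageRelease
}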

Modify original UIImage in UIGraphicsContext

I've seen a lot of examples where one gets a new UIImage with modifications applied to an input UIImage. They look like this:
- (UIImage *)imageByDrawingCircleOnImage:(UIImage *)image
{
    // begin a graphics context of sufficient size
    UIGraphicsBeginImageContext(image.size);
    // draw original image into the context
    [image drawAtPoint:CGPointZero];
    // get the context for CoreGraphics
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // draw there (e.g. stroke a circle), then grab the result and end the context
    CGContextSetStrokeColorWithColor(ctx, [UIColor redColor].CGColor);
    CGContextStrokeEllipseInRect(ctx, CGRectMake(0, 0, image.size.width, image.size.height));
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
I have a similar problem, but I really want to modify the input image itself. I suppose it would work faster, since I wouldn't have to draw the original image every time. But I could not find any samples of this. How can I get an image context for the original image, where it is already drawn?
UIImage is immutable for numerous reasons (most of them around performance and memory). You must make a copy if you want to mess with it.
If you want a mutable image, just draw it into a context and keep using that context. You can create your own context using CGBitmapContextCreate.
That said, don't second-guess the system too much here. UIImage and Core Graphics have a lot of optimizations in them and there's a reason you see so many examples that copy the image. Don't "suppose it would work faster." You really have to profile it in your program.
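A minimal sketch of that "keep using the context" approach, where `image` stands for the input UIImage and the sizes and drawing calls are illustrative; a UIImage snapshot is only taken when one is actually needed:

// Create one bitmap context up front and hold on to it.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef canvas = CGBitmapContextCreate(NULL,
                                            (size_t)image.size.width,
                                            (size_t)image.size.height,
                                            8, 0, colorSpace,
                                            kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);

// Draw the source image once...
CGContextDrawImage(canvas, CGRectMake(0, 0, image.size.width, image.size.height), image.CGImage);

// ...then keep drawing into the same context on later passes.
CGContextSetStrokeColorWithColor(canvas, [UIColor redColor].CGColor);
CGContextStrokeEllipseInRect(canvas, CGRectMake(10, 10, 50, 50));

// Snapshot only when a UIImage is needed (CGBitmapContextCreateImage copies, possibly copy-on-write).
CGImageRef snapshot = CGBitmapContextCreateImage(canvas);
UIImage *result = [UIImage imageWithCGImage:snapshot];
CGImageRelease(snapshot);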

iOS: why two graphic contexts here and one extra mem copy?

I found many examples of how to blit an array of ints onto a UIView in drawRect:, but even the simplest one still puzzles me. The following works OK, but I still have three questions:
~ why two contexts?
~ why push/pop the context?
~ can the copy be avoided? (Apple's docs say that CGBitmapContextCreateImage copies the memory block)
- (void)drawRect:(CGRect)rect {
    CGColorSpaceRef color = CGColorSpaceCreateDeviceRGB();
    int PIX[9] = { 0xff00ffff,0xff0000ff,0xff00ff00,
                   0xff0000ff,0xff00ffff,0xff0000ff,
                   0xff00ff00,0xff0000ff,0xff00ffff };
    CGContextRef context = CGBitmapContextCreate((void*)PIX, 3, 3, 8, 4*3, color, kCGImageAlphaPremultipliedLast);
    UIGraphicsPushContext(context);
    CGImageRef image = CGBitmapContextCreateImage(context);
    UIGraphicsPopContext();
    CGContextRelease(context);
    CGColorSpaceRelease(color);
    CGContextRef c = UIGraphicsGetCurrentContext();
    CGContextDrawImage(c, CGRectMake(0, 0, 10, 10), image);
    CGImageRelease(image);
}
The method draws the array into a 3x3 image, then draws that image at a 10x10 size into the current context, which in this case is backed by your UIView's CALayer.
UIGraphicsPushContext lets you set the CGContext that you are currently drawing to. So before the first call, your current CGContext is the view's; then you push the new CGContext, which is the bitmap the image is created from.
The UIGraphicsPopContext call restores the previous context (the view's); you then get a reference to that context and draw the created image into it using this line:
CGContextDrawImage(c, CGRectMake(0, 0, 10, 10), image);
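For context, a small illustration (not from the original answer) of what UIGraphicsPushContext is actually for: it routes UIKit-level drawing calls to a chosen CGContext. Pure Core Graphics calls such as CGBitmapContextCreateImage take the context explicitly and don't need it:

CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmap = CGBitmapContextCreate(NULL, 100, 100, 8, 0, rgb, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(rgb);

UIGraphicsPushContext(bitmap);            // UIKit drawing now targets `bitmap`
[[UIColor redColor] setFill];
UIRectFill(CGRectMake(0, 0, 50, 50));     // a UIKit call, routed into `bitmap`
UIGraphicsPopContext();                   // restore whatever context was current before

CGImageRef snapshot = CGBitmapContextCreateImage(bitmap);   // doesn't care about push/pop
CGContextRelease(bitmap);                 // CGImageRelease(snapshot) when done with it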
As far as avoiding the copy operation, the docs say that it is sometimes copy-on-write, but they don't specify when those conditions apply:
The CGImage object returned by this function is created by a copy operation. Subsequent changes to the bitmap graphics context do not affect the contents of the returned image. In some cases the copy operation actually follows copy-on-write semantics, so that the actual physical copy of the bits occur only if the underlying data in the bitmap graphics context is modified. As a consequence, you may want to use the resulting image and release it before you perform additional drawing into the bitmap graphics context. In this way, you can avoid the actual physical copy of the data.
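In other words, the pattern the docs suggest looks roughly like this sketch (not code from the question), where `context` is the bitmap context and `destination` is whatever context you're drawing the image into:

CGImageRef image = CGBitmapContextCreateImage(context);
CGContextDrawImage(destination, CGRectMake(0, 0, 10, 10), image);
CGImageRelease(image);                                  // release before touching `context` again...
CGContextSetRGBFillColor(context, 1, 0, 0, 1);
CGContextFillRect(context, CGRectMake(0, 0, 1, 1));     // ...so this write needn't force a physical copy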

Create CGContext for CGLayer

I want to pre-render some graphics into a CGLayer for fast drawing in the future.
I found that CGLayerCreateWithContext requires a CGContext parameter. One can easily be obtained in the drawRect: method, but I need to create the CGLayer outside of drawRect:. Where should I get a CGContext?
Should I simply create a temporary CGBitmapContext and use it?
UPDATE:
I need to create the CGLayer outside of drawRect: because I want to initialize it before it is first rendered. It would be possible to initialize it once on the first drawRect: call, but that's not a beautiful solution for me.
There is no reason to do it outside of drawRect: and in fact there are some benefits to doing it inside. For example, if you change the size of the view the layer will still get made with the correct size (assuming it is based on your view's graphics context and not just an arbitrary size). This is a common practice, and I don't think there will be a benefit to creating it outside. The bulk of the CPU cycles will be spent in CGContextDrawLayer anyway.
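A minimal sketch of that lazy, inside-drawRect: approach; `_cachedLayer` is an illustrative ivar of type CGLayerRef and the pre-rendered drawing is a placeholder:

- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    if (_cachedLayer == NULL) {
        // Created lazily from the view's own context, so it matches the destination.
        _cachedLayer = CGLayerCreateWithContext(context, self.bounds.size, NULL);
        CGContextRef layerContext = CGLayerGetContext(_cachedLayer);
        // ...pre-render the expensive content once...
        CGContextSetFillColorWithColor(layerContext, [UIColor blueColor].CGColor);
        CGContextFillEllipseInRect(layerContext, self.bounds);
    }
    // Cheap on every subsequent draw.
    CGContextDrawLayerInRect(context, self.bounds, _cachedLayer);
}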
You can create it with this function; you render your content in the render block:
typedef void (^render_block_t)(CGContextRef);

- (CGLayerRef)rendLayer:(render_block_t)block {
    UIGraphicsBeginImageContext(CGSizeMake(100, 100));
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGLayerRef cgLayer = CGLayerCreateWithContext(context, CGSizeMake(100, 100), nil);
    block(CGLayerGetContext(cgLayer));
    UIGraphicsEndImageContext();
    return cgLayer;
}
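A hedged usage sketch of the helper above; the drawing inside the block and the later draw call are illustrative:

CGLayerRef layer = [self rendLayer:^(CGContextRef layerContext) {
    CGContextSetFillColorWithColor(layerContext, [UIColor greenColor].CGColor);
    CGContextFillEllipseInRect(layerContext, CGRectMake(0, 0, 100, 100));
}];

// later, e.g. in drawRect:
CGContextDrawLayerAtPoint(UIGraphicsGetCurrentContext(), CGPointZero, layer);
CGLayerRelease(layer);   // when the pre-rendered content is no longer needed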
I wrote it a few days ago. I use it to draw some UIImages on multiple threads.
You can download the code on https://github.com/PengHao/GLImageView/
the file path is GLImageView/GLImageView/ImagesView.m

Is it certain that UIGraphicsBeginImageContext uses CGBitmapContextCreate to create the graphics context?

Does UIGraphicsBeginImageContext use CGBitmapContextCreate to create its graphics context [update: I can't find this in the documentation], so that the graphics context is exactly the same either way? I also tried to step into UIGraphicsBeginImageContext, but the debugger won't let me.
In the UIKit Function References page of iOS documentation, the following is written about UIGraphicsBeginImageContext:
Creates a bitmap-based graphics context and makes it the current context.
Emphasis added. Following a link to the CGContextRef page, I find this:
A graphics context contains drawing parameters and all device-specific information needed to render the paint on a page to the destination, whether the destination is a window in an application, a bitmap image, a PDF document, or a printer.
Again, emphasis added. This says that (as of now) there are four kinds of Core Graphics contexts, each with its own initializers. The only kind that has anything to do with bitmaps is a bitmap-based CGContextRef, and there is only one documented way to create one (well, technically it comes in two versions: CGBitmapContextCreate and CGBitmapContextCreateWithData). It is very likely that this function is being used. I believe that UIGraphicsBeginImageContext is merely a convenience: it sets up a default set of parameters for CGBitmapContextCreate (which takes a lot of them) and pushes the created context onto the graphics stack.
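A sketch of what an equivalent explicit setup might look like; the exact parameters UIGraphicsBeginImageContext uses internally are an assumption, not documented behavior:

// Rough stand-in for UIGraphicsBeginImageContext(size); the parameter choices are guesses.
static CGContextRef CreateImageLikeContext(CGSize size) {
    size_t width  = (size_t)ceil(size.width);
    size_t height = (size_t)ceil(size.height);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, 0, colorSpace,
                                                 kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    // UIGraphicsBeginImageContext would also flip the coordinate system to match UIKit
    // and push the context with UIGraphicsPushContext().
    CGContextTranslateCTM(context, 0, height);
    CGContextScaleCTM(context, 1, -1);
    return context;
}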
