How to take screenshot of UIView using NSThread? - ios

I'm developing an app and I need to take the screenshot from a thread. I'm unable to do this using the following code:
UIGraphicsBeginImageContextWithOptions(self.view.frame.size, YES, [[UIScreen mainScreen] scale]);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
If anyone is going to mark this question as a duplicate, please provide a proper link to the answer, and please keep in mind that I am specifically asking about creating the image on a thread.
Thanks for your time.

Despite what all the other answers would have you believe, that's not possible. UIKit is not thread-safe, so doing anything with it, even just rendering the UI into a bitmap, is not guaranteed to work off the main thread. You have to do that on the main thread, I'm afraid.

I cannot give you a way to do this strictly in a thread, but can get you close.
Determine the size of the view in pixels (i.e., accounting for the Retina scale), then create a bitmap context using CGBitmapContextCreate, allocate the memory, and set it all up on a thread. You want to use BGRA on iOS (as I recall), but confirm this (using the proper layout results in faster drawing).
Now that you have the context, post a block to the main thread and draw into the context.
Once the renderInContext: finishes, post back to a background queue (or thread) to turn this bitmap into a CGImageRef and from that you can get a UIImage.
EDIT: this is taken from some code that I use with AVFoundation - I think this is what you will want, but again try to confirm (it's hard to find an absolute reference on which options make sense):
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); // same as CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
CGContextRef newContext = CGBitmapContextCreate(NULL, width, height, 8, bytesPerRow, colorSpace, kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little);
Note that I had no use for alpha, hence kCGImageAlphaNoneSkipFirst; if you do want alpha, use kCGImageAlphaPremultipliedFirst instead (bitmap contexts only accept premultiplied alpha, so kCGImageAlphaFirst will be rejected).
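Putting the pieces together, a minimal sketch of that flow (method and variable names here are illustrative, not from the original answer) might look like this:
// Minimal sketch: build the bitmap context off the main thread, hop to the
// main thread only for renderInContext:, then finish the conversion on a
// background queue. Assumes this method itself is called from the main thread.
- (void)captureView:(UIView *)view completion:(void (^)(UIImage *image))completion
{
    // Read geometry on the main thread; UIKit properties are not thread-safe either.
    CGFloat scale = [[UIScreen mainScreen] scale];
    size_t width  = (size_t)(view.bounds.size.width  * scale);
    size_t height = (size_t)(view.bounds.size.height * scale);

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // 1) Create the bitmap context on the background queue (BGRA layout).
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, 0, colorSpace,
                                                 kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little);
        CGColorSpaceRelease(colorSpace);
        // Flip the CTM so the layer is not rendered upside down into the raw bitmap.
        CGContextTranslateCTM(ctx, 0, height);
        CGContextScaleCTM(ctx, scale, -scale);

        // 2) Hop to the main thread only for the actual UIKit/Core Animation work.
        dispatch_sync(dispatch_get_main_queue(), ^{
            [view.layer renderInContext:ctx];
        });

        // 3) Turn the bitmap into a CGImage/UIImage back on the background queue.
        CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
        UIImage *image = [UIImage imageWithCGImage:cgImage scale:scale orientation:UIImageOrientationUp];
        CGImageRelease(cgImage);
        CGContextRelease(ctx);
        dispatch_async(dispatch_get_main_queue(), ^{
            if (completion) completion(image);
        });
    });
}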

Related

Modify original UIImage in UIGraphicsContext

I've seen a lot of examples where one gets a new UIImage by applying modifications to an input UIImage. It looks like this:
- (UIImage *)imageByDrawingCircleOnImage:(UIImage *)image
{
    // begin a graphics context of sufficient size
    UIGraphicsBeginImageContext(image.size);
    // draw original image into the context
    [image drawAtPoint:CGPointZero];
    // get the context for CoreGraphics
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // draw a circle there
    CGContextSetStrokeColorWithColor(ctx, [UIColor redColor].CGColor);
    CGContextStrokeEllipseInRect(ctx, CGRectMake(0, 0, image.size.width, image.size.height));
    // grab the modified image and clean up
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
I have a similar problem, but I really want to modify the input image. I suppose it would work faster since I wouldn't draw the original image every time. But I could not find any samples of it. How can I get an image context of the original image, where it's already drawn?
UIImage is immutable for numerous reasons (most of them around performance and memory). You must make a copy if you want to mess with it.
If you want a mutable image, just draw it into a context and keep using that context. You can create your own context using CGBitmapContextCreate.
That said, don't second-guess the system too much here. UIImage and Core Graphics have a lot of optimizations in them and there's a reason you see so many examples that copy the image. Don't "suppose it would work faster." You really have to profile it in your program.
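If you do go the keep-your-own-context route, a rough sketch of what that could look like (CircleCanvas and all names here are illustrative, not an API from the question or answer):
// Sketch of the "keep one context and keep drawing into it" approach.
@interface CircleCanvas : NSObject
@property (nonatomic) CGContextRef context; // owned bitmap context
@end

@implementation CircleCanvas

- (instancetype)initWithImage:(UIImage *)image
{
    if ((self = [super init])) {
        size_t width = CGImageGetWidth(image.CGImage);
        size_t height = CGImageGetHeight(image.CGImage);
        CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
        _context = CGBitmapContextCreate(NULL, width, height, 8, 0, space,
                                         kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little);
        CGColorSpaceRelease(space);
        // Draw the original image exactly once; later drawing accumulates on top of it.
        // (Note: a raw bitmap context has its origin at the bottom-left, unlike UIKit.)
        CGContextDrawImage(_context, CGRectMake(0, 0, width, height), image.CGImage);
    }
    return self;
}

- (void)addCircleInRect:(CGRect)rect
{
    CGContextSetStrokeColorWithColor(self.context, [UIColor redColor].CGColor);
    CGContextStrokeEllipseInRect(self.context, rect);
}

- (UIImage *)snapshot
{
    CGImageRef cgImage = CGBitmapContextCreateImage(self.context);
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return image;
}

- (void)dealloc
{
    CGContextRelease(_context);
}

@end
The idea is that the original image is drawn exactly once, in the initializer; every addCircleInRect: call accumulates on the same bitmap, and snapshot only pays for the CGImage copy. Whether that actually beats the copy-per-edit pattern is exactly the kind of thing you should profile, as noted above.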

iOS: why two graphic contexts here and one extra mem copy?

I found many examples of how to blit an array of ints onto a UIView in drawRect:, but the simplest one still puzzles me. It works OK, but I still have three questions:
~ why two contexts?
~ why push/pop context?
~ can the copy be avoided? (the Apple docs say that CGBitmapContextCreateImage copies the memory block)
- (void)drawRect:(CGRect)rect {
    CGColorSpaceRef color = CGColorSpaceCreateDeviceRGB();
    int PIX[9] = { 0xff00ffff,0xff0000ff,0xff00ff00,
                   0xff0000ff,0xff00ffff,0xff0000ff,
                   0xff00ff00,0xff0000ff,0xff00ffff };
    CGContextRef context = CGBitmapContextCreate((void*)PIX,3,3,8,4*3,color,kCGImageAlphaPremultipliedLast);
    UIGraphicsPushContext(context);
    CGImageRef image = CGBitmapContextCreateImage(context);
    UIGraphicsPopContext();
    CGContextRelease(context);
    CGColorSpaceRelease(color);
    CGContextRef c=UIGraphicsGetCurrentContext();
    CGContextDrawImage(c, CGRectMake(0, 0, 10, 10), image);
    CGImageRelease(image);
}
The method is drawing the array into a 3x3 image, then drawing that image at a 10x10 size into the current context, which in this case is your UIView's backing CALayer.
UIGraphicsPushContext lets you set the CGContext that UIKit drawing calls currently target. So before the first call your current CGContext is the view's; then you push the new CGContext, which is the bitmap the image is created from.
The UIGraphicsPopContext call restores the previous context, which is the view's; you then get a reference to that context and draw the created image into it using this line:
CGContextDrawImage(c, CGRectMake(0, 0, 10, 10), image);
As far as avoiding the copy operation, the docs say that it is sometimes copy-on-write, but they don't specify when those conditions occur:
The CGImage object returned by this function is created by a copy operation. Subsequent changes to the bitmap graphics context do not affect the contents of the returned image. In some cases the copy operation actually follows copy-on-write semantics, so that the actual physical copy of the bits occur only if the underlying data in the bitmap graphics context is modified. As a consequence, you may want to use the resulting image and release it before you perform additional drawing into the bitmap graphics context. In this way, you can avoid the actual physical copy of the data.
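As for the two contexts: nothing in that snippet actually draws through UIKit's implicit current context (both CGBitmapContextCreateImage and CGContextDrawImage take an explicit context), so the push/pop pair is not strictly needed there. A pared-down sketch of the same drawRect: could look like this:
- (void)drawRect:(CGRect)rect
{
    // Same 3x3 pixel source as above.
    uint32_t PIX[9] = { 0xff00ffff, 0xff0000ff, 0xff00ff00,
                        0xff0000ff, 0xff00ffff, 0xff0000ff,
                        0xff00ff00, 0xff0000ff, 0xff00ffff };
    CGColorSpaceRef color = CGColorSpaceCreateDeviceRGB();
    CGContextRef bitmap = CGBitmapContextCreate(PIX, 3, 3, 8, 4 * 3, color,
                                                kCGImageAlphaPremultipliedLast);
    CGImageRef image = CGBitmapContextCreateImage(bitmap); // no push/pop needed for this
    CGContextRelease(bitmap);
    CGColorSpaceRelease(color);

    // Draw the 3x3 image scaled up to 10x10 into the view's own context.
    CGContextDrawImage(UIGraphicsGetCurrentContext(), CGRectMake(0, 0, 10, 10), image);
    CGImageRelease(image);
}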

iOS: How to convert the self-drawn content of an UIView to an image (widespread general solution returns blank image)?

My business app requires a feature that lets the user draw a signature on a UIView with his finger and save it (via a button click in the toolbar) so it can be attached to a unit. These units are going to be uploaded to a server once the work is finished and already support camera picture attachments that are uploaded via Base64, so I simply want to convert the signature to a UIImage.
First of all, I needed a solution to draw the signature, I quickly found some sample code from Apple that seemed to meet my requirements: GLPaint
I integrated this sample code into my project with slight modifications since I work with ARC and Storyboards and didn't want the sound effects and the color palette etc., but the drawing code is a straight copy.
The integration seemed to be successful since I was able to draw the signatures on the view. So, next step was to add a save/image conversion function for the drawn signatures.
I've done endless searching and trawled dozens of threads with similar problems, and most of them pointed to the exact same solution:
(Assumptions)
drawingView: the subclassed UIView that the drawing is done on
<QuartzCore/QuartzCore.h> and QuartzCore.framework are included
CoreGraphics.framework is included
OpenGLES.framework is included
- (void)saveAsImage:(UIView *)drawingView
{
    UIGraphicsBeginImageContext(drawingView.bounds.size);
    [drawingView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
}
Finally, my problem: this code doesn't work for me, as it always returns a blank image. Since I've already integrated support for picture attachments taken with the iPhone camera, I initially assumed that the image processing code would work on the signature images as well.
But... after some fruitless searching I dropped that assumption, took the original GLPaint project, added just the few lines above plus some code to display the image, and it was also completely blank. So it is either an issue with that code not working on self-drawn content in UIViews, or something I'm missing.
I am basically out of ideas on this issue and hope some people can help me with it.
Best regards
Felix
I believe your problem is that you are trying to get an image from a GL-backed view; renderInContext: does not capture OpenGL ES content. You might search around the web for that, but generally all you need is to call glReadPixels after all the draw calls have been made. Something like this should work:
// Called by CoreGraphics once it no longer needs the pixel buffer.
static void releaseSnapshotPixels(void *info, const void *data, size_t size) {
    delete [] (uint8_t *)data;
}

BOOL createSnapshot;
int viewWidth, viewHeight;
if (createSnapshot) {
    // Read the rendered frame back from the current GL framebuffer.
    // (Note: glReadPixels returns rows bottom-up, so the result may appear vertically flipped.)
    uint8_t *iData = new uint8_t[viewHeight * viewWidth * 4];
    glReadPixels(0, 0, viewWidth, viewHeight, GL_RGBA, GL_UNSIGNED_BYTE, iData);
    // The provider takes over the buffer and frees it via the release callback,
    // so the image stays valid after this block.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, iData, (viewWidth * viewHeight * 4), releaseSnapshotPixels);
    CGColorSpaceRef cref = CGColorSpaceCreateDeviceRGB();
    CGImageRef cgImage = CGImageCreate(viewWidth, viewHeight, 8, 32, viewWidth * 4, cref,
                                       kCGBitmapByteOrderDefault | kCGImageAlphaNoneSkipLast,
                                       provider, NULL, NO, kCGRenderingIntentDefault);
    UIImage *ret = [UIImage imageWithCGImage:cgImage]; //the image you need
    CGImageRelease(cgImage);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(cref);
    createSnapshot = NO;
}
If you use multisampling, you will need to call this after the buffers have been resolved and the presenting framebuffer has been bound.
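In a GLPaint-style render loop, the read-back would typically be slotted in after the frame's draw calls and before presenting, roughly like this (method and variable names follow the common EAGL view pattern and are illustrative):
[EAGLContext setCurrentContext:context];
// ... GL draw calls for the signature strokes ...
if (createSnapshot) {
    [self captureSnapshot]; // the glReadPixels code above, wrapped in an illustrative method
}
[context presentRenderbuffer:GL_RENDERBUFFER_OES];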

Create CGContext for CGLayer

I want to pre-render some graphics into a CGLayer for fast drawing in the future.
I found that CGLayerCreateWithContext requires a CGContext parameter. It can be easily found in drawRect: method. But I need to create a CGLayer outside of drawRect:. Where should I get CGContext?
Should I simply create temporary CGBitmapContext and use it?
UPDATE:
I need to create the CGLayer outside of drawRect: because I want to initialize it before it is rendered. It is possible to init it once on the first drawRect: call, but that's not a beautiful solution for me.
There is no reason to do it outside of drawRect: and in fact there are some benefits to doing it inside. For example, if you change the size of the view the layer will still get made with the correct size (assuming it is based on your view's graphics context and not just an arbitrary size). This is a common practice, and I don't think there will be a benefit to creating it outside. The bulk of the CPU cycles will be spent in CGContextDrawLayer anyway.
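For example, a lazy-initialization version inside drawRect: might look like this (cachedLayer is an illustrative CGLayerRef property, not something from the question):
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    if (self.cachedLayer == NULL) {
        // Create the layer from the view's own context the first time through,
        // so it automatically matches that context's characteristics and size.
        self.cachedLayer = CGLayerCreateWithContext(context, self.bounds.size, NULL);
        CGContextRef layerContext = CGLayerGetContext(self.cachedLayer);
        // ... pre-render the expensive content into layerContext here ...
    }
    // Stamp the pre-rendered layer; remember to CGLayerRelease() it in dealloc.
    CGContextDrawLayerAtPoint(context, CGPointZero, self.cachedLayer);
}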
You can create it with this function; render your content in the render block:
typedef void (^render_block_t)(CGContextRef);
- (CGLayerRef)rendLayer:(render_block_t)block {
    UIGraphicsBeginImageContext(CGSizeMake(100, 100));
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGLayerRef cgLayer = CGLayerCreateWithContext(context, CGSizeMake(100, 100), nil);
    block(CGLayerGetContext(cgLayer));
    UIGraphicsEndImageContext();
    return cgLayer; // caller owns the returned layer and must CGLayerRelease() it
}
I wrote it a few days ago. I use it to draw some UIImages on multiple threads.
You can download the code on https://github.com/PengHao/GLImageView/
the file path is GLImageView/GLImageView/ImagesView.m
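For completeness, a hypothetical call site could look like this (the block body and someContext are illustrative); note that the returned layer follows the create rule, so the caller releases it:
CGLayerRef layer = [self rendLayer:^(CGContextRef layerContext) {
    // Illustrative drawing; anything rendered here is captured by the layer.
    CGContextSetFillColorWithColor(layerContext, [UIColor blueColor].CGColor);
    CGContextFillEllipseInRect(layerContext, CGRectMake(10, 10, 80, 80));
}];
// ... later, stamp it wherever it is needed, then release it:
CGContextDrawLayerAtPoint(someContext, CGPointZero, layer);
CGLayerRelease(layer);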

CGContextDrawImage is EXTREMELY slow after large UIImage drawn into it

It seems that CGContextDrawImage(CGContextRef, CGRect, CGImageRef) performs MUCH WORSE when drawing a CGImage that was created by CoreGraphics (i.e. with CGBitmapContextCreateImage) than it does when drawing the CGImage which backs a UIImage. See this testing method:
-(void)showStrangePerformanceOfCGContextDrawImage
{
    ///Setup : Load an image and start a context:
    UIImage *theImage = [UIImage imageNamed:@"reallyBigImage.png"];
    UIGraphicsBeginImageContext(theImage.size);
    CGContextRef ctxt = UIGraphicsGetCurrentContext();
    CGRect imgRec = CGRectMake(0, 0, theImage.size.width, theImage.size.height);

    ///Why is this SO MUCH faster...
    NSDate * startingTimeForUIImageDrawing = [NSDate date];
    CGContextDrawImage(ctxt, imgRec, theImage.CGImage); //Draw existing image into context using the UIImage backing
    NSLog(@"Time was %f", [[NSDate date] timeIntervalSinceDate:startingTimeForUIImageDrawing]);

    /// Create a new image from the context to use this time in CGContextDrawImage:
    CGImageRef theImageConverted = CGBitmapContextCreateImage(ctxt);

    ///This is WAY slower but why?? Using a pure CGImageRef (as opposed to one behind a UIImage) seems like it should be faster but AT LEAST it should be the same speed!?
    NSDate * startingTimeForNakedGImageDrawing = [NSDate date];
    CGContextDrawImage(ctxt, imgRec, theImageConverted);
    NSLog(@"Time was %f", [[NSDate date] timeIntervalSinceDate:startingTimeForNakedGImageDrawing]);
}
So I guess the question is, #1 what may be causing this and #2 is there a way around it, i.e. other ways to create a CGImageRef which may be faster? I realize I could convert everything to UIImages first but that is such an ugly solution. I already have the CGContextRef sitting there.
UPDATE : This seems to not necessarily be true when drawing small images? That may be a clue: the problem is amplified when large images (i.e. full-size camera pics) are used. 640x480 seems to be pretty similar in terms of execution time with either method.
UPDATE 2 : Ok, so I've discovered something new.. It's actually NOT the backing of the CGImage that is changing the performance. I can flip-flop the order of the 2 steps and make the UIImage method behave slowly, whereas the "naked" CGImage will be super fast. It seems whichever you perform second will suffer from terrible performance. This seems to be the case UNLESS I free memory by calling CGImageRelease on the image I created with CGBitmapContextCreateImage. Then the UIImage-backed method will be fast subsequently. The inverse is not true. What gives? "Crowded" memory shouldn't affect performance like this, should it?
UPDATE 3 : Spoke too soon. The previous update holds true for images at size 2048x2048, but stepping up to 1936x2592 (camera size) the naked CGImage method is still way slower, regardless of order of operations or memory situation. Maybe there are some CG internal limits that make a 16MB image efficient whereas the 21MB image can't be handled efficiently. It's literally 20 times slower to draw the camera size than a 2048x2048. Somehow UIImage provides its CGImage data much faster than a pure CGImage object does. o.O
UPDATE 4 : I thought this might have to do with some memory caching thing, but the results are the same whether the UIImage is loaded with the non-caching [UIImage imageWithContentsOfFile:] or with [UIImage imageNamed:].
UPDATE 5 (Day 2) : After creating more questions than were answered yesterday, I have something solid today. What I can say for sure is the following:
The CGImages behind a UIImage don't use alpha (kCGImageAlphaNoneSkipLast). I thought that maybe they were faster to draw because my context WAS using alpha. So I changed the context to use kCGImageAlphaNoneSkipLast. This makes the drawing MUCH faster, UNLESS:
Drawing into a CGContextRef with a UIImage FIRST makes ALL subsequent image drawing slow.
I proved this by 1) first creating a non-alpha context (1936x2592), 2) filling it with randomly colored 2x2 squares, 3) full-frame drawing a CGImage into that context, which was FAST (.17 seconds), 4) repeating the experiment but filling the context with a drawn CGImage backing a UIImage. Subsequent full-frame image drawing was 6+ seconds. SLOWWWWW.
Somehow drawing into a context with a (Large) UIImage drastically slows all subsequent drawing into that context.
Well, after a TON of experimentation I think I have found the fastest way to handle situations like this. The drawing operation above which was taking 6+ seconds now takes 0.1 seconds. YES. Here's what I discovered:
Homogenize your contexts & images with a pixel format! The root of the question I asked boiled down to pixel formats: the CGImages backing my UIImages were using THE SAME PIXEL FORMAT as my context, and were therefore fast; the CGImages I created with CGBitmapContextCreateImage were a different format, and therefore slow. Inspect your images with CGImageGetAlphaInfo to see which pixel format they use. I'm using kCGImageAlphaNoneSkipLast EVERYWHERE now as I don't need to work with alpha. If you don't use the same pixel format everywhere, Quartz will be forced to perform expensive conversions for EACH pixel when drawing an image into a context. = SLOW
USE CGLayers! These make offscreen-drawing performance much better. How this works is basically as follows. 1) Create a CGLayer from the context using CGLayerCreateWithContext. 2) Do any drawing/setting of drawing properties on THIS LAYER's CONTEXT, which you get with CGLayerGetContext. READ any pixels or information from the ORIGINAL context. 3) When done, "stamp" this CGLayer back onto the original context using CGContextDrawLayerAtPoint. This is FAST as long as you keep in mind:
1) Release any CGImages created from a context (i.e. those created with CGBitmapContextCreateImage) BEFORE "stamping" your layer back into the CGContextRef using CGContextDrawLayerAtPoint. This creates a 3-4x speed increase when drawing that layer. 2) Keep your pixel format the same everywhere!! 3) Clean up CG objects AS SOON as you can. Things hanging around in memory seem to create strange situations of slowdown, probably because there are callbacks or checks associated with these strong references. Just a guess, but I can say that CLEANING UP MEMORY ASAP helps performance immensely.
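A condensed, hedged sketch of that recipe (kCGImageAlphaNoneSkipLast comes from the answer; width, height and someImage are assumed to exist):
// Sketch of the recipe: one pixel format everywhere, offscreen drawing in a
// CGLayer, intermediate CGImages released early, then the layer stamped back.
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
CGContextRef canvas = CGBitmapContextCreate(NULL, width, height, 8, 0, space,
                                            kCGImageAlphaNoneSkipLast);
CGColorSpaceRelease(space);

// Check an incoming image's format; a mismatch forces per-pixel conversion
// inside CGContextDrawImage.
CGImageAlphaInfo info = CGImageGetAlphaInfo(someImage.CGImage);
NSLog(@"alpha info of source image: %u", (unsigned)info);

CGLayerRef layer = CGLayerCreateWithContext(canvas, CGSizeMake(width, height), NULL);
CGContextRef layerContext = CGLayerGetContext(layer);
CGContextDrawImage(layerContext, CGRectMake(0, 0, width, height), someImage.CGImage);

// Release any CGImages created from the canvas *before* stamping the layer back.
CGContextDrawLayerAtPoint(canvas, CGPointZero, layer);
CGLayerRelease(layer);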
I had a similar problem. My application has to redraw a picture almost as large as the screen size. The problem came down to drawing as fast as possible two images of the same resolution, neither rotated nor flipped, but scaled and positioned in different places of the screen each time. After all, I was able to get ~15-20 FPS on iPad 1 and ~20-25 FPS on iPad4. So... hope this helps someone:
Exactly as typewriter said, you have to use the same pixel format. Using one with AlphaNone gives a speed boost. But even more importantly, the internal argb32_image call in my case spent a lot of time converting pixels from ARGB to BGRA. So the best bitmapInfo value for me was (at the time; there is a chance Apple will change something here in the future):
const CGBitmapInfo g_bitmapInfo = kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipLast;
CGContextDrawImage may work faster if the rectangle argument is made integral (via CGRectIntegral). This seems to have more effect when the image is scaled by a factor close to 1.
Using layers actually slowed things down for me. Probably something changed in some internal calls since 2011.
Setting the interpolation quality for the context lower than the default (via CGContextSetInterpolationQuality) is important. I would recommend using (IS_RETINA_DISPLAY ? kCGInterpolationNone : kCGInterpolationLow); see the short sketch after this list. The IS_RETINA_DISPLAY macro is taken from here.
Make sure you get the CGColorSpaceRef from CGColorSpaceCreateDeviceRGB() or the like when creating the context. Some performance issues have been reported for using a fixed color space instead of requesting the device's.
Inheriting the view class from UIImageView and simply setting self.image to the image created from the context proved useful to me. However, read up on using UIImageView this way first, as it requires some changes in code logic (because drawRect: isn't called anymore).
And if you can avoid scaling your image at the time of actual drawing, try to do so. Drawing a non-scaled image is significantly faster; unfortunately, for me that was not an option.
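To make the interpolation and integral-rect tips concrete, here is a small hedged sketch (ctx, destinationRect and sourceImage are assumed to exist; the IS_RETINA_DISPLAY definition below is only a stand-in for the macro the answer links):
// Placeholder macro; the answer links its own IS_RETINA_DISPLAY implementation.
#define IS_RETINA_DISPLAY ([[UIScreen mainScreen] scale] >= 2.0)

// Lower the interpolation quality before drawing, and snap the destination
// rect to integral coordinates, as suggested above.
CGContextSetInterpolationQuality(ctx, IS_RETINA_DISPLAY ? kCGInterpolationNone
                                                        : kCGInterpolationLow);
CGContextDrawImage(ctx, CGRectIntegral(destinationRect), sourceImage.CGImage);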
