I am using this code to capture part of the iPad screen.
UIGraphicsBeginImageContext(self.view.bounds.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGRect rect = CGRectMake(500, 500, 600, 600);
CGImageRef imageRef = CGImageCreateWithImageInRect([viewImage CGImage], rect);
UIImage *img = [UIImage imageWithCGImage:imageRef];
UIImageWriteToSavedPhotosAlbum(img, nil, nil, nil);
CGImageRelease(imageRef);
But this is very slow compared with capturing a part of the screen that starts at {0, 0} (for example {0, 0, 200, 200}) using this code:
CGRect rect = CGRectMake(0, 0, 200, 200);
UIGraphicsBeginImageContextWithOptions(rect.size, NO, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();
[yourView.layer renderInContext:context];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
The difference between them is the rect size passed to UIGraphicsBeginImageContext. Can we capture CGRectMake(500, 500, 600, 600) without passing the complete screen bounds to the graphics context?
Please write code too.
Sure, you can capture just a smaller part of the view. Create the context at size 600x600, then translate the context by (-500, -500) before asking the layer to render, so that the region you want lands at the context's origin.
CGRect rect = CGRectMake(500, 500, 600, 600);
UIGraphicsBeginImageContextWithOptions(rect.size, NO, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(context, -rect.origin.x, -rect.origin.y);
[yourView.layer renderInContext:context];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
You might also want to look at the -[UIView drawViewHierarchyInRect:afterScreenUpdates:] method. According to WWDC 2013 Session 226, “Implementing Engaging UI on iOS”, drawViewHierarchyInRect:afterScreenUpdates: is significantly faster than renderInContext:. See slide 41 for a speed comparison: 844 ms for the older method, and 145 ms for the newer method in their example.
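For reference, here is a sketch of the same 600x600 capture using drawViewHierarchyInRect:afterScreenUpdates: instead of renderInContext: (yourView is an assumed name for the view being captured):
CGRect rect = CGRectMake(500, 500, 600, 600);
UIGraphicsBeginImageContextWithOptions(rect.size, NO, 0.0);
// Offset the drawing so the (500, 500) region of the view lands at the context's origin.
[yourView drawViewHierarchyInRect:CGRectMake(-rect.origin.x, -rect.origin.y, yourView.bounds.size.width, yourView.bounds.size.height) afterScreenUpdates:YES];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();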
Related
I want to crop a 200x200 image; the part to crop is (x=10, y=10, w=50, h=50). Then I want to draw that part into a new 500x500 image at the rect (x=30, y=30, w=50, h=50). How can I do it?
I can get part of the image with the following method:
- (UIImage*) getSubImageWithRect: (CGRect) rect {
UIGraphicsBeginImageContext(rect.size);
CGContextRef context = UIGraphicsGetCurrentContext();
// translated rectangle for drawing sub image
CGRect drawRect = CGRectMake(-rect.origin.x, -rect.origin.y, self.size.width, self.size.height);
// clip to the bounds of the image context
// not strictly necessary as it will get clipped anyway?
CGContextClipToRect(context, CGRectMake(0, 0, rect.size.width, rect.size.height));
// draw image
[self drawInRect:drawRect];
// grab image
UIImage* subImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return subImage;
}
Let's try the following code chunk:
- (UIImage*) getSubImageFromImage:(UIImage *)image
{
// rectangle describing the part of the source image to crop
CGRect cropRect = CGRectMake(10, 10, 50, 50);
// create a CGImage containing just that part
CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], cropRect);
// Get Cropped Image
UIImage *img = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
return img;
}
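To cover the second half of the question (drawing the cropped part into a new 500x500 image at (30, 30, 50, 50)), here is a minimal sketch; sourceImage is an assumed name for the original 200x200 UIImage:
// crop (10, 10, 50, 50) out of the source image
CGRect cropRect = CGRectMake(10, 10, 50, 50);
CGImageRef croppedRef = CGImageCreateWithImageInRect([sourceImage CGImage], cropRect);
UIImage *croppedImage = [UIImage imageWithCGImage:croppedRef];
CGImageRelease(croppedRef);
// draw the cropped piece into a new 500x500 image at (30, 30, 50, 50)
UIGraphicsBeginImageContextWithOptions(CGSizeMake(500, 500), NO, 0.0);
[croppedImage drawInRect:CGRectMake(30, 30, 50, 50)];
UIImage *composedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Note that CGImageCreateWithImageInRect works in pixels, so for a Retina-scale image you would multiply the crop rect by sourceImage.scale first.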
I have a view (testView) that is 400x320 and I need to take a screenshot of part of this view (say rect = (50, 50, 200, 200)).
I am playing around with the drawViewHierarchy method in iOS 7 but I can't figure out how to do it correctly.
UIGraphicsBeginImageContextWithOptions(self.testView.bounds.size, NO, [UIScreen mainScreen].scale);
[self.testView drawViewHierarchyInRect:self.testView.bounds afterScreenUpdates:YES];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Any help will be appreciated!
Thanks.
After getting the whole snapshot, you could draw it into a smaller graphics context, offset so that only the part you want remains:
UIGraphicsBeginImageContextWithOptions(CGSizeMake(200, 200), YES, [UIScreen mainScreen].scale);
[image drawInRect:CGRectMake(-50, -50, image.size.width, image.size.height)];
UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
EDIT: Better yet, draw the hierarchy directly in that smaller context:
UIGraphicsBeginImageContextWithOptions(CGSizeMake(200, 200), NO, [UIScreen mainScreen].scale);
[self.testView drawViewHierarchyInRect:CGRectMake(-50, -50, self.testView.bounds.size.width, self.testView.bounds.size.height) afterScreenUpdates:YES];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Try the line of code below; note that snapshotViewAfterScreenUpdates: returns a snapshot UIView, not a UIImage:
UIView *popSnapshot=[inputView snapshotViewAfterScreenUpdates:YES];
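If a snapshot view (rather than a UIImage) is enough, iOS 7 can also snapshot only a sub-rect directly; a minimal sketch, assuming testView and the (50, 50, 200, 200) rect from the question:
UIView *partialSnapshot = [self.testView resizableSnapshotViewFromRect:CGRectMake(50, 50, 200, 200)
                                                    afterScreenUpdates:YES
                                                         withCapInsets:UIEdgeInsetsZero];
// the snapshot can then be added to the view hierarchy like any other view
[self.view addSubview:partialSnapshot];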
I am trying to generate a screenshot of a UIView that has a subview with a CATransform3DMakeRotation transform. The screenshot is generated, but it doesn't contain the rotation.
Is it possible to achieve this?
(The original post includes two screenshots: the actual view, and the captured screenshot image.)
I use the following call to flip the view horizontally:
currentView.layer.transform = CATransform3DConcat(currentView.layer.transform,CATransform3DMakeRotation(M_PI, 0.0, 1.0, 0.0f));
Code for taking the screenshot:
+ (UIImage *) imageWithView:(UIView *)view
{
CGSize screenDimensions = view.bounds.size;
// Create a graphics context with the target size
// (last parameter takes scale into account)
UIGraphicsBeginImageContextWithOptions(screenDimensions, NO, 0);
// Render the view to a new context
CGContextRef context = UIGraphicsGetCurrentContext();
[view.layer renderInContext:context];
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return img;
}
The "renderInContext" only works for Affine transform. So convert the 3D transform into affine transform like this
currentView.layer.affineTransform = CATransform3DGetAffineTransform(CATransform3DConcat(currentView.layer.transform,CATransform3DMakeRotation(M_PI, 0.0, 1.0, 0.0f)));
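Putting it together with the imageWithView: helper above, a minimal usage sketch (containerView is an assumed name for the view being captured, and the helper is assumed to live on the current class):
// replace the 3D transform with its affine equivalent so renderInContext: picks it up
currentView.layer.affineTransform = CATransform3DGetAffineTransform(CATransform3DConcat(currentView.layer.transform, CATransform3DMakeRotation(M_PI, 0.0, 1.0, 0.0f)));
// then capture the containing view as usual
UIImage *screenshot = [[self class] imageWithView:containerView];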
Try this code
CGSize newSize = CGSizeMake(yourview.frame.size.width , yourview.frame.size.height);
UIGraphicsBeginImageContextWithOptions(newSize,YES,2.0f);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
[yourview.layer renderInContext:context];
// note: do not call drawRect: directly; renderInContext: above already draws the layer's contents
UIImage *screenShot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
This might work, try it:
CGRect grabRect = CGRectMake(40,40,300,200);
//for retina displays
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
UIGraphicsBeginImageContextWithOptions(grabRect.size, NO, [UIScreen mainScreen].scale);
} else {
UIGraphicsBeginImageContext(grabRect.size);
}
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(ctx, -grabRect.origin.x, -grabRect.origin.y);
[self.view.layer renderInContext:ctx];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(viewImage, nil, nil, nil);
I have achieved this in one of my applications with a small tweak: first capture a screenshot of the whole screen, then crop it to the desired frame. Here is sample code from my app.
- (UIImage *) croppedPhoto
{
[imgcropRectangle setHidden:TRUE];
UIGraphicsBeginImageContext(self.view.frame.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Create bitmap image from original image data,
// using rectangle to specify desired crop area
CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], self.imgcropRectangle.frame);
UIImage *result = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
[imgcropRectangle setHidden:FALSE];
return result;
}
Here imgcropRectangle is the UIImageView that defines my desired rectangle, so I use its frame to crop the full-screen capture down to the desired output. Hope it helps you :)
Try rendering view.layer.presentationLayer instead of view.layer.
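A minimal sketch of that idea, assuming view is the view being captured; the presentation layer reflects what is currently on screen, including in-flight animations:
UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();
// render the presentation layer, falling back to the model layer if there is none yet
CALayer *layerToRender = view.layer.presentationLayer ?: view.layer;
[layerToRender renderInContext:context];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();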
Use this, and check the view's subviews before passing it in:
+ (UIImage *) imageWithView:(UIView *)view
{
UIGraphicsBeginImageContext(CGSizeMake(view.frame.size.width, view.frame.size.height));
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return viewImage;
}
This question already has an answer here:
How to get a correctly rotated UIImage from an ALAssetRepresentation?
I have an iPad app where I'm using the camera. The original image is 480 x 640. I am attempting to resize it to 124 x 160 and then store it in CoreData using this code that I found on the internet:
- (UIImage *)resizeImage:(UIImage*)image newSize:(CGSize)newSize {
CGRect newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height));
CGImageRef imageRef = image.CGImage;
UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
// Set the quality level to use when rescaling
CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, newSize.height);
CGContextConcatCTM(context, flipVertical);
// Draw into the context; this scales the image
CGContextDrawImage(context, newRect, imageRef);
// Get the resized image from the context and a UIImage
CGImageRef newImageRef = CGBitmapContextCreateImage(context);
UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
CGImageRelease(newImageRef);
UIGraphicsEndImageContext();
return newImage;
}
The image is returned to me rotated counter-clockwise 90 degrees and I don't see why. I have tried commenting out this statement:
CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, newSize.height);
but it makes no difference. What is wrong here?
Thanks to everybody who made suggestions. I kept looking and found this, which scales and keeps the orientation:
CGRect screenRect = CGRectMake(0, 0, 120.0, 160.0);
UIGraphicsBeginImageContext(screenRect.size);
[image drawInRect:screenRect blendMode:kCGBlendModePlusDarker alpha:1];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
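The key point is that -[UIImage drawInRect:] honors the image's imageOrientation, while CGContextDrawImage works on the raw CGImage and ignores it. A minimal orientation-preserving resize along the same lines, using normal blending and the screen scale (a sketch; the method name is illustrative):
- (UIImage *)resizedImage:(UIImage *)image toSize:(CGSize)newSize {
    // scale 0.0 means "use the device's screen scale"
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    // drawInRect: applies the image's orientation automatically
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *resized = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return resized;
}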
You need to consider the orientation of the image when drawing it like this. See this answer: iOS - UIImageView - how to handle UIImage image orientation.
If you want to capture the screen in an iOS app, you can use the following code:
UIGraphicsBeginImageContext(self.view.bounds.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
currentCaptureImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
But I want to capture the picture from a specific point, e.g. (start.x, start.y), with a specific width and height. How can I do this?
I just googled it and found the best answer in "How to capture a specific size of the self.view":
UIGraphicsBeginImageContextWithOptions(CGSizeMake(300, 320), YES, 0.);
[self.view.window.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
If your view was, for example, 600x320 and you wanted to capture the middle 300 points in width, you'd translate the context 150 points to the left:
UIGraphicsBeginImageContextWithOptions(CGSizeMake(300, 320), YES, 0.);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(context, -150.f, 0.f);
[self.view.window.layer renderInContext:context];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
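To generalize this to the (start.x, start.y, width, height) case from the question, here is a minimal helper sketch (the method name and parameters are illustrative):
- (UIImage *)captureView:(UIView *)view inRect:(CGRect)captureRect {
    UIGraphicsBeginImageContextWithOptions(captureRect.size, NO, 0.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // shift the rendering so captureRect's origin lands at the context's origin
    CGContextTranslateCTM(context, -captureRect.origin.x, -captureRect.origin.y);
    [view.layer renderInContext:context];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}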