Screenshot of a UILabel and what is in a UIImageView - iOS

I am trying to take a screenshot of my UIImageView and a UILabel that is on top of it.
What I have so far grabs the UIImage from the image view and then renders the label layer on top of it, but the positioning of the UILabel is all wrong. I am setting the size of the capture to the actual image size (which isn't what I want).
I just want to be able to take a screenshot exactly as it appears on the screen.
CGSize imageSize = self.imageFromOtherView.size;
// define the size and grab a UIImage from it
UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
[self.imageFromOtherView drawInRect:CGRectMake(0, 0, imageSize.width, imageSize.height)];
[self.socialLabel.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *capturedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

Create a UIView to hold both the UIImageView and the UILabel, then pass this container view through the code below. I have my view as a property called viewForPhotoView, so when the code is called it only captures that view; you can tweak it so that it receives a view as a parameter. This will return the UIImage that you want.
- (UIImage *)imageByRenderingView
{
    UIGraphicsBeginImageContextWithOptions(self.viewForPhotoView.bounds.size, NO, 0.0);
    [self.viewForPhotoView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return resultingImage;
}

Add both those views to a common container view, and then call - (BOOL)drawViewHierarchyInRect:(CGRect)rect afterScreenUpdates:(BOOL)afterUpdates on the view to render it in a context.
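A minimal sketch of that approach, assuming the image view and label have been placed inside a container view exposed as a (hypothetical) property named containerView:

- (UIImage *)snapshotOfContainerView
{
    // Match the container's on-screen size; a scale of 0.0 uses the device's screen scale.
    UIGraphicsBeginImageContextWithOptions(self.containerView.bounds.size, NO, 0.0);
    // Renders the container and all of its subviews (image view + label) as they appear on screen.
    [self.containerView drawViewHierarchyInRect:self.containerView.bounds afterScreenUpdates:YES];
    UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return snapshot;
}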

Related

Fastest way to take a screenshot of a UIView

I've searched a lot but have only found two methods for taking a screenshot of a UIView.
The first is renderInContext:, which I've used like this:
CGContextRef context = [self createBitmapContextOfSize:CGSizeMake(nImageWidth, nImageHeight)];
CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, nImageHeight);
CGContextConcatCTM(context, flipVertical);
[self.layer setBackgroundColor:[UIColor clearColor].CGColor];
[self.layer renderInContext:context];
CGImageRef cgImage = CGBitmapContextCreateImage(context);
UIImage* background = [UIImage imageWithCGImage: cgImage];
CGImageRelease(cgImage);
The second is drawViewHierarchyInRect:, which I've used as:
UIImage *background = nil;
UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, self.window.screen.scale);
if ([self respondsToSelector:@selector(drawViewHierarchyInRect:afterScreenUpdates:)])
{
    [self drawViewHierarchyInRect:self.bounds afterScreenUpdates:YES];
}
background = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
I know that the second one is faster than the first, and it works for me on iPhone because the view is small, but when I capture on iPad the video becomes jerky.
Can anybody tell me a faster way of taking a screenshot?
Any help would be highly appreciated.
Regarding performance, the Apple Docs state the following:
In addition to -drawViewHierarchyInRect:afterScreenUpdates:, UIView now provides another two snapshot-related methods, -snapshotViewAfterScreenUpdates: and -resizableSnapshotViewFromRect:afterScreenUpdates:withCapInsets:. UIScreen also has -snapshotViewAfterScreenUpdates:.
Unlike UIView's -drawViewHierarchyInRect:afterScreenUpdates:, these methods return a UIView object. If you are looking for a new snapshot view, use one of these methods. It will be more efficient than calling -drawViewHierarchyInRect:afterScreenUpdates: to render the view contents into a bitmap image yourself. You can use the returned view as a visual stand-in for the current view/screen in your app. For example, you might use a snapshot view for animations where updating a large view hierarchy might be expensive.
There is a third method for taking a snapshot that is much, much quicker than either of these, but it returns a UIView.
- (UIView *)snapshotViewAfterScreenUpdates:(BOOL)afterUpdates
If you are just using the snapshot to place as a background "image", etc., then I'd use this instead.
However, this is only available on iOS 7 and later.
To use it, just do...
UIView *snapshotView = [someView snapshotViewAfterScreenUpdates:YES];
This method will return a snapshot image of a particular view:
-(UIImage *)createSnapShotImagesFromUIview
{
    // Render the view's layer into an image context sized to its frame.
    UIGraphicsBeginImageContext(view.frame.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [view.layer renderInContext:context];
    UIImage *img_screenShot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img_screenShot;
}

UIView to UIImage with layer borders

I have a UIView whose layer has two sublayers, each of which has a 1.5-pixel border around the outside. I am trying to create a UIImage from this view with the following code:
UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0f);
[self drawViewHierarchyInRect:self.bounds afterScreenUpdates:NO];
UIImage * image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
The code does return a UIImage, but the image is clipped; that is, it doesn't include all of the borders on the sublayers. I've tried tweaking the sizes and bounds, but to no effect. Any suggestions of what else I might try?
Thanks!
What happens if you send the parent layer a drawInContext: message instead of telling the view to draw itself?
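A sketch of that idea, using renderInContext: (which draws a layer and its sublayers) and padding the context so borders that extend outside the view's bounds are not clipped; the 1.5-point inset is an assumption based on the border width described above:

CGFloat inset = 1.5f; // assumed overflow of the sublayer borders on each edge
CGSize paddedSize = CGSizeMake(self.bounds.size.width + 2 * inset,
                               self.bounds.size.height + 2 * inset);
UIGraphicsBeginImageContextWithOptions(paddedSize, NO, 0.0f);
CGContextRef context = UIGraphicsGetCurrentContext();
// Shift the origin so content drawn just outside the bounds still lands inside the context.
CGContextTranslateCTM(context, inset, inset);
[self.layer renderInContext:context];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;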

Crop Image using CGRect

I have been trying to do this forever. I have a camera overlay. I want my final image to be the part of the image viewable from the built-in camera.
What I did was make a CGRect with dimensions equal to the square in the camera. Then I tried cropping the image using this function:
- (UIImage *)imageByCropping:(UIImage *)imageToCrop toRect:(CGRect)rect
{
    CGImageRef imageRef = CGImageCreateWithImageInRect([imageToCrop CGImage], rect);
    UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return croppedImage;
}
I called it like this
CGRect rect = CGRectMake(10, 72, 300, 300);
UIImage *realImage = [self imageByCropping:[self.capturedImages objectAtIndex:0] toRect:rect];
What I get is a bad quality image with the wrong orientation.
::EDIT::
With Nitin's answer I can crop the correct part of the screen but the problem is it crops the view that follows the camera view, 'the confirmation view'. I suspect this is because Nitin's code uses
UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
and the view controller in which all this happens is the view controller for the confirmation view, so that is the view being captured when this code executes. I will try to explain this with a small map:
CameraOverlay.xib(it uses this xib to create an overlay) <===== CameraOverlayViewController ---------> ConfirmationView
So when the view controller is first invoked (from a button on the tab bar), it opens the camera (UIImagePickerController) with an overlay over it. Then, once the user takes a picture, the image is shown on the ConfirmationView.
What I think is happening is when
UIGraphicsBeginImageContextWithOptions(self.view.frame.size, YES, 1.0);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
these lines are being executed, the View at that time is ConfirmationView.
Note: I call the function in the - (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info method.
Refer to the Drawing and Printing Guide.
The default coordinate system differs between Core Graphics and UIKit; I think your issue stems from that fact.
Using these lines may help you solve the issue:
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(context, 0.0, rect.size.height);
CGContextScaleCTM(context, 1.0, -1.0);
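If the poor quality and wrong orientation come from the crop itself, here is a sketch of a crop helper that accounts for the image's scale and orientation (the method name is hypothetical, and the rect is assumed to be given in points):

- (UIImage *)cropImage:(UIImage *)image toRect:(CGRect)rect
{
    // CGImageCreateWithImageInRect works in pixels, so convert the point-based rect.
    CGRect pixelRect = CGRectMake(rect.origin.x * image.scale,
                                  rect.origin.y * image.scale,
                                  rect.size.width * image.scale,
                                  rect.size.height * image.scale);
    CGImageRef imageRef = CGImageCreateWithImageInRect(image.CGImage, pixelRect);
    // Preserve the original scale and orientation so the result is not rotated or blurry.
    UIImage *cropped = [UIImage imageWithCGImage:imageRef
                                           scale:image.scale
                                     orientation:image.imageOrientation];
    CGImageRelease(imageRef);
    return cropped;
}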

Combine two images

I would like to take an image and duplicate it, then scale the duplicate up to 105% and overlay it on the original image.
What is the correct way to do this on iOS?
This is your basic code for drawing the image and then saving it as an image again:
- (UIImage *)renderImage:(UIImage *)image atSize:(CGSize)size
{
    UIGraphicsBeginImageContext(size);
    [image drawInRect:CGRectMake(0.0, 0.0, size.width, size.height)];
    // draw anything else into the context
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Where it says "draw anything else into the context", you can draw the image again at a reduced size by setting the appropriate rect to draw in. Then call the renderImage method with whatever size you want the full image to be. You can use CGContextSetAlpha to set the transparency.
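For the specific case in the question, here is a sketch that draws the original and then overlays a 105% copy on top with some transparency; the canvas size, centering, and alpha value are assumptions:

- (UIImage *)imageByOverlayingScaledCopy:(UIImage *)image
{
    // Canvas sized to the enlarged copy (an assumption about the desired output size).
    CGSize bigSize = CGSizeMake(image.size.width * 1.05, image.size.height * 1.05);
    UIGraphicsBeginImageContextWithOptions(bigSize, NO, image.scale);

    // Draw the original, centered, at its natural size.
    CGRect originalRect = CGRectMake((bigSize.width - image.size.width) / 2.0,
                                     (bigSize.height - image.size.height) / 2.0,
                                     image.size.width, image.size.height);
    [image drawInRect:originalRect];

    // Overlay the 105% copy with 50% opacity (arbitrary choice).
    [image drawInRect:CGRectMake(0, 0, bigSize.width, bigSize.height)
            blendMode:kCGBlendModeNormal
                alpha:0.5];

    UIImage *combined = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return combined;
}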

How to fill the background with image in landscape in IOS?

What I'm doing is filling a view's background with an image returned from a UIImagePickerController. The image fills fine in portrait mode; however, the image repeats when used as the background in landscape mode, and I have no idea why this is occurring. This is a private method I use to resize my image:
+ (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize landscape:(BOOL)landscape {
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
When this method is called, the newSize parameter is equal to the view's bounds size (self.view.bounds.size). The size is accessed after the view's transformation to landscape, but the image still doesn't fill properly.
This is the code that is called right after getting an image from the UIImagePickerController:
-(void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    UIImage *image = [info objectForKey:UIImagePickerControllerOriginalImage];
    if (image.size.width > image.size.height) {
        self.view.transform = CGAffineTransformRotate(self.view.transform, M_PI_2);
        self.composition.landscapemode = YES;
    } else {
        self.composition.landscapemode = NO;
    }
    self.composition.image = [NewCompositionViewController imageWithImage:image scaledToSize:self.view.bounds.size landscape:self.composition.landscapemode];
    self.view.backgroundColor = [UIColor colorWithPatternImage:self.composition.image];
    [self dismissViewControllerAnimated:YES completion:nil];
}
[UIColor colorWithPatternImage:] is meant for tiling images, so it's behaving as it should.
I would recommend creating a UIImageView with a screen-sized frame, setting the image on it, and adding it as a subview:
UIImageView *backgroundImage = [[UIImageView alloc] initWithFrame:self.view.frame];
[backgroundImage setImage:self.composition.image];
// choose best mode that works for you
[backgroundImage setContentMode:UIViewContentModeScaleAspectFill];
[self.view insertSubview:backgroundImage atIndex:0];
//OR
[self.view addSubview:backgroundImage];
[self.view sendSubviewToBack:backgroundImage];
Once it's added, you can rotate it and experiment with autoresizing masks to make sure it's displayed properly in all orientations. The exact method depends on whether or not you are using Auto Layout.
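For example, if you are not using Auto Layout, flexible autoresizing masks (a minimal sketch) keep the image view matched to its superview's size through rotation:

backgroundImage.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;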
UIViewContentModeScaleAspectFill may be more appropriate here than UIViewContentModeScaleAspectFit since the image is filling a background view. AspectFit will maintain the image's aspect ratio and make the entire image fit in the space, which may leave portions of the view transparent. AspectFill also maintains aspect ratio, but will fill the entire view and clip any portions of the image that don't match the view bounds.
I've been able to apply an "aspect fit" UIImage to a UIView background by combining a few AVFoundation and UIKit APIs. Here's one example:
// AVMakeRectWithAspectRatioInsideRect() comes from AVFoundation, so import <AVFoundation/AVFoundation.h>.
UIImage *image = [UIImage imageWithContentsOfFile:self.desiredBackgroundImageFilePathString];
UIGraphicsBeginImageContext(self.drawingImage.frame.size);
[image drawInRect:AVMakeRectWithAspectRatioInsideRect(image.size, self.drawingImage.bounds)];
image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
self.drawingImage.backgroundColor = [UIColor colorWithPatternImage:image];
This flows through a few simple but important steps:
1. Generate a UIImage from a file (or whatever).
2. Define the context of the image (the desired UIView for the background) with UIGraphicsBeginImageContext().
3. Use drawInRect in combination with AVMakeRectWithAspectRatioInsideRect to scale the image. Provide AVMakeRect...() with the image's .size and the bounds of the target UIView.
4. Apply the resized image to the desired image context.
5. Apply your now-resized image to the .backgroundColor of the target UIView using colorWithPatternImage.
I'm able to swap out images with both landscape and portrait aspect ratios without alignment or clipping issues using this code.
