Take screenshot of a text field in Swift? - ios

I am making an iOS application which captures a screenshot of a text view filled with some image. The problem is that the screenshot captures the entire screen.
How do I take a screenshot of just the UITextView using Swift?

The following worked for me:
// A scale of 0 uses the device's main screen scale
UIGraphicsBeginImageContextWithOptions(self.textField.bounds.size, false, 0)
// Render only the text field's own view hierarchy into the context
self.textField.drawViewHierarchyInRect(self.textField.bounds, afterScreenUpdates: true)
let copied = UIGraphicsGetImageFromCurrentImageContext()
imageView.image = copied
UIGraphicsEndImageContext()

You can save any view as an image by using the drawViewHierarchyInRect method:
UIGraphicsBeginImageContextWithOptions(self.textView.bounds.size, false, 0)
self.textView.drawViewHierarchyInRect(CGRectMake(0, 0, self.textView.frame.size.width, self.textView.frame.size.height), afterScreenUpdates: true)
let screenShot = UIGraphicsGetImageFromCurrentImageContext()
// Don't forget to close the context when you're done
UIGraphicsEndImageContext()

All the answers are good, but they omit the screen scale, which matters when creating the context.
Use UIView's drawViewHierarchyInRect method to capture the screen:
//Use screen scale for better images
let screenScale = UIScreen.mainScreen().scale;
//create context
UIGraphicsBeginImageContextWithOptions(yourView.bounds.size, false, screenScale);
//use view for drawing
yourView.drawViewHierarchyInRect(yourView.bounds, afterScreenUpdates: true)
//get captured image
let capturedImage = UIGraphicsGetImageFromCurrentImageContext();
//last dump context
UIGraphicsEndImageContext();
Another solution: use the UIView layer's renderInContext:
//Use screen scale for better images
let screenScale = UIScreen.mainScreen().scale;
//create context
UIGraphicsBeginImageContextWithOptions(yourView.bounds.size, false, screenScale);
//use view for drawing
yourView.layer.renderInContext(UIGraphicsGetCurrentContext()!)
//get captured image
let capturedImage = UIGraphicsGetImageFromCurrentImageContext();
//last dump context
UIGraphicsEndImageContext();
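
As a side note: on iOS 10 and later, UIGraphicsImageRenderer takes care of the screen scale automatically, so neither approach above needs an explicit scale or a begin/end pair. A minimal sketch in modern Swift (the function name is mine, yourView is any UIView):

import UIKit

func snapshot(of view: UIView) -> UIImage {
    // The renderer defaults to the main screen's scale
    let renderer = UIGraphicsImageRenderer(bounds: view.bounds)
    return renderer.image { _ in
        // Renders the full view hierarchy, same as drawViewHierarchyInRect above
        view.drawHierarchy(in: view.bounds, afterScreenUpdates: true)
    }
}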

Related

UIImage size doubled when converted to CGImage?

I am trying to crop part of an image taken with the iPhone's camera via the cropping(to:) method on a CGImage, but I am encountering a weird phenomenon: my UIImage's dimensions are doubled when converted with .cgImage, which, obviously, prevents me from doing what I want.
The flow is:
Picture is taken with the camera and goes into a full-screen imageContainerView
A "screenshot" of this imageContainerView is made with a UIView extension, effectively resizing the image to the container's dimensions
imageContainerView's .image is set to now be the "screenshot"
let croppedImage = imageContainerView.renderToImage()
imageContainerView.image = croppedImage
print(imageContainerView.image!.size) //yields (320.0, 568.0)
print(imageContainerView.image!.cgImage!.width, imageContainerView.image!.cgImage!.height) //yields (640, 1136) ??
extension UIView {
    func renderToImage(afterScreenUpdates: Bool = false) -> UIImage {
        let rendererFormat = UIGraphicsImageRendererFormat.default()
        rendererFormat.opaque = isOpaque
        let renderer = UIGraphicsImageRenderer(size: bounds.size, format: rendererFormat)
        let snapshotImage = renderer.image { _ in
            drawHierarchy(in: bounds, afterScreenUpdates: afterScreenUpdates)
        }
        return snapshotImage
    }
}
I have been searching around here with no success so far and would greatly appreciate a pointer or a suggestion on how/why the image size is suddenly doubled.
Thanks in advance.
This is because print(imageContainerView.image!.size) prints the size of the image in points, while print(imageContainerView.image!.cgImage!.width, imageContainerView.image!.cgImage!.height) prints the size of the underlying bitmap in pixels.
On the iPhone you are using there are 2 pixels for every point, both horizontally and vertically. The UIImage scale property gives you that factor, which in your case is 2.
See this link: iPhone Resolutions
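
To illustrate the relationship (a small sketch; the 320 x 568 / scale-2 numbers come from the question above):

import UIKit

func logDimensions(of image: UIImage) {
    // Pixel size is the point size multiplied by the scale factor
    let pixelWidth = image.size.width * image.scale    // e.g. 320 * 2 = 640
    let pixelHeight = image.size.height * image.scale  // e.g. 568 * 2 = 1136
    print("points: \(image.size), scale: \(image.scale), pixels: \(pixelWidth) x \(pixelHeight)")
}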

iOS UIImage: set image orientation without rotating the image itself

I am using AVCapture and the camera in portrait mode to capture an image. When I display the captured image, it looks fine. But when I convert it to a JPEG representation, then to a base64 string, and send it to the server, the stored image is already "rotated".
So I checked the image orientation before sending it to the server: it is UIImageOrientation.Right (so is there any way to capture an image in portrait mode but have the captured image's orientation be up? Well, I doubt that after some digging). After the server got the image, it did not do anything; it just ignored the orientation metadata, I guess.
Since the image I captured looks fine, I just want to preserve how it looks. However, I want to set the image orientation to be up. If I just set the orientation flag, the image no longer looks right.
So is there a way to set the orientation without causing the image to be rotated? Or, after setting the orientation to up, how do I rotate the actual pixels so the image looks right again?
- (UIImage *)removeRotationForImage:(UIImage *)image {
    if (image.imageOrientation == UIImageOrientationUp) return image;

    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawInRect:(CGRect){0, 0, image.size}];
    UIImage *normalizedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return normalizedImage;
}
Swift version of oldrinmendez's answer:
func removeRotationForImage(image: UIImage) -> UIImage {
    if image.imageOrientation == UIImageOrientation.Up {
        return image
    }

    UIGraphicsBeginImageContextWithOptions(image.size, false, image.scale)
    image.drawInRect(CGRect(origin: CGPoint(x: 0, y: 0), size: image.size))
    let normalizedImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return normalizedImage
}
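
For reference, the same normalization can be written with UIGraphicsImageRenderer on iOS 10+. A sketch in modern Swift (not from the original answers; the function name is mine):

import UIKit

func normalizedImage(from image: UIImage) -> UIImage {
    // Nothing to do if the orientation flag is already .up
    guard image.imageOrientation != .up else { return image }

    let format = UIGraphicsImageRendererFormat.default()
    format.scale = image.scale
    let renderer = UIGraphicsImageRenderer(size: image.size, format: format)
    // Drawing applies the orientation flag, baking the rotation into the pixels
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: image.size))
    }
}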

Screen shot taken on iOS isn't of a whole view

I'm trying to get an image of the contents of a UITextView, including the part that is not currently visible on the screen. I do it exactly as pointed out in the first answer here:
Getting a screenshot of a UIScrollView, including offscreen parts, except that I'm using a UITextView instead of a UIScrollView. Unfortunately, no matter what changes I make to the code, I only ever get part of my UITextView, about 1024x315 (I'm testing on the iPad simulator in landscape orientation). Why is the size so odd, and how can I capture the whole content?
That's how I'm saving the image (ResultsView is an instance of UITextView):
UIImage* image = nil;
UIGraphicsBeginImageContext(ResultsView.contentSize);
CGPoint savedContentOffset = ResultsView.contentOffset;
CGRect savedFrame = ResultsView.frame;
ResultsView.contentOffset = CGPointZero;
ResultsView.frame = CGRectMake(0, 0, ResultsView.contentSize.width, ResultsView.contentSize.height);
[ResultsView.layer renderInContext: UIGraphicsGetCurrentContext()];
image = UIGraphicsGetImageFromCurrentImageContext();
ResultsView.contentOffset = savedContentOffset;
ResultsView.frame = savedFrame;
UIGraphicsEndImageContext();
NSData * data = UIImagePNGRepresentation(image);
UIImageWriteToSavedPhotosAlbum([UIImage imageWithData:data], nil, nil, nil);
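
For reference, here is a sketch of the same save-resize-render-restore trick in modern Swift (the function name is mine). Two details may explain the odd size: UIGraphicsBeginImageContext always renders at scale 1, and the frame change needs a layout pass before the offscreen text is laid out:

import UIKit

func snapshotFullContent(of textView: UITextView) -> UIImage? {
    // Save the current geometry so it can be restored afterwards
    let savedOffset = textView.contentOffset
    let savedFrame = textView.frame

    // Grow the text view to its full content size and force a layout pass
    textView.contentOffset = .zero
    textView.frame = CGRect(origin: .zero, size: textView.contentSize)
    textView.layoutIfNeeded()

    // A scale of 0 uses the screen scale, unlike UIGraphicsBeginImageContext
    UIGraphicsBeginImageContextWithOptions(textView.contentSize, false, 0)
    defer { UIGraphicsEndImageContext() }
    if let context = UIGraphicsGetCurrentContext() {
        textView.layer.render(in: context)
    }
    let image = UIGraphicsGetImageFromCurrentImageContext()

    // Restore the original geometry
    textView.frame = savedFrame
    textView.contentOffset = savedOffset
    return image
}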

Getting black (empty) image from UIView drawViewHierarchyInRect:afterScreenUpdates:

After successfully using UIView's drawViewHierarchyInRect:afterScreenUpdates: method, introduced in iOS 7, to obtain an image representation (via UIGraphicsGetImageFromCurrentImageContext()) for blurring, my app also needed to obtain just a portion of a view. I managed to get it in the following manner:
UIImage *image;
CGSize blurredImageSize = [_blurImageView frame].size;
UIGraphicsBeginImageContextWithOptions(blurredImageSize, YES, .0f);
[aView drawViewHierarchyInRect: [aView bounds] afterScreenUpdates: YES];
image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
This lets me retrieve aView’s content following _blurImageView’s frame.
Now, however, I would need to obtain a portion of aView, but this time this portion would be “inside”. Below is an image representing what I would like to achieve.
I have already tried creating a new graphics context and setting its size to the portion’s size (red box) and calling aView to draw in the rect that represents the red box’s frame (of course its superview’s frame being equal to aView’s) but the image obtained is all black (empty).
After a lot of tweaking I managed to find something that does the job, though I strongly doubt this is the right way to go.
Here's my [edited-for-Stack Overflow] code that works:
- (UIImage *)imageOfPortionOfABiggerView
{
    UIView *bigViewToExtractFrom;   // the source view (assigned elsewhere)
    UIImage *image;
    UIImage *wholeImage;
    CGImageRef _image;
    CGRect imageToExtractFrame;     // the portion's frame (assigned elsewhere)
    CGFloat screenScale = [[UIScreen mainScreen] scale];

    // Have to scale the rect due to (I suppose) the screen's scale for Core Graphics.
    imageToExtractFrame = CGRectApplyAffineTransform(imageToExtractFrame, CGAffineTransformMakeScale(screenScale, screenScale));

    UIGraphicsBeginImageContextWithOptions([bigViewToExtractFrom bounds].size, YES, screenScale);
    [bigViewToExtractFrom drawViewHierarchyInRect:[bigViewToExtractFrom bounds] afterScreenUpdates:NO];
    wholeImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Obtain a CGImage[Ref] from another CGImage; this lets me specify the rect to extract.
    // However, since the image comes from a UIView at 2x scale (Retina), if you specify a rect
    // in points, CGImage will not take the screen's scale into consideration and will treat the
    // rect as pixels. You'd end up with an image from the wrong rect at half the size.
    _image = CGImageCreateWithImageInRect([wholeImage CGImage], imageToExtractFrame);
    wholeImage = nil;

    // Have to specify the image's scale because CGImage does not take the screen's scale into consideration.
    image = [UIImage imageWithCGImage:_image scale:screenScale orientation:UIImageOrientationUp];
    CGImageRelease(_image);

    return image;
}
I hope this will help anyone who stumbles upon my issue. Feel free to improve my snippet.
Thanks
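
For anyone reading today, a sketch of the same extraction in modern Swift using CGImage's cropping(to:); function and parameter names are mine, not from the original post:

import UIKit

func image(of portion: CGRect, in view: UIView) -> UIImage? {
    let scale = UIScreen.main.scale

    // Snapshot the whole view at screen scale
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, true, scale)
    defer { UIGraphicsEndImageContext() }
    view.drawHierarchy(in: view.bounds, afterScreenUpdates: false)
    guard let whole = UIGraphicsGetImageFromCurrentImageContext()?.cgImage else { return nil }

    // cropping(to:) works in pixels, so convert the point rect first
    let pixelRect = portion.applying(CGAffineTransform(scaleX: scale, y: scale))
    guard let cropped = whole.cropping(to: pixelRect) else { return nil }

    // Reattach the scale so the resulting UIImage reports the right point size
    return UIImage(cgImage: cropped, scale: scale, orientation: .up)
}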

How to crop an image and set its cropping bounds from the original image

I have a UIImageView *userImage whose size is full screen and a UIImageView *imageSquare whose size is 320x320. The user can play with userImage to make it bigger, change its position, etc. imageSquare is static and should be seen as the cropping view.
The code below can crop userImage to imageSquare.frame.size. My problem is that it crops from the top of userImage and not from imageSquare.frame.origin, meaning I need to offset the crop by the X and Y coordinates. It's my first time trying to do this, and everything I've tried so far fails to crop from imageSquare.frame.origin.
How could I crop the current view (the one the user is manipulating) of userImage from imageSquare.frame.origin?
CGSize pageSize = imageSquare.frame.size;
UIGraphicsBeginImageContext(pageSize);

CGContextRef resizedContext = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(resizedContext, userImage.frame.origin.x, userImage.frame.origin.y);
[userImage.layer renderInContext:resizedContext];

image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

if (image != nil) {
    NSLog(@"is not nil");
    NSData *imgData = UIImagePNGRepresentation(image);
    imageSquare.image = [[UIImage alloc] initWithData:imgData];
}
You'll need to translate by negative x and y:
CGContextTranslateCTM(resizedContext,
                      -userImage.frame.origin.x,
                      -userImage.frame.origin.y);
[userImage.layer renderInContext:resizedContext];
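
The same idea in a Swift sketch (names are placeholders, not from the question; this assumes userImage and the crop rect share a superview's coordinate space and userImage has no transform applied):

import UIKit

func cropSnapshot(of userImage: UIImageView, to cropFrame: CGRect) -> UIImage? {
    // Context sized to the crop area, at screen scale
    UIGraphicsBeginImageContextWithOptions(cropFrame.size, false, UIScreen.main.scale)
    defer { UIGraphicsEndImageContext() }
    guard let context = UIGraphicsGetCurrentContext() else { return nil }

    // Shift the drawing so the crop origin lands at (0, 0) in the context
    context.translateBy(x: userImage.frame.origin.x - cropFrame.origin.x,
                        y: userImage.frame.origin.y - cropFrame.origin.y)
    userImage.layer.render(in: context)

    return UIGraphicsGetImageFromCurrentImageContext()
}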
