I want to create an image from a UIImageView and a couple of UIViews attached to it.
The image should look exactly like the screenshot, but a plain screenshot comes out at exactly the screen size. If I create the image at a larger dimension, the screenshot gets distorted, even though the UIImageView's image is a very high resolution image.
So I think this problem can be solved by compositing the image from its components rather than just saving a screenshot.
I think this will do it (it doesn't matter how many views you have):
// Render the whole window hierarchy into an image context.
UIGraphicsBeginImageContext(self.window.bounds.size);
[self.window.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *data = UIImagePNGRepresentation(image);
[data writeToFile:@"foo.png" atomically:YES]; // needs a full, writable path; see below
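On iOS the app sandbox means a bare filename like "foo.png" is generally not writable. A minimal sketch of building a path in the app's Documents directory instead:
NSString *docs = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) firstObject];
NSString *path = [docs stringByAppendingPathComponent:@"foo.png"];
[data writeToFile:path atomically:YES];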
In my app, users can place text on top of images, like Snapchat, and then save the image to their device. I simply add the text view on top of the image view and snapshot it using this code:
UIGraphicsBeginImageContext(imageView.layer.bounds.size);
[imageView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *savedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
But when I compare the text on my image to the text on a Snapchat image, it is significantly different: Snapchat's text overlay is significantly sharper than mine, which looks very pixelated. And I am not compressing the image at all, just saving it as-is using ALAssetLibrary.
Thank You
When you use UIGraphicsBeginImageContext, it defaults to a 1x scale (i.e. non-retina resolution). You probably want:
UIGraphicsBeginImageContextWithOptions(imageView.layer.bounds.size, YES, 0);
Which will use the same scale as the screen (probably 2x). The final parameter is the scale of the resulting image; 0 means "whatever the screen is".
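Applied to your snippet, the retina-aware version would look like this (a sketch; passing YES for opaque assumes the image fills the view):
UIGraphicsBeginImageContextWithOptions(imageView.layer.bounds.size, YES, 0); // 0 = screen scale
[imageView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *savedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();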
If your imageView is scaled to the size of the screen, then I think your JPEG will also be limited to that resolution. If setting the scale on UIGraphicsBeginImageContextWithOptions does not give you enough resolution, you can do your drawing in a larger offscreen image. Something like:
UIGraphicsBeginImageContext(imageSize);
// Draw the full-resolution image to fill the offscreen context.
[image drawInRect:CGRectMake(0, 0, imageSize.width, imageSize.height)];
// Scale the context so the screen-sized overlay renders at the larger image size.
CGContextScaleCTM(UIGraphicsGetCurrentContext(), scale, scale);
[textOverlay.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
You need to set the scale value to map the textOverlay view, which is probably at screen size, up to the offscreen image size.
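For example, if the overlay spans the screen width, the scale is just the ratio of the two widths (a one-line sketch, using imageSize and textOverlay from the snippet above):
CGFloat scale = imageSize.width / textOverlay.bounds.size.width;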
Alternatively, and probably simpler, you can start with a larger UIImageView but put it inside another UIView that scales it to fit on screen. Do the same with your text overlay view. Then your code for creating the composite should work at whatever resolution you choose for the UIImageView; a sketch follows.
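A minimal sketch of that container approach (all names here are illustrative, not from your code):
CGSize fullSize = fullImage.size; // the image's native resolution
UIView *container = [[UIView alloc] initWithFrame:CGRectMake(0, 0, fullSize.width, fullSize.height)];
UIImageView *bigImageView = [[UIImageView alloc] initWithFrame:container.bounds];
bigImageView.image = fullImage;
[container addSubview:bigImageView];
[container addSubview:textOverlay]; // overlay laid out in full-image coordinates

// Scale the whole container down to fit the screen for display;
// rendering container.layer offscreen at fullSize still yields full resolution.
CGFloat fit = self.view.bounds.size.width / fullSize.width;
container.transform = CGAffineTransformMakeScale(fit, fit);
[self.view addSubview:container];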
I have an image which is 200x200, but I want to display it full screen on an iPhone 5. When I display it in a full-screen image view it is stretched. What can I do?
It's not possible. A single 200x200 image cannot fill a space larger than 200x200 without stretching/scaling.
You do have various options for displaying it in a UIImageView, though. See Apple's documentation on UIViewContentMode for the contentMode options.
UIImage *img = [UIImage imageNamed:@"glyph"];
self.imageView.image = img;
// Letterbox the image instead of stretching it to fill the view.
self.imageView.contentMode = UIViewContentModeScaleAspectFit;
You need to resize the image to the full screen size and then display it full screen. For reference, please check the following answer:
How to resize the image programmatically in objective-c in iphone
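A minimal sketch along those lines, with the 200x200 image here called img (note that upscaling interpolates pixels, so the result will look soft rather than sharper):
CGSize targetSize = [UIScreen mainScreen].bounds.size;
UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0);
[img drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
UIImage *resized = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();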
Hope it helps you, Sir :)
I am creating an iOS drawing application that allows the user to place arrows on the page and position/stretch/rotate them. When the user saves the image, I must modify the background image with the PNG arrows at the exact coordinates they were dragged to.
The 3 drag-able parts will be UIImages with cap insets to keep the arrows looking normal when stretched. The canvas UIImageView will have the same aspect ratio as the actual image, ensuring no black space is visible.
My question is this: after a drag-able part has been stretched, how do I save the result of its UIImageView to disk/memory for later compositing onto the background canvas (the background UIImageView)? It is important that the saved image looks exactly the same as the UIImageView drag-able part that represented it; that is, it must retain exactly how the image is displayed, including the cap insets.
I am ignoring rotation of the drag-able parts for now because I suspect that would be the easy part.
Try using this code to get a screenshot of a desired view's layer:
- (UIImage *)imageFromLayer:(CALayer *)layer {
    // Pass 0 as the scale so the context matches the screen scale (retina-aware).
    UIGraphicsBeginImageContextWithOptions([layer frame].size, NO, 0);
    [layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outputImage;
}
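For example, to capture a stretched arrow exactly as displayed (arrowImageView is a hypothetical name for one of your drag-able parts):
UIImage *arrowSnapshot = [self imageFromLayer:arrowImageView.layer];
Because this renders the layer as it appears on screen, the cap-inset stretching is baked into the snapshot.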
I want to process a subregion of a UIImage in an iOS app. Following this question, I now have a routine to extract the region in question as a UIImage that I can now manipulate. Is there a similarly convenient method for placing the region back into the original image? The alternative I'm considering is a bytewise copy, which seems extremely low-level to me.
You could draw the two images on top of each other, combining them into one image.
Assuming you have the original image and the modified part:
UIGraphicsBeginImageContext(originalImage.size);
[originalImage drawAtPoint:CGPointMake(0, 0)];
[modifiedPart drawAtPoint:/* Upper left corner of the modified part */];
UIImage *combined = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
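If you extracted the subregion with a rect in the original image's coordinate space, say regionRect (a hypothetical name), its origin is the point to draw the part back at:
[modifiedPart drawAtPoint:regionRect.origin];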
How to take a screenshot programmatically?
You can use UIGraphicsBeginImageContextWithOptions for this purpose. For example:
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, self.view.opaque, 0.0);
[self.myView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *theImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *theImageData = UIImageJPEGRepresentation(theImage, 1.0); // you can use PNG too
[theImageData writeToFile:@"example.jpeg" atomically:YES]; // use a full, writable path in practice
Here:
1. First I set up the image context. I've used myView, which is a web view in my case, but you can pass whatever view you wish. This captures the part of the web view that is visible on screen.
2. Using UIGraphicsGetImageFromCurrentImageContext() I convert the rendered context into an image.
3. With UIGraphicsEndImageContext() I end the context.
4. I save the image into an NSData because I had to mail the screenshot, and NSData seemed a good option for sending or saving.
EDIT: To add it to the camera roll, call the following after UIGraphicsEndImageContext():
UIImageWriteToSavedPhotosAlbum(theImage, nil, NULL, NULL);
Have a look at this answer. It also takes care of retina displays.
Actually, to explain the process:
1. Choose an image context size (probably the size of the layer you need a screenshot of).
2. Render the layer you want to capture into the created context.
3. Obtain the image from the context, and you are done!
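Put together, those three steps make a small reusable helper (a sketch; ImageFromLayer is my own name for it):
UIImage *ImageFromLayer(CALayer *layer) {
    UIGraphicsBeginImageContextWithOptions(layer.bounds.size, NO, 0); // 0 = current screen scale
    [layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}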