How to take a screenshot using code on iOS?

How to take a screenshot programmatically?

You can use UIGraphicsBeginImageContextWithOptions for this purpose.
For example:
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, self.view.opaque, 0.0);
[self.myView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *theImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *theImageData = UIImageJPEGRepresentation(theImage, 1.0); // you can use PNG too
// Write to a location the app can actually write to, e.g. the Documents directory
NSString *path = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES)[0]
                  stringByAppendingPathComponent:@"example.jpeg"];
[theImageData writeToFile:path atomically:YES];
Here:
1. First I create the image context. I've used myView, which is a web view, but you can use whatever view you wish. This captures the web view as it appears on screen.
2. Using UIGraphicsGetImageFromCurrentImageContext() I convert the screenshot of my web view into an image.
3. With UIGraphicsEndImageContext() I end the context.
4. I save the image into an NSData object because I had to mail the screenshot, and keeping it in NSData seemed a good option for sending or saving.
EDIT: To add it to the camera roll, write:
UIImageWriteToSavedPhotosAlbum(theImage, nil, NULL, NULL);
after UIGraphicsEndImageContext();

Have a look at this answer. It also takes care of the Retina display.
To explain the process:
Choose an image context size (probably the size of the layer you need a screenshot of)
Render the layer you want to capture into the created context
Obtain the image from the context and you are done!
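The three steps above can be wrapped in a small helper. This is a sketch; the method name captureView: is illustrative, not a UIKit API:

```objc
// Sketch of the three-step process described above.
- (UIImage *)captureView:(UIView *)view
{
    // 1. Choose a context size (here, the view's bounds) at screen scale (0.0)
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0);
    // 2. Render the view's layer into the current context
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    // 3. Obtain the image and close the context
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
```

Passing 0.0 as the scale makes the context match the device's screen scale, which is what handles Retina displays correctly.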

Related

Objective-C How does snapchat make the text on top of an image/video so sharp and not pixelated?

My app lets users place text on top of images, as in Snapchat, and then save the result to their device. I simply add the text view on top of the image and capture it using this code:
UIGraphicsBeginImageContext(imageView.layer.bounds.size);
[imageView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage* savedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
But when I compare the text on my image to the text on a Snapchat image, it is significantly different: Snapchat's text on top of the image is much sharper than mine, which looks very pixelated. I am not compressing the image at all, just saving it as-is using ALAssetLibrary.
Thank You
When you use UIGraphicsBeginImageContext, it defaults to a 1x scale (i.e. non-retina resolution). You probably want:
UIGraphicsBeginImageContextWithOptions(imageView.layer.bounds.size, YES, 0);
This will use the same scale as the screen (probably 2x). The final parameter is the scale of the resulting image; 0 means "whatever the screen is".
If your imageView is scaled to the size of the screen, then I think your jpeg will also be limited to that resolution. If setting the scale on UIGraphicsBeginImageContextWithOptions does not give you enough resolution, you can do your drawing in a larger offscreen image. Something like:
UIGraphicsBeginImageContext(imageSize);
// Draw the full-resolution image first
[image drawInRect:CGRectMake(0, 0, imageSize.width, imageSize.height)];
// Scale the CTM so the screen-sized overlay renders at image resolution
CGContextScaleCTM(UIGraphicsGetCurrentContext(), scale, scale);
[textOverlay.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
You need to set the "scale" value to scale the textOverlay view, which is probably at screen size, to the offscreen image size.
Alternatively, and probably simpler, you can start with a larger UIImageView but put it within another UIView that scales it to fit on screen. Do the same with your text overlay view. Then your code for creating the composite should work at whatever resolution you choose for the UIImageView.

iOS Create image from UIImageView and a couple of UIViews attached to it

I want to create an image from a UIImageView and a couple of UIViews attached to it.
The image should look exactly like the screenshot, but a plain screenshot is created at exactly the screen's size. If I create the image at larger dimensions, the screenshot gets distorted, even though the UIImageView's image is a really high-resolution one.
So I think this problem can be solved by composing the image from its components rather than just saving the screenshot.
I think this will do it.
(It doesn't matter how many views you have.)
UIGraphicsBeginImageContext(self.window.bounds.size);
[self.window.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *data = UIImagePNGRepresentation(image);
// Write to a writable location, e.g. the Documents directory
NSString *path = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES)[0]
                  stringByAppendingPathComponent:@"foo.png"];
[data writeToFile:path atomically:YES];

Render UIView in full fidelity for saving

My application allows users to composite images (add and then rotate, scale, move) on top of a background image and save the final edit to the camera roll.
The main editing view controller's view is the top of a UIView hierarchy. To this view I add as a subview a UIImageView with an image from the camera roll or from the camera. The user then adds additional characters (dogs, cats, etc.), each implemented as a separate UIView that can be rotated, moved, and scaled.
When the user is done editing, I want to save the entire scene to the camera roll at the highest resolution available: the resolution of the background image, 1936×2592 pixels (the UIImageView) on the iPhone 4.
At the moment I use the code below, but this only gives me the display resolution of the scene, not the full resolution of the image in memory. Any suggestions? (I tried increasing the context size, in the code below, to the full background size, but no luck there.)
// create a CG context
UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, [UIScreen mainScreen].scale);
// render into the new context
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
// get the image out of the context
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
The answer seems to be simple enough and is also explained in this thread: https://stackoverflow.com/a/11177322/207616
Simply scale up the drawing context before calling renderInContext: to achieve the desired output image size:
CGContextScaleCTM(context, scaleFactor, scaleFactor);
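Putting that together, a sketch of the full sequence (targetSize and the 1936×2592 dimensions come from the question; the key point is that the CTM is scaled before rendering):

```objc
// Desired output size: the background image's full pixel dimensions
CGSize targetSize = CGSizeMake(1936, 2592);
// Ratio between the full-resolution output and the on-screen view
CGFloat scaleFactor = targetSize.width / view.bounds.size.width;

UIGraphicsBeginImageContextWithOptions(targetSize, NO, 1.0);
CGContextRef context = UIGraphicsGetCurrentContext();
// Scale the CTM *before* rendering so the screen-sized view
// hierarchy fills the full-resolution context
CGContextScaleCTM(context, scaleFactor, scaleFactor);
[view.layer renderInContext:context];
UIImage *fullResImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
```

Passing an explicit scale of 1.0 keeps the context at exactly targetSize pixels, since the scaling is already handled by the CTM.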

iphone image merging

I want to merge multiple images into a single image on a button click. My problem is: if the user touches an image in a table view, the corresponding image has to be merged with the main image on the next page. How can this be done?
This is the main image on the page; this is the upper cloth.
I have to fit the upper cloth onto the caricature.
Use UIGraphicsBeginImageContext, then call the drawAtPoint: method of the images, then use UIGraphicsGetImageFromCurrentImageContext to get the merged image. Something like this (not checked, written from memory; correct any syntax errors):
UIGraphicsBeginImageContext(yourFirstImage.size);
[yourFirstImage drawAtPoint:CGPointMake(0,0)];
[yourSecondImage drawAtPoint:CGPointMake(0,0)];
UIImage *mergedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

Dynamically create a PNG image on iOS

How do I create a PNG image from text in iOS?
How to save a view as an image:
UIGraphicsBeginImageContext(myView.bounds.size);
[myView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
It's covered in one of the first lectures in this term of Stanford's free iPhone programming course. There's a video and PDF notes.
Basically, you can create a UIImage and use its graphics context to draw into it, then save a PNG representation of the image.
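A minimal sketch of that approach, drawing an NSString straight into an image context and taking a PNG representation (the text, font size, and color here are illustrative):

```objc
NSString *text = @"Hello";
// Attributes control the rendered font and color
NSDictionary *attributes = @{ NSFontAttributeName : [UIFont systemFontOfSize:24],
                              NSForegroundColorAttributeName : [UIColor blackColor] };
// Size the context to fit the text exactly
CGSize size = [text sizeWithAttributes:attributes];

UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
[text drawAtPoint:CGPointZero withAttributes:attributes];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// PNG bytes, ready to write to disk or send over the network
NSData *pngData = UIImagePNGRepresentation(image);
```

Passing NO for opaque keeps the background transparent, which PNG (unlike JPEG) preserves.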
