I capture an image from a UIView's content as shown below and it works fine, but when I run it on my iPhone 5S with iOS 10.3.3 the captured image contains only a black view.
-(UIImage *)captureImageFromView:(UIView *)view {
UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0);
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
What's the issue?
Thanks in advance.
You can do it the way explained by @LGP.
Call the following method on the main thread, like below.
-(UIImage *)captureImageFromView:(UIView *)view {
UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0);
[view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
Main thread code.
dispatch_async(dispatch_get_main_queue(), ^{
UIImage *imgResult = [self captureImageFromView:yourView];
});
Try using drawViewHierarchyInRect: instead. If this doesn't work either, the problem is likely that you are capturing the wrong view, or capturing it after it has been altered in some way.
-(UIImage *)captureImageFromView:(UIView *)view {
UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0);
[view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
FOR SWIFT
func snapshot() -> UIImage {
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, true, UIScreen.main.scale)
self.view.layer.render(in: UIGraphicsGetCurrentContext()!)
let img = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return img!
}
FOR OBJECTIVE-C
- (UIImage *)captureView {
UIGraphicsBeginImageContext(yourview.frame.size);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextFillRect(ctx, yourview.frame);
[yourview.layer renderInContext:ctx];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSLog(#"size %f %f",newImage.size.height,newImage.size.width);
return newImage;
}
If you need to, hide buttons etc. before the call so that they won't show up in the image either.
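For example, a minimal sketch (shareButton is just a hypothetical outlet name) that hides a button for the duration of the capture and restores it afterwards:
shareButton.hidden = YES;
UIImage *snapshot = [self captureView]; // the method above
shareButton.hidden = NO;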
I am using a method to convert a UIView to a UIImage and it does a great job when the UIView (to be converted to a UIImage) is already present/displayed. But my requirement is to convert a UIView to a UIImage without displaying the UIView. Unfortunately, this code fails in that case and I am stuck. Any help will be appreciated.
I am using the following method:
+ (UIImage *) imageWithView:(UIView *)view
{
UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, [[UIScreen mainScreen] scale]);
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage * img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return img;
}
Your code is likely failing because you're not laying out the subviews of your view (which is done automatically when you add a view as a subview). Try something like the method I wrote below:
+ (UIImage *)imageFromView:(UIView *)view sized:(CGSize)size
{
// layout the view
view.frame = CGRectMake(0, 0, size.width, size.height);
[view setNeedsLayout];
[view layoutIfNeeded];
// render the image
UIGraphicsBeginImageContextWithOptions(size, view.opaque, 0.0f);
[view drawViewHierarchyInRect:view.bounds afterScreenUpdates:NO];
UIImage *renderedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return renderedImage;
}
Assuming you already have a working view, this code should convert the UIView to a UIImage (I'm using it to convert a gradient into an image and display the image on a UIProgressView).
Swift:
let renderer = UIGraphicsImageRenderer(size: gradientView.bounds.size)
let image = renderer.image { ctx in
gradientView.drawHierarchy(in: gradientView.bounds, afterScreenUpdates: true)
}
Objective C:
UIGraphicsImageRenderer *renderer = [[UIGraphicsImageRenderer alloc] initWithSize:gradientView.bounds.size];
UIImage *gradientImage = [renderer imageWithActions:^(UIGraphicsImageRendererContext * _Nonnull rendererContext) {
[gradientView drawViewHierarchyInRect:gradientView.bounds afterScreenUpdates:true];
}];
_progressView.progressImage = gradientImage;
The above code should allow you to convert any UIView to a UIImage. You should ideally run it in viewWillAppear, as at that point the view controller has the correct layout sizes. If you have any problems getting this to work, you can have a look at these example projects that I made for a guide on this very topic! Objective C, Swift.
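For example, a minimal sketch of calling this from viewWillAppear:, assuming gradientView and _progressView are properties/ivars of the view controller as in the snippet above:
- (void)viewWillAppear:(BOOL)animated {
[super viewWillAppear:animated];
// The answer above recommends doing this here, when the layout sizes are correct.
UIGraphicsImageRenderer *renderer = [[UIGraphicsImageRenderer alloc] initWithSize:gradientView.bounds.size];
_progressView.progressImage = [renderer imageWithActions:^(UIGraphicsImageRendererContext * _Nonnull rendererContext) {
[gradientView drawViewHierarchyInRect:gradientView.bounds afterScreenUpdates:YES];
}];
}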
Hide all subviews, then snapshot the UIView to a UIImage; see the code below.
+ (UIImage *)custom_snapshotScreenInView:(UIView *)contentView
{
if (!contentView) {
return nil;
}
CGSize size = contentView.bounds.size;
UIGraphicsBeginImageContextWithOptions(size, NO, [UIScreen mainScreen].scale);
CGRect rect = contentView.bounds;
[contentView drawViewHierarchyInRect:rect afterScreenUpdates:YES];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil);
return image;
}
+ (UIImage *)custom_snapshotScreenWithoutSubviews:(UIView *)contentView
{
// save hidden view's hash
NSMutableArray *hideViewsHashs = [[NSMutableArray alloc]initWithCapacity:contentView.subviews.count];
for (UIView *subview in contentView.subviews) {
if (subview.hidden == NO) {
[hideViewsHashs addObject:@(subview.hash)];
NSLog(@"Dikey:video:snap:hash = %@", @(subview.hash));
}
subview.hidden = YES;
}
// view to image
UIImage *image = [UIImage custom_snapshotScreenInView:contentView];
// restore
for (UIView *subview in contentView.subviews) {
if ([hideViewsHashs containsObject:@(subview.hash)]) {
subview.hidden = NO;
NSLog(@"Dikey:video:snap:restore:hash = %@", @(subview.hash));
}
}
// finish
return image;
}
I want to convert a UIView to a UIImage
- (UIImage *)renderToImage:(UIView *)view {
if(UIGraphicsBeginImageContextWithOptions != NULL) {
UIGraphicsBeginImageContextWithOptions(view.frame.size, NO, 0.0);
} else {
UIGraphicsBeginImageContext(view.frame.size);
}
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
Two questions:
I have subviews on my view. Is there a way to create an image of the view without any of the subviews? Ideally I'd like to not have to remove them just to add them back later.
Also, it isn't rendering the images properly on retina devices. I followed the advice here to use context with options but it did not help. How to capture UIView to UIImage without loss of quality on retina display
You have to hide the subviews that you don't want to appear in the image of the view. Below is a method that renders an image of the view and works for retina devices too.
- (UIImage *)imageOfView:(UIView *)view
{
// This if-else clause checks whether the device supports retina display, so that
// we can render the image correctly for both retina and non-retina devices.
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)])
{
UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0);
} else {
UIGraphicsBeginImageContext(view.bounds.size);
}
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage * img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return img;
}
- (CGImageRef)toImageRef
{
int width = self.frame.size.width;
int height = self.frame.size.height;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ref = CGBitmapContextCreate(NULL, width, height, 8, width*4, colorSpace, kCGImageAlphaNoneSkipLast);
CGColorSpaceRelease(colorSpace); // the bitmap context retains the color space
[self drawRect:CGRectMake(0.0, 0.0, width, height) withContext:ref];
CGImageRef result = CGBitmapContextCreateImage(ref);
CGContextRelease(ref);
return result;
}
- (void)drawRect:(CGRect)rect
{
CGContextRef context = UIGraphicsGetCurrentContext();
// move your drawing commands from here...
[self drawRect:rect withContext:context];
}
- (void)drawRect:(CGRect)rect withContext:(CGContextRef)context
{
// ...to here
}
I want a screenshot of what is on the screen saved into the saved photos library.
Considering a check for retina display, use the following code snippet:
#import <QuartzCore/QuartzCore.h>
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
UIGraphicsBeginImageContextWithOptions(self.window.bounds.size, NO, [UIScreen mainScreen].scale);
} else {
UIGraphicsBeginImageContext(self.window.bounds.size);
}
[self.window.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *imageData = UIImagePNGRepresentation(image);
if (imageData) {
[imageData writeToFile:#"screenshot.png" atomically:YES];
} else {
NSLog(#"error while taking screenshot");
}
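Note that a bare relative path like @"screenshot.png" won't land anywhere useful inside an app sandbox; a minimal sketch (not part of the original answer) of writing into the app's Documents directory instead, or use UIImageWriteToSavedPhotosAlbum to go straight to the photo library:
NSString *documentsDir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) firstObject];
NSString *filePath = [documentsDir stringByAppendingPathComponent:@"screenshot.png"];
[imageData writeToFile:filePath atomically:YES];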
The method below works for OpenGL objects also.
//iOS7 or above
- (UIImage *) screenshot {
CGSize size = CGSizeMake(your_width, your_height);
UIGraphicsBeginImageContextWithOptions(size, NO, [UIScreen mainScreen].scale);
CGRect rec = CGRectMake(0, 0, your_width, your_height);
[_viewController.view drawViewHierarchyInRect:rec afterScreenUpdates:YES];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, self.view.opaque, 0.0);
[self.myView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *imageData = UIImageJPEGRepresentation(image, 1.0 ); //you can use PNG too
[imageData writeToFile:#"image1.jpeg" atomically:YES];
- (UIImage*) getGLScreenshot {
NSInteger myDataLength = 320 * 480 * 4;
// allocate array and read pixels into it.
GLubyte *buffer = (GLubyte *) malloc(myDataLength);
glReadPixels(0, 0, 320, 480, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
// gl renders "upside down" so swap top to bottom into new array.
// there's gotta be a better way, but this works.
GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
for(int y = 0; y <480; y++)
{
for(int x = 0; x <320 * 4; x++)
{
buffer2[(479 - y) * 320 * 4 + x] = buffer[y * 4 * 320 + x];
}
}
free(buffer); // the flipped pixels are now in buffer2, so the original buffer can be freed
// make data provider with data (buffer2 is handed off without a release callback, as in the original answer).
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
// prep the ingredients
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * 320;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
// make the cgimage
CGImageRef imageRef = CGImageCreate(320, 480, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
// then make the uiimage from that
UIImage *myImage = [UIImage imageWithCGImage:imageRef];
// release the Core Graphics objects we own; the UIImage keeps its own reference to the image data
CGImageRelease(imageRef);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpaceRef);
return myImage;
}
- (void)saveGLScreenshotToPhotosAlbum {
UIImageWriteToSavedPhotosAlbum([self getGLScreenshot], nil, nil, nil);
}
Source.
IN SWIFT
func captureScreen() -> UIImage
{
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, false, 0);
self.view.drawViewHierarchyInRect(view.bounds, afterScreenUpdates: true)
let image: UIImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return image
}
As of iOS 10 this gets a bit simpler. UIKit comes with UIGraphicsImageRenderer, which allows you to
... accomplish drawing tasks, without having to handle configuration such as color depth and image scale, or manage Core Graphics contexts
Apple Docs - UIGraphicsImageRenderer
So you can now do something like this:
let renderer = UIGraphicsImageRenderer(size: someView.bounds.size)
let image = renderer.image(actions: { context in
someView.drawHierarchy(in: someView.bounds, afterScreenUpdates: true)
})
Many of the answers here worked for me in most cases. But when trying to take a snapshot of an ARSCNView, I was only able to do it using the method described above. It might be worth noting that at this time ARKit is still in beta and Xcode is in beta 4.
See this post; it looks like you can use UIGetScreenImage() for now.
This will save a screenshot and also return it.
-(UIImage *)capture{
UIGraphicsBeginImageContext(self.view.bounds.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *imageView = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(imageView, nil, nil, nil); //if you need to save
return imageView;
}
I think the following snippet will help if you want to capture the full screen (except for the status bar); just replace AppDelegate with your app delegate class name if necessary.
- (UIImage *)captureFullScreen {
AppDelegate *_appDelegate = (AppDelegate *)[UIApplication sharedApplication].delegate;
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
// for retina displays
UIGraphicsBeginImageContextWithOptions(_appDelegate.window.bounds.size, NO, [UIScreen mainScreen].scale);
[_appDelegate.window drawViewHierarchyInRect:_appDelegate.window.bounds afterScreenUpdates:NO];
} else {
// for non-retina displays
UIGraphicsBeginImageContext(_appDelegate.window.bounds.size);
[_appDelegate.window drawViewHierarchyInRect:_appDelegate.window.bounds afterScreenUpdates:NO];
}
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
I couldn't find an answer with a Swift 3 implementation, so here it goes.
static func screenshotOf(window: UIWindow) -> UIImage? {
UIGraphicsBeginImageContextWithOptions(window.bounds.size, true, UIScreen.main.scale)
guard let currentContext = UIGraphicsGetCurrentContext() else {
return nil
}
window.layer.render(in: currentContext)
guard let image = UIGraphicsGetImageFromCurrentImageContext() else {
UIGraphicsEndImageContext()
return nil
}
UIGraphicsEndImageContext()
return image
}
This will work with Swift 4.2; the screenshot will be saved in the photo library, but please don't forget to add the NSPhotoLibraryAddUsageDescription key to your Info.plist (see the example entry after the code).
@IBAction func takeScreenshot(_ sender: UIButton) {
//Start full Screenshot
print("full Screenshot")
UIGraphicsBeginImageContext(card.frame.size)
view.layer.render(in: UIGraphicsGetCurrentContext()!)
var sourceImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
UIImageWriteToSavedPhotosAlbum(sourceImage!, nil, nil, nil)
//Start partial Screenshot
print("partial Screenshot")
UIGraphicsBeginImageContext(card.frame.size)
sourceImage?.draw(at: CGPoint(x:-25,y:-100)) //the screenshot starts at -25, -100
var croppedImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
UIImageWriteToSavedPhotosAlbum(croppedImage!, nil, nil, nil)
}
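For reference, the Info.plist entry mentioned above might look like this (NSPhotoLibraryAddUsageDescription is the standard key; the description string is just an example):
<key>NSPhotoLibraryAddUsageDescription</key>
<string>This app saves screenshots to your photo library.</string>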
I'm answering this question as it's highly viewed, and there are many answers out there, plus there's Swift and Obj-C.
Disclaimer: This is not my code, nor my answers; this is only to help people who land here find a quick answer. There are links to the original answers to give credit where credit is due! Please honor the original answers with a +1 if you use their answer!
Using QuartzCore
#import <QuartzCore/QuartzCore.h>
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
UIGraphicsBeginImageContextWithOptions(self.window.bounds.size, NO, [UIScreen mainScreen].scale);
} else {
UIGraphicsBeginImageContext(self.window.bounds.size);
}
[self.window.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *imageData = UIImagePNGRepresentation(image);
if (imageData) {
[imageData writeToFile:#"screenshot.png" atomically:YES];
} else {
NSLog(#"error while taking screenshot");
}
In Swift
func captureScreen() -> UIImage
{
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, false, 0);
self.view.drawViewHierarchyInRect(view.bounds, afterScreenUpdates: true)
let image: UIImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return image
}
Note: As is the nature of programming, updates may need to be made, so please edit or let me know! Also, if I failed to include an answer/method worth including, feel free to let me know as well!
For iOS 7.0 or above:
If you want to take a snapshot of a view, say myView, you can do it with a single line:
[myView snapshotViewAfterScreenUpdates:NO];
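Note that this returns a UIView (a live snapshot view), not a UIImage, so it's suited to displaying a copy of the view rather than exporting an image. A minimal usage sketch (containerView is just a placeholder name):
UIView *snapshot = [myView snapshotViewAfterScreenUpdates:NO];
snapshot.frame = containerView.bounds;
[containerView addSubview:snapshot];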
Get Screenshot From View
-(UIImage *)getScreenshotImage {
if ([[UIScreen mainScreen] scale] == 2.0) {
UIGraphicsBeginImageContextWithOptions(self.view.frame.size, FALSE, 2.0);
} else {
UIGraphicsBeginImageContextWithOptions(self.view.frame.size, FALSE, 1.0);
}
[self.view.window.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage * result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return result;
}
Save Image to Photos
UIImageWriteToSavedPhotosAlbum(YOUR_IMAGE, nil, nil, nil);
How-To
UIImageWriteToSavedPhotosAlbum([self getScreenshotImage], nil, nil, nil);
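If you need to know whether the save succeeded, the same function can take a completion selector; a minimal sketch (the callback signature is the one documented for UIImageWriteToSavedPhotosAlbum):
UIImageWriteToSavedPhotosAlbum([self getScreenshotImage], self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
// ... and elsewhere in the same class:
- (void)image:(UIImage *)image didFinishSavingWithError:(NSError *)error contextInfo:(void *)contextInfo {
if (error) {
NSLog(@"Error saving screenshot: %@", error);
}
}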
Two options are available at the site below:
OPTION 1: using UIWindow (tried and works perfectly)
// create graphics context with screen size
CGRect screenRect = [[UIScreen mainScreen] bounds];
UIGraphicsBeginImageContext(screenRect.size);
CGContextRef ctx = UIGraphicsGetCurrentContext();
[[UIColor blackColor] set];
CGContextFillRect(ctx, screenRect);
// grab reference to our window
UIWindow *window = [UIApplication sharedApplication].keyWindow;
// transfer content into our context
[window.layer renderInContext:ctx];
UIImage *screengrab = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
OPTION 2: using UIView
// grab reference to the view you'd like to capture
UIView *wholeScreen = self.splitViewController.view;
// define the size and grab a UIImage from it
UIGraphicsBeginImageContextWithOptions(wholeScreen.bounds.size, wholeScreen.opaque, 0.0);
[wholeScreen.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *screengrab = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
For retina screens (as in DenNukem's answer):
// grab reference to our window
UIWindow *window = [UIApplication sharedApplication].keyWindow;
// create graphics context with screen size
CGRect screenRect = [[UIScreen mainScreen] bounds];
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
UIGraphicsBeginImageContextWithOptions(screenRect.size, NO, [UIScreen mainScreen].scale);
} else {
UIGraphicsBeginImageContext(screenRect.size);
}
[window.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *screengrab = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
for more detail:
http://pinkstone.co.uk/how-to-take-a-screeshot-in-ios-programmatically/
Get Screenshot From View:
- (UIImage *)takeSnapshotView {
CGRect rect = [myView bounds]; // replace myView with your own view here
UIGraphicsBeginImageContextWithOptions(rect.size,YES,0.0f);
CGContextRef context = UIGraphicsGetCurrentContext();
[myView.layer renderInContext:context];
UIImage *capturedScreen = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return capturedScreen;//capturedScreen is the image of your view
}
Hope this is what you're looking for. Any concerns, get back to me. :)
Another option is to use the Automation tool in Instruments. You write a script to put the screen into whatever state you want, then take the shots. Here is the script I used for one of my apps. Obviously, the details of the script will be different for your app.
var target = UIATarget.localTarget();
var app = target.frontMostApp();
var window = app.mainWindow();
var picker = window.pickers()[0];
var wheel = picker.wheels()[2];
var buttons = window.buttons();
var button1 = buttons.firstWithPredicate("name == 'dateButton1'");
var button2 = buttons.firstWithPredicate("name == 'dateButton2'");
function setYear(picker, year) {
var yearName = year.toString();
var yearWheel = picker.wheels()[2];
yearWheel.selectValue(yearName);
}
function setMonth(picker, monthName) {
var wheel = picker.wheels()[0];
wheel.selectValue(monthName);
}
function setDay(picker, day) {
var wheel = picker.wheels()[1];
var name = day.toString();
wheel.selectValue(name);
}
target.delay(1);
setYear(picker, 2015);
setMonth(picker, "July");
setDay(picker, 4);
button1.tap();
setYear(picker, 2015);
setMonth(picker, "December");
setDay(picker, 25);
target.captureScreenWithName("daysShot1");
var nButtons = buttons.length;
UIALogger.logMessage(nButtons + " buttons");
for (var i=0; i<nButtons; i++) {
UIALogger.logMessage("button " + buttons[i].name());
}
var tabBar = window.tabBars()[0];
var barButtons = tabBar.buttons();
var nBarButtons = barButtons.length;
UIALogger.logMessage(nBarButtons + " buttons on tab bar");
for (var i=0; i<nBarButtons; i++) {
UIALogger.logMessage("button " + barButtons[i].name());
}
var weeksButton = barButtons[1];
var monthsButton = barButtons[2];
var yearsButton = barButtons[3];
target.delay(2);
weeksButton.tap();
target.captureScreenWithName("daysShot2");
target.delay(2);
monthsButton.tap();
target.captureScreenWithName("daysShot3");
target.delay(2);
yearsButton.tap();
target.delay(2);
button2.tap();
target.delay(2);
setYear(picker, 2018);
target.delay(2);
target.captureScreenWithName("daysShot4");
Just a small contribution: I've done this with a button, but pressing the button also means it gets captured in its pressed state, so first I unhighlight it.
- (IBAction)screenShot:(id)sender {
// Unpress screen shot button
screenShotButton.highlighted = NO;
// create graphics context with screen size
CGRect screenRect = [[UIScreen mainScreen] bounds];
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, [UIScreen mainScreen].scale);
} else {
UIGraphicsBeginImageContext(self.view.bounds.size);
}
CGContextRef ctx = UIGraphicsGetCurrentContext();
[[UIColor blackColor] set];
CGContextFillRect(ctx, screenRect);
// grab reference to our window
UIWindow *window = [UIApplication sharedApplication].keyWindow;
// transfer content into our context
[window.layer renderInContext:ctx];
UIImage *screengrab = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// save screengrab to Camera Roll
UIImageWriteToSavedPhotosAlbum(screengrab, nil, nil, nil);
}
I got the main body of the code from:
http://pinkstone.co.uk/how-to-take-a-screeshot-in-ios-programmatically/
where I used option 1; option 2 didn't seem to work for me. I added the adjustments for Retina screen sizes from this thread, and the unhighlighting of the screenShotButton. The view I'm using it on is a storyboarded screen of buttons and labels, with several UIViews added later programmatically.
In Swift you can use the following code.
if UIScreen.mainScreen().respondsToSelector(Selector("scale")) {
UIGraphicsBeginImageContextWithOptions(self.window!.bounds.size, false, UIScreen.mainScreen().scale)
}
else{
UIGraphicsBeginImageContext(self.window!.bounds.size)
}
self.window?.layer.renderInContext(UIGraphicsGetCurrentContext()!)
let image : UIImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
Swift 4:
func makeImage(withView view: UIView) -> UIImage? {
let rect = view.bounds
UIGraphicsBeginImageContextWithOptions(rect.size, true, 0)
guard let context = UIGraphicsGetCurrentContext() else {
assertionFailure()
UIGraphicsEndImageContext()
return nil
}
view.layer.render(in: context)
guard let image = UIGraphicsGetImageFromCurrentImageContext() else {
assertionFailure()
UIGraphicsEndImageContext()
return nil
}
UIGraphicsEndImageContext()
return image
}