My Swift code for capturing a UITableView as an image isn't working when the table is scrolled down. I essentially have the answer in Objective-C but can't seem to make it work in Swift. Currently this is what I have in Swift:
func snapshotOfCell(inputView: UIView) -> UIView {
    UIGraphicsBeginImageContextWithOptions(inputView.bounds.size, false, 0.0)
    inputView.layer.renderInContext(UIGraphicsGetCurrentContext()!)
    let image = UIGraphicsGetImageFromCurrentImageContext() as UIImage
    UIGraphicsEndImageContext()
    let cellSnapshot: UIView = UIImageView(image: image)
    cellSnapshot.layer.masksToBounds = false
    return cellSnapshot
}
I found this answer but it's in Objective-C:
- (UIImage *)imageWithTableView:(UITableView *)tableView {
    UIView *renderedView = tableView;
    CGPoint tableContentOffset = tableView.contentOffset;
    UIGraphicsBeginImageContextWithOptions(renderedView.bounds.size, renderedView.opaque, 0.0);
    CGContextRef contextRef = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(contextRef, 0, -tableContentOffset.y);
    [tableView.layer renderInContext:contextRef];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
It seems to fix the scroll problem by using a contentOffset. However, I've been trying to integrate it into my Swift function without success. Anyone good with both Objective-C and Swift? Thanks!
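For what it's worth, a fairly direct Swift port of that Objective-C method might look like this sketch (Swift 3 syntax, not tested against the original project):

func image(with tableView: UITableView) -> UIImage? {
    let contentOffset = tableView.contentOffset
    UIGraphicsBeginImageContextWithOptions(tableView.bounds.size, tableView.isOpaque, 0.0)
    guard let context = UIGraphicsGetCurrentContext() else { return nil }
    // Shift the context up by the content offset so the currently visible rows are rendered.
    context.translateBy(x: 0, y: -contentOffset.y)
    tableView.layer.render(in: context)
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return image
}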
Capture the whole table view as an image
UIGraphicsBeginImageContextWithOptions(CGSizeMake(tableView.contentSize.width, tableView.contentSize.height),false, 0.0)
let context = UIGraphicsGetCurrentContext()
let previousFrame = tableView.frame
tableView.frame = CGRectMake(tableView.frame.origin.x, tableView.frame.origin.y, tableView.contentSize.width, tableView.contentSize.height);
tableView.layer.renderInContext(context!)
tableView.frame = previousFrame
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext();
imageView.image = image;
Capture a screenshot of the table view at its current scrolled position
let contentOffset = tableView.contentOffset
UIGraphicsBeginImageContextWithOptions(tableView.bounds.size, true, 1)
let context = UIGraphicsGetCurrentContext()
CGContextTranslateCTM(context, 0, -contentOffset.y)
tableView.layer.renderInContext(context!)
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
imageView.image = image;
Swift 3.0 version for capturing the entire tableview, based on @Jeyamahesan's answer
UIGraphicsBeginImageContextWithOptions(CGSize(width:tableView.contentSize.width, height:tableView.contentSize.height),false, 0.0)
let context = UIGraphicsGetCurrentContext()
let previousFrame = tableView.frame
tableView.frame = CGRect(x: tableView.frame.origin.x, y: tableView.frame.origin.y, width: tableView.contentSize.width, height: tableView.contentSize.height)
tableView.layer.render(in: context!)
tableView.frame = previousFrame
let image = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()
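If this is needed in more than one place, roughly the same Swift 3 code can be wrapped in a small UITableView extension (a sketch; the method name is my own):

extension UITableView {
    // Renders the table view's entire contentSize into a single image.
    func fullContentImage() -> UIImage? {
        UIGraphicsBeginImageContextWithOptions(contentSize, false, 0.0)
        guard let context = UIGraphicsGetCurrentContext() else { return nil }
        let previousFrame = frame
        // Temporarily grow the frame so every row is laid out and rendered.
        frame = CGRect(x: frame.origin.x, y: frame.origin.y, width: contentSize.width, height: contentSize.height)
        layer.render(in: context)
        frame = previousFrame
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image
    }
}

Usage would then just be imageView.image = tableView.fullContentImage().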
Try this piece of code:
- (UIImage *)screenshot {
    UIImage *image = nil;
    UIGraphicsBeginImageContextWithOptions(tableView.contentSize, NO, 0.0);
    {
        CGPoint savedContentOffset = tableView.contentOffset;
        CGRect savedFrame = tableView.frame;
        tableView.contentOffset = CGPointMake(0.0, 0.0);
        tableView.frame = CGRectMake(0, 0.0, tableView.contentSize.width, tableView.contentSize.height);
        [tableView.layer renderInContext:UIGraphicsGetCurrentContext()];
        image = UIGraphicsGetImageFromCurrentImageContext();
        tableView.contentOffset = savedContentOffset;
        tableView.frame = savedFrame;
    }
    UIGraphicsEndImageContext();
    return image;
}
Happy Coding..!!
Please try this one, it may help you:
- (UIImage *)buildImage:(UIImage *)image
{
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawAtPoint:CGPointZero];
    CGFloat scale;
    scale = image.size.width / _workingView.frame.size.width;
    CGContextScaleCTM(UIGraphicsGetCurrentContext(), scale, scale);
    NSLog(@"%f", scale);
    [tableView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *tmp = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return tmp;
}
UIImage *image = nil;
UIGraphicsBeginImageContextWithOptions(scrollView.contentSize, YES, 0);
{
    CGPoint savedContentOffset = scrollView.contentOffset;
    CGRect savedFrame = scrollView.frame;
    scrollView.contentOffset = CGPointZero;
    scrollView.frame = CGRectMake(0, 0, scrollView.contentSize.width, scrollView.contentSize.height);
    [scrollView.layer renderInContext:UIGraphicsGetCurrentContext()];
    image = UIGraphicsGetImageFromCurrentImageContext();
    scrollView.contentOffset = savedContentOffset;
    scrollView.frame = savedFrame;
}
UIGraphicsEndImageContext();
I cannot get the full image from the scroll view, only half of it, and that half is black.
EDIT
Try adding
drawHierarchy(in: self.bounds, afterScreenUpdates: true)
after the line
[scrollView.layer renderInContext: UIGraphicsGetCurrentContext()];
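In Swift, the same idea, combined with the frame/contentOffset juggling from the snippet above, might look roughly like this (a sketch, untested):

UIGraphicsBeginImageContextWithOptions(scrollView.contentSize, true, 0)
let savedContentOffset = scrollView.contentOffset
let savedFrame = scrollView.frame
scrollView.contentOffset = .zero
scrollView.frame = CGRect(x: 0, y: 0, width: scrollView.contentSize.width, height: scrollView.contentSize.height)
// Render the layer, then draw the view hierarchy on top as suggested in the edit.
scrollView.layer.render(in: UIGraphicsGetCurrentContext()!)
scrollView.drawHierarchy(in: scrollView.bounds, afterScreenUpdates: true)
let image = UIGraphicsGetImageFromCurrentImageContext()
scrollView.contentOffset = savedContentOffset
scrollView.frame = savedFrame
UIGraphicsEndImageContext()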
I would like to include the Navigation bar in my tableview screenshot image. The following code works to capture the entire tableview and I have tried other code that captures the Navigation bar but not the entire tableview. Is it possible to do both at the same time?
func screenshot() {
    var image = UIImage()
    UIGraphicsBeginImageContextWithOptions(CGSize(width: tableView.contentSize.width, height: tableView.contentSize.height), false, 0.0)
    let context = UIGraphicsGetCurrentContext()
    let previousFrame = tableView.frame
    tableView.frame = CGRect(x: tableView.frame.origin.x, y: tableView.frame.origin.y, width: tableView.contentSize.width, height: tableView.contentSize.height)
    tableView.layer.render(in: context!)
    tableView.frame = previousFrame
    image = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
}
Take navigation bar and tableview screenshots separately, then merge them.
Objective C:
- (UIImage *)merge:(UIImage *)tvImage with:(UIImage *)navImage {
    CGFloat contextHeight = tableView.contentSize.height + self.navigationController.navigationBar.frame.size.height;
    CGRect contextFrame = CGRectMake(0, 0, tableView.frame.size.width, contextHeight);
    UIGraphicsBeginImageContextWithOptions(contextFrame.size, NO, 0.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    UIGraphicsPushContext(context);

    // 1. Draw the navigation bar image in the context.
    [navImage drawInRect:self.navigationController.navigationBar.frame];

    // 2. Draw the table view image in the context, just below the navigation bar.
    CGFloat y = self.navigationController.navigationBar.frame.size.height;
    CGFloat h = tableView.contentSize.height;
    CGFloat w = tableView.frame.size.width;
    [tvImage drawInRect:CGRectMake(0, y, w, h)];

    // Clean up and get the new image.
    UIGraphicsPopContext();
    UIImage *mergeImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return mergeImage;
}
Swift 3:
func merge(tvImage: UIImage, with navImage: UIImage) -> UIImage {
    let contextHeight: CGFloat = tableView.contentSize.height + self.navigationController!.navigationBar.frame.size.height
    let contextFrame = CGRect(x: 0, y: 0, width: tableView.frame.size.width, height: contextHeight)
    UIGraphicsBeginImageContextWithOptions(contextFrame.size, false, 0.0)
    let context: CGContext = UIGraphicsGetCurrentContext()!
    UIGraphicsPushContext(context)

    // 1. Draw the navigation bar image in the context.
    navImage.draw(in: self.navigationController!.navigationBar.frame)

    // 2. Draw the table view image in the context, just below the navigation bar.
    let y: CGFloat = self.navigationController!.navigationBar.frame.size.height
    let h: CGFloat = tableView.contentSize.height
    let w: CGFloat = tableView.frame.size.width
    tvImage.draw(in: CGRect(x: 0, y: y, width: w, height: h))

    // Clean up and get the new image.
    UIGraphicsPopContext()
    let mergeImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return mergeImage
}
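For completeness, one way to wire it up (a sketch; `fullTableImage` is assumed to hold the image produced by the capture code in the question, and the helper name is my own):

func navigationBarImage() -> UIImage {
    let navBar = navigationController!.navigationBar
    UIGraphicsBeginImageContextWithOptions(navBar.bounds.size, false, 0.0)
    navBar.layer.render(in: UIGraphicsGetCurrentContext()!)
    let image = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return image
}

// `fullTableImage` would come from the question's full-content table view capture.
let preview = merge(tvImage: fullTableImage, with: navigationBarImage())
UIImageWriteToSavedPhotosAlbum(preview, nil, nil, nil)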
Writing an extension always helps with reuse. I have created a simple UIView extension; check it out below:
extension UIView {
    // Render the view within its bounds, then capture it as an image.
    func asImage() -> UIImage {
        let renderer = UIGraphicsImageRenderer(bounds: bounds)
        return renderer.image { rendererContext in
            layer.render(in: rendererContext.cgContext)
        }
    }
}
Usage:
self.imageview.image = self.view.asImage() // If you want to capture the view
self.imageview.image = self.tabBarController?.view.asImage() // If it's inside a UITabBarController
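If you also want this renderer-based extension to capture a table view's full content rather than just its visible bounds, the frame trick from the earlier answers can be combined with it, roughly like this (a sketch, untested):

let previousFrame = tableView.frame
tableView.frame = CGRect(origin: tableView.frame.origin, size: tableView.contentSize)
let fullImage = tableView.asImage() // the renderer bounds now cover the whole content
tableView.frame = previousFrame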
How to create a circular image with border (UIGraphics)?
P.S. I need to draw the picture myself (with UIGraphics).
code in viewDidLoad:
NSURL *url2 = [NSURL URLWithString:@"http://images.ak.instagram.com/profiles/profile_55758514_75sq_1399309159.jpg"];
NSData *data2 = [NSData dataWithContentsOfURL:url2];
UIImage *profileImg = [UIImage imageWithData:data2];
UIGraphicsEndImageContext();
// Create image context with the size of the background image.
UIGraphicsBeginImageContext(profileImg.size);
[profileImg drawInRect:CGRectMake(0, 0, profileImg.size.width, profileImg.size.height)];
// Get the newly created image.
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
// Release the context.
UIGraphicsEndImageContext();
// Set the newly created image to the imageView.
self.imageView.image = result;
It sounds like you want to clip the image to a circle. Here's an example:
static UIImage *circularImageWithImage(UIImage *inputImage,
                                       UIColor *borderColor, CGFloat borderWidth)
{
    CGRect rect = (CGRect){ .origin = CGPointZero, .size = inputImage.size };
    UIGraphicsBeginImageContextWithOptions(rect.size, NO, inputImage.scale); {
        // Fill the entire circle with the border color.
        [borderColor setFill];
        [[UIBezierPath bezierPathWithOvalInRect:rect] fill];

        // Clip to the interior of the circle (inside the border).
        CGRect interiorBox = CGRectInset(rect, borderWidth, borderWidth);
        UIBezierPath *interior = [UIBezierPath bezierPathWithOvalInRect:interiorBox];
        [interior addClip];

        [inputImage drawInRect:rect];
    }
    UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outputImage;
}
Result:
Have you tried this?
self.imageView.layer.borderColor = [UIColor greenColor].CGColor;
self.imageView.layer.borderWidth = 1.f;
You'll also need
self.imageView.layer.cornerRadius = self.imageView.frame.size.width / 2;
self.imageView.clipsToBounds = YES;
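The Swift equivalent of that layer-based styling would be roughly:

imageView.layer.borderColor = UIColor.green.cgColor
imageView.layer.borderWidth = 1
imageView.layer.cornerRadius = imageView.frame.size.width / 2
imageView.clipsToBounds = true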
Swift 4 version
extension UIImage {
    func circularImageWithBorderOf(color: UIColor, diameter: CGFloat, borderWidth: CGFloat) -> UIImage {
        let aRect = CGRect(x: 0, y: 0, width: diameter, height: diameter)
        UIGraphicsBeginImageContextWithOptions(aRect.size, false, self.scale)

        // Fill the full circle with the border color.
        color.setFill()
        UIBezierPath(ovalIn: aRect).fill()

        // Clip to the interior of the circle and draw the image inside the border.
        let anInteriorRect = CGRect(x: borderWidth, y: borderWidth, width: diameter - 2 * borderWidth, height: diameter - 2 * borderWidth)
        UIBezierPath(ovalIn: anInteriorRect).addClip()
        self.draw(in: anInteriorRect)

        let anImg = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return anImg
    }
}
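Usage might look like this (assuming `profileImage` is the downloaded UIImage from the question):

imageView.image = profileImage.circularImageWithBorderOf(color: .white, diameter: 100, borderWidth: 2)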
I have a game where users can create custom levels and upload them to my server for other users to play and I want to get a screenshot of the "action area" before the user tests his/her level to upload to my server as sort of a "preview image".
I know how to get a screenshot of the entire view, but I want to define it to a custom frame. Consider the following image:
I want to just take a screenshot of the area in red, the "action area." Can I achieve this?
You just need to make a rect of the area you want to capture and pass that rect to the method.
Swift 3.x :
extension UIView {
    func imageSnapshot() -> UIImage {
        return self.imageSnapshotCroppedToFrame(frame: nil)
    }

    func imageSnapshotCroppedToFrame(frame: CGRect?) -> UIImage {
        let scaleFactor = UIScreen.main.scale
        UIGraphicsBeginImageContextWithOptions(bounds.size, false, scaleFactor)
        self.drawHierarchy(in: bounds, afterScreenUpdates: true)
        var image: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()

        if let frame = frame {
            // Crop to the requested frame, scaled to pixel coordinates.
            let scaledRect = frame.applying(CGAffineTransform(scaleX: scaleFactor, y: scaleFactor))
            if let imageRef = image.cgImage!.cropping(to: scaledRect) {
                image = UIImage(cgImage: imageRef)
            }
        }
        return image
    }
}
//How to call :
imgview.image = self.view.imageSnapshotCroppedToFrame(frame: CGRect.init(x: 0, y: 0, width: 320, height: 100))
Objective C :
- (UIImage *)captureScreenInRect:(CGRect)captureFrame
{
    CALayer *layer;
    layer = self.view.layer;
    UIGraphicsBeginImageContext(self.view.bounds.size);
    CGContextClipToRect(UIGraphicsGetCurrentContext(), captureFrame);
    [layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *screenImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return screenImage;
}
//How to call :
imgView.image = [self captureScreenInRect:CGRectMake(0, 0, 320, 100)];
- (UIImage *)getScreenShot {
    UIWindow *keyWindow = [[UIApplication sharedApplication] keyWindow];
    CGRect rect = [keyWindow bounds];
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [keyWindow.layer renderInContext:context];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
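A rough Swift 3 version of the same key-window capture, for reference:

func getScreenShot() -> UIImage? {
    guard let keyWindow = UIApplication.shared.keyWindow else { return nil }
    UIGraphicsBeginImageContext(keyWindow.bounds.size)
    keyWindow.layer.render(in: UIGraphicsGetCurrentContext()!)
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return image
}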
I need to resize my UIImage to match the size of a UIImageView. My image is too small, so I need to scale it up. I was not able to do it using:
self.firstImage.contentMode = UIViewContentModeScaleAspectFit;
Please help.
In Swift 3.0, it would be:
func imageWithImage(image: UIImage, scaledToSize newSize: CGSize) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(newSize, false, 0.0)
    // Note: the height must use newSize.height, not newSize.width.
    image.draw(in: CGRect(x: 0, y: 0, width: newSize.width, height: newSize.height))
    let newImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return newImage
}
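Called with the image view from the question (`self.firstImage`), usage would be something like:

if let original = self.firstImage.image {
    self.firstImage.image = imageWithImage(image: original, scaledToSize: self.firstImage.bounds.size)
}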
Try this:
self.imageView.contentMode = UIViewContentModeScaleAspectFill;
self.imageView.clipsToBounds = YES;