swift - UILabel text not rendered when using renderInContext asynchronously - ios

I need to generate an image from a custom view composed of a UIImageView and a UILabel.
I can't use the iOS 7 drawViewHierarchyInRect:afterScreenUpdates: because of an ugly glitch on iPhone 6/6+ (iOS 8 scale glitch when calling drawViewHierarchyInRect with afterScreenUpdates:YES).
So I did it the old-fashioned way using renderInContext:, which works well but is quite slow. I'm using image generation to display markers in a GMSMapView (god, I miss MapKit ...), but the user experience is quite bad because of lags caused by this image generation.
So I tried to perform the image creation in a background thread in order to keep things smooth, but here's the problem: the majority of my labels are not rendered.
Has anyone already faced this issue?
Here's the code I use:
func CGContextCreate(size: CGSize) -> CGContext {
    let scale = UIScreen.mainScreen().scale
    let space: CGColorSpaceRef = CGColorSpaceCreateDeviceRGB()
    let bitmapInfo: CGBitmapInfo = CGBitmapInfo(CGImageAlphaInfo.PremultipliedFirst.rawValue)
    let context: CGContext = CGBitmapContextCreate(nil, Int(size.width * scale), Int(size.height * scale), 8, Int(size.width * scale * 4), space, bitmapInfo)

    CGContextScaleCTM(context, scale, scale)
    CGContextTranslateCTM(context, 0, size.height)
    CGContextScaleCTM(context, 1, -1)

    return context
}

func UIGraphicsGetImageFromContext(context: CGContext) -> UIImage? {
    let cgImage: CGImage = CGBitmapContextCreateImage(context)
    let image = UIImage(CGImage: cgImage, scale: UIScreen.mainScreen().scale, orientation: UIImageOrientation.Up)
    return image
}

extension UIView {
    func snapshot() -> UIImage {
        let context = CGContextCreate(self.frame.size)
        self.layer.renderInContext(context)
        let image = UIGraphicsGetImageFromContext(context)
        return image!
    }

    func snapshot(#completion: UIImage? -> Void) {
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) {
            let image = self.snapshot()
            completion(image)
        }
    }
}

It seems there are issues rendering a UILabel on any thread other than the main one.
Your best option is to draw the text yourself with NSString's drawInRect:withAttributes: method.
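For reference, here is a minimal sketch (written in current Swift syntax, unlike the question's Swift 1 code) of drawing a label's text straight into a CGContext like the one created above. The function name, font, colour, and rect are illustrative, not from the original post; UIKit string drawing is documented as thread-safe, so this can run on a background queue.
import UIKit

func drawText(_ text: String, in rect: CGRect, context: CGContext) {
    // Make the CGContext current so UIKit string drawing targets it.
    UIGraphicsPushContext(context)
    defer { UIGraphicsPopContext() }

    let attributes: [NSAttributedString.Key: Any] = [
        .font: UIFont.systemFont(ofSize: 14),
        .foregroundColor: UIColor.black
    ]
    (text as NSString).draw(in: rect, withAttributes: attributes)
}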

I think the problem is that the completion is executed on that background queue. UI updates have to happen on the main thread, so it might work if you wrap the completion like this:
dispatch_async(dispatch_get_main_queue()) {
    completion(image)
}
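Putting that together with the question's helper, a sketch in current Swift syntax could look like the following; the method name is mine, and it assumes the synchronous snapshot() defined in the question.
import UIKit

extension UIView {
    // Hypothetical variant of the question's snapshot(completion:) that hops
    // back to the main queue before calling the completion handler.
    func snapshotAsync(completion: @escaping (UIImage?) -> Void) {
        DispatchQueue.global(qos: .default).async {
            let image = self.snapshot()   // the synchronous helper from the question
            DispatchQueue.main.async {
                completion(image)         // deliver the result on the main thread
            }
        }
    }
}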

Related

crop a specific area of an image in swift

I am creating an app where you can crop multiple images to one specific size.
I have an array with multiple images. The images in the array are displayed on a view, where I can drag them around. I am using the same image twice. It looks like this:
I also have a crop view (shown in red only for demonstration). The images should be cropped to the size of this crop view:
The end result looks like this:
There were a few problems.
I don't understand why the image is rotated. It also seems the image is not cropped to the crop view that I created (the red view). Also, the images should differ slightly, because I drag each of them to a different place in the view.
The method I am using is from Apple's documentation:
let cropRect = CGRect(x: cropView.frame.origin.x, y: cropView.frame.origin.y, width: cropView.frame.width, height: cropView.frame.height)
let croppedImage = ImageCrophandler.sharedInstance.cropImage(imageContentView[i].image!, toRect: cropRect, viewWidth: cropView.frame.width, viewHeight: cropView.frame.height)
print(croppedImage)
arrayOfCropedImages.append(croppedImage!)
func cropImage(_ inputImage: UIImage, toRect cropRect: CGRect, viewWidth: CGFloat, viewHeight: CGFloat) -> UIImage? {
    let imageViewScale = max(inputImage.size.width / viewWidth,
                             inputImage.size.height / viewHeight)

    // Scale cropRect to handle images larger than shown-on-screen size
    let cropZone = CGRect(x: cropRect.origin.x * imageViewScale,
                          y: cropRect.origin.y * imageViewScale,
                          width: cropRect.size.width * imageViewScale,
                          height: cropRect.size.height * imageViewScale)

    // Perform cropping in Core Graphics
    guard let cutImageRef: CGImage = inputImage.cgImage?.cropping(to: cropZone) else {
        return nil
    }

    // Return image to UIImage
    let croppedImage: UIImage = UIImage(cgImage: cutImageRef)
    return croppedImage
}
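One likely cause of the rotation described above is that UIImage(cgImage:) discards the source image's scale and orientation. A hedged tweak to the last step of the function above that preserves both:
// Keep the original scale and orientation when wrapping the cropped CGImage,
// otherwise photos taken in portrait often come back rotated.
let croppedImage = UIImage(cgImage: cutImageRef,
                           scale: inputImage.scale,
                           orientation: inputImage.imageOrientation)
return croppedImage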

How to reduce the image size coming from web services?

In the image shown below, I am getting the images from web services and passing them to a table view, but when scrolling up and down the image size increases and overlaps the labels, even though I have added constraints. Can anyone help me avoid this?
Hey, you don't need to resize the image.
First, set a fixed height and width on your image view with constraints in the table view cell.
Second, set the image view's content mode to aspect fit:
imageView.contentMode = UIViewContentModeScaleAspectFit;
Add constraints to your image view like this:
Hope this helps; if you have any query regarding this, just comment.
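The constraint screenshot isn't reproduced here, so as a rough equivalent, here is a minimal programmatic sketch of the same idea inside a cell; the cell class name, the 80-point size, and the 16-point inset are placeholder values.
import UIKit

class ArticleCell: UITableViewCell {   // illustrative cell subclass
    let thumbnailView = UIImageView()

    override init(style: UITableViewCell.CellStyle, reuseIdentifier: String?) {
        super.init(style: style, reuseIdentifier: reuseIdentifier)
        thumbnailView.translatesAutoresizingMaskIntoConstraints = false
        thumbnailView.contentMode = .scaleAspectFit   // never stretch past the fixed frame
        contentView.addSubview(thumbnailView)
        // Fixed size so downloaded images cannot push the labels around.
        NSLayoutConstraint.activate([
            thumbnailView.widthAnchor.constraint(equalToConstant: 80),
            thumbnailView.heightAnchor.constraint(equalToConstant: 80),
            thumbnailView.leadingAnchor.constraint(equalTo: contentView.leadingAnchor, constant: 16),
            thumbnailView.centerYAnchor.constraint(equalTo: contentView.centerYAnchor)
        ])
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }
}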
Using content mode to fit the image is an option, but if you want to crop, resize, or compress the image, check the code below.
Call it like let imageData = image.compressImage(rate: 0.5), and you can then write the image data out if needed.
func compressImage(rate: CGFloat) -> Data? {
    return UIImageJPEGRepresentation(self, rate)
}
Or, if you want to crop the image:
func croppedImage(_ bound: CGRect) -> UIImage? {
    guard self.size.width > bound.origin.x else {
        print("X coordinate is larger than the image width")
        return nil
    }
    guard self.size.height > bound.origin.y else {
        print("Y coordinate is larger than the image height")
        return nil
    }

    let scaledBounds = CGRect(x: bound.origin.x * self.scale,
                              y: bound.origin.y * self.scale,
                              width: bound.size.width * self.scale,
                              height: bound.size.height * self.scale)

    guard let imageRef = self.cgImage?.cropping(to: scaledBounds) else {
        return nil
    }

    let croppedImage = UIImage(cgImage: imageRef, scale: self.scale, orientation: UIImageOrientation.up)
    return croppedImage
}
Make sure to add the above methods to a UIImage extension.
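The answer above mentions resizing but only shows compression and cropping; here is a hedged sketch of a resize helper in the same UIImage-extension style. It assumes iOS 10+ for UIGraphicsImageRenderer, and the target size is up to you.
import UIKit

extension UIImage {
    // Downscale the image to a target size before assigning it to the cell's image view.
    func resized(to targetSize: CGSize) -> UIImage {
        let renderer = UIGraphicsImageRenderer(size: targetSize)
        return renderer.image { _ in
            self.draw(in: CGRect(origin: .zero, size: targetSize))
        }
    }
}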
I placed a view on the table view cell, put all the elements inside it, and gave constraints to the view and its elements; that reduced my problem and it now works perfectly when scrolling, as shown in the image below.
Here is the layout for this screen:

Image masking fails on iOS 10 beta 3

For some time now we have used the following code to mask a grayscale image without transparency into a coloured image.
This always worked fine until Apple released iOS 10 beta 3. Suddenly the mask is no longer applied, and we just get back a square box filled with the given color.
The logic behind this can be found at
https://developer.apple.com/library/mac/documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/dq_images/dq_images.html
under the header Masking an Image with an Image Mask.
The logic of this code:
* Take a grayscale image without alpha
* Create a solid image with the given color
* Create a mask from the given image
* Mask the solid image with the created mask
* The output is a masked image that also respects intermediate values (gray may come out as light red, for instance).
Does anyone have an idea how to fix this function?
If you have Xcode 8 beta 3 you can run this code: on a simulator lower than iOS 10 it works correctly, and on iOS 10 it just creates a square box.
Example image:
public static func image(maskedWith color: UIColor, imageNamed imageName: String) -> UIImage? {
    guard let image = UIImage(named: imageName)?.withRenderingMode(.alwaysOriginal) else {
        return nil
    }

    guard image.size != CGSize.zero else {
        return nil
    }

    guard
        let maskRef = image.cgImage,
        let colorImage = self.image(with: color, size: image.size),
        let cgColorImage = colorImage.cgImage,
        let dataProvider = maskRef.dataProvider
    else {
        return nil
    }

    guard
        let mask = CGImage(maskWidth: maskRef.width, height: maskRef.height, bitsPerComponent: maskRef.bitsPerComponent, bitsPerPixel: maskRef.bitsPerPixel, bytesPerRow: maskRef.bytesPerRow, provider: dataProvider, decode: nil, shouldInterpolate: true),
        let masked = cgColorImage.masking(mask)
    else {
        return nil
    }

    let result = UIImage(cgImage: masked, scale: UIScreen.main().scale, orientation: image.imageOrientation)
    return result
}
public static func image(with color: UIColor, size: CGSize) -> UIImage? {
    guard size != CGSize.zero else {
        return nil
    }

    let rect = CGRect(origin: CGPoint.zero, size: size)

    UIGraphicsBeginImageContextWithOptions(size, false, UIScreen.main().scale)
    defer {
        UIGraphicsEndImageContext()
    }

    guard let context = UIGraphicsGetCurrentContext() else {
        return nil
    }

    context.setFillColor(color.cgColor)
    context.fill(rect)

    let image = UIGraphicsGetImageFromCurrentImageContext()
    return image
}
This issue has been solved in iOS 10 beta 4.

Swift - Adding text to a resizable image

I've been fiddling around with different techniques for implementing a resizable image from the Asset Catalog, but there are no examples out there of how to add text to these resizable images in Swift (even in Apple's own guides) and let them resize dynamically.
If anyone knows how to do this, or knows of links to blog posts that explain it, that would be helpful.
A UIImageView scales its image to the dimensions you set via constraints or its frame, depending on how you're laying things out. You can load an image from the asset catalog by name, as you know.
So just add the images to the asset catalog at all resolutions for the device classes (1x, 2x, 3x), create a UIImageView, and set its frame after it has been added as a subview. iOS will select the right image size based on the device/screen, and that's about as well as you can do to get good resolution for a scaled image.
By default you'll be using constraints to size the image view in Interface Builder. If you create it programmatically you'll have more flexibility, but more work in deciding how to size it after you place it in a superview.
The following is some code to scale a UIImage. Once you have a bitmap context for the image, as shown in the function below, you can use other drawing and font functions with that context handle to add text. Searching the Core Image and Core Text classes will turn up conversion options and ways to manipulate images and add text.
func scaleImage(image: UIImage, newSize: CGSize) -> UIImage {
    let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.PremultipliedFirst.rawValue)
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let ctx = CGBitmapContextCreate(nil,
                                    UInt(newSize.width),
                                    UInt(newSize.height),
                                    CGImageGetBitsPerComponent(image.CGImage),
                                    UInt(newSize.width * 4),
                                    colorSpace,
                                    bitmapInfo)!

    CGContextSetInterpolationQuality(ctx, kCGInterpolationHigh)
    CGContextDrawImage(ctx, CGRect(origin: CGPointZero, size: CGSizeMake(newSize.width, newSize.height)), image.CGImage)

    return UIImage(CGImage: CGBitmapContextCreateImage(ctx))!
}
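To actually put text on the image, the same idea in current Swift syntax could look like the sketch below, which scales the image and draws a string over it in one pass using UIGraphicsImageRenderer rather than a raw bitmap context. The function name, font, colour, and text position are placeholder values, not part of the original answer.
import UIKit

func scaledImage(_ image: UIImage, to newSize: CGSize, withText text: String) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: newSize)
    return renderer.image { _ in
        // Draw the (resizable) image scaled to the new size.
        image.draw(in: CGRect(origin: .zero, size: newSize))

        // Then draw the text on top using UIKit string drawing.
        let attributes: [NSAttributedString.Key: Any] = [
            .font: UIFont.boldSystemFont(ofSize: 16),
            .foregroundColor: UIColor.white
        ]
        (text as NSString).draw(at: CGPoint(x: 10, y: 10), withAttributes: attributes)
    }
}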
You could also convert the label to an image by programmatically taking a snapshot as shown here:
func takeSnapshot(view: UIView) -> UIImageView {
    var image: UIImage

    UIGraphicsBeginImageContextWithOptions(view.bounds.size, false, 0.0)
    view.layer.renderInContext(UIGraphicsGetCurrentContext())
    image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    let imageView = UIImageView(image: image)
    imageView.opaque = false
    return imageView
}
And overlay one image on the other like this:
func overlayImages(images: [UIImage]) -> UIImage? {
    var compositeImage: UIImage?

    if images.count > 0 {
        var maxWidth = CGFloat(0), maxHeight = CGFloat(0)
        for image in images {
            if image.size.width > maxWidth {
                maxWidth = image.size.width
            }
            if image.size.height > maxHeight {
                maxHeight = image.size.height
            }
        }

        let size = CGSizeMake(maxWidth, maxHeight)
        UIGraphicsBeginImageContext(size)

        for image in images {
            let x = maxWidth / 2 - image.size.width / 2
            let y = maxHeight / 2 - image.size.height / 2
            image.drawInRect(CGRectMake(x, y, image.size.width, image.size.height))
        }

        compositeImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
    }

    return compositeImage
}
You could also subclass UIImageView and override its drawRect() method to add some text as shown below; don't forget to call super() so the image is drawn as well. (Note that UIImageView is documented not to call drawRect: on its subclasses, so in practice this usually means subclassing a plain UIView that draws the image itself.)
let stringAttrs = [NSFontAttributeName: font,
                   NSForegroundColorAttributeName: textColor]
let attrStr = NSAttributedString(string: "hello", attributes: stringAttrs)
attrStr.drawAtPoint(CGPointMake(10, 10))

How to take screenshot of UIScrollView visible area?

How do I take a 1:1 screenshot of a UIScrollView's visible area? The content may be larger or smaller than the UIScrollView bounds, and may be half-hidden (I've implemented custom scrolling for smaller content, so it's not in the top-left corner).
I've achieved the desired result on the simulator, but not on the device itself:
- (UIImage *)imageFromCombinedContext:(UIView *)background {
    UIImage *image;
    CGRect vis = background.bounds;
    CGSize size = vis.size;

    UIGraphicsBeginImageContext(size);
    [background.layer affineTransform];
    [background.layer renderInContext:UIGraphicsGetCurrentContext()];
    image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    CGImageRef imref = CGImageCreateWithImageInRect([image CGImage], vis);
    image = [UIImage imageWithCGImage:imref];
    CGImageRelease(imref);

    return image;
}
Another approach would be to use the contentOffset to adjust the layer's visible area and capture only the currently visible area of UIScrollView.
UIScrollView *contentScrollView; // ... scroll view instance

UIGraphicsBeginImageContextWithOptions(contentScrollView.bounds.size,
                                       YES,
                                       [UIScreen mainScreen].scale);

// this is the key
CGPoint offset = contentScrollView.contentOffset;
CGContextTranslateCTM(UIGraphicsGetCurrentContext(), -offset.x, -offset.y);

[contentScrollView.layer renderInContext:UIGraphicsGetCurrentContext()];

UIImage *visibleScrollViewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Cheers :)
Swift version of Abduliam Rehmanius's answer.
func screenshot() -> UIImage {
    UIGraphicsBeginImageContextWithOptions(self.scrollCrop.bounds.size, true, UIScreen.mainScreen().scale)

    // this is the key
    let offset: CGPoint = self.scrollCrop.contentOffset
    CGContextTranslateCTM(UIGraphicsGetCurrentContext(), -offset.x, -offset.y)

    self.scrollCrop.layer.renderInContext(UIGraphicsGetCurrentContext()!)

    let visibleScrollViewImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    return visibleScrollViewImage
}
Swift 4 version:
func screenshot() -> UIImage {
    UIGraphicsBeginImageContextWithOptions(self.scrollCrop.bounds.size, false, UIScreen.main.scale)

    let offset = self.scrollCrop.contentOffset
    let thisContext = UIGraphicsGetCurrentContext()
    thisContext?.translateBy(x: -offset.x, y: -offset.y)

    self.scrollCrop.layer.render(in: thisContext!)

    let visibleScrollViewImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    return visibleScrollViewImage!
}
I've found a solution myself - I take a screenshot of the whole view and then crop it to the size and position of the UIScrollView frame.
- (UIImage *)imageFromCombinedContext:(UIView *)background
{
    UIImage *image;
    CGSize size = self.view.frame.size;

    UIGraphicsBeginImageContext(size);
    [background.layer affineTransform];
    [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
    image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    CGImageRef imgRef = CGImageCreateWithImageInRect([image CGImage], background.frame);
    image = [UIImage imageWithCGImage:imgRef];
    CGImageRelease(imgRef);

    return image;
}
Swift 4 version of Abduliam Rehmanius's answer, as a UIScrollView extension using the contentOffset translation (no slow cropping):
extension UIScrollView {
    var snapshotVisibleArea: UIImage? {
        UIGraphicsBeginImageContext(bounds.size)
        UIGraphicsGetCurrentContext()?.translateBy(x: -contentOffset.x, y: -contentOffset.y)
        layer.render(in: UIGraphicsGetCurrentContext()!)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image
    }
}
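Usage is then a one-liner; scrollView below stands for whatever UIScrollView instance you want to capture.
// Returns nil only if the graphics context could not be created.
let visibleImage = scrollView.snapshotVisibleArea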
Jeffery Sun has the right answer. Just put your scroll view inside another view, get the container view to render in context, and you're done.
In the code below, cropView contains the scroll view to be captured. The solution really is that simple.
As I understand the question (and why I found this page), the whole content of the scroll view isn't wanted - just the visible portion.
func captureCrop() -> UIImage {
    UIGraphicsBeginImageContextWithOptions(self.cropView.frame.size, true, 0.0)
    self.cropView.layer.renderInContext(UIGraphicsGetCurrentContext()!)
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return image
}
@Abduliam Rehmanius's answer has poor performance: if the UIScrollView contains a large content area, that entire content area is drawn, even outside the visible bounds.
@Concuror's answer has the issue that it will also draw anything that is on top of the UIScrollView.
My solution was to put the UIScrollView inside a UIView called containerView with the same bounds and then render containerView:
containerView.layer.renderInContext(context)
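A fuller sketch of that container approach in current Swift syntax; containerView is assumed to wrap the scroll view and share its bounds, and the function name is mine.
import UIKit

// Render only the container, which clips to the scroll view's visible bounds,
// so the full content area is never drawn.
func snapshotVisiblePortion(of containerView: UIView) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(containerView.bounds.size, false, UIScreen.main.scale)
    defer { UIGraphicsEndImageContext() }
    guard let context = UIGraphicsGetCurrentContext() else { return nil }
    containerView.layer.render(in: context)
    return UIGraphicsGetImageFromCurrentImageContext()
}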
Swift 3.0:
func captureScreen() -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(self.yourScrollViewName.bounds.size, true, UIScreen.main.scale)

    let offset: CGPoint = self.yourScrollViewName.contentOffset
    UIGraphicsGetCurrentContext()!.translateBy(x: -offset.x, y: -offset.y)

    self.yourScrollViewName.layer.render(in: UIGraphicsGetCurrentContext()!)

    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return image
}
and use it as:
let image = captureScreen()
Update of @Concuror's code for Swift 3+ / 4:
func getImage(fromCombinedContext background: UIView) -> UIImage {
    var image: UIImage?
    let size: CGSize = view.frame.size

    UIGraphicsBeginImageContext(size)
    background.layer.affineTransform()
    view.layer.render(in: UIGraphicsGetCurrentContext()!)
    image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    let imgRef = image?.cgImage?.cropping(to: background.frame)
    image = UIImage(cgImage: imgRef!)
    // CGImageRelease(imgRef!) // Not needed in Swift - Core Foundation objects are automatically memory managed

    return image ?? UIImage()
}
A lot of the answers use UIGraphicsBeginImageContext (pre-iOS 10.0) to create the image; this produces an image missing the P3 colour gamut (reference: https://stackoverflow.com/a/41288197/2481602).
extension UIScrollView {
    func asImage() -> UIImage {
        let renderer = UIGraphicsImageRenderer(bounds: bounds)
        return renderer.image { rendererContext in
            inputView?.layer.render(in: rendererContext.cgContext)
            layer.render(in: rendererContext.cgContext)
        }
    }
}
The above will result in a better-quality image.
The second image is clearer and shows more of the colours - it was produced with UIGraphicsImageRenderer rather than UIGraphicsBeginImageContext (first image).
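If you want the renderer-based approach but still only the visible area, the contentOffset translation from the earlier answers carries over; a hedged sketch combining the two (the method name is mine):
import UIKit

extension UIScrollView {
    // Capture only the visible portion using UIGraphicsImageRenderer,
    // translating by contentOffset as in the earlier answers.
    func visibleAreaImage() -> UIImage {
        let renderer = UIGraphicsImageRenderer(size: bounds.size)
        return renderer.image { ctx in
            ctx.cgContext.translateBy(x: -contentOffset.x, y: -contentOffset.y)
            layer.render(in: ctx.cgContext)
        }
    }
}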
