iOS - Create a huge amount of UIImageViews without freezing the UI

I have to create a huge amount (~10,000) of UIImage objects, which are later added to a map using MapBox. The images are created by rendering a UILabel to a UIImage (sadly, the label cannot be rendered directly on the map in the right font).
I imagined that, because I am not adding the views to the hierarchy until they are rendered on the map, I could create the views on a background thread so the UI does not freeze. However, this seems not to be possible.
My question is: is there a way to create a huge amount of UIImage objects by rendering a UILabel to an image without freezing the UI? Thanks for any help!
My code to render a UIView to an image is as follows:
private func render(_ view: UIView) -> UIImage? {
    UIGraphicsBeginImageContext(view.frame.size)
    guard let currentContext = UIGraphicsGetCurrentContext() else { return nil }
    view.layer.render(in: currentContext)
    return UIGraphicsGetImageFromCurrentImageContext()
}
And an example of a label I render to an image:
bigItemList.forEach { item in
    // work to process the item..
    // I have to call the rendering on the main thread, which causes the UI to freeze
    DispatchQueue.main.async {
        addImage(createLabel(text: label))
    }
}

func createLabel(text: String?) -> UIImage? {
    let label = UILabel()
    label.textAlignment = .center
    label.font = .boldSystemFont(ofSize: 14)
    label.textColor = .mainBlue
    label.text = text
    label.sizeToFit()
    return render(label)
}

UILabel is not safe to access on any non-main thread. The tool you want here is CATextLayer, which does all the same things (with a slightly clunkier syntax) while being thread-safe. For example:
func createLabel(text: String) -> UIImage? {
    let label = CATextLayer()
    let uiFont = UIFont.boldSystemFont(ofSize: 14)
    label.font = CGFont(uiFont.fontName as CFString)
    label.fontSize = 14
    label.foregroundColor = UIColor.blue.cgColor
    label.string = text
    label.bounds = CGRect(origin: .zero, size: label.preferredFrameSize())
    let renderer = UIGraphicsImageRenderer(size: label.bounds.size)
    return renderer.image { context in
        label.render(in: context.cgContext)
    }
}
This is safe to run on any queue.
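Since that rendering is thread-safe, the whole batch can be produced off the main queue. A minimal sketch of a driver (assuming the `createLabel(text:)` above, plus the question's `addImage(_:)` and `bigItemList`, which are not shown here):

```swift
import UIKit

// Sketch: render every label image on a background queue, then hand the
// finished images to the main thread in one batch.
// `createLabel(text:)` is the CATextLayer-based function above;
// `addImage(_:)` is assumed to be the asker's main-thread map helper.
func renderAllLabels(_ texts: [String], addImage: @escaping (UIImage) -> Void) {
    DispatchQueue.global(qos: .userInitiated).async {
        // Safe off-main: CATextLayer rendering does not touch UIKit view state.
        let images = texts.compactMap { createLabel(text: $0) }
        DispatchQueue.main.async {
            // Only the delivery to the map happens on the main thread.
            images.forEach(addImage)
        }
    }
}
```

For ~10,000 items you could additionally parallelize the background work with `DispatchQueue.concurrentPerform(iterations:)`, or deliver the images in chunks so the map starts populating before the whole batch is done.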

You are most probably already in the main queue. Try executing the loop in another one:
DispatchQueue(label: "Other queue").async {
    bigItemList.forEach { item in
        // work to process the item..
        // I have to call the rendering on the main thread, which causes the UI to freeze
        DispatchQueue.main.async {
            addImage(createLabel(text: label))
        }
    }
}
EDIT: If it is already in another queue, you are also missing the UIGraphicsEndImageContext() call in render:
private func render(_ view: UIView) -> UIImage? {
    UIGraphicsBeginImageContext(view.frame.size)
    guard let currentContext = UIGraphicsGetCurrentContext() else { return nil }
    view.layer.render(in: currentContext)
    let img = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return img
}
EDIT: Another thing I see is that the label allocation is the most time-consuming part of your loop. Try allocating it once, since it's reusable:
let label = UILabel()

func createLabel(text: String?) -> UIImage? {
    label.textAlignment = .center
    ...
    ...
}
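Spelled out, that reuse idea might look like this (a sketch only; the project's custom `.mainBlue` is replaced with a stock color, and `render(_:)` is the asker's function from the question):

```swift
import UIKit

// One shared label, reconfigured per item instead of allocated ~10,000 times.
// Must still run on the main thread, since it is a UILabel.
let sharedLabel = UILabel()

func createLabel(text: String?) -> UIImage? {
    sharedLabel.textAlignment = .center
    sharedLabel.font = .boldSystemFont(ofSize: 14)
    sharedLabel.textColor = .blue // stand-in for the project's .mainBlue
    sharedLabel.text = text
    sharedLabel.sizeToFit() // resize to fit the new text before rendering
    return render(sharedLabel)
}
```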

Related

UIView take a screenshot when it is not visible

I have a UIView that can be drawn like a finger paint application, but sometimes it is not visible. I want to be able to take a screenshot of it when it is not visible. Also, I want a screenshot where it is visible, but I don't want any subviews. I just want the UIView itself. This is the method I have tried:
func shapshot() -> UIImage? {
    UIGraphicsBeginImageContext(self.frame.size)
    guard let context = UIGraphicsGetCurrentContext() else {
        return nil
    }
    self.layer.render(in: context)
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    if image == nil {
        return nil
    }
    return image
}

func snapshot() -> UIImage {
    UIGraphicsBeginImageContextWithOptions(bounds.size, self.isOpaque, UIScreen.main.scale)
    layer.render(in: UIGraphicsGetCurrentContext()!)
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return image!
}
To get a view rendered as a UIImage, you could introduce a very simple protocol and extend UIView with it.
protocol Renderable {
    var render: UIImage { get }
}

extension UIView: Renderable {
    var render: UIImage {
        UIGraphicsImageRenderer(bounds: bounds).image { context in
            layer.render(in: context.cgContext)
        }
    }
}
and now it's super easy to get the image of any view
let image: UIImage = someView.render
then if you plan to share it or save it, you probably want to convert it to Data
let data: Data? = image.pngData()
I am not sure what you mean by "when it is not visible", but this should work as long as the view is in the view hierarchy and is properly laid out. I have been using this method in many apps for sharing stuff and it has never failed me.
And of course there is no need for a protocol, feel free to use only the render computed property. It's just a matter of preference.
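If the view genuinely is detached from the hierarchy, one common workaround (my own sketch, not part of the answer above) is to force a layout pass at an explicit size before rendering:

```swift
import UIKit

// Render a detached (off-screen) view by laying it out manually before drawing.
func renderOffscreen(_ view: UIView, size: CGSize) -> UIImage {
    view.frame = CGRect(origin: .zero, size: size)
    view.setNeedsLayout()
    view.layoutIfNeeded() // run Auto Layout even though the view has no window
    return UIGraphicsImageRenderer(bounds: view.bounds).image { context in
        view.layer.render(in: context.cgContext)
    }
}
```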
Documentation:
UIGraphicsImageRenderer, image(actions:)

UILabel text duplicates in PDF when containing a certain character

I'm generating a PDF with something like:
UIGraphicsBeginPDFContextToFile
layer.render(in: context)
UIGraphicsEndPDFContext
Yesterday, I found a fix to make all text vector-based (basically this answer https://stackoverflow.com/a/9056861/897465).
That worked really well, text is vectors and searchable, etc.
Except for one irritating thing.
Every time a UILabel contains the character "Å", the text is kind of duplicated in the PDF. Probably there are other characters as well that would cause this.
I've got a small example that demonstrates it below. If you run this in the simulator you'll get a pdf in /tmp/test.pdf where you can see the issue yourself.
I guess I should file a rdar, but I would really like a good workaround (good meaning not checking whether label.text contains "Å"), since I don't think this is something Apple would fix, considering the whole thing is a workaround to begin with (PDFLabel).
import UIKit

class PDFLabel: UILabel {
    override func draw(_ layer: CALayer, in ctx: CGContext) {
        let isPDF = !UIGraphicsGetPDFContextBounds().isEmpty
        if !layer.shouldRasterize && isPDF {
            draw(bounds)
        } else {
            super.draw(layer, in: ctx)
        }
    }
}
func generatePDFWith(_ texts: [String]) {
    let paper = CGRect(origin: .zero, size: CGSize(width: 876, height: 1239))
    UIGraphicsBeginPDFContextToFile("/tmp/test.pdf", paper, [
        kCGPDFContextCreator as String: "SampleApp"
    ])
    texts.forEach { text in
        UIGraphicsBeginPDFPage()
        let v = UIView()
        let label = PDFLabel()
        label.text = text
        label.textColor = .black
        v.translatesAutoresizingMaskIntoConstraints = false
        label.translatesAutoresizingMaskIntoConstraints = false
        v.addSubview(label)
        v.widthAnchor.constraint(equalToConstant: 500).isActive = true
        v.heightAnchor.constraint(equalToConstant: 500).isActive = true
        label.centerXAnchor.constraint(equalTo: v.centerXAnchor).isActive = true
        label.centerYAnchor.constraint(equalTo: v.centerYAnchor).isActive = true
        v.setNeedsLayout()
        v.layoutIfNeeded()
        v.layer.render(in: UIGraphicsGetCurrentContext()!)
    }
    UIGraphicsEndPDFContext()
    print("Done!")
}

class ViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        generatePDFWith([
            "Ångavallen",
            "Hey Whatsapp?",
            "Änglavallen",
            "Örebro",
            "Råå",
            "RÅÅ",
            "Å",
            "Ål",
        ])
    }
}
EDIT: A little progress in debugging: it seems the draw function gets called twice, and the first time the text is drawn a little "off" if it contains an "Å" (probably any character taller than the box).
So this fixed it for me (however ugly it may be):
class PDFLabel: UILabel {
    var count = 0
    override func draw(_ layer: CALayer, in ctx: CGContext) {
        let isPDF = !UIGraphicsGetPDFContextBounds().isEmpty
        if isPDF {
            if count > 0 { draw(bounds) }
            count += 1
        } else if !layer.shouldRasterize {
            draw(bounds)
        } else {
            super.draw(layer, in: ctx)
        }
    }
}

UIImagePickerController cropping image rect is not correct

I have a UIViewController that holds the image picker:
let picker = UIImagePickerController()
and I call the image picker like that:
private func showCamera() {
    picker.allowsEditing = true
    picker.sourceType = .camera
    picker.cameraCaptureMode = .photo
    picker.modalPresentationStyle = .fullScreen
    present(picker, animated: true, completion: nil)
}
when I'm done I get a delegate callback like that:
func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) {
    DispatchQueue.main.async {
        if let croppedImage = info[UIImagePickerControllerEditedImage] as? UIImage {
            self.imageView.contentMode = .scaleAspectFill
            self.imageView.image = croppedImage
            self.dismiss(animated: true, completion: nil)
        }
    }
}
and I get the cropping UI after taking the photo; you can see the behaviour in this video:
https://youtu.be/OaJnsjrlwF8
As you can see, I can not scroll the zoomed rect to the bottom or top. This behaviour is reproducible on iOS 10/11 on multiple devices.
Is there any way to get this right with UIImagePickerController?
No, this component has been bugged for quite some time now. It's not only this positioning issue: the cropped rect is also usually incorrect (off by some 20px vertically).
It seems Apple has no interest in fixing it, so you should create your own. It is not too much work. Start by creating a screen that accepts and displays an image in a scroll view. Then ensure zoom and pan are working (maybe even rotation), which should all be done pretty quickly.
Then comes the cropping part, which is actually done quickest by using a view snapshot.
The following will create an image from any view:
func snapshotImageFor(view: UIView) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, false, 0.0)
    guard let context = UIGraphicsGetCurrentContext() else {
        return nil
    }
    view.layer.render(in: context)
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return image
}
Then in your view controller you can do this little trick:
func createSnapshot(inFrame rect: CGRect) -> UIImage? {
    let temporaryView = UIView(frame: rect) // The view from which the snapshot will occur
    temporaryView.clipsToBounds = true
    view.addSubview(temporaryView) // We want to put it into the hierarchy
    guard let viewToSnap = scrollViewContainer else { return nil } // Use the scroll view's superview; snapshotting the scroll view directly may have issues
    let originalImageViewFrame = viewToSnap.frame // Preserve previous frame
    guard let originalImageViewSuperview = viewToSnap.superview else { return nil } // Preserve previous superview
    guard let index = originalImageViewSuperview.subviews.index(of: viewToSnap) else { return nil } // Preserve view hierarchy index
    // Now change the frame and put it on the new view
    viewToSnap.frame = originalImageViewSuperview.convert(originalImageViewFrame, to: temporaryView)
    temporaryView.addSubview(viewToSnap)
    // Create snapshot
    let croppedImage = snapshotImageFor(view: temporaryView)
    // Put everything back the way it was
    viewToSnap.frame = originalImageViewFrame // Reset frame
    originalImageViewSuperview.insertSubview(viewToSnap, at: index) // Reset superview
    temporaryView.removeFromSuperview() // Remove the temporary view
    self.croppedImage = croppedImage
    return croppedImage
}
There are some downsides to this procedure, like doing everything on the main thread, but for your specific use case this should not be a problem at all.
You might at some point want some control over the output image size. The easiest way is to modify the snapshot method to accept a custom scale:
func snapshotImageFor(view: UIView, scale: CGFloat = 0.0) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, false, scale)
Then you would for instance call snapshotImageFor(view: view, scale: expectedWidth/view.bounds.width).

adding catextlayers to calayer in a background task

I have a situation, which I do not understand.
I would like to create CATextLayers in a background task and then show them in my view, because creating them takes some time.
This works perfectly without a background task; I can see the text immediately in my view "worldmapview":
@IBOutlet var worldmapview: Worldmapview! // This view is just an empty view.

override func viewDidLoad(){
    view.addSubview(worldmapview);
}

func addlayerandrefresh_direct(){
    var calayer=CALayer();
    var newtextlayer:CATextLayer=create_text_layer(0,y1: 0,width:1000,heigth:1000,text:"My Text ....",fontsize:5);
    self.worldmapview.layer.addSublayer(newtextlayer)
    //calayer.addSublayer(newtextlayer);
    calayer.addSublayer(newtextlayer)
    self.worldmapview.layer.addSublayer(calayer);
    self.worldmapview.setNeedsDisplay();
}
When doing this in a background task, the text does not appear in my view. Sometimes (not always) it appears after a few seconds (10, for example).
func addlayerandrefresh_background(){
    let qualityOfServiceClass = QOS_CLASS_BACKGROUND
    let backgroundQueue = dispatch_get_global_queue(qualityOfServiceClass, 0)
    dispatch_async(backgroundQueue, {
        var calayer=CALayer();
        var newtextlayer:CATextLayer=create_text_layer(0,y1: 0,width:1000,heigth:1000,text:"My Text ....",fontsize:5);
        dispatch_async(dispatch_get_main_queue(),{
            self.worldmapview.layer.addSublayer(newtextlayer)
            //calayer.addSublayer(newtextlayer);
            calayer.addSublayer(newtextlayer)
            self.worldmapview.layer.addSublayer(calayer);
            self.worldmapview.setNeedsDisplay();
        })
    })
}
func create_text_layer(x1:CGFloat, y1:CGFloat, width:CGFloat, heigth:CGFloat, text:String, fontsize:CGFloat) -> CATextLayer {
    let textLayer = CATextLayer()
    textLayer.frame = CGRectMake(x1, y1, width, heigth)
    textLayer.string = text
    let fontName: CFStringRef = "ArialMT"
    textLayer.font = CTFontCreateWithName(fontName, fontsize, nil)
    textLayer.fontSize = fontsize
    textLayer.foregroundColor = UIColor.darkGrayColor().CGColor
    textLayer.wrapped = true
    textLayer.alignmentMode = kCAAlignmentLeft
    textLayer.contentsScale = UIScreen.mainScreen().scale
    return textLayer
}
Does someone see what is wrong?
What is very confusing: doing the same with CAShapeLayer works in the background.
It looks like setNeedsDisplay does not cause sublayers to redraw; you need to call it on each layer that should draw, in this case the newly added text layers:
func addlayerandrefresh_background(){
    let calayer = CALayer()
    calayer.frame = self.worldmapview.bounds
    dispatch_async(backgroundQueue, {
        for var i = 0; i < 100; i += 10 {
            let newtextlayer:CATextLayer = self.create_text_layer(0, y1: CGFloat(i), width:200, heigth:200, text:"My Text ....", fontsize:5)
            calayer.addSublayer(newtextlayer)
        }
        dispatch_async(dispatch_get_main_queue(),{
            self.worldmapview.layer.addSublayer(calayer)
            for l in calayer.sublayers! {
                l.setNeedsDisplay()
            }
        })
    })
}
All changes to the UI must be performed on the main thread; you cannot update the user interface from a background thread. According to Apple's documentation:
"Work involving views, Core Animation, and many other UIKit classes usually must occur on the app's main thread. There are some exceptions to this rule—for example, image-based manipulations can often occur on background threads—but when in doubt, assume that work needs to happen on the main thread."
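Putting the two answers together in modern Swift syntax, the safe pattern is: build the CATextLayers off the main thread, then attach them and call setNeedsDisplay on each from the main thread. A sketch (the layout constants mirror the question's example and are otherwise arbitrary):

```swift
import UIKit

// Build CATextLayers in the background, then attach and redraw them on main.
// Call this from the main thread (it reads UIScreen state up front).
func addLayersInBackground(to hostLayer: CALayer, texts: [String]) {
    let scale = UIScreen.main.scale // capture UIKit state before leaving main
    DispatchQueue.global(qos: .userInitiated).async {
        // Safe off-main: these layers are not yet part of the live layer tree.
        let layers: [CATextLayer] = texts.enumerated().map { index, text in
            let textLayer = CATextLayer()
            textLayer.string = text
            textLayer.fontSize = 5
            textLayer.contentsScale = scale
            textLayer.frame = CGRect(x: 0, y: CGFloat(index) * 10, width: 200, height: 200)
            return textLayer
        }
        DispatchQueue.main.async {
            // Mutating the on-screen layer tree must happen on the main thread.
            for layer in layers {
                hostLayer.addSublayer(layer)
                layer.setNeedsDisplay() // force the text to draw, per the first answer
            }
        }
    }
}
```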

Make emoji symbols grayscale in UILabel

I would like to use Apple's built-in emoji characters (specifically, several of the smileys, e.g. \ue415) in a UILabel but I would like the emojis to be rendered in grayscale.
I want them to remain characters in the UILabel (either plain text or attributed is fine). I'm not looking for a hybrid image / string solution (which I already have).
Does anyone know how to accomplish this?
I know you said you aren't looking for a "hybrid image solution", but I have been chasing this dragon for a while and the best result I could come up with IS a hybrid. Just in case my solution is somehow more helpful on your journey, I am including it here. Good luck!
import UIKit
import QuartzCore

class ViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // the target label to apply the effect to
        let label = UILabel(frame: view.frame)
        // create label text with emoji
        label.text = "🍑 HELLO"
        label.textAlignment = .center
        // set to red to further show the greyscale change
        label.textColor = .red
        // calls our extension to get an image of the label
        let image = UIImage.imageWithLabel(label: label)
        // create a tonal filter
        let tonalFilter = CIFilter(name: "CIPhotoEffectTonal")
        // get a CIImage for the filter from the label image
        let imageToBlur = CIImage(cgImage: image.cgImage!)
        // set that image as the input for the filter
        tonalFilter?.setValue(imageToBlur, forKey: kCIInputImageKey)
        // get the resultant image from the filter
        let outputImage: CIImage? = tonalFilter?.outputImage
        // create an image view to show the result
        let tonalImageView = UIImageView(frame: view.frame)
        // set the image from the filter into the new view
        tonalImageView.image = UIImage(ciImage: outputImage ?? CIImage())
        // add the view to our hierarchy
        view.addSubview(tonalImageView)
    }
}

extension UIImage {
    class func imageWithLabel(label: UILabel) -> UIImage {
        UIGraphicsBeginImageContextWithOptions(label.bounds.size, false, 0.0)
        label.layer.render(in: UIGraphicsGetCurrentContext()!)
        let img = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return img!
    }
}