Change color of kCGImageAlphaOnly rendered CGImage - iOS

I'm trying to take some huge 32-bit PNGs that are actually just black with an alpha channel and present them in an iOS app in a memory-friendly way.
To do that, I've tried re-rendering the images in an "alpha-only" CGContext:
extension UIImage {
    func toLayer() -> CALayer? {
        let cgImage = self.cgImage!
        let height = Int(self.size.height)
        let width = Int(self.size.width)
        let colorSpace = CGColorSpaceCreateDeviceGray()
        let context = CGContext(data: nil, width: width, height: height, bitsPerComponent: 8, bytesPerRow: width, space: colorSpace, bitmapInfo: CGImageAlphaInfo.alphaOnly.rawValue)!
        context.draw(cgImage, in: CGRect(origin: .zero, size: self.size))
        let image = context.makeImage()!

        let layer = CALayer()
        layer.contents = image
        layer.contentsScale = self.scale

        return layer
    }
}
This is awesome! It takes memory usage down from 180MB to about 18MB, which is actually better than I expected.
The issue is that the black (now opaque) parts of the image are no longer black; they're white instead.
It seems like changing the coloration of the opaque bits should be an easy fix, but I can't find any information about it online. Do you have an idea?

I've managed to answer my own question. By setting the alpha-only image as the contents of the output layer's mask, we can set the background colour of the layer to anything we want (including non-greyscale values) and still keep the memory benefits!
I've included the final code, since I'm surely not the only one interested in this method:
extension UIImage {
    func to8BitLayer(color: UIColor = .black) -> CALayer? {
        guard let cgImage = self.cgImage else { return nil }

        let height = Int(self.size.height * scale)
        let width = Int(self.size.width * scale)
        let colorSpace = CGColorSpaceCreateDeviceGray()
        guard let context = CGContext(data: nil, width: width, height: height, bitsPerComponent: 8, bytesPerRow: width, space: colorSpace, bitmapInfo: CGImageAlphaInfo.alphaOnly.rawValue) else {
            print("Couldn't create CGContext")
            return nil
        }
        context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
        guard let image = context.makeImage() else {
            print("Couldn't create image from context")
            return nil
        }

        // Note that self.size is in points, not pixels, so on retina screens
        // it is not the same size as the context
        let frame = CGRect(origin: .zero, size: self.size)

        let mask = CALayer()
        mask.contents = image
        mask.contentsScale = scale
        mask.frame = frame

        let layer = CALayer()
        layer.backgroundColor = color.cgColor
        layer.mask = mask
        layer.contentsScale = scale
        layer.frame = frame

        return layer
    }
}
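
For context, here's a minimal usage sketch of the method above, e.g. inside a view controller (the asset name and placement are illustrative, not from the original post):

if let image = UIImage(named: "big-black-glyph"), // hypothetical asset
   let layer = image.to8BitLayer(color: .red) {
    layer.frame.origin = CGPoint(x: 20, y: 40)
    view.layer.addSublayer(layer)
}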

Related

How to change color for point in UIImage?

I have code that changes the color of a certain point in an image to transparent.
func processByPixel(in image: UIImage, byPoint: CGPoint) -> UIImage? {
    guard let inputCGImage = image.cgImage else { print("unable to get cgImage"); return nil }
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let width = inputCGImage.width
    let height = inputCGImage.height
    let bytesPerPixel = 4
    let bitsPerComponent = 8
    let bytesPerRow = bytesPerPixel * width
    let bitmapInfo = RGBA32.bitmapInfo

    guard let context = CGContext(data: nil, width: width, height: height, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo) else {
        print("Cannot create context!"); return nil
    }
    context.draw(inputCGImage, in: CGRect(x: 0, y: 0, width: width, height: height))

    guard let buffer = context.data else { print("Cannot get context data!"); return nil }
    let pixelBuffer = buffer.bindMemory(to: RGBA32.self, capacity: width * height)

    let offset = Int(byPoint.x) * width + Int(byPoint.y)
    pixelBuffer[offset] = .transparent

    let outputCGImage = context.makeImage()!
    let outputImage = UIImage(cgImage: outputCGImage, scale: image.scale, orientation: image.imageOrientation)
    return outputImage
}
When the user taps the picture, I calculate the tapped point and pass it to this function.
The problem is that the color changes at a slightly offset location.
For example, I pass CGPoint(x: 0, y: 0), but the color changes at (0, 30).
I think the offset variable is not calculated correctly.
Calculate the offset like this:
let offset = Int(byPoint.y) * width + Int(byPoint.x)
The buffer is laid out row by row, so the index is the row number (y) times the row length (width), plus x.
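
As a quick sanity check with hypothetical numbers (a 100-pixel-wide image, tapping at (3, 2)):

import CoreGraphics

// Row-major layout: pixel (x, y) lives at index y * width + x.
let width = 100                                    // hypothetical image width in pixels
let point = CGPoint(x: 3, y: 2)
let offset = Int(point.y) * width + Int(point.x)   // 2 * 100 + 3 = 203

With the original (swapped) formula the index would be 3 * 100 + 2 = 302, i.e. a pixel on a different row, which matches the reported symptom.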

Force UIView to draw with specific scale (1 point = 1 pixel)

I'm building a barcode printing feature: it generates a barcode view and then sends the pixel data to a thermal printer. The process is below:
1. Snapshot a UIView of size 250x90 (points) into a UIImage:
let renderer = UIGraphicsImageRenderer(bounds: view.bounds)
let image = renderer.image { rendererContext in
    view.drawHierarchy(in: view.bounds, afterScreenUpdates: true)
}
2. Get the pixel data of the output image:
extension UIImage {
    func pixelData() -> [UInt8]? {
        let height = self.size.height
        let width = self.size.width
        let dataSize = width * height
        var pixelData = [UInt8](repeating: 0, count: Int(dataSize))
        let colorSpace = CGColorSpaceCreateDeviceGray()
        let bitmapInfo: UInt32 = 0
        let context = CGContext(data: &pixelData,
                                width: Int(width),
                                height: Int(height),
                                bitsPerComponent: 8,
                                bytesPerRow: Int(width),
                                space: colorSpace,
                                bitmapInfo: bitmapInfo)
        guard let cgImage = self.cgImage else { return nil }
        context?.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
        return pixelData
    }
}
3. Send pixelData to the printer (after some processing to convert it to printer data, e.g. deciding which pixels are black/white/gray).
The problem is that the output bitmap must be exactly 250x90 pixels so it fits on the label stamp. But on high-resolution iPhones with a 3x screen scale, calling pixelData() with 250x90 as width/height downscales the output CGImage from the original cgImage (which has 750x270 pixels). Because of the downscaling, some black areas become gray and the barcode becomes unrecognizable.
I could pass image.scale into pixelData(), but then the pixel data would have a physical size of 750x270 pixels, which is too large to fit on the label stamp.
I also tried creating the UIImage this way, but it still downscales and pixelates the output image:
// force 1.0 scale
UIGraphicsBeginImageContextWithOptions(bounds.size, isOpaque, 1.0)
drawHierarchy(in: bounds, afterScreenUpdates: true)
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return image
So the question is:
Can I force the UIView to be drawn at 1x scale, so that 1 point = 1 pixel and everything after that works as expected?
Or can I adjust the pixel-data generation so that context.draw merges 3 pixels into 1?
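
For the first option, one direction worth trying — a sketch, not a verified fix — is UIGraphicsImageRendererFormat, which lets you pin the rendering scale explicitly. Whether a 250x90-pixel render stays crisp enough for a barcode depends on how the view draws its content:

import UIKit

// Sketch: render the view at exactly 1 point = 1 pixel.
func snapshotAt1x(_ view: UIView) -> UIImage {
    let format = UIGraphicsImageRendererFormat()
    format.scale = 1 // force 1x regardless of the screen scale
    let renderer = UIGraphicsImageRenderer(bounds: view.bounds, format: format)
    return renderer.image { _ in
        view.drawHierarchy(in: view.bounds, afterScreenUpdates: true)
    }
}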

Render a CGImage (rather than UIImage) directly?

I'm making a CGImage
func otf() -> CGImage {
which is a bezier mask on a gradient. So,
// the path
let p = UIBezierPath()
p.moveTo etc
// the mask
let m = CAShapeLayer()
set size etc
m.path = p.cgPath
// the layer
let l = CAGradientLayer()
set colors etc
l.mask = m
it's done. So then render a UIImage in the usual way ...
UIGraphicsBeginImageContextWithOptions(sz.size, false, UIScreen.main.scale)
l.render(in: UIGraphicsGetCurrentContext()!)
let r = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()
But. Then to return it, of course you have to convert to a CGImage
return r.cgImage!
(Same deal if you use something like ..
UIGraphicsImageRenderer(bounds: b).image { (c) in v.layer.render(in: c) }
.. you get a UIImage, not a CGImage.)
Seems like there should be a better / more elegant way - is there some way to more directly "build a CGImage", rather than "building a UIImage, and then converting"?
You can create a CGContext yourself and render the layer into it; that produces a CGImage directly, with no intermediate UIImage. For example:
func createImage() -> CGImage? {
    // the path
    let p = UIBezierPath(arcCenter: CGPoint(x: 100, y: 100), radius: CGFloat(200), startAngle: CGFloat(0), endAngle: CGFloat(Double.pi * 2), clockwise: true)

    // the mask
    let m = CAShapeLayer()
    m.path = p.cgPath

    // the layer
    let layer = CAGradientLayer()
    layer.frame = CGRect(x: 0, y: 0, width: 200, height: 200)
    layer.colors = [UIColor.red.cgColor, UIColor.black.cgColor]
    layer.mask = m

    // the bitmap context to render into
    let imageSize = CGSize(width: 200, height: 200)
    let colorSpace = CGColorSpace(name: CGColorSpace.sRGB)!
    let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue)
    guard let context = CGContext(data: nil, width: Int(imageSize.width), height: Int(imageSize.height), bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: bitmapInfo.rawValue) else { return nil }

    // Note: a CGContext's origin is bottom-left; if the output appears flipped,
    // translate/scale the context before rendering.
    layer.render(in: context)
    return context.makeImage()
}
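
And a minimal usage sketch: you only wrap the CGImage at the point where UIKit actually needs a UIImage (imageView is an assumed UIImageView; scale/orientation are illustrative):

if let cg = createImage() {
    imageView.image = UIImage(cgImage: cg, scale: UIScreen.main.scale, orientation: .up)
}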

Crop image captured with camera session

I am trying to crop an image captured by the camera session to a specific rect of interest. To find a proportional crop rect I am using the previewLayer.metadataOutputRectConverted method, but after cropping I get the wrong ratio.
Debug example:
(lldb) po rectOfInterest.width / rectOfInterest.height
0.7941176470588235
(lldb) po image.size.width / image.size.height
0.75
(lldb) po outputRect.width / outputRect.height
0.9444444444444444
(lldb) po Double(cropped.width) / Double(cropped.height)
0.7080152671755725
As you can see, I expect the cropped image's ratio to be ~0.79, matching the rectOfInterest I'm cropping with.
Method:
private func makeImageCroppedToRectOfInterest(from image: UIImage) -> UIImage {
    let previewLayer = cameraController.previewLayer
    let rectOfInterest = layoutLayer.layoutRect
    let outputRect = previewLayer.metadataOutputRectConverted(fromLayerRect: rectOfInterest)

    guard let cgImage = image.cgImage else {
        return image
    }

    let width = CGFloat(cgImage.width)
    let height = CGFloat(cgImage.height)

    let cropRect = CGRect(x: outputRect.origin.x * width,
                          y: outputRect.origin.y * height,
                          width: outputRect.size.width * width,
                          height: outputRect.size.height * height)

    guard let cropped = cgImage.cropping(to: cropRect) else {
        return image
    }

    return UIImage(cgImage: cropped, scale: image.scale, orientation: image.imageOrientation)
}
Target rect: (screenshot omitted)
I'm not an expert, but I think you've misinterpreted how the metadataOutputRectConverted method is meant to be used.
There is no way your code could return a crop with the expected aspect ratio, because you are multiplying (what I believe are) essentially unrelated numbers together.
You can try this:
previewLayer.layerRectConverted(fromMetadataOutputRect: CGRect(x: 0, y: 0, width: 1, height: 1))
to get an idea of what calculations metadataOutputRectConverted actually performs.
I think you could explain better what you want to achieve, and maybe provide a sample project (or at the very least some more context on the actual images/rects you are using) to help us debug, if you want more help with this.
Thanks to @Enricoza I understood how to fix my problem. Here is the code:
private func makeImageCroppedToRectOfInterest(from image: UIImage) -> UIImage {
    let previewLayer = cameraController.previewLayer
    let rectOfInterest = layoutLayer.layoutRect

    // Converting the full metadata output rect (the unit rect) to layer
    // coordinates tells us where the whole image sits relative to the layer.
    let metadataOutputRect = CGRect(x: 0, y: 0, width: 1, height: 1)
    let outputRect = previewLayer.layerRectConverted(fromMetadataOutputRect: metadataOutputRect)

    guard let cgImage = image.cgImage else {
        return image
    }

    let width = image.size.width
    let height = image.size.height

    // Scale factors between layer points and image units; assuming the preview
    // uses .resizeAspectFill, the image overflows the layer on one axis, so the
    // larger factor is the one that applies.
    let factorX = width / outputRect.width
    let factorY = height / outputRect.height
    let factor = max(factorX, factorY)

    let cropRect = CGRect(x: (rectOfInterest.origin.x - outputRect.origin.x) * factor,
                          y: (rectOfInterest.origin.y - outputRect.origin.y) * factor,
                          width: rectOfInterest.size.width * factor,
                          height: rectOfInterest.size.height * factor)

    guard let cropped = cgImage.cropping(to: cropRect) else {
        return image
    }

    return UIImage(cgImage: cropped, scale: image.scale, orientation: image.imageOrientation)
}
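
To make the factor arithmetic concrete (purely hypothetical numbers, not from the original post):

// Hypothetical: the preview layer spans 375 points, the captured image is 1080 units wide.
let factor = 1080.0 / 375.0      // 2.88 image units per layer point
let cropWidth = 300.0 * factor   // a 300-point rect of interest maps to 864 image units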

Changing the color of a gif in Swift

I need to change the color of a gif programmatically. The original gif is white on a clear background, and I want to change its color in code.
I've tried the answer in this thread, but that causes the gif to freeze on its first frame. Here's some of the code I'm working with.
class MyViewController: UIViewController {
    @IBOutlet weak var myImageView: UIImageView?

    override func viewDidLoad() {
        super.viewDidLoad()
        myImageView?.image = UIImage.gif(name: "my_gif")
        myImageView?.image = myImageView?.image?.maskWithColor(color: UIColor.red)
        // This creates a red, but still, version of the gif
    }
}

extension UIImage {
    func maskWithColor(color: UIColor) -> UIImage? {
        let maskImage = cgImage!

        let width = size.width
        let height = size.height
        let bounds = CGRect(x: 0, y: 0, width: width, height: height)

        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue)
        let context = CGContext(data: nil, width: Int(width), height: Int(height), bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: bitmapInfo.rawValue)!

        context.clip(to: bounds, mask: maskImage)
        context.setFillColor(color.cgColor)
        context.fill(bounds)

        if let cgImage = context.makeImage() {
            let coloredImage = UIImage(cgImage: cgImage)
            return coloredImage
        } else {
            return nil
        }
    }
}
I'm currently using a different gif file for each color I need as a workaround, but that seems awfully inefficient. Any ideas?
Thanks in advance!
You can create a UIView and set its background color, then set the view's alpha value to 0.5 and place the view over the image view.
Like so:
let overlayView = UIView()
overlayView.frame = myImageView.frame
overlayView.backgroundColor = .red
overlayView.alpha = 0.5
view.addSubview(overlayView)
This way you're not touching the existing image view that already works.
Hope this helps.
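If you need the frames themselves to change color rather than overlaying a tint, another technique worth trying is template rendering: render each frame as a template image and drive the color from tintColor. A sketch, assuming you can obtain the individual frames (loadGifFrames() is a hypothetical helper, and tinting of animationImages has varied across iOS versions):

let frames: [UIImage] = loadGifFrames() // hypothetical helper returning the gif's frames
myImageView?.animationImages = frames.map { $0.withRenderingMode(.alwaysTemplate) }
myImageView?.tintColor = .red
myImageView?.animationDuration = 1.0
myImageView?.startAnimating()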
