Force UIView to draw with specific scale (1 point = 1 pixel) - ios

I'm building a barcode printing feature: it generates a barcode view and then sends the pixel data to a thermal printer. The process is as follows:
Snapshot the UIView, sized 250x90 (points), into a UIImage:
let renderer = UIGraphicsImageRenderer(bounds: view.bounds)
let image = renderer.image { rendererContext in
    view.drawHierarchy(in: view.bounds, afterScreenUpdates: true)
}
Get the pixel data of the output image:
extension UIImage {
    func pixelData() -> [UInt8]? {
        let height = self.size.height
        let width = self.size.width
        let dataSize = width * height
        var pixelData = [UInt8](repeating: 0, count: Int(dataSize))
        let colorSpace = CGColorSpaceCreateDeviceGray()
        let bitmapInfo: UInt32 = 0
        let context = CGContext(data: &pixelData,
                                width: Int(width),
                                height: Int(height),
                                bitsPerComponent: 8,
                                bytesPerRow: Int(width),
                                space: colorSpace,
                                bitmapInfo: bitmapInfo)
        guard let cgImage = self.cgImage else { return nil }
        context?.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
        return pixelData
    }
}
Send pixelData to the printer (after some processing to convert it into printer data, e.g. which pixels are black/white/gray...).
The problem is that the output bitmap must be exactly 250x90 pixels so it fits on the label stamp. But on high-resolution iPhones with a 3x screen scale, calling pixelData() with 250x90 as the width/height downscales the output CGImage from the original cgImage (which has 750x270 pixels). Because of that downscaling, some black areas become gray and the barcode becomes unrecognizable.
I could pass image.scale into the pixelData() method, but then the pixel data would have the physical size of 750x270 pixels, which is too large to fit on the label stamp.
I also tried this way of creating the UIImage, but it still downscales and pixelates the output image:
// force 1.0 scale
UIGraphicsBeginImageContextWithOptions(bounds.size, isOpaque, 1.0)
drawHierarchy(in: bounds, afterScreenUpdates: true)
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return image
So the questions are:
Can I force the UIView to be drawn at 1x scale, so that 1 point = 1 pixel and everything after that works as expected?
Or can I adjust the pixel-data generation so that context.draw merges 3 pixels into 1?
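For reference, a minimal sketch of the 1x idea using UIGraphicsImageRendererFormat, which pins the renderer's scale independently of the screen (whether drawHierarchy keeps the barcode edges crisp at 1x would still need testing on-device):
let format = UIGraphicsImageRendererFormat()
format.scale = 1 // 1 point = 1 pixel, regardless of the device's screen scale
let renderer = UIGraphicsImageRenderer(bounds: view.bounds, format: format)
let image = renderer.image { _ in
    view.drawHierarchy(in: view.bounds, afterScreenUpdates: true)
}
// image.scale == 1, so the backing cgImage is 250x90 pixels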

Related

Generate gradient image texture on iOS

I need to generate a 256x1 texture on iOS. I have 4 colors, Red, Green, Yellow and Blue, with Red on one end and Blue on the other, and Yellow and Green at the 3/4 and 1/4 positions respectively. The colors in between need to be linearly interpolated. I need to use this texture in Metal for lookup in shader code. What is the easiest way to generate this texture in code?
Since MTKTextureLoader doesn't currently support the creation of 1D textures (this has been a feature request since at least 2016), you'll need to create your texture manually.
Assuming you already have your image loaded, you can ask it for its CGImage, then use this method to extract the pixel data and load it into a texture:
func texture1DForImage(_ cgImage: CGImage, device: MTLDevice) -> MTLTexture? {
    let width = cgImage.width
    let height = 1 // a 1D texture is one texel tall, so draw into a 1-pixel-high context
    let bytesPerRow = width * 4 // RGBA, 8 bits per component
    let bitmapInfo: UInt32 = /* default byte order | */ CGImageAlphaInfo.premultipliedLast.rawValue
    let colorSpace = CGColorSpace(name: CGColorSpace.sRGB)!
    let context = CGContext(data: nil,
                            width: width,
                            height: height,
                            bitsPerComponent: 8,
                            bytesPerRow: bytesPerRow,
                            space: colorSpace,
                            bitmapInfo: bitmapInfo)!
    let bounds = CGRect(x: 0, y: 0, width: width, height: height)
    context.draw(cgImage, in: bounds)
    guard let data = context.data?.bindMemory(to: UInt8.self, capacity: bytesPerRow) else { return nil }
    let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm,
                                                                     width: width,
                                                                     height: height,
                                                                     mipmapped: false)
    textureDescriptor.textureType = .type1D
    textureDescriptor.usage = [ .shaderRead ]
    let texture = device.makeTexture(descriptor: textureDescriptor)!
    texture.replace(region: MTLRegionMake1D(0, width), mipmapLevel: 0, withBytes: data, bytesPerRow: bytesPerRow)
    return texture
}
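Since the question asks for the gradient to be generated in code, the image file can also be skipped entirely. A sketch under that assumption (makeGradientTexture is an illustrative name; stops per the question: red at 0, green at 1/4, yellow at 3/4, blue at 1):
import UIKit
import Metal

func makeGradientTexture(device: MTLDevice) -> MTLTexture? {
    let width = 256
    let bytesPerRow = width * 4 // RGBA, 8 bits per component
    let colorSpace = CGColorSpace(name: CGColorSpace.sRGB)!
    guard let context = CGContext(data: nil, width: width, height: 1,
                                  bitsPerComponent: 8, bytesPerRow: bytesPerRow,
                                  space: colorSpace,
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue) else { return nil }
    let colors = [UIColor.red, .green, .yellow, .blue].map { $0.cgColor } as CFArray
    let locations: [CGFloat] = [0, 0.25, 0.75, 1]
    guard let gradient = CGGradient(colorsSpace: colorSpace, colors: colors, locations: locations) else { return nil }
    // Sweep the gradient horizontally across the single row of pixels
    context.drawLinearGradient(gradient,
                               start: CGPoint(x: 0, y: 0.5),
                               end: CGPoint(x: CGFloat(width), y: 0.5),
                               options: [])
    guard let data = context.data else { return nil }
    let descriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm,
                                                              width: width, height: 1,
                                                              mipmapped: false)
    descriptor.textureType = .type1D
    descriptor.usage = [.shaderRead]
    let texture = device.makeTexture(descriptor: descriptor)
    texture?.replace(region: MTLRegionMake1D(0, width), mipmapLevel: 0,
                     withBytes: data, bytesPerRow: bytesPerRow)
    return texture
}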

How to change UIImage color?

I'd like to change the color of every pixel of any UIImage to one specific color (all pixels should get the same color):
...so of course I could just loop through every pixel of the UIImage and set its red, green and blue components to 0 to achieve a black look.
But obviously this is not an efficient way to recolor an image, and I'm pretty sure there are several more effective methods than looping through EVERY single pixel of the image.
func recolorImage(image: UIImage, color: String) -> UIImage {
    let img: CGImage = image.cgImage!
    let context = CGContext(data: nil, width: img.width, height: img.height, bitsPerComponent: 8, bytesPerRow: 4 * img.width, space: CGColorSpaceCreateDeviceRGB(), bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)!
    context.draw(img, in: CGRect(x: 0, y: 0, width: img.width, height: img.height))
    let data = context.data!.assumingMemoryBound(to: UInt8.self)
    for i in 0..<img.height {
        for j in 0..<img.width {
            // set data[pixel] ==> [0,0,0,255]
            let pixel = 4 * (i * img.width + j)
            data[pixel] = 0       // red
            data[pixel + 1] = 0   // green
            data[pixel + 2] = 0   // blue
            data[pixel + 3] = 255 // alpha
        }
    }
    let output = context.makeImage()!
    return UIImage(cgImage: output)
}
Any help would be very much appreciated!
Since every pixel of the original image will be the same color, the result image is not dependent on the pixels of the original image. Your method actually just needs the size of the image and then creates a new image with that size, that is filled with one single color.
func recolorImage(image: UIImage, color: UIColor) -> UIImage {
    let size = image.size
    UIGraphicsBeginImageContext(size)
    color.setFill()
    UIRectFill(CGRect(origin: .zero, size: size))
    let image = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return image
}
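Call site, for illustration (someImage stands in for any input; only its size matters):
let blackedOut = recolorImage(image: someImage, color: .black)
// blackedOut has someImage's dimensions, filled entirely with black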

Reduce UIImage size by making it grayscale

I'm generating a PDF with 10 to 15 images the user has taken. The images are photos of documents and don't need to be colored.
If I simply use the UIImages the user has taken, and apply a very high compression rate
UIImageJPEGRepresentation(image, 0.02)
the PDF is about 3 MB (with colored images) on an iPhone 6.
To further reduce the file size, I would now like to convert all images to true grayscale (I do want to throw the color information away). I also found this note on GitHub:
Note that iOS/macOS do not support grayscale with alpha (you have to use an RGB image with all 3 channels set to the same value + alpha to get this effect).
I'm converting the images to grayscale like so:
guard let cgImage = self.cgImage else {
    return self
}
let height = self.size.height
let width = self.size.width
let colorSpace = CGColorSpaceCreateDeviceGray()
let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue)
let context = CGContext(data: nil, width: Int(width), height: Int(height), bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: bitmapInfo.rawValue)!
let rect = CGRect(x: 0, y: 0, width: width, height: height)
context.draw(cgImage, in: rect)
guard let grayscaleImage = context.makeImage() else {
    return self
}
return UIImage(cgImage: grayscaleImage)
However, when I try to compress the resulting images again with
UIImageJPEGRepresentation(image, 0.02)
I get the following logs:
JPEGDecompressSurface : Picture decode failed: e00002c2
and the images are displayed distorted. Any ideas on how I can get a small, true-grayscale image?
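One thing worth ruling out (a guess, not a confirmed fix): the conversion above sizes the context from self.size, which is in points, so on a 2x or 3x photo the draw also rescales the image. A sketch that works at the cgImage's native pixel dimensions instead, with grayscaled() as an illustrative name:
extension UIImage {
    func grayscaled() -> UIImage {
        guard let cgImage = self.cgImage else { return self }
        let width = cgImage.width   // pixel dimensions, not points
        let height = cgImage.height
        let colorSpace = CGColorSpaceCreateDeviceGray()
        guard let context = CGContext(data: nil, width: width, height: height,
                                      bitsPerComponent: 8, bytesPerRow: 0,
                                      space: colorSpace,
                                      bitmapInfo: CGImageAlphaInfo.none.rawValue) else { return self }
        context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
        guard let gray = context.makeImage() else { return self }
        return UIImage(cgImage: gray, scale: scale, orientation: imageOrientation)
    }
}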

Change color of kCGImageAlphaOnly rendered CGImage

I'm trying to take some huge 32-bit PNGs that are actually just black with an alpha channel, and present them in an iOS app in a memory-friendly way.
To do that I've tried to re-render the images in an "alpha-only" CGContext:
extension UIImage {
    func toLayer() -> CALayer? {
        let cgImage = self.cgImage!
        let height = Int(self.size.height)
        let width = Int(self.size.width)
        let colorSpace = CGColorSpaceCreateDeviceGray()
        let context = CGContext(data: nil, width: width, height: height, bitsPerComponent: 8, bytesPerRow: width, space: colorSpace, bitmapInfo: CGImageAlphaInfo.alphaOnly.rawValue)!
        context.draw(cgImage, in: CGRect(origin: .zero, size: self.size))
        let image = context.makeImage()!
        let layer = CALayer()
        layer.contents = image
        layer.contentsScale = self.scale
        return layer
    }
}
This is awesome! It takes memory usage down from 180MB to about 18MB, which is actually better than I expected.
The issue is, the black (or, now, opaque) parts of the image are no longer black, but are white instead.
It seems like it should be an easy fix to change the coloration of the opaque bits, but I can't find any information about it online. Do you have an idea?
I've managed to answer my own question. By setting the alpha-only image as the contents of the output layer's mask, we can set the background colour of the layer to anything we want (including non-greyscale values), and still keep the memory benefits!
I've included the final code because I'm surely not the only one interested in this method:
extension UIImage {
    func to8BitLayer(color: UIColor = .black) -> CALayer? {
        guard let cgImage = self.cgImage else { return nil }
        let height = Int(self.size.height * scale)
        let width = Int(self.size.width * scale)
        let colorSpace = CGColorSpaceCreateDeviceGray()
        guard let context = CGContext(data: nil, width: width, height: height, bitsPerComponent: 8, bytesPerRow: width, space: colorSpace, bitmapInfo: CGImageAlphaInfo.alphaOnly.rawValue) else {
            print("Couldn't create CGContext")
            return nil
        }
        context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
        guard let image = context.makeImage() else {
            print("Couldn't create image from context")
            return nil
        }
        // Note that self.size corresponds to the non-scaled (retina) dimensions, so is not the same size as the context
        let frame = CGRect(origin: .zero, size: self.size)
        let mask = CALayer()
        mask.contents = image
        mask.contentsScale = scale
        mask.frame = frame
        let layer = CALayer()
        layer.backgroundColor = color.cgColor
        layer.mask = mask
        layer.contentsScale = scale
        layer.frame = frame
        return layer
    }
}
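Usage, for illustration (the asset name here is assumed):
if let layer = UIImage(named: "huge-black-glyph")?.to8BitLayer(color: .red) {
    layer.frame.origin = CGPoint(x: 20, y: 20)
    view.layer.addSublayer(layer)
}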

Resizing to a higher size from a lower size using Swift's Core Graphics/Quartz Core resizing

I am trying to resize a 20x20 image to 24x24 using the Quartz Core resizing technique. I am not able to do so unless I give it an image of at least 24x24, which is the smallest size it accepts. Following is the code I am using for resizing:
func resizingwithQuadCore24x24(image: UIImage) -> UIImage {
    let cgImage = image.CGImage
    let width = 24
    let height = 24
    let bitsPerComponent = CGImageGetBitsPerComponent(cgImage)
    let bytesPerRow = CGImageGetBytesPerRow(cgImage)
    let colorSpace = CGImageGetColorSpace(cgImage)
    let bitmapInfo = CGImageGetBitmapInfo(cgImage)
    let context = CGBitmapContextCreate(nil, width, height, bitsPerComponent, bytesPerRow, colorSpace, bitmapInfo.rawValue)
    // print("The infos are bitsPerComponent: \(bitsPerComponent); bytesPerRow: \(bytesPerRow); colorSpace: \(colorSpace); bitmapInfo: \(bitmapInfo)")
    CGContextSetInterpolationQuality(context, CGInterpolationQuality.Default)
    CGContextSetAllowsAntialiasing(context, true)
    CGContextSetShouldAntialias(context, true)
    CGContextDrawImage(context, CGRect(origin: CGPointZero, size: CGSize(width: CGFloat(width), height: CGFloat(height))), cgImage)
    let scaledImage = CGBitmapContextCreateImage(context).flatMap { UIImage(CGImage: $0) }
    return scaledImage!
}
Does anyone know if it is possible to resize an image up from a smaller size?
The problem is that you're trying to get the bitmap info from the image itself before resizing. This means that the new height and width you pass into the bitmap context will not match the bits per components & bytes per row of the original image.
The solution is to calculate the bits per component and bytes per row based on your new image size. For example:
let width = 24
let height = 24
let bitsPerComponent = 8
let bytesPerRow = 4*width
This will give you a bitmap with 8 bits per component and 32 bits per pixel.
I would also recommend that you don't get the color space or bitmap info from the image itself – it's normally preferable to work in a bitmap context with a constant format that doesn't depend on the format of the image.
If you want a premultiplied RGBA bitmap context (with 32bpp), you should use the following for the color space and bitmap info:
let colorSpace = CGColorSpaceCreateDeviceRGB()
let bitmapInfo = CGBitmapInfo(rawValue: CGBitmapInfo.ByteOrder32Big.rawValue | CGImageAlphaInfo.PremultipliedLast.rawValue)
Finally, I'm not even sure you need to be using a manual bitmap context here, you could just use a UIGraphics image context.
For example:
// assuming width and height are in pixels here
let width = 24
let height = 24
let size = CGSize(width: CGFloat(width), height: CGFloat(height))
UIGraphicsBeginImageContext(size)
let ctx = UIGraphicsGetCurrentContext()
CGContextDrawImage(ctx, CGRect(origin: CGPointZero, size: size), cgImage)
let img = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
Side Note
I would advise against force unwrapping your UIImage at the end of your function, and instead return a UIImage?. This allows the caller of the function to deal with the optional nature of the result without crashing your program.
For example:
func resizingwithQuadCore24x24(image: UIImage) -> UIImage? {
    ...
    return CGBitmapContextCreateImage(context).map { UIImage(CGImage: $0) }
}
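For anyone on Swift 3 or later, the UIGraphics variant above translates roughly as follows (resized24x24 is an illustrative name; passing an explicit 1.0 scale keeps the output at 24x24 pixels):
func resized24x24(_ image: UIImage) -> UIImage? {
    let size = CGSize(width: 24, height: 24)
    UIGraphicsBeginImageContextWithOptions(size, false, 1) // opaque: false, scale: 1
    defer { UIGraphicsEndImageContext() }
    image.draw(in: CGRect(origin: .zero, size: size))
    return UIGraphicsGetImageFromCurrentImageContext()
}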
