Generate gradient image texture on iOS

I need to generate a 256x1 texture on iOS. I have 4 colors: Red, Green, Yellow, Blue, with Red on one end and Blue on the other, and Yellow and Green at the 3/4 and 1/4 positions respectively. The colors in between need to be linearly interpolated. I need to use this texture in Metal for lookup in shader code. What is the easiest way to generate this texture in code?

Since MTKTextureLoader doesn't currently support the creation of 1D textures (this has been a feature request since at least 2016), you'll need to create your texture manually.
Assuming you already have your image loaded, you can ask it for its CGImage, then use this method to extract the pixel data and load it into a texture:
import Metal
import CoreGraphics

func texture1DForImage(_ cgImage: CGImage, device: MTLDevice) -> MTLTexture? {
    let width = cgImage.width
    let height = 1 // the source image is 256x1
    let bytesPerRow = width * 4 // RGBA, 8 bits per component
    let bitmapInfo: UInt32 = CGImageByteOrderInfo.orderDefault.rawValue | CGImageAlphaInfo.premultipliedLast.rawValue
    let colorSpace = CGColorSpace(name: CGColorSpace.sRGB)!
    let context = CGContext(data: nil,
                            width: width,
                            height: height,
                            bitsPerComponent: 8,
                            bytesPerRow: bytesPerRow,
                            space: colorSpace,
                            bitmapInfo: bitmapInfo)!
    let bounds = CGRect(x: 0, y: 0, width: width, height: height)
    context.draw(cgImage, in: bounds)

    guard let data = context.data?.bindMemory(to: UInt8.self, capacity: bytesPerRow * height) else { return nil }

    // A 1D texture is described as a 1-pixel-tall 2D texture, then retyped.
    let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm,
                                                                     width: width,
                                                                     height: height,
                                                                     mipmapped: false)
    textureDescriptor.textureType = .type1D
    textureDescriptor.usage = [.shaderRead]
    let texture = device.makeTexture(descriptor: textureDescriptor)!
    texture.replace(region: MTLRegionMake1D(0, width), mipmapLevel: 0, withBytes: data, bytesPerRow: bytesPerRow)
    return texture
}
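Alternatively, since the gradient is defined by just four color stops, you don't strictly need a source image at all: you can compute the 256 RGBA pixels directly and upload them. Below is a minimal sketch (not from the answer above; the function name and stop layout are mine) that places red at 0.0, green at 0.25, yellow at 0.75 and blue at 1.0, with linear interpolation in between:

import Metal

// Sketch: build the 256x1 gradient lookup texture procedurally.
// Stops per the question: red 0.0, green 0.25, yellow 0.75, blue 1.0.
func makeGradientTexture(device: MTLDevice) -> MTLTexture? {
    let stops: [(position: Float, color: (r: Float, g: Float, b: Float))] = [
        (0.0,  (1, 0, 0)), // red
        (0.25, (0, 1, 0)), // green
        (0.75, (1, 1, 0)), // yellow
        (1.0,  (0, 0, 1)), // blue
    ]
    let width = 256
    var pixels = [UInt8](repeating: 255, count: width * 4) // alpha preset to 255
    for x in 0..<width {
        let t = Float(x) / Float(width - 1)
        // Find the pair of stops surrounding t.
        var lower = stops[0], upper = stops[stops.count - 1]
        for i in 0..<(stops.count - 1) where stops[i].position <= t && t <= stops[i + 1].position {
            lower = stops[i]
            upper = stops[i + 1]
        }
        let f = upper.position > lower.position
            ? (t - lower.position) / (upper.position - lower.position) : 0
        pixels[x * 4 + 0] = UInt8((lower.color.r + (upper.color.r - lower.color.r) * f) * 255)
        pixels[x * 4 + 1] = UInt8((lower.color.g + (upper.color.g - lower.color.g) * f) * 255)
        pixels[x * 4 + 2] = UInt8((lower.color.b + (upper.color.b - lower.color.b) * f) * 255)
    }
    let descriptor = MTLTextureDescriptor()
    descriptor.textureType = .type1D
    descriptor.pixelFormat = .rgba8Unorm
    descriptor.width = width
    descriptor.usage = .shaderRead
    guard let texture = device.makeTexture(descriptor: descriptor) else { return nil }
    texture.replace(region: MTLRegionMake1D(0, width), mipmapLevel: 0,
                    withBytes: pixels, bytesPerRow: width * 4)
    return texture
}

Sampling it in the shader then reduces to a single texture1d read with a normalized coordinate.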

Related

Force UIView to draw with specific scale (1 point = 1 pixel)

I'm building a barcode printing feature: it generates a barcode view and then sends the pixel data to a thermal printer. The process is as follows:
Snapshot the UIView (250x90 points) into a UIImage:
let renderer = UIGraphicsImageRenderer(bounds: view.bounds)
let image = renderer.image { rendererContext in
    view.drawHierarchy(in: view.bounds, afterScreenUpdates: true)
}
Get the pixel data of the output image:
extension UIImage {
    func pixelData() -> [UInt8]? {
        let height = self.size.height
        let width = self.size.width
        let dataSize = width * height
        var pixelData = [UInt8](repeating: 0, count: Int(dataSize))
        let colorSpace = CGColorSpaceCreateDeviceGray()
        let bitmapInfo: UInt32 = 0
        let context = CGContext(data: &pixelData,
                                width: Int(width),
                                height: Int(height),
                                bitsPerComponent: 8,
                                bytesPerRow: Int(width),
                                space: colorSpace,
                                bitmapInfo: bitmapInfo)
        guard let cgImage = self.cgImage else { return nil }
        context?.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
        return pixelData
    }
}
Send pixelData to the printer (after some processing that converts it into printer data, e.g. which pixels are black/white/gray...).
The problem is that the output bitmap must be exactly 250x90 pixels so it fits on the label stamp. But on high-resolution iPhones with a 3x screen scale, after calling pixelData() with 250x90 as width/height, the output CGImage is downscaled from the original cgImage (which has 750x270 pixels). Because of that downscaling, some black areas become gray and the barcode becomes unrecognizable.
I could pass image.scale into the pixelData() method, but then the pixel data would have a physical size of 750x270 pixels, which is too large to fit on the label stamp.
I also tried creating the UIImage this way, but it still downscales and pixelates the output image:
// force 1.0 scale
UIGraphicsBeginImageContextWithOptions(bounds.size, isOpaque, 1.0)
drawHierarchy(in: bounds, afterScreenUpdates: true)
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return image
So the question is:
Can I force the UIView to be drawn at 1x scale, so that 1 point = 1 pixel and everything after that works as expected?
Or can I adjust the pixel data generation so that context.draw merges 3 pixels into 1?
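For reference, the modern way to pin a snapshot at 1 point = 1 pixel is a UIGraphicsImageRendererFormat with scale = 1. A minimal sketch follows (equivalent in spirit to the UIGraphicsBeginImageContextWithOptions attempt above, so whether it avoids the pixelation depends on how the barcode view itself draws):

import UIKit

// Sketch: render the view at exactly 1x, so a 250x90-point view yields a
// 250x90-pixel image regardless of the device's screen scale.
func snapshotAt1x(_ view: UIView) -> UIImage {
    let format = UIGraphicsImageRendererFormat()
    format.scale = 1 // 1 point = 1 pixel
    let renderer = UIGraphicsImageRenderer(bounds: view.bounds, format: format)
    return renderer.image { _ in
        view.drawHierarchy(in: view.bounds, afterScreenUpdates: true)
    }
}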

Convert UIImage to 4 bits color space

I want to make my image as small as possible (in terms of data size, without changing its height or width), and I can do it at the expense of colors. So I want a 4-bit color scheme. How can I convert an image to this color scheme? How can I create it? I started by trying to convert to grayscale with this code:
func convert(image: UIImage) -> UIImage? {
    let imageSize = image.size
    let colorSpace: CGColorSpace = CGColorSpaceCreateDeviceGray()
    let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue)
    let context = CGContext(data: nil,
                            width: Int(imageSize.width),
                            height: Int(imageSize.height),
                            bitsPerComponent: 8,
                            bytesPerRow: 0,
                            space: colorSpace,
                            bitmapInfo: bitmapInfo.rawValue)!
    context.draw(image.cgImage!, in: CGRect(origin: .zero, size: imageSize))
    let imgRef = context.makeImage()
    return UIImage(cgImage: imgRef!)
}
and it's working. Now I want to modify it to use a different color space. I found that what I am looking for is probably an indexed color space, right? But I can't find any useful tutorial on how to do this. I started like this:
let table: [UInt8] = [0,0,0,  255,0,0,  0,0,255,  0,255,0,
                      255,255,0,  255,0,255,  0,255,255,  255,255,255]
let colorSpace = CGColorSpace(indexedBaseSpace: CGColorSpaceCreateDeviceRGB(),
                              last: table.count / 3 - 1, // index of the last color (7), not the last byte
                              colorTable: table)!
But I am not sure whether I started correctly and should continue, which values I should then use in the CGContext init method, and so on. Can someone help me? Thanks.
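One possible continuation, as a sketch under assumptions rather than a verified recipe: Core Graphics cannot render into an indexed bitmap context, but it can create a CGImage directly from packed index data. The helper below is hypothetical; it assumes RGBA8 input pixels and the 8-entry palette above, maps each pixel to the nearest palette color, and packs two 4-bit indices per byte:

import CoreGraphics
import Foundation

func indexedImage(fromRGBA pixels: [UInt8], width: Int, height: Int) -> CGImage? {
    let table: [UInt8] = [0,0,0,  255,0,0,  0,0,255,  0,255,0,
                          255,255,0,  255,0,255,  0,255,255,  255,255,255]
    guard let colorSpace = CGColorSpace(indexedBaseSpace: CGColorSpaceCreateDeviceRGB(),
                                        last: table.count / 3 - 1, // 8 colors, indices 0...7
                                        colorTable: table) else { return nil }
    let bytesPerRow = (width + 1) / 2 // two 4-bit indices per byte
    var indexed = [UInt8](repeating: 0, count: bytesPerRow * height)
    for y in 0..<height {
        for x in 0..<width {
            let p = (y * width + x) * 4
            // Nearest palette entry by squared RGB distance.
            var best = 0
            var bestDistance = Int.max
            for c in 0..<8 {
                let dr = Int(pixels[p])     - Int(table[c * 3])
                let dg = Int(pixels[p + 1]) - Int(table[c * 3 + 1])
                let db = Int(pixels[p + 2]) - Int(table[c * 3 + 2])
                let distance = dr * dr + dg * dg + db * db
                if distance < bestDistance { bestDistance = distance; best = c }
            }
            // High nibble for even x, low nibble for odd x.
            indexed[y * bytesPerRow + x / 2] |= UInt8(best) << (x % 2 == 0 ? 4 : 0)
        }
    }
    guard let provider = CGDataProvider(data: Data(indexed) as CFData) else { return nil }
    return CGImage(width: width, height: height,
                   bitsPerComponent: 4, bitsPerPixel: 4,
                   bytesPerRow: bytesPerRow, space: colorSpace,
                   bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue),
                   provider: provider, decode: nil,
                   shouldInterpolate: false, intent: .defaultIntent)
}

Note, though, that a 4-bit indexed CGImage mainly saves memory in this raw form; whether it shrinks the final file depends on the export format.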

Transparency of UIImage Is Lost when Converting to Texture

I am converting a UIImage to a texture, then converting that texture back to a UIImage. The transparent areas are replaced with black.
This is the function I am using to convert a UIImage to a texture:
func imageToTexture(imageNamed: String, device: MTLDevice) -> MTLTexture {
    let bytesPerPixel = 4
    let bitsPerComponent = 8
    let image = UIImage(named: imageNamed)!
    let width = Int(image.size.width)
    let height = Int(image.size.height)
    let bounds = CGRect(x: 0, y: 0, width: CGFloat(width), height: CGFloat(height))
    let rowBytes = width * bytesPerPixel
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let context = CGContext(data: nil, width: width, height: height,
                            bitsPerComponent: bitsPerComponent, bytesPerRow: rowBytes,
                            space: colorSpace,
                            bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue).rawValue)
    context!.clear(bounds)
    // Flip the context (note: this mirrors horizontally as well as vertically).
    context?.translateBy(x: CGFloat(width), y: CGFloat(height))
    context?.scaleBy(x: -1.0, y: -1.0)
    context?.draw(image.cgImage!, in: bounds)

    let texDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm,
                                                                 width: width, height: height,
                                                                 mipmapped: true)
    let texture = device.makeTexture(descriptor: texDescriptor)
    texture?.label = imageNamed
    let pixelsData = context?.data
    let region = MTLRegionMake2D(0, 0, width, height)
    texture?.replace(region: region, mipmapLevel: 0, withBytes: pixelsData!, bytesPerRow: rowBytes)
    _ = makeImage(from: texture!) // round-trip check
    return texture!
}
And this is the function converting the texture back to a UIImage. After converting the UIImage to a texture, I call this to convert it back, but the transparency is not preserved.
func makeImage(from texture: MTLTexture) -> UIImage? {
    let width = texture.width
    let height = texture.height
    let bytesPerRow = width * 4
    let data = UnsafeMutableRawPointer.allocate(byteCount: bytesPerRow * height, alignment: 4)
    defer {
        data.deallocate()
    }
    let region = MTLRegionMake2D(0, 0, width, height)
    texture.getBytes(data, bytesPerRow: bytesPerRow, from: region, mipmapLevel: 0)
    // Swizzle BGRA -> RGBA in place (vImage comes from the Accelerate framework).
    var buffer = vImage_Buffer(data: data, height: UInt(height), width: UInt(width), rowBytes: bytesPerRow)
    let map: [UInt8] = [2, 1, 0, 3]
    vImagePermuteChannels_ARGB8888(&buffer, &buffer, map, 0)
    guard let colorSpace = CGColorSpace(name: CGColorSpace.genericRGBLinear) else { return nil }
    guard let context = CGContext(data: data, width: width, height: height,
                                  bitsPerComponent: 8, bytesPerRow: bytesPerRow,
                                  space: colorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipLast.rawValue) else { return nil }
    guard let cgImage = context.makeImage() else { return nil }
    return UIImage(cgImage: cgImage)
}
Here the output image is black in the transparent areas. I don't know where I messed up.
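One observation that may explain it (an assumption, not a confirmed fix): in makeImage, the bitmapInfo .noneSkipLast tells Core Graphics to ignore the fourth channel, so any alpha that survived the round trip is thrown away and the pixels composite as opaque. A variant that keeps the alpha channel would swap that one argument:

// Sketch: preserve alpha when rebuilding the CGImage (the texture holds
// premultiplied alpha, since the upload context used premultipliedLast).
guard let context = CGContext(data: data, width: width, height: height,
                              bitsPerComponent: 8, bytesPerRow: bytesPerRow,
                              space: CGColorSpaceCreateDeviceRGB(),
                              bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue) else { return nil }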

How to convert bgra8Unorm iOS-Metal texture to rgba8Unorm texture?

I am working with iOS 11, Xcode 9 and Metal 2. I have an MTLTexture with pixel format bgra8Unorm. I cannot change this pixel format, because according to the pixelFormat documentation:
The pixel format for a Metal layer must be bgra8Unorm, bgra8Unorm_srgb, rgba16Float, BGRA10_XR, or bgra10_XR_sRGB.
The other pixel formats are not suitable for my application.
Now I want to create a UIImage from the texture. I am able to do so by extracting the pixel bytes from the texture (doc):
getBytes(_:bytesPerRow:bytesPerImage:from:mipmapLevel:slice:)
I am processing these bytes to get a UIImage:
func getUIImageForRGBAData(data: Data) -> UIImage? {
    let d = (data as NSData)
    let width = GlobalConfiguration.textureWidth
    let height = GlobalConfiguration.textureHeight
    let rowBytes = width * 4
    let size = rowBytes * height
    let pointer = malloc(size)
    memcpy(pointer, d.bytes, d.length)
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let context = CGContext(data: pointer, width: width, height: height,
                            bitsPerComponent: 8, bytesPerRow: rowBytes,
                            space: colorSpace,
                            bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)!
    let imgRef = context.makeImage()
    let image = UIImage(cgImage: imgRef!)
    return image
}
However, CGContext assumes that the pixels are in rgba8 format. For example, red texture pixels end up blue in the final UIImage. Is there a way to change the pixel format in this process to get the proper colors?
This function will swizzle the bytes of a .bgra8Unorm texture into RGBA order and create a UIImage from the data:
func makeImage(from texture: MTLTexture) -> UIImage? {
    let width = texture.width
    let height = texture.height
    let bytesPerRow = width * 4
    let data = UnsafeMutableRawPointer.allocate(byteCount: bytesPerRow * height, alignment: 4)
    defer {
        data.deallocate()
    }
    let region = MTLRegionMake2D(0, 0, width, height)
    texture.getBytes(data, bytesPerRow: bytesPerRow, from: region, mipmapLevel: 0)
    // Swizzle BGRA -> RGBA in place (vImage comes from the Accelerate framework).
    var buffer = vImage_Buffer(data: data, height: UInt(height), width: UInt(width), rowBytes: bytesPerRow)
    let map: [UInt8] = [2, 1, 0, 3]
    vImagePermuteChannels_ARGB8888(&buffer, &buffer, map, 0)
    guard let colorSpace = CGColorSpace(name: CGColorSpace.genericRGBLinear) else { return nil }
    guard let context = CGContext(data: data, width: width, height: height,
                                  bitsPerComponent: 8, bytesPerRow: bytesPerRow,
                                  space: colorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipLast.rawValue) else { return nil }
    guard let cgImage = context.makeImage() else { return nil }
    return UIImage(cgImage: cgImage)
}
Caveat: This function is very expensive. Creating an image from a Metal texture every frame is almost never what you want to do.
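One additional caveat, not part of the answer above: if the texture comes from a CAMetalLayer drawable, getBytes is only valid when the layer is not framebuffer-only, so you may need something like:

// Assumption: 'metalLayer' is your CAMetalLayer. Drawable textures are
// framebuffer-only by default and cannot be read back with getBytes.
metalLayer.framebufferOnly = false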

Reduce UIImage size by making it grayscale

I'm generating a PDF with 10 to 15 images the user has taken. The images are photos of documents and don't need to be in color.
If I simply use the UIImages the user has taken, with a very high compression rate
UIImageJPEGRepresentation(image, 0.02)
the PDF is about 3 MB (with color images) on an iPhone 6.
To further reduce the file size, I would now like to convert all images to true grayscale (I do want to throw the color information away). I also found this note on GitHub:
Note that iOS/macOS do not support grayscale with alpha (you have to use an RGB image with all 3 channels set to the same value + alpha to get this effect).
I'm converting the images to grayscale like so:
guard let cgImage = self.cgImage else {
    return self
}
let height = self.size.height
let width = self.size.width
let colorSpace = CGColorSpaceCreateDeviceGray()
let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue)
let context = CGContext(data: nil, width: Int(width), height: Int(height),
                        bitsPerComponent: 8, bytesPerRow: 0,
                        space: colorSpace, bitmapInfo: bitmapInfo.rawValue)!
let rect = CGRect(x: 0, y: 0, width: width, height: height)
context.draw(cgImage, in: rect)
guard let grayscaleImage = context.makeImage() else {
    return self
}
return UIImage(cgImage: grayscaleImage)
However, when I try to compress the resulting images again with
UIImageJPEGRepresentation(image, 0.02)
I get the following logs:
JPEGDecompressSurface : Picture decode failed: e00002c2
and the images are displayed distorted. Any ideas on how I can get a small, true-grayscale image?
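One small detail worth checking, though it is an assumption rather than a confirmed cause of the decode failure: UIImage(cgImage:) discards the source image's scale and orientation, which can confuse later consumers of the image. Preserving them costs nothing:

// Sketch: keep the original scale and orientation on the grayscale copy.
return UIImage(cgImage: grayscaleImage, scale: self.scale, orientation: self.imageOrientation)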
