How to convert a bgra8Unorm iOS Metal texture to an rgba8Unorm texture?

I am working with iOS 11, Xcode 9, and Metal 2. I have an MTLTexture with pixel format bgra8Unorm. I cannot change this pixel format, because according to the pixelFormat documentation:
The pixel format for a Metal layer must be bgra8Unorm, bgra8Unorm_srgb, rgba16Float, BGRA10_XR, or bgra10_XR_sRGB.
The other pixel formats are not suitable for my application.
Now I want to create a UIImage from the texture. I am able to do so by extracting the pixel bytes from the texture (doc):
getBytes(_:bytesPerRow:bytesPerImage:from:mipmapLevel:slice:)
I then process these bytes to get a UIImage:
func getUIImageForRGBAData(data: Data) -> UIImage? {
    let d = (data as NSData)
    let width = GlobalConfiguration.textureWidth
    let height = GlobalConfiguration.textureHeight
    let rowBytes = width * 4
    let size = rowBytes * height
    let pointer = malloc(size)
    memcpy(pointer, d.bytes, d.length)
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let context = CGContext(data: pointer, width: width, height: height, bitsPerComponent: 8, bytesPerRow: rowBytes, space: colorSpace, bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)!
    let imgRef = context.makeImage()
    let image = UIImage(cgImage: imgRef!)
    return image
}
However, CGContext assumes that the pixels are in the rgba8 format. For example, red texture pixels are blue in the final UIImage. Is there a way to change the pixelFormat in this process to get the proper colors?

This function will swizzle the bytes of a .bgra8Unorm texture into RGBA order and create a UIImage from the data:
import Accelerate
import Metal
import UIKit

func makeImage(from texture: MTLTexture) -> UIImage? {
    let width = texture.width
    let height = texture.height
    let bytesPerRow = width * 4
    let data = UnsafeMutableRawPointer.allocate(bytes: bytesPerRow * height, alignedTo: 4)
    defer {
        data.deallocate(bytes: bytesPerRow * height, alignedTo: 4)
    }
    // Copy the texture's contents into CPU-accessible memory.
    let region = MTLRegionMake2D(0, 0, width, height)
    texture.getBytes(data, bytesPerRow: bytesPerRow, from: region, mipmapLevel: 0)
    // Swizzle BGRA -> RGBA in place with vImage.
    var buffer = vImage_Buffer(data: data, height: UInt(height), width: UInt(width), rowBytes: bytesPerRow)
    let map: [UInt8] = [2, 1, 0, 3]
    vImagePermuteChannels_ARGB8888(&buffer, &buffer, map, 0)
    guard let colorSpace = CGColorSpace(name: CGColorSpace.genericRGBLinear) else { return nil }
    guard let context = CGContext(data: data, width: width, height: height, bitsPerComponent: 8, bytesPerRow: bytesPerRow,
                                  space: colorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipLast.rawValue) else { return nil }
    guard let cgImage = context.makeImage() else { return nil }
    return UIImage(cgImage: cgImage)
}
Caveat: This function is very expensive. Creating an image from a Metal texture every frame is almost never what you want to do.
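For reference, the [2, 1, 0, 3] map selects, for each destination channel, the source channel to copy from. The same permutation can be sketched in plain Swift (illustrative only; for real frames, vImage is the far faster choice):

```swift
// The map [2, 1, 0, 3] picks source channel 2 (R in BGRA) for output
// channel 0, channel 1 (G) for output 1, channel 0 (B) for output 2,
// and channel 3 (A) for output 3 -- i.e. BGRA -> RGBA.
func swizzleBGRAtoRGBA(_ bytes: [UInt8]) -> [UInt8] {
    var out = bytes
    for i in stride(from: 0, to: bytes.count, by: 4) {
        out[i]     = bytes[i + 2] // R
        out[i + 1] = bytes[i + 1] // G
        out[i + 2] = bytes[i]     // B
        out[i + 3] = bytes[i + 3] // A
    }
    return out
}

// One opaque red pixel stored as BGRA:
let bgra: [UInt8] = [0, 0, 255, 255]
print(swizzleBGRAtoRGBA(bgra)) // [255, 0, 0, 255]
```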

Related

How to change color for point in UIImage?

I have a code that changes the color of a certain point in an image to transparent.
func processByPixel(in image: UIImage, byPoint: CGPoint) -> UIImage? {
    guard let inputCGImage = image.cgImage else { print("unable to get cgImage"); return nil }
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let width = inputCGImage.width
    let height = inputCGImage.height
    let bytesPerPixel = 4
    let bitsPerComponent = 8
    let bytesPerRow = bytesPerPixel * width
    let bitmapInfo = RGBA32.bitmapInfo
    guard let context = CGContext(data: nil, width: width, height: height, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo) else {
        print("Cannot create context!"); return nil
    }
    context.draw(inputCGImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    guard let buffer = context.data else { print("Cannot get context data!"); return nil }
    let pixelBuffer = buffer.bindMemory(to: RGBA32.self, capacity: width * height)
    let offset = Int(byPoint.x) * width + Int(byPoint.y)
    pixelBuffer[offset] = .transparent
    let outputCGImage = context.makeImage()!
    let outputImage = UIImage(cgImage: outputCGImage, scale: image.scale, orientation: image.imageOrientation)
    return outputImage
}
When the user taps the picture, I calculate the tapped point and pass it to this function.
The problem is that the color changes at a slightly offset location.
For example, I pass CGPoint(x: 0, y: 0), but the color changes at (0, 30).
I think the offset variable is not calculated correctly.
Calculate the offset like this:
let offset = Int(byPoint.y) * width + Int(byPoint.x)
It is the number of full rows (y) times the row length (width), plus x within the current row.
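The row-major arithmetic can be sanity-checked in plain Swift. A minimal sketch with a hypothetical 4x3 buffer whose pixels store their own index:

```swift
// Row-major indexing: pixel (x, y) in a buffer of `width` columns
// lives at index y * width + x.
let width = 4, height = 3
let pixels = Array(0..<(width * height)) // [0, 1, 2, ..., 11]

func offset(x: Int, y: Int, width: Int) -> Int {
    // y full rows of `width` pixels, plus x within the current row
    return y * width + x
}

// (x: 1, y: 2) is the second pixel of the third row -> index 9
print(pixels[offset(x: 1, y: 2, width: width)]) // 9
// Swapping x and y (the bug in the question) lands on a different pixel:
print(pixels[offset(x: 2, y: 1, width: width)]) // 6
```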

Wrong colors in bitmap drawing (Creating image by context)

Color distortion occurs when pictures are superimposed on each other. What am I doing wrong?
// My pixel struct
struct RGBA: Codable {
    var r: UInt8
    var g: UInt8
    var b: UInt8
    var a: UInt8
}

// Creating a UIImage from pixels
let rgba = pixels.flatMap({ $0 })
let colorSpace = CGColorSpaceCreateDeviceRGB()
let data = UnsafeMutableRawPointer(mutating: rgba)
let bitmapContext = CGContext(data: data,
                              width: size.width,
                              height: size.height,
                              bitsPerComponent: 8,
                              bytesPerRow: 4 * size.width,
                              space: colorSpace,
                              bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
guard let image = bitmapContext?.makeImage() else { return nil }
return UIImage(cgImage: image)
I tried using this bitmapInfo instead, but it didn't work:
CGBitmapInfo(rawValue: CGBitmapInfo.byteOrder32Big.rawValue | CGImageAlphaInfo.premultipliedLast.rawValue).rawValue
Thanks

Generate gradient image texture on iOS

I need to generate a 256x1 texture on iOS. I have four colors, Red, Green, Yellow, and Blue, with Red at one end and Blue at the other, and Yellow and Green at the 3/4 and 1/4 positions, respectively. The colors in between need to be linearly interpolated. I need to use this texture in Metal for a lookup in shader code. What is the easiest way to generate this texture in code?
Since MTKTextureLoader doesn't currently support the creation of 1D textures (this has been a feature request since at least 2016), you'll need to create your texture manually.
Assuming you already have your image loaded, you can ask it for its CGImage, then use this method to extract the pixel data and load it into a texture:
func texture1DForImage(_ cgImage: CGImage, device: MTLDevice) -> MTLTexture? {
    let width = cgImage.width
    let height = 1 // a 1D texture is a single row of pixels
    let bytesPerRow = width * 4 // RGBA, 8 bits per component
    let bitmapInfo: UInt32 = /* default byte order | */ CGImageAlphaInfo.premultipliedLast.rawValue
    let colorSpace = CGColorSpace(name: CGColorSpace.sRGB)!
    let context = CGContext(data: nil,
                            width: width,
                            height: height,
                            bitsPerComponent: 8,
                            bytesPerRow: bytesPerRow,
                            space: colorSpace,
                            bitmapInfo: bitmapInfo)!
    let bounds = CGRect(x: 0, y: 0, width: width, height: height)
    context.draw(cgImage, in: bounds)
    guard let data = context.data?.bindMemory(to: UInt8.self, capacity: bytesPerRow) else { return nil }
    let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm,
                                                                     width: width,
                                                                     height: height,
                                                                     mipmapped: false)
    textureDescriptor.textureType = .type1D
    textureDescriptor.usage = [ .shaderRead ]
    let texture = device.makeTexture(descriptor: textureDescriptor)!
    texture.replace(region: MTLRegionMake1D(0, width), mipmapLevel: 0, withBytes: data, bytesPerRow: bytesPerRow)
    return texture
}
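If you would rather compute the gradient bytes directly instead of drawing a CGImage, the interpolation itself is straightforward. A sketch in plain Swift using the stop positions from the question (illustrative only; names like `Stop` and `gradientBytes` are mine, not part of any API):

```swift
// Linearly interpolate RGBA8 bytes across fixed color stops:
// Red at 0.0, Green at 0.25, Yellow at 0.75, Blue at 1.0.
struct Stop {
    let position: Double
    let color: (r: UInt8, g: UInt8, b: UInt8, a: UInt8)
}

let stops = [
    Stop(position: 0.0,  color: (255, 0, 0, 255)),   // red
    Stop(position: 0.25, color: (0, 255, 0, 255)),   // green
    Stop(position: 0.75, color: (255, 255, 0, 255)), // yellow
    Stop(position: 1.0,  color: (0, 0, 255, 255)),   // blue
]

func gradientBytes(width: Int) -> [UInt8] {
    var bytes = [UInt8]()
    bytes.reserveCapacity(width * 4)
    for i in 0..<width {
        let t = Double(i) / Double(width - 1)
        // Find the pair of stops bracketing t and interpolate between them.
        let upperIndex = stops.firstIndex(where: { $0.position >= t }) ?? stops.count - 1
        let lower = stops[max(upperIndex - 1, 0)]
        let upper = stops[upperIndex]
        let span = upper.position - lower.position
        let f = span > 0 ? (t - lower.position) / span : 0
        func lerp(_ a: UInt8, _ b: UInt8) -> UInt8 {
            UInt8((Double(a) + (Double(b) - Double(a)) * f).rounded())
        }
        bytes.append(contentsOf: [lerp(lower.color.r, upper.color.r),
                                  lerp(lower.color.g, upper.color.g),
                                  lerp(lower.color.b, upper.color.b),
                                  lerp(lower.color.a, upper.color.a)])
    }
    return bytes
}

let pixels = gradientBytes(width: 256)
// The bytes can then be uploaded with:
// texture.replace(region: MTLRegionMake1D(0, 256), mipmapLevel: 0,
//                 withBytes: pixels, bytesPerRow: 256 * 4)
```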

Transparency of UIImage is Missed when Converting to Texture

I am converting a UIImage to a texture, then converting that texture back to a UIImage. The transparent areas are replaced with black.
This is the function I use to convert a UIImage to a texture:
func imageToTexture(imageNamed: String, device: MTLDevice) -> MTLTexture {
    let bytesPerPixel = 4
    let bitsPerComponent = 8
    let image = UIImage(named: imageNamed)!
    let width = Int(image.size.width)
    let height = Int(image.size.height)
    let bounds = CGRect(x: 0, y: 0, width: CGFloat(width), height: CGFloat(height))
    let rowBytes = width * bytesPerPixel
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let context = CGContext(data: nil, width: width, height: height, bitsPerComponent: bitsPerComponent, bytesPerRow: rowBytes, space: colorSpace, bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue).rawValue)
    context!.clear(bounds)
    context?.translateBy(x: CGFloat(width), y: CGFloat(height))
    context?.scaleBy(x: -1.0, y: -1.0)
    context?.draw(image.cgImage!, in: bounds)
    let texDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm, width: width, height: height, mipmapped: true)
    let texture = device.makeTexture(descriptor: texDescriptor)
    texture?.label = imageNamed
    let pixelsData = context?.data
    let region = MTLRegionMake2D(0, 0, width, height)
    texture?.replace(region: region, mipmapLevel: 0, withBytes: pixelsData!, bytesPerRow: rowBytes)
    _ = makeImage(from: texture!)
    return texture!
}
The function below converts a texture back to a UIImage. After converting the UIImage to a texture, I call it to convert the texture back, but the transparency is not preserved.
func makeImage(from texture: MTLTexture) -> UIImage? {
    let width = texture.width
    let height = texture.height
    let bytesPerRow = width * 4
    let data = UnsafeMutableRawPointer.allocate(bytes: bytesPerRow * height, alignedTo: 4)
    defer {
        data.deallocate(bytes: bytesPerRow * height, alignedTo: 4)
    }
    let region = MTLRegionMake2D(0, 0, width, height)
    texture.getBytes(data, bytesPerRow: bytesPerRow, from: region, mipmapLevel: 0)
    var buffer = vImage_Buffer(data: data, height: UInt(height), width: UInt(width), rowBytes: bytesPerRow)
    let map: [UInt8] = [2, 1, 0, 3]
    vImagePermuteChannels_ARGB8888(&buffer, &buffer, map, 0)
    guard let colorSpace = CGColorSpace(name: CGColorSpace.genericRGBLinear) else { return nil }
    guard let context = CGContext(data: data, width: width, height: height, bitsPerComponent: 8, bytesPerRow: bytesPerRow,
                                  space: colorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipLast.rawValue) else { return nil }
    guard let cgImage = context.makeImage() else { return nil }
    return UIImage(cgImage: cgImage)
}
Here the output image is black in the transparent areas. I don't know where I messed up.

Drawing image in Swift takes forever

In my app I'm creating an image by mapping floats to pixel values and using it as an overlay on Google Maps, but it takes forever; the same thing on Android is almost instant. My code looks like this:
private func imageFromPixels(pixels: [PixelData], width: Int, height: Int) -> UIImage? {
    let bitsPerComponent = 8
    let bitsPerPixel = bitsPerComponent * 4
    let bytesPerRow = bitsPerPixel * width / 8
    let providerRef = CGDataProvider(
        data: NSData(bytes: pixels, length: height * width * 4)
    )
    let cgimage = CGImage(
        width: width,
        height: height,
        bitsPerComponent: bitsPerComponent,
        bitsPerPixel: bitsPerPixel,
        bytesPerRow: bytesPerRow,
        space: CGColorSpaceCreateDeviceRGB(),
        bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedFirst.rawValue),
        provider: providerRef!,
        decode: nil,
        shouldInterpolate: true,
        intent: .defaultIntent
    )
    if cgimage == nil {
        print("CGImage is not supposed to be nil")
        return nil
    }
    return UIImage(cgImage: cgimage!)
}
Any suggestions as to why this takes so long? I can see it uses about 96% of the CPU.
func fromData(pair: AllocationPair) -> UIImage? {
    let table = pair.table
    let data = pair.data
    prepareColors(allocations: table.allocations)
    let height = data.count
    let width = data[0].count
    var colors = [PixelData]()
    for row in data {
        for val in row {
            if (val == 0.0) {
                colors.append(PixelData(a: 0, r: 0, g: 0, b: 0))
                continue
            }
            if let interval = findInterval(table: table, value: val) {
                if let color = intervalColorDict[interval] {
                    colors.append(PixelData(a: color.a, r: color.r, g: color.g, b: color.b))
                }
            }
        }
    }
    return imageFromPixels(pixels: colors, width: width, height: height)
}
I've tried profiling it, and this function is where the time is spent.
I tried your code and I found out that the problem isn't with your function.
I think you should use a UInt8-based pixel structure, not a CGFloat-based one.
I translated your code for a Cocoa application, and this is the result:
public struct PixelData {
    var a: UInt8
    var r: UInt8
    var g: UInt8
    var b: UInt8
}

func imageFromPixels(pixels: [PixelData], width: Int, height: Int) -> NSImage? {
    let bitsPerComponent = 8
    let bitsPerPixel = bitsPerComponent * 4
    let bytesPerRow = bitsPerPixel * width / 8
    let providerRef = CGDataProvider(
        data: NSData(bytes: pixels, length: height * width * 4)
    )
    let cgimage = CGImage(
        width: width,
        height: height,
        bitsPerComponent: bitsPerComponent,
        bitsPerPixel: bitsPerPixel,
        bytesPerRow: bytesPerRow,
        space: CGColorSpaceCreateDeviceRGB(),
        bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedFirst.rawValue),
        provider: providerRef!,
        decode: nil,
        shouldInterpolate: true,
        intent: .defaultIntent
    )
    if cgimage == nil {
        print("CGImage is not supposed to be nil")
        return nil
    }
    return NSImage(cgImage: cgimage!, size: NSSize(width: width, height: height))
}

var img = [PixelData]()
for _ in 0 ..< 20 {
    for _ in 0 ..< 20 {
        // Creating a red 20x20 image.
        img.append(PixelData(a: 255, r: 255, g: 0, b: 0))
    }
}
let ns = imageFromPixels(pixels: img, width: 20, height: 20)
This code is fast and light; the debug session's system-impact values confirm it.
I think the problem is in the part that loads the pixel data; check it and make sure it works properly.
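The reason the UInt8 struct works with premultipliedFirst is that the field order a, r, g, b lays out in memory as ARGB, matching the alpha-first bitmap info. Swift does not formally guarantee struct layout, but trivial structs are laid out in declaration order in practice, and the code above relies on that. A quick way to inspect the layout (plain Swift, illustrative only):

```swift
// Same shape as the answer's pixel struct; fields in declaration order.
struct PixelData {
    var a: UInt8
    var r: UInt8
    var g: UInt8
    var b: UInt8
}

// An opaque red pixel: alpha first, then red, green, blue.
let red = PixelData(a: 255, r: 255, g: 0, b: 0)
let bytes = withUnsafeBytes(of: red) { Array($0) }
print(bytes) // [255, 255, 0, 0] -- A, R, G, B in memory order
```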
If you need to access and change the actual bitmap data, you can use CGImage. In other cases, use CIImage-backed objects: work with a UIImage, convert it to a CIImage, apply your manipulations, and then convert it back to a UIImage (as you did with CGImage in your code).
Here's a post explaining what is what: UIImage vs. CIImage vs. CGImage
Core Image doesn’t actually render an image until it is told to do so. This “lazy evaluation” method allows Core Image to operate as efficiently as possible.
CIImage page in Apple Developer Documentation