MTKTextureLoader saturates image - iOS

I am trying to use an MTKTextureLoader to load a CGImage as a texture. However, after I convert that CGImage into an MTLTexture, and that texture back to a CGImage, it looks horrible: the colors are visibly off.
Here is roughly what is going on in code.
The image is loaded in as a CGImage (I have checked, and that image does appear to have full visual quality).
I have a function view() that allows me to view an NSImage by using it in a CALayer, like so:
func view() {
    .....
    imageView!.layer = CALayer()
    imageView!.layer!.contentsGravity = kCAGravityResizeAspectFill
    imageView!.layer!.contents = img
    imageView!.wantsLayer = true
}
So I did the following:
let cg = CoolImage()
let ns = NSImage(cgImage: cg, size: NSSize(width: cg.width, height: cg.height))
view(image: ns)
And sure enough, it had full visual fidelity.
So then I loaded the CGImage into an MTLTexture like so:
let textureLoader = MTKTextureLoader(device: metalState.sharedDevice!)
let options: [MTKTextureLoader.Option: Any] = [
    MTKTextureLoader.Option.textureUsage: NSNumber(value: MTLTextureUsage.shaderRead.rawValue | MTLTextureUsage.shaderWrite.rawValue | MTLTextureUsage.renderTarget.rawValue),
    MTKTextureLoader.Option.SRGB: false
]
return ensure(try textureLoader.newTexture(cgImage: cg, options: options))
I then converted the MTLTexture back to an NSImage (in an extension on MTLTexture) like so:
let texture = self
let width = texture.width
let height = texture.height
let bytesPerRow = width * 4
let data = UnsafeMutableRawPointer.allocate(byteCount: bytesPerRow * height, alignment: 4)
defer {
    data.deallocate()
}
let region = MTLRegionMake2D(0, 0, width, height)
texture.getBytes(data, bytesPerRow: bytesPerRow, from: region, mipmapLevel: 0)
var buffer = vImage_Buffer(data: data, height: UInt(height), width: UInt(width), rowBytes: bytesPerRow)
var map: [UInt8] = [0, 1, 2, 3]
if pixelFormat == .bgra8Unorm {
    map = [2, 1, 0, 3]
}
vImagePermuteChannels_ARGB8888(&buffer, &buffer, map, 0)
guard let colorSpace = CGColorSpace(name: CGColorSpace.genericRGBLinear) else { return nil }
guard let context = CGContext(data: data, width: width, height: height, bitsPerComponent: 8, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipLast.rawValue) else { return nil }
guard let cgImage = context.makeImage() else { return nil }
return NSImage(cgImage: cgImage, size: NSSize(width: width, height: height))
And viewed it.
The resulting image was quite saturated, and I believe it was because of the CGImage-to-MTLTexture conversion, which I have been fairly successful with in the past.
Please note that this texture was never rendered, only converted.
You are probably wondering why I am using all of these conversions, and that is a fair question. My actual pipeline does not work anything like this; however, it does require each of these conversion components to work smoothly. This is not my actual use case, just a minimal example that shows the problem.

The problem here isn't the conversion from CGImage to MTLTexture. The problem is that you're assuming that the color space of the source image is linear. More likely than not, the image data is actually sRGB-encoded, so by creating a bitmap context with a generic linear color space, you're incorrectly telling CG that it should gamma-encode the image data before display, which leads to the desaturation you're seeing.
You can fix this by using the native color space of the original CGImage, or by otherwise accounting for the fact that your image data is sRGB-encoded.
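For example, here is a minimal sketch of the fix applied to the conversion code above. It assumes you still have a reference to the original CGImage (called sourceImage here, a placeholder name); if you don't, CGColorSpace(name: CGColorSpace.sRGB) is usually the right guess:
// Sketch: tag the bitmap context with the source image's native color
// space (falling back to sRGB) instead of a linear color space.
// `sourceImage` is a placeholder for the original CGImage.
let colorSpace = sourceImage.colorSpace ?? CGColorSpace(name: CGColorSpace.sRGB)!
guard let context = CGContext(data: data,
                              width: width,
                              height: height,
                              bitsPerComponent: 8,
                              bytesPerRow: bytesPerRow,
                              space: colorSpace,
                              bitmapInfo: CGImageAlphaInfo.noneSkipLast.rawValue) else { return nil }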

Related

MTKView frequently displaying scrambled MTLTextures

I am working on an MTKView-backed paint program which can replay painting history via an array of MTLTextures that store keyframes. I am having an issue in which sometimes the content of these MTLTextures is scrambled.
As an example, say I want to store a section of the drawing as a keyframe. During playback, sometimes the drawing will display exactly as intended, but sometimes a portion of the picture will come back distorted. (The undistorted portion constitutes a static background image that's not part of the keyframe in question.)
I describe the way I create individual MTLTextures from the MTKView's currentDrawable below. Because of color-depth issues I won't go into, the process may seem a little roundabout.
I first get a CGImage of the subsection of the screen that constitutes a keyframe.
I use that CGImage to create an MTLTexture tied to the MTKView's device.
I store that MTLTexture in an mtlTextureStructure that holds the MTLTexture and the keyframe's bounding box (which I'll need later).
Lastly, I store it in an array of mtlTextureStructures (keyframeMetalArray). During playback, when I hit a keyframe, I get it from this keyframeMetalArray.
The associated code is outlined below.
let keyframeCGImage = weakSelf!.canvasMetalViewPainting.mtlTextureToCGImage(bbox: keyframeBbox, copyMode: copyTextureMode.textureKeyframe) // convert from MTLTexture to CGImage
let keyframeMTLTexture = weakSelf!.canvasMetalViewPainting.CGImageToMTLTexture(cgImage: keyframeCGImage)
let keyframeMTLTextureStruc = mtlTextureStructure(texture: keyframeMTLTexture, bbox: keyframeBbox, strokeType: brushTypeMode.brush)
weakSelf!.keyframeMetalArray.append(keyframeMTLTextureStruc)
Without providing specifics about how each conversion is happening, I wonder if, from an architecture design point, I'm overlooking something that is corrupting my data stored in the keyframeMetalArray. It may be unwise to try to store these MTLTextures in volatile arrays, but I don't know that for a fact. I just figured using MTLTextures would be the quickest way to update content.
By the way, when I swap out arrays of keyframes to arrays of UIImage.pngData, I have no display issues, but it's a lot slower. On the plus side, it tells me that the initial capture from currentDrawable to keyframeCGImage is working just fine.
Any thoughts would be appreciated.
P.S. Adding a bit of detail based on the feedback:
mtlTextureToCGImage:
func mtlTextureToCGImage(bbox: CGRect, copyMode: copyTextureMode) -> CGImage {
    let kciOptions = [convertFromCIContextOption(CIContextOption.outputPremultiplied): true,
                      convertFromCIContextOption(CIContextOption.useSoftwareRenderer): false] as [String: Any]
    let bboxStrokeScaledFlippedY = CGRect(x: (bbox.origin.x * self.viewContentScaleFactor),
                                          y: ((self.viewBounds.height - bbox.origin.y - bbox.height) * self.viewContentScaleFactor),
                                          width: (bbox.width * self.viewContentScaleFactor),
                                          height: (bbox.height * self.viewContentScaleFactor))
    let strokeCIImage = CIImage(mtlTexture: metalDrawableTextureKeyframe,
                                options: convertToOptionalCIImageOptionDictionary(kciOptions))!.oriented(CGImagePropertyOrientation.downMirrored)
    let imageCropCG = cicontext.createCGImage(strokeCIImage, from: bboxStrokeScaledFlippedY, format: CIFormat.RGBA8, colorSpace: colorSpaceGenericRGBLinear)
    cicontext.clearCaches()
    return imageCropCG!
} // end of func mtlTextureToCGImage(bbox: CGRect)
CGImageToMTLTexture:
func CGImageToMTLTexture(cgImage: CGImage) -> MTLTexture {
    // Note that we forego the more direct method of creating stampTexture:
    // let stampTexture = try! MTKTextureLoader(device: self.device!).newTexture(cgImage: strokeUIImage.cgImage!, options: nil)
    // because MTKTextureLoader seems to be doing additional processing which messes with the resulting texture/colorspace.
    let width = cgImage.width
    let height = cgImage.height
    let bytesPerPixel = 4
    let rowBytes = width * bytesPerPixel

    let texDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm,
                                                                 width: width,
                                                                 height: height,
                                                                 mipmapped: false)
    texDescriptor.usage = .shaderRead
    texDescriptor.storageMode = .shared
    guard let stampTexture = device!.makeTexture(descriptor: texDescriptor) else {
        return brushTextureSquare // return SOMETHING
    }
    let dstData: CFData = (cgImage.dataProvider!.data)!
    let pixelData = CFDataGetBytePtr(dstData)
    let region = MTLRegionMake2D(0, 0, width, height)
    print("[MetalViewPainting]: w= \(width) | h= \(height) region = \(region.size)")
    stampTexture.replace(region: region, mipmapLevel: 0, withBytes: pixelData!, bytesPerRow: rowBytes)
    return stampTexture
} // end of func CGImageToMTLTexture(cgImage: CGImage)
The type of distortion looks like a bytes-per-row alignment issue between CGImage and MTLTexture. You're probably only seeing this issue when your image is a certain size that falls outside of the bytes-per-row alignment requirement of your MTLDevice. If you really need to store the texture as a CGImage, ensure that you are using the bytesPerRow value of the CGImage when copying back to the texture.
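Concretely, a sketch of that change against the CGImageToMTLTexture function above: instead of computing rowBytes as width * 4, use the stride the CGImage actually reports, which may include per-row padding.
// Sketch: use the CGImage's own row stride, which may be larger than width * 4.
let sourceRowBytes = cgImage.bytesPerRow
stampTexture.replace(region: region,
                     mipmapLevel: 0,
                     withBytes: pixelData!,
                     bytesPerRow: sourceRowBytes)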

Generate Laplacian image by Apple-Metal MPSImageLaplacian

I am trying to generate a Laplacian image out of an RGB CGImage by using Metal's MPSImageLaplacian.
The current code used:
if let croppedImage = self.cropImage2(image: UIImage(ciImage: image), rect: rect)?.cgImage {
    let commandBuffer = self.commandQueue.makeCommandBuffer()!
    let laplacian = MPSImageLaplacian(device: self.device)
    let textureLoader = MTKTextureLoader(device: self.device)
    let options: [MTKTextureLoader.Option : Any]? = nil
    let srcTex = try! textureLoader.newTexture(cgImage: croppedImage, options: options)
    let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: srcTex.pixelFormat, width: srcTex.width, height: srcTex.height, mipmapped: false)
    let lapTex = self.device.makeTexture(descriptor: desc)
    laplacian.encode(commandBuffer: commandBuffer, sourceTexture: srcTex, destinationTexture: lapTex!)
    let output = CIImage(mtlTexture: lapTex!, options: [:])?.cgImage
    print("output: \(output?.width)")
}
I suspect the problem is in makeTexture:
let lapTex = self.device.makeTexture(descriptor: desc)
The width and height of lapTex in the debugger are invalid, although desc and srcTex contain valid data, including width and height.
It looks like the order of initialization is wrong, but I couldn't find what.
Does anyone have an idea what is wrong?
Thanks
There are a few things wrong here.
First, as mentioned in my comment, the command buffer isn't being committed, so the kernel work is never being performed.
Second, you need to wait for the work to complete before attempting to read back the results. (On macOS you'd additionally need to use a blit command encoder to ensure that the contents of the texture are copied back to CPU-accessible memory.)
Third, it's important to create the destination texture with the appropriate usage flags. The default of .shaderRead is insufficient in this case, since the MPS kernel writes to the texture. Therefore, you should explicitly set the usage property on the texture descriptor (to either [.shaderRead, .shaderWrite] or .shaderWrite, depending on how you go on to use the texture).
Fourth, it may be the case that the pixel format of your source texture isn't a writable format, so unless you're absolutely certain it is, consider setting the destination pixel format to a known-writable format (like .rgba8Unorm) instead of assuming the destination should match the source. This also helps later when creating CGImages.
Finally, there is no guarantee that the cgImage property of a CIImage is non-nil when the image wasn't created from a CGImage. Accessing the property doesn't (necessarily) create a new backing CGImage. So, you need to explicitly create a CGImage somehow.
One way of doing this would be to create a Metal device-backed CIContext and use its createCGImage(_:from:) method. Although this might work, it seems redundant if the intent is simply to create a CGImage from a MTLTexture (for display purposes, let's say).
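That route would look roughly like this (a sketch using the names from the question's code; note that a CIImage created from a Metal texture is vertically flipped relative to what you probably want):
// Sketch of the CIContext route; `device` and `lapTex` are from the code above.
let ciContext = CIContext(mtlDevice: device)
let ciImage = CIImage(mtlTexture: lapTex, options: nil)!
let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent)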
Instead, consider using the getBytes(_:bytesPerRow:from:mipmapLevel:) method to get the bytes from the texture and load them into a CG bitmap context. It's then trivial to create a CGImage from the context.
Here's a function that computes the Laplacian of an image and returns the resulting image:
func laplacian(_ image: CGImage) -> CGImage? {
    let commandBuffer = self.commandQueue.makeCommandBuffer()!
    let laplacian = MPSImageLaplacian(device: self.device)
    let textureLoader = MTKTextureLoader(device: self.device)
    let options: [MTKTextureLoader.Option : Any]? = nil
    let srcTex = try! textureLoader.newTexture(cgImage: image, options: options)
    let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: srcTex.pixelFormat,
                                                        width: srcTex.width,
                                                        height: srcTex.height,
                                                        mipmapped: false)
    desc.pixelFormat = .rgba8Unorm
    desc.usage = [.shaderRead, .shaderWrite]
    let lapTex = self.device.makeTexture(descriptor: desc)!
    laplacian.encode(commandBuffer: commandBuffer, sourceTexture: srcTex, destinationTexture: lapTex)
    #if os(macOS)
    let blitCommandEncoder = commandBuffer.makeBlitCommandEncoder()!
    blitCommandEncoder.synchronize(resource: lapTex)
    blitCommandEncoder.endEncoding()
    #endif
    commandBuffer.commit()
    commandBuffer.waitUntilCompleted()
    // Note: You may want to use a different color space depending
    // on what you're doing with the image.
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    // Note: We skip the last component (A), since the Laplacian of the alpha
    // channel of an opaque image is 0 everywhere, and that interacts oddly
    // when we treat the result as an RGBA image.
    let bitmapInfo = CGImageAlphaInfo.noneSkipLast.rawValue
    let bytesPerRow = lapTex.width * 4
    let bitmapContext = CGContext(data: nil,
                                  width: lapTex.width,
                                  height: lapTex.height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: bytesPerRow,
                                  space: colorSpace,
                                  bitmapInfo: bitmapInfo)!
    lapTex.getBytes(bitmapContext.data!,
                    bytesPerRow: bytesPerRow,
                    from: MTLRegionMake2D(0, 0, lapTex.width, lapTex.height),
                    mipmapLevel: 0)
    return bitmapContext.makeImage()
}
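Usage is then straightforward (inputImage is a placeholder for whatever CGImage you already have):
// Sketch: `inputImage` is a hypothetical CGImage supplied by the caller.
if let result = laplacian(inputImage) {
    print("Laplacian image size: \(result.width) x \(result.height)")
}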

Can't load large jpeg into a MTLTexture with MTKTextureLoader

I'm trying to load a large image into an MTLTexture. It works with 4000x6000 images, but when I try with 6000x8000 it fails.
func setTexture(device: MTLDevice, imageName: String) -> MTLTexture? {
    let textureLoader = MTKTextureLoader(device: device)
    var texture: MTLTexture? = nil
    // In iOS 10 the origin was changed.
    let textureLoaderOptions: [MTKTextureLoader.Option: Any]
    if #available(iOS 10.0, *) {
        let origin = MTKTextureLoader.Origin.bottomLeft.rawValue
        textureLoaderOptions = [MTKTextureLoader.Option.origin: origin]
    } else {
        textureLoaderOptions = [:]
    }
    if let textureURL = Bundle.main.url(forResource: imageName, withExtension: nil, subdirectory: "Images") {
        do {
            texture = try textureLoader.newTexture(URL: textureURL, options: textureLoaderOptions)
        } catch {
            print("Texture not created.")
        }
    }
    return texture
}
Pretty basic code. I'm running it on an iPad Pro with an A9 chip (GPU family 3), which should handle textures this large. Should I manually tile it somehow if it doesn't accept this size? In that case, what's the best approach: using MTLRegionMake to copy bytes, slicing in Core Image, or a Core Graphics context?
I appreciate any help.
Following your helpful comments, I decided to load it manually by drawing into a CGContext and copying to an MTLTexture. The solution code is below. The context shouldn't be created each time a texture is created; it's better to move it outside the function and keep reusing it.
// Grab the CGImage; w = width, h = height...
let context = CGContext(data: nil, width: w, height: h, bitsPerComponent: bpc, bytesPerRow: (bpp / 8) * w, space: colorSpace!, bitmapInfo: bitmapInfo.rawValue)
let flip = CGAffineTransform(a: 1, b: 0, c: 0, d: -1, tx: 0, ty: CGFloat(h))
context?.concatenate(flip)
context?.draw(cgImage, in: CGRect(x: 0, y: 0, width: CGFloat(w), height: CGFloat(h)))
let textureDescriptor = MTLTextureDescriptor()
textureDescriptor.pixelFormat = .rgba8Unorm
textureDescriptor.width = w
textureDescriptor.height = h
guard let data = context?.data else { print("No data in context."); return nil }
let texture = device.makeTexture(descriptor: textureDescriptor)
texture?.replace(region: MTLRegionMake2D(0, 0, w, h), mipmapLevel: 0, withBytes: data, bytesPerRow: 4 * w)
return texture
I had this issue before: a texture would load on one device and not on another. I think it is a bug with the texture loader.
You can load a texture manually using a CGImage and a CGContext: draw the image into the context, create an MTLTexture, then copy the bytes from the CGContext into the texture using an MTLRegion.
It's not foolproof; you have to make sure to use the correct pixel format for the Metal texture or you'll get strange results, so either you code for one specific format of image you're importing, or do a lot of checking. Apple's Basic Texturing example shows how you can change the color order before writing the bytes to the texture using an MTLRegion.
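For reference, here is a self-contained sketch of the manual path both answers describe, assuming an 8-bit source image that Core Graphics can draw into an RGBA context (a production version would hoist the context out of the function and reuse it, as noted above):
func manualTexture(from cgImage: CGImage, device: MTLDevice) -> MTLTexture? {
    let w = cgImage.width
    let h = cgImage.height
    let bytesPerRow = 4 * w
    // Draw into an RGBA8 context so the memory layout matches .rgba8Unorm.
    guard let context = CGContext(data: nil, width: w, height: h,
                                  bitsPerComponent: 8, bytesPerRow: bytesPerRow,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue) else { return nil }
    // Flip vertically so the image isn't upside down in the texture.
    context.concatenate(CGAffineTransform(a: 1, b: 0, c: 0, d: -1, tx: 0, ty: CGFloat(h)))
    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: w, height: h))
    let descriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm,
                                                              width: w, height: h,
                                                              mipmapped: false)
    guard let data = context.data,
          let texture = device.makeTexture(descriptor: descriptor) else { return nil }
    texture.replace(region: MTLRegionMake2D(0, 0, w, h),
                    mipmapLevel: 0, withBytes: data, bytesPerRow: bytesPerRow)
    return texture
}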

64-bit RGBA UIImage? CGBitmapInfo for 64-bit

I'm trying to save a 16-bit-depth PNG image with the P3 color space from a Metal texture on iOS. The texture has pixelFormat = .rgba16Unorm, and I extract the data with this code:
func dataProviderRef() -> CGDataProvider? {
    let pixelCount = width * height
    var imageBytes = [UInt8](repeating: 0, count: pixelCount * bytesPerPixel)
    let region = MTLRegionMake2D(0, 0, width, height)
    getBytes(&imageBytes, bytesPerRow: bytesPerRow, from: region, mipmapLevel: 0)
    return CGDataProvider(data: NSData(bytes: &imageBytes, length: pixelCount * bytesPerPixel * MemoryLayout<UInt8>.size))
}
I figured out that the way to save a PNG image on iOS would be to create a UIImage first, and to initialize it, I need to create a CGImage. The problem is I don't know what to pass as the CGBitmapInfo. In the documentation I can see you can specify the byte order for 32-bit formats, but not for 64-bit.
The function I use to convert the texture to a UIImage is this:
extension UIImage {
    public convenience init?(texture: MTLTexture) {
        guard let rgbColorSpace = texture.defaultColorSpace else {
            return nil
        }
        let bitmapInfo: CGBitmapInfo = [CGBitmapInfo(rawValue: CGImageAlphaInfo.last.rawValue)]
        guard let provider = texture.dataProviderRef() else {
            return nil
        }
        guard let cgim = CGImage(
            width: texture.width,
            height: texture.height,
            bitsPerComponent: texture.bitsPerComponent,
            bitsPerPixel: texture.bitsPerPixel,
            bytesPerRow: texture.bytesPerRow,
            space: rgbColorSpace,
            bitmapInfo: bitmapInfo,
            provider: provider,
            decode: nil,
            shouldInterpolate: false,
            intent: .defaultIntent
        ) else {
            return nil
        }
        self.init(cgImage: cgim)
    }
}
Note that "texture" is using a series of attributes that do not exist in MTLTexture. I created a simple extension for convenience. The only interesting bit I guess it's the color space, that at the moment is simply,
public extension MTLTexture {
    var defaultColorSpace: CGColorSpace? {
        switch pixelFormat {
        case .rgba16Unorm:
            return CGColorSpace(name: CGColorSpace.displayP3)
        default:
            return CGColorSpaceCreateDeviceRGB()
        }
    }
}
It looks like the image I'm creating with the code above is sampled at 4 bytes per pixel instead of 8, so I obviously end up with a funny-looking image...
How do I create the appropriate CGBitmapInfo? Is it even possible?
P.S. If you want to see the full code with an example, it's all in github: https://github.com/endavid/VidEngine/tree/master/SampleColorPalette
The answer was to use .byteOrder16Little. For instance, I've replaced bitmapInfo in the code above with this:
let isFloat = texture.bitsPerComponent == 16
let bitmapInfo:CGBitmapInfo = [isFloat ? .byteOrder16Little : .byteOrder32Big, CGBitmapInfo(rawValue: CGImageAlphaInfo.last.rawValue)]
(The alpha can be premultiplied as well).
The SDK documentation does not provide many hints as to why this is, but the book Programming with Quartz has a nice explanation of the meaning of these 16 bits:
The value byteOrder16Little specifies to Quartz that each 16-bit chunk of data supplied by your data provider should be treated in little endian order [...] For example, when using a value of byteOrder16Little for an image that specifies RGB format with 16 bits per component and 48 bits per pixel, your data provider supplies the data for each pixel where the components are ordered R, G, B, but each color component value is in little-endian order [...] For best performance when using byteOrder16Little, either the pixel size or the component size of the image must be 16 bits.
So for a 64-bit image in rgba16, the pixel size is 64 bits, but the component size is 16 bits. It works nicely :)
(Thanks @warrenm!)
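Putting it together, the CGImage creation for an .rgba16Unorm texture ends up looking roughly like this (a sketch; provider and the texture's size attributes come from the extensions above):
let bitmapInfo: CGBitmapInfo = [.byteOrder16Little,
                                CGBitmapInfo(rawValue: CGImageAlphaInfo.last.rawValue)]
let cgImage = CGImage(width: texture.width,
                      height: texture.height,
                      bitsPerComponent: 16, // 16 bits per component...
                      bitsPerPixel: 64,     // ...and 64 bits (8 bytes) per RGBA pixel
                      bytesPerRow: texture.width * 8,
                      space: CGColorSpace(name: CGColorSpace.displayP3)!,
                      bitmapInfo: bitmapInfo,
                      provider: provider,
                      decode: nil,
                      shouldInterpolate: false,
                      intent: .defaultIntent)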

Metal MTLTexture replaces semi-transparent areas with black when alpha values that aren't 1 or 0

While using Apple's texture importer, or my own, a white soft-edged circle drawn in software (with a transparent background) or in Photoshop (saved as a PNG) will have its semi-transparent colors replaced with black when rendered after being brought into Metal.
A screen grab from Xcode's Metal debugger shows the texture before it is even sent to the shaders, and the dark ring is already present.
In Xcode, Finder, and when put into a UIImageView, the source texture does not have the ring. But somewhere along the UIImage -> CGContext -> MTLTexture process (I'm thinking specifically the MTLTexture part) the transparent sections are darkened.
I've been banging my head against the wall changing everything I could for the past couple of days but I can't figure it out.
To be transparent (ha), here is my personal import code:
import UIKit
import CoreGraphics

class MetalTexture {
    class func imageToTexture(imageNamed: String, device: MTLDevice) -> MTLTexture {
        let bytesPerPixel = 4
        let bitsPerComponent = 8
        var image = UIImage(named: imageNamed)!
        let width = Int(image.size.width)
        let height = Int(image.size.height)
        let bounds = CGRectMake(0, 0, CGFloat(width), CGFloat(height))
        var rowBytes = width * bytesPerPixel
        var colorSpace = CGColorSpaceCreateDeviceRGB()
        let context = CGBitmapContextCreate(nil, width, height, bitsPerComponent, rowBytes, colorSpace, CGBitmapInfo(CGImageAlphaInfo.PremultipliedLast.rawValue))
        CGContextClearRect(context, bounds)
        CGContextTranslateCTM(context, CGFloat(width), CGFloat(height))
        CGContextScaleCTM(context, -1.0, -1.0)
        CGContextDrawImage(context, bounds, image.CGImage)
        var texDescriptor = MTLTextureDescriptor.texture2DDescriptorWithPixelFormat(.RGBA8Unorm, width: width, height: height, mipmapped: false)
        var texture = device.newTextureWithDescriptor(texDescriptor)
        texture.label = imageNamed
        var pixelsData = CGBitmapContextGetData(context)
        var region = MTLRegionMake2D(0, 0, width, height)
        texture.replaceRegion(region, mipmapLevel: 0, withBytes: pixelsData, bytesPerRow: rowBytes)
        return texture
    }
}
But I don't think that's the problem (since it's a copy of Apple's in Swift, and I've used theirs instead with no differences).
Any leads at all would be super helpful.
Since you are using a CGContext that is configured with colors premultiplied by the alpha channel, the standard blend RGB = src.rgb * src.a + dst.rgb * (1 - src.a) will cause darkened areas, because the src.rgb values are premultiplied by src.a already. This means that what you want is RGB = src.rgb + dst.rgb * (1 - src.a), which is configured like this:
pipeline.colorAttachments[0].isBlendingEnabled = true
pipeline.colorAttachments[0].rgbBlendOperation = .add
pipeline.colorAttachments[0].alphaBlendOperation = .add
pipeline.colorAttachments[0].sourceRGBBlendFactor = .one
pipeline.colorAttachments[0].sourceAlphaBlendFactor = .sourceAlpha
pipeline.colorAttachments[0].destinationRGBBlendFactor = .oneMinusSourceAlpha
pipeline.colorAttachments[0].destinationAlphaBlendFactor = .oneMinusSourceAlpha
The .one means to leave RGB as-is. The reason for premultiplied colors is that you only need to multiply the colors once on creation as opposed to every time you blend. Your configuration achieves this too, but indirectly.
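To illustrate that one-time multiply, premultiplying a straight-alpha RGBA color is just (a sketch, not tied to any particular API):
// (r, g, b, a) -> (r*a, g*a, b*a, a)
func premultiply(_ color: SIMD4<Float>) -> SIMD4<Float> {
    SIMD4<Float>(color.x * color.w, color.y * color.w, color.z * color.w, color.w)
}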
Thanks to Jessy, I decided to look through how I was blending my alphas, and I've figured it out. My texture still looks darkened in the GPU debugger, but in the actual app everything looks correct. I made my changes to my pipeline state descriptor, which you can see below.
pipelineStateDescriptor.colorAttachments[0].blendingEnabled = true
pipelineStateDescriptor.colorAttachments[0].rgbBlendOperation = .Add
pipelineStateDescriptor.colorAttachments[0].alphaBlendOperation = .Add
pipelineStateDescriptor.colorAttachments[0].sourceRGBBlendFactor = .DestinationAlpha
pipelineStateDescriptor.colorAttachments[0].sourceAlphaBlendFactor = .DestinationAlpha
pipelineStateDescriptor.colorAttachments[0].destinationRGBBlendFactor = .OneMinusSourceAlpha
pipelineStateDescriptor.colorAttachments[0].destinationAlphaBlendFactor = .OneMinusBlendAlpha
