Deep copy CVPixelBuffer for depth data in Swift - iOS

I'm getting a stream of depth data from AVCaptureSynchronizedDataCollection and trying to do some processing on the depthDataMap asynchronously. I tried to deep copy the CVPixelBuffer, since I don't want to block the camera while processing, but the copy doesn't seem to be correct because I keep getting bad access errors. Here is the code I'm using to deep copy the CVPixelBuffer:
func duplicatePixelBuffer(input: CVPixelBuffer) -> CVPixelBuffer {
    var copyOut: CVPixelBuffer?
    let bufferWidth = CVPixelBufferGetWidth(input)
    let bufferHeight = CVPixelBufferGetHeight(input)
    let bytesPerRow = CVPixelBufferGetBytesPerRow(input)
    let bufferFormat = CVPixelBufferGetPixelFormatType(input)
    _ = CVPixelBufferCreate(kCFAllocatorDefault, bufferWidth, bufferHeight, bufferFormat,
                            CVBufferGetAttachments(input, CVAttachmentMode.shouldPropagate), &copyOut)
    let output = copyOut!
    // Lock the depth map base address before accessing it
    CVPixelBufferLockBaseAddress(input, CVPixelBufferLockFlags.readOnly)
    CVPixelBufferLockBaseAddress(output, CVPixelBufferLockFlags.readOnly)
    let baseAddress = CVPixelBufferGetBaseAddress(input)
    let baseAddressCopy = CVPixelBufferGetBaseAddress(output)
    memcpy(baseAddressCopy, baseAddress, bufferHeight * bytesPerRow)
    // Unlock the base address when finished accessing the buffer
    CVPixelBufferUnlockBaseAddress(input, CVPixelBufferLockFlags.readOnly)
    CVPixelBufferUnlockBaseAddress(output, CVPixelBufferLockFlags.readOnly)
    NSLog("Pixel buffer original: \(input)")
    NSLog("Pixel buffer copy: \(output)")
    return output
}
I checked the two CVPixelBuffer objects before the return, and it seems there is no IOSurface for the copied buffer. Also, there is a MetadataDictionary object in propagatedAttachments in the original, but in the copy the MetadataDictionary object is directly in attributes.
I've tried some of the other solutions on Stack Overflow with no luck, since my buffer is non-planar (it has no planes). Would appreciate any insights on this, or whether I should try a different approach entirely. Thanks!
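No answer is recorded for this question, but for reference, here is a minimal sketch of a safer deep copy, assuming a non-planar buffer such as a depth map. Two likely culprits in the code above: the destination is locked with .readOnly even though memcpy writes to it, and CVBufferGetAttachments does not return a pixel buffer attributes dictionary, so passing it to CVPixelBufferCreate can yield a buffer with no IOSurface and a different bytesPerRow. The sketch below passes nil attributes, locks the destination for writing, and copies row by row in case the strides differ (the function name is illustrative, not from the original post):

import CoreVideo
import Foundation

// Sketch of a safer deep copy for a non-planar CVPixelBuffer (e.g. a depth map).
func copyNonPlanarPixelBuffer(_ input: CVPixelBuffer) -> CVPixelBuffer? {
    var copyOut: CVPixelBuffer?
    // Pass nil attributes: CVBufferGetAttachments is not an attributes
    // dictionary, and passing it here can yield a non-IOSurface buffer.
    let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                     CVPixelBufferGetWidth(input),
                                     CVPixelBufferGetHeight(input),
                                     CVPixelBufferGetPixelFormatType(input),
                                     nil,
                                     &copyOut)
    guard status == kCVReturnSuccess, let output = copyOut else { return nil }
    CVPixelBufferLockBaseAddress(input, .readOnly)
    CVPixelBufferLockBaseAddress(output, [])   // we are writing, so not .readOnly
    defer {
        CVPixelBufferUnlockBaseAddress(output, [])
        CVPixelBufferUnlockBaseAddress(input, .readOnly)
    }
    guard var src = CVPixelBufferGetBaseAddress(input),
          var dst = CVPixelBufferGetBaseAddress(output) else { return nil }
    let srcRowBytes = CVPixelBufferGetBytesPerRow(input)
    let dstRowBytes = CVPixelBufferGetBytesPerRow(output)
    // The copy may have a different row stride, so copy row by row.
    for _ in 0..<CVPixelBufferGetHeight(input) {
        memcpy(dst, src, min(srcRowBytes, dstRowBytes))
        src += srcRowBytes
        dst += dstRowBytes
    }
    // Carry the original's attachments (e.g. metadata) over to the copy.
    CVBufferPropagateAttachments(input, output)
    return output
}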

Related

How to convert a numerical data array into RAW image data in Swift?

I have a data array of Int16 or Int32 numerical values that are the raw image data from an 11MP camera chip with an RGGB pixel layout (CFA). The data are exported by the camera driver as FITS data, which is basically a long vector of bytes (16 bits per pixel in my case).
I'd like to convert these data into a raw image format in Swift in order to use the powerful debayering and demosaicing features and algorithms in iOS. I do not intend to demosaic myself, since iOS has a great library for this already (see the WWDC 2016 session on Raw Processing with Core Image).
I need to make iOS “believe” my data are actual raw image data.
I tried using CVPixelBufferCreateWithBytes in Swift and then CIImage(cvPixelBuffer:), but to no avail: the CIImage's cgImage is not an RGB color image.
Is there a simple way to create a raw or DNG image in Swift from raw numerical data?
Here is what I tried with the CVPixelBuffer approach, but I do not get any color image out of this:
imgRawData is an [Int32] or [Float32] array with width*height elements.
var pixelBuffer: CVPixelBuffer?
let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
             kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue]
CVPixelBufferCreateWithBytes(kCFAllocatorDefault, width, height, kCVPixelFormatType_14Bayer_RGGB,
                             &imgRawData, 2*width, nil, nil, attrs as CFDictionary, &pixelBuffer)

let dummyImg = UIImage(systemName: "star.fill")?.cgImage
let ciiraw = CIImage(cvPixelBuffer: pixelBuffer!)
let cif = CIFilter.lanczosScaleTransform()
cif.scale = 0.25
cif.inputImage = ciiraw
let cii = cif.outputImage
let context = CIContext(options: nil)
guard let cgi = context.createCGImage(cii!, from: cii!.extent) else { return dummyImg! }
Quick Look in Xcode shows me only black-and-white or grayscale images, and so does the SwiftUI view of the CGImage...
You can use CGContext and pass your raw values in through the data parameter; see this initializer:
init?(data: UnsafeMutableRawPointer?, width: Int, height: Int, bitsPerComponent: Int, bytesPerRow: Int, space: CGColorSpace, bitmapInfo: UInt32)
And for space parameter, which takes CGColorSpace you would use CGColorSpaceCreateDeviceRGB().
You will then use your image with code similar to this:
let imageRef = context.makeImage()
let imageRep = NSBitmapImageRep(cgImage: imageRef!)
(Note that NSBitmapImageRep is AppKit-only; on iOS you would wrap the CGImage in a UIImage instead.)
Play around with it for a bit, I think you will find what you are looking for.
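For illustration, a minimal sketch along those lines, assuming 16-bit samples and mapping each one to an 8-bit gray preview pixel. This does not debayer the RGGB data; it only renders the raw samples for inspection, and the function and parameter names are made up for the example:

import CoreGraphics

// Sketch: render raw 16-bit samples as an 8-bit RGB CGImage for inspection.
// This is just a grayscale preview, not a demosaic of the Bayer pattern.
func previewImage(from samples: [UInt16], width: Int, height: Int) -> CGImage? {
    guard samples.count >= width * height,
          let ctx = CGContext(data: nil,
                              width: width,
                              height: height,
                              bitsPerComponent: 8,
                              bytesPerRow: 0,   // let CG pick the row stride
                              space: CGColorSpaceCreateDeviceRGB(),
                              bitmapInfo: CGImageAlphaInfo.noneSkipLast.rawValue),
          let base = ctx.data else { return nil }
    for y in 0..<height {
        let row = base + y * ctx.bytesPerRow
        for x in 0..<width {
            let v = UInt8(samples[y * width + x] >> 8)  // high byte of the sample
            let p = row + x * 4
            // Write the same value to R, G, and B; the fourth (skipped) byte is ignored.
            p.storeBytes(of: v, toByteOffset: 0, as: UInt8.self)
            p.storeBytes(of: v, toByteOffset: 1, as: UInt8.self)
            p.storeBytes(of: v, toByteOffset: 2, as: UInt8.self)
        }
    }
    return ctx.makeImage()
}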

MTKView frequently displaying scrambled MTLTextures

I am working on an MTKView-backed paint program which can replay painting history via an array of MTLTextures that store keyframes. I am having an issue in which sometimes the content of these MTLTextures is scrambled.
As an example, say I want to store a section of the drawing below as a keyframe:
During playback, sometimes the drawing will display exactly as intended, but sometimes, it will display like this:
Note the distorted portion of the picture. (The undistorted portion constitutes a static background image that's not part of the keyframe in question)
I describe the way I create individual MTLTextures from the MTKView's currentDrawable below. Because of color depth issues I won't go into, the process may seem a little round-about.
I first get a CGImage of the subsection of the screen that constitutes a keyframe.
I use that CGImage to create an MTLTexture tied to the MTKView's device.
I store that MTLTexture into a MTLTextureStructure that stores the MTLTexture and the keyframe's bounding-box (which I'll need later)
Lastly, I store it in an array of MTLTextureStructures (keyframeMetalArray). During playback, when I hit a keyframe, I get it from this keyframeMetalArray.
The associated code is outlined below.
let keyframeCGImage = weakSelf!.canvasMetalViewPainting.mtlTextureToCGImage(bbox: keyframeBbox, copyMode: copyTextureMode.textureKeyframe) // convert from MetalTexture to CGImage
let keyframeMTLTexture = weakSelf!.canvasMetalViewPainting.CGImageToMTLTexture(cgImage: keyframeCGImage)
let keyframeMTLTextureStruc = mtlTextureStructure(texture: keyframeMTLTexture, bbox: keyframeBbox, strokeType: brushTypeMode.brush)
weakSelf!.keyframeMetalArray.append(keyframeMTLTextureStruc)
Without providing specifics about how each conversion is happening, I wonder if, from an architectural point of view, I'm overlooking something that is corrupting the data stored in keyframeMetalArray. It may be unwise to try to store these MTLTextures in volatile arrays, but I don't know that for a fact. I just figured using MTLTextures would be the quickest way to update content.
By the way, when I swap the arrays of keyframe textures out for arrays of UIImage.pngData, I have no display issues, but it's a lot slower. On the plus side, it tells me that the initial capture from currentDrawable to keyframeCGImage is working just fine.
Any thoughts would be appreciated.
p.s. adding a bit of detail based on the feedback:
mtlTextureToCGImage:
func mtlTextureToCGImage(bbox: CGRect, copyMode: copyTextureMode) -> CGImage {
    let kciOptions = [convertFromCIContextOption(CIContextOption.outputPremultiplied): true,
                      convertFromCIContextOption(CIContextOption.useSoftwareRenderer): false] as [String: Any]
    let bboxStrokeScaledFlippedY = CGRect(x: bbox.origin.x * self.viewContentScaleFactor,
                                          y: (self.viewBounds.height - bbox.origin.y - bbox.height) * self.viewContentScaleFactor,
                                          width: bbox.width * self.viewContentScaleFactor,
                                          height: bbox.height * self.viewContentScaleFactor)
    let strokeCIImage = CIImage(mtlTexture: metalDrawableTextureKeyframe,
                                options: convertToOptionalCIImageOptionDictionary(kciOptions))!
        .oriented(CGImagePropertyOrientation.downMirrored)
    let imageCropCG = cicontext.createCGImage(strokeCIImage,
                                              from: bboxStrokeScaledFlippedY,
                                              format: CIFormat.RGBA8,
                                              colorSpace: colorSpaceGenericRGBLinear)
    cicontext.clearCaches()
    return imageCropCG!
} // end of func mtlTextureToCGImage(bbox:copyMode:)
CGImageToMTLTexture:
func CGImageToMTLTexture(cgImage: CGImage) -> MTLTexture {
    // Note that we forego the more direct method of creating stampTexture:
    // let stampTexture = try! MTKTextureLoader(device: self.device!).newTexture(cgImage: strokeUIImage.cgImage!, options: nil)
    // because MTKTextureLoader seems to be doing additional processing which messes with the resulting texture/colorspace
    let width = cgImage.width
    let height = cgImage.height
    let bytesPerPixel = 4
    let rowBytes = width * bytesPerPixel
    let texDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm,
                                                                 width: width,
                                                                 height: height,
                                                                 mipmapped: false)
    texDescriptor.usage = .shaderRead
    texDescriptor.storageMode = .shared
    guard let stampTexture = device!.makeTexture(descriptor: texDescriptor) else {
        return brushTextureSquare // return SOMETHING
    }
    let dstData: CFData = (cgImage.dataProvider!.data)!
    let pixelData = CFDataGetBytePtr(dstData)
    let region = MTLRegionMake2D(0, 0, width, height)
    print("[MetalViewPainting]: w= \(width) | h= \(height) region = \(region.size)")
    stampTexture.replace(region: region, mipmapLevel: 0, withBytes: pixelData!, bytesPerRow: rowBytes)
    return stampTexture
} // end of func CGImageToMTLTexture(cgImage:)
The type of distortion looks like a bytes-per-row alignment issue between CGImage and MTLTexture. You're probably only seeing this issue when your image is a certain size that falls outside of the bytes-per-row alignment requirement of your MTLDevice. If you really need to store the texture as a CGImage, ensure that you are using the bytesPerRow value of the CGImage when copying back to the texture.
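For example, the copy in CGImageToMTLTexture above could read the stride from the CGImage itself instead of assuming width * 4 (a sketch reusing the question's variable names):

let dstData: CFData = (cgImage.dataProvider!.data)!
let pixelData = CFDataGetBytePtr(dstData)!
let region = MTLRegionMake2D(0, 0, width, height)
stampTexture.replace(region: region,
                     mipmapLevel: 0,
                     withBytes: pixelData,
                     // Use the CGImage's actual row stride; CG may pad each
                     // row for alignment, and assuming width * 4 is what
                     // scrambles images of certain widths.
                     bytesPerRow: cgImage.bytesPerRow)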

Generate Laplacian image by Apple-Metal MPSImageLaplacian

I am trying to generate a Laplacian image out of an RGB CGImage using the Metal MPSImageLaplacian kernel.
The current code used:
if let croppedImage = self.cropImage2(image: UIImage(ciImage: image), rect: rect)?.cgImage {
    let commandBuffer = self.commandQueue.makeCommandBuffer()!
    let laplacian = MPSImageLaplacian(device: self.device)
    let textureLoader = MTKTextureLoader(device: self.device)
    let options: [MTKTextureLoader.Option: Any]? = nil
    let srcTex = try! textureLoader.newTexture(cgImage: croppedImage, options: options)
    let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: srcTex.pixelFormat,
                                                        width: srcTex.width,
                                                        height: srcTex.height,
                                                        mipmapped: false)
    let lapTex = self.device.makeTexture(descriptor: desc)
    laplacian.encode(commandBuffer: commandBuffer, sourceTexture: srcTex, destinationTexture: lapTex!)
    let output = CIImage(mtlTexture: lapTex!, options: [:])?.cgImage
    print("output: \(output?.width)")
}
I suspect the problem is in makeTexture:
let lapTex = self.device.makeTexture(descriptor: desc)
The width and height of lapTex in the debugger are invalid, although desc and srcTex contain valid data, including width and height.
It looks like the order of initialisation is wrong, but I couldn't find what.
Does anyone have an idea what is wrong?
Thanks
There are a few things wrong here.
First, as mentioned in my comment, the command buffer isn't being committed, so the kernel work is never being performed.
Second, you need to wait for the work to complete before attempting to read back the results. (On macOS you'd additionally need to use a blit command encoder to ensure that the contents of the texture are copied back to CPU-accessible memory.)
Third, it's important to create the destination texture with the appropriate usage flags. The default of .shaderRead is insufficient in this case, since the MPS kernel writes to the texture. Therefore, you should explicitly set the usage property on the texture descriptor (to either [.shaderRead, .shaderWrite] or .shaderWrite, depending on how you go on to use the texture).
Fourth, it may be the case that the pixel format of your source texture isn't a writable format, so unless you're absolutely certain it is, consider setting the destination pixel format to a known-writable format (like .rgba8Unorm) instead of assuming the destination should match the source. This also helps later when creating CGImages.
Finally, there is no guarantee that the cgImage property of a CIImage is non-nil when it wasn't created from a CGImage. Calling the property doesn't (necessarily) create a new backing CGImage. So, you need to explicitly create a CGImage somehow.
One way of doing this would be to create a Metal device-backed CIContext and use its createCGImage(_:from:) method. Although this might work, it seems redundant if the intent is simply to create a CGImage from a MTLTexture (for display purposes, let's say).
Instead, consider using the getBytes(_:bytesPerRow:from:mipmapLevel:) method to get the bytes from the texture and load them into a CG bitmap context. It's then trivial to create a CGImage from the context.
Here's a function that computes the Laplacian of an image and returns the resulting image:
func laplacian(_ image: CGImage) -> CGImage? {
    let commandBuffer = self.commandQueue.makeCommandBuffer()!
    let laplacian = MPSImageLaplacian(device: self.device)
    let textureLoader = MTKTextureLoader(device: self.device)
    let options: [MTKTextureLoader.Option: Any]? = nil
    let srcTex = try! textureLoader.newTexture(cgImage: image, options: options)
    let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: srcTex.pixelFormat,
                                                        width: srcTex.width,
                                                        height: srcTex.height,
                                                        mipmapped: false)
    desc.pixelFormat = .rgba8Unorm
    desc.usage = [.shaderRead, .shaderWrite]
    let lapTex = self.device.makeTexture(descriptor: desc)!
    laplacian.encode(commandBuffer: commandBuffer, sourceTexture: srcTex, destinationTexture: lapTex)
    #if os(macOS)
    let blitCommandEncoder = commandBuffer.makeBlitCommandEncoder()!
    blitCommandEncoder.synchronize(resource: lapTex)
    blitCommandEncoder.endEncoding()
    #endif
    commandBuffer.commit()
    commandBuffer.waitUntilCompleted()
    // Note: You may want to use a different color space depending
    // on what you're doing with the image
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    // Note: We skip the last component (A) since the Laplacian of the alpha
    // channel of an opaque image is 0 everywhere, and that interacts oddly
    // when we treat the result as an RGBA image.
    let bitmapInfo = CGImageAlphaInfo.noneSkipLast.rawValue
    let bytesPerRow = lapTex.width * 4
    let bitmapContext = CGContext(data: nil,
                                  width: lapTex.width,
                                  height: lapTex.height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: bytesPerRow,
                                  space: colorSpace,
                                  bitmapInfo: bitmapInfo)!
    lapTex.getBytes(bitmapContext.data!,
                    bytesPerRow: bytesPerRow,
                    from: MTLRegionMake2D(0, 0, lapTex.width, lapTex.height),
                    mipmapLevel: 0)
    return bitmapContext.makeImage()
}
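Calling it is then straightforward. For example (sourceImage and imageView are hypothetical names, and self is assumed to hold the device and commandQueue):

if let result = laplacian(sourceImage) {
    imageView.image = UIImage(cgImage: result)  // display the Laplacian
}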

Raw Image data format from iOS Camera

I need to save raw data from the iOS camera to the cloud. I get a CVPixelBuffer from the iOS camera. I do not specify what format (kCVPixelBufferPixelFormatTypeKey) I want the CVPixelBuffer in when I set up the iOS camera.
I turn the CVPixelBuffer into a Data object like this.
let buffer: CVPixelBuffer = //My buffer from the camera

//Get the height for the size calculation
let height = CVPixelBufferGetHeight(buffer)
//Get the bytes per row for the size calculation
let bytesPerRow = CVPixelBufferGetBytesPerRow(buffer)
//Lock the buffer so we can turn it into data
CVPixelBufferLockBaseAddress(buffer, CVPixelBufferLockFlags.readOnly)
//Get the base address. This is the address in memory of where the start of the buffer is currently stored.
guard let pointer = CVPixelBufferGetBaseAddress(buffer) else {
    //If we failed to get the base address, unlock the buffer and clean up.
    CVPixelBufferUnlockBaseAddress(buffer, CVPixelBufferLockFlags.readOnly)
    return
}
let data = Data(bytes: pointer, count: height * bytesPerRow)
//Data(bytes:count:) copies, so unlock the buffer once the copy is made.
CVPixelBufferUnlockBaseAddress(buffer, CVPixelBufferLockFlags.readOnly)
Am I doing this right?
Also, when I get this data back how would I turn it into a CIImage?
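No answer is recorded here, but note that the bytes alone aren't enough to go back: you would also need to persist the width, height, pixel format, and bytes-per-row alongside the Data. A minimal sketch under those assumptions (the function name is illustrative, and CIImage(cvPixelBuffer:) requires a format Core Image supports, such as 32BGRA):

import CoreImage
import CoreVideo

// Sketch: rebuild a CVPixelBuffer from saved bytes, then wrap it in a CIImage.
// Assumes width, height, pixelFormat, and bytesPerRow were saved with the data.
func makeCIImage(from data: Data, width: Int, height: Int,
                 pixelFormat: OSType, bytesPerRow: Int) -> CIImage? {
    guard data.count >= height * bytesPerRow else { return nil }
    var pixelBuffer: CVPixelBuffer?
    guard CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                              pixelFormat, nil, &pixelBuffer) == kCVReturnSuccess,
          let buffer = pixelBuffer else { return nil }
    CVPixelBufferLockBaseAddress(buffer, [])
    defer { CVPixelBufferUnlockBaseAddress(buffer, []) }
    guard let dst = CVPixelBufferGetBaseAddress(buffer) else { return nil }
    let dstRowBytes = CVPixelBufferGetBytesPerRow(buffer)
    data.withUnsafeBytes { (src: UnsafeRawBufferPointer) in
        // The new buffer's stride may differ from the one that was saved.
        for row in 0..<height {
            memcpy(dst + row * dstRowBytes,
                   src.baseAddress! + row * bytesPerRow,
                   min(bytesPerRow, dstRowBytes))
        }
    }
    return CIImage(cvPixelBuffer: buffer)
}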

MTLTexture from CMSampleBuffer has 0 bytesPerRow

I am converting the CMSampleBuffer argument in the captureOutput function of my AVCaptureVideoDataOutput delegate into an MTLTexture like so (side note: I have set the pixel format of the video output to kCVPixelFormatType_32BGRA):
func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
    let width = CVPixelBufferGetWidth(imageBuffer)
    let height = CVPixelBufferGetHeight(imageBuffer)
    var outTexture: CVMetalTexture? = nil
    var textCache: CVMetalTextureCache?
    CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, metalDevice, nil, &textCache)
    var textureRef: CVMetalTexture?
    CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textCache!, imageBuffer, nil,
                                              MTLPixelFormat.bgra8Unorm, width, height, 0, &textureRef)
    let texture = CVMetalTextureGetTexture(textureRef!)!
    print(texture.bufferBytesPerRow)
}
The issue is that when I print the bytes per row of the texture, it always prints 0, which is problematic because I later try to convert the texture back into a UIImage using the methodology in this article: https://www.invasivecode.com/weblog/metal-image-processing. Why is the texture I receive seemingly empty? I know the CMSampleBuffer is fine because I can convert it into a UIImage and draw it like so:
let myPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
let myCIimage = CIImage(cvPixelBuffer: myPixelBuffer!)
let image = UIImage(ciImage: myCIimage)
self.imageView.image = image
The bufferBytesPerRow property is only meaningful for a texture that was created using the makeTexture(descriptor:offset:bytesPerRow:) method of a MTLBuffer. As you can see, the bytes-per-row is an input to that method to tell Metal how to interpret the data in the buffer. (The texture descriptor provides additional information, too, of course.) This method is only a means to get that back out.
Note that textures created from buffers can also report which buffer they were created from and the offset supplied to the above method.
Textures created in other ways don't have that information. These textures have no intrinsic bytes-per-row. Their data is not necessarily organized internally in a simple raster buffer.
If/when you want to get the data from a texture to either a Metal buffer or a plain old byte array, you have the freedom to choose a bytes-per-row value that's useful for your purposes, so long as it's at least the bytes-per-pixel of the texture pixel format times the texture's width. (It's more complicated for compressed formats.) The docs for getBytes(_:bytesPerRow:from:mipmapLevel:) and copy(from:sourceSlice:sourceLevel:sourceOrigin:sourceSize:to:destinationOffset:destinationBytesPerRow:destinationBytesPerImage:) explain further.
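For example, to pull the pixels of a .bgra8Unorm texture into a plain byte array with a stride you pick yourself (a sketch; the function name is illustrative):

import Metal

// Sketch: copy a .bgra8Unorm texture's contents into a byte array.
// We choose bytesPerRow ourselves; it only has to be at least
// width * bytesPerPixel for this pixel format.
func bytes(from texture: MTLTexture) -> [UInt8] {
    let bytesPerPixel = 4   // BGRA8 = 4 bytes per pixel
    let bytesPerRow = texture.width * bytesPerPixel
    var data = [UInt8](repeating: 0, count: bytesPerRow * texture.height)
    data.withUnsafeMutableBytes { buf in
        texture.getBytes(buf.baseAddress!,
                         bytesPerRow: bytesPerRow,
                         from: MTLRegionMake2D(0, 0, texture.width, texture.height),
                         mipmapLevel: 0)
    }
    return data
}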
