I am trying to create a video from a sequence of images using AVAssetWriterInput, for which I need to create a pixel buffer from each of my images.
For that I call the function below, which works, but after creating a few videos the app receives a memory warning and crashes. I have debugged it using Instruments and it appears I have a memory leak here.
I have tried making the variables pixelBufferPointer and pxData class properties and destroying/deallocating them once the video is created, but that didn't appear to make any difference. Is there something I should be doing to release this memory?
func createPixelBufferFromCGImage(image: CGImageRef) -> CVPixelBufferRef {
    let options = [
        "kCVPixelBufferCGImageCompatibilityKey": true,
        "kCVPixelBufferCGBitmapContextCompatibilityKey": true
    ]
    var videoWidth = 496
    var videoHeight = 668
    let frameSize = CGSizeMake(CGFloat(videoWidth), CGFloat(videoHeight))
    var pixelBufferPointer = UnsafeMutablePointer<Unmanaged<CVPixelBuffer>?>.alloc(1)
    var status: CVReturn = CVPixelBufferCreate(
        kCFAllocatorDefault,
        Int(frameSize.width),
        Int(frameSize.height),
        OSType(kCVPixelFormatType_32ARGB),
        options,
        pixelBufferPointer
    )
    var lockStatus: CVReturn = CVPixelBufferLockBaseAddress(pixelBufferPointer.memory?.takeUnretainedValue(), 0)
    var pxData: UnsafeMutablePointer<(Void)> = CVPixelBufferGetBaseAddress(pixelBufferPointer.memory?.takeUnretainedValue())
    let bitmapinfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.NoneSkipFirst.rawValue)
    let rgbColorSpace: CGColorSpace = CGColorSpaceCreateDeviceRGB()
    var context: CGContextRef = CGBitmapContextCreate(
        pxData,
        Int(frameSize.width),
        Int(frameSize.height),
        8,
        //4 * CGImageGetWidth(image),
        4 * Int(frameSize.width),
        rgbColorSpace,
        bitmapinfo
    )
    CGContextDrawImage(context, CGRectMake(0, 0, frameSize.width, frameSize.height), image)
    CVPixelBufferUnlockBaseAddress(pixelBufferPointer.memory?.takeUnretainedValue(), 0)
    UIGraphicsEndImageContext()
    return pixelBufferPointer.memory!.takeUnretainedValue()
}
The Unmanaged<CVPixelBuffer> is leaking.
I would prefer Russell Austin's answer, but I couldn't figure out how to pass pixelBufferPointer to CVPixelBufferCreate without a syntax error (Swift noob). Failing that, changing the line
return pixelBufferPointer.memory!.takeUnretainedValue()
to
return pixelBufferPointer.memory!.takeRetainedValue()
fixes the leak, since CVPixelBufferCreate returns a +1-retained buffer and takeRetainedValue consumes that retain instead of leaving it outstanding.
Take a look at Memory leak on CIContext createCGImage at iOS 9?
I'm having a similar issue; the leak is due to the following code:
CGImageRef processedCGImage = [_context createCGImage:ciImage
                                             fromRect:[ciImage extent]];
See here for more contextual info: Memory Leak in CMSampleBufferGetImageBuffer
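If you hit this in a per-frame loop, a common mitigation (a sketch of the general pattern, not taken verbatim from the linked answers) is to reuse a single CIContext and wrap each frame's createCGImage call in an autoreleasepool so intermediates are released promptly; sharedContext and process(_:) are hypothetical names:

import CoreImage

// Hypothetical helper: reuse one CIContext instead of creating one per frame,
// and drain an autorelease pool each frame so the CGImage backing store
// doesn't accumulate until some outer pool drains.
let sharedContext = CIContext(options: [.useSoftwareRenderer: false])

func process(_ ciImage: CIImage) {
    autoreleasepool {
        if let cgImage = sharedContext.createCGImage(ciImage, from: ciImage.extent) {
            // ... use cgImage, then let it go out of scope here ...
            _ = cgImage
        }
    }
}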
You don't have to use the UnsafeMutablePointer types. You were probably trying to convert from some Objective-C example. I was doing the same thing but eventually found out how to do it without them. Try
var pixelBufferPointer: CVPixelBuffer?
...
var status: CVReturn = CVPixelBufferCreate(
    kCFAllocatorDefault,
    Int(frameSize.width),
    Int(frameSize.height),
    OSType(kCVPixelFormatType_32ARGB),
    options,
    &pixelBufferPointer
)
and
var pxData = CVPixelBufferGetBaseAddress(pixelBufferPointer)
Then you don't have to do the takeUnretainedValue, etc.
You might also try using AVAssetWriterInputPixelBufferAdaptor and a pixel buffer pool, which is supposed to be more efficient. See this example.
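For reference, a minimal sketch of the adaptor-plus-pool approach in current Swift, assuming writerInput is an already configured AVAssetWriterInput and the writer session has been started (the pool is nil before that):

import AVFoundation

// Sketch: attach an adaptor to an existing AVAssetWriterInput and draw
// each frame into a buffer vended by the adaptor's pixel buffer pool.
let attributes: [String: Any] = [
    kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32ARGB,
    kCVPixelBufferWidthKey as String: 496,
    kCVPixelBufferHeightKey as String: 668
]
let adaptor = AVAssetWriterInputPixelBufferAdaptor(
    assetWriterInput: writerInput,
    sourcePixelBufferAttributes: attributes)

func append(_ image: CGImage, at time: CMTime) -> Bool {
    guard let pool = adaptor.pixelBufferPool else { return false }
    var pixelBuffer: CVPixelBuffer?
    // Buffers from the pool are recycled, so this avoids a fresh
    // allocation (and any leak) per frame.
    guard CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &pixelBuffer) == kCVReturnSuccess,
          let buffer = pixelBuffer else { return false }
    CVPixelBufferLockBaseAddress(buffer, [])
    let width = CVPixelBufferGetWidth(buffer)
    let height = CVPixelBufferGetHeight(buffer)
    if let context = CGContext(data: CVPixelBufferGetBaseAddress(buffer),
                               width: width, height: height,
                               bitsPerComponent: 8,
                               bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                               space: CGColorSpaceCreateDeviceRGB(),
                               bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue) {
        context.draw(image, in: CGRect(x: 0, y: 0, width: width, height: height))
    }
    CVPixelBufferUnlockBaseAddress(buffer, [])
    return adaptor.append(buffer, withPresentationTime: time)
}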
When using CVPixelBufferCreate, the UnsafeMutablePointer has to be destroyed after retrieving its memory.
When I create a CVPixelBuffer, I do it like this:
func allocPixelBuffer() -> CVPixelBuffer {
    let pixelBufferAttributes: CFDictionary = [...]
    let pixelBufferOut = UnsafeMutablePointer<CVPixelBuffer?>.alloc(1)
    _ = CVPixelBufferCreate(kCFAllocatorDefault,
                            Int(Width),
                            Int(Height),
                            OSType(kCVPixelFormatType_32ARGB),
                            pixelBufferAttributes,
                            pixelBufferOut)
    let pixelBuffer = pixelBufferOut.memory!
    pixelBufferOut.destroy()
    pixelBufferOut.dealloc(1) // also free the allocation itself
    return pixelBuffer
}
I am trying to generate a Laplacian image from an RGB CGImage by using the Metal Laplacian filter (MPSImageLaplacian).
The current code used:
if let croppedImage = self.cropImage2(image: UIImage(ciImage: image), rect: rect)?.cgImage {
    let commandBuffer = self.commandQueue.makeCommandBuffer()!
    let laplacian = MPSImageLaplacian(device: self.device)
    let textureLoader = MTKTextureLoader(device: self.device)
    let options: [MTKTextureLoader.Option : Any]? = nil
    let srcTex = try! textureLoader.newTexture(cgImage: croppedImage, options: options)
    let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: srcTex.pixelFormat, width: srcTex.width, height: srcTex.height, mipmapped: false)
    let lapTex = self.device.makeTexture(descriptor: desc)
    laplacian.encode(commandBuffer: commandBuffer, sourceTexture: srcTex, destinationTexture: lapTex!)
    let output = CIImage(mtlTexture: lapTex!, options: [:])?.cgImage
    print("output: \(output?.width)")
    print("")
}
I suspect the problem is in makeTexture:
let lapTex = self.device.makeTexture(descriptor: desc)
The width and height of lapTex in the debugger are invalid, although desc and srcTex contain valid data, including width and height.
It looks like the order of initialisation is wrong, but I couldn't find what.
Does anyone have an idea what is wrong?
Thanks
There are a few things wrong here.
First, as mentioned in my comment, the command buffer isn't being committed, so the kernel work is never being performed.
Second, you need to wait for the work to complete before attempting to read back the results. (On macOS you'd additionally need to use a blit command encoder to ensure that the contents of the texture are copied back to CPU-accessible memory.)
Third, it's important to create the destination texture with the appropriate usage flags. The default of .shaderRead is insufficient in this case, since the MPS kernel writes to the texture. Therefore, you should explicitly set the usage property on the texture descriptor (to either [.shaderRead, .shaderWrite] or .shaderWrite, depending on how you go on to use the texture).
Fourth, it may be the case that the pixel format of your source texture isn't a writable format, so unless you're absolutely certain it is, consider setting the destination pixel format to a known-writable format (like .rgba8unorm) instead of assuming the destination should match the source. This also helps later when creating CGImages.
Finally, there is no guarantee that the cgImage property of a CIImage is non-nil when it wasn't created from a CGImage. Calling the property doesn't (necessarily) create a new backing CGImage. So, you need to explicitly create a CGImage somehow.
One way of doing this would be to create a Metal device-backed CIContext and use its createCGImage(_:from:) method. Although this might work, it seems redundant if the intent is simply to create a CGImage from a MTLTexture (for display purposes, let's say).
Instead, consider using the getBytes(_:bytesPerRow:from:mipmapLevel:) method to get the bytes from the texture and load them into a CG bitmap context. It's then trivial to create a CGImage from the context.
Here's a function that computes the Laplacian of an image and returns the resulting image:
func laplacian(_ image: CGImage) -> CGImage? {
    let commandBuffer = self.commandQueue.makeCommandBuffer()!
    let laplacian = MPSImageLaplacian(device: self.device)
    let textureLoader = MTKTextureLoader(device: self.device)
    let options: [MTKTextureLoader.Option : Any]? = nil
    let srcTex = try! textureLoader.newTexture(cgImage: image, options: options)
    let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: srcTex.pixelFormat,
                                                        width: srcTex.width,
                                                        height: srcTex.height,
                                                        mipmapped: false)
    desc.pixelFormat = .rgba8Unorm
    desc.usage = [.shaderRead, .shaderWrite]
    let lapTex = self.device.makeTexture(descriptor: desc)!
    laplacian.encode(commandBuffer: commandBuffer, sourceTexture: srcTex, destinationTexture: lapTex)

    #if os(macOS)
    let blitCommandEncoder = commandBuffer.makeBlitCommandEncoder()!
    blitCommandEncoder.synchronize(resource: lapTex)
    blitCommandEncoder.endEncoding()
    #endif

    commandBuffer.commit()
    commandBuffer.waitUntilCompleted()

    // Note: You may want to use a different color space depending
    // on what you're doing with the image
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    // Note: We skip the last component (A) since the Laplacian of the alpha
    // channel of an opaque image is 0 everywhere, and that interacts oddly
    // when we treat the result as an RGBA image.
    let bitmapInfo = CGImageAlphaInfo.noneSkipLast.rawValue
    let bytesPerRow = lapTex.width * 4
    let bitmapContext = CGContext(data: nil,
                                  width: lapTex.width,
                                  height: lapTex.height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: bytesPerRow,
                                  space: colorSpace,
                                  bitmapInfo: bitmapInfo)!
    lapTex.getBytes(bitmapContext.data!,
                    bytesPerRow: bytesPerRow,
                    from: MTLRegionMake2D(0, 0, lapTex.width, lapTex.height),
                    mipmapLevel: 0)
    return bitmapContext.makeImage()
}
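Calling it is then straightforward; for example, on iOS (someCGImage and imageView are hypothetical stand-ins):

// Hypothetical call site for the function above.
if let result = laplacian(someCGImage) {
    imageView.image = UIImage(cgImage: result)
}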
I'm trying to load a large image into a MTLTexture, and it works with 4000x6000 images, but when I try with 6000x8000 it fails.
func setTexture(device: MTLDevice, imageName: String) -> MTLTexture? {
    let textureLoader = MTKTextureLoader(device: device)
    var texture: MTLTexture? = nil
    // In iOS 10 the origin was changed.
    let textureLoaderOptions: [MTKTextureLoader.Option: Any]
    if #available(iOS 10.0, *) {
        let origin = MTKTextureLoader.Origin.bottomLeft.rawValue
        textureLoaderOptions = [MTKTextureLoader.Option.origin : origin]
    } else {
        textureLoaderOptions = [:]
    }
    if let textureURL = Bundle.main.url(forResource: imageName, withExtension: nil, subdirectory: "Images") {
        do {
            texture = try textureLoader.newTexture(URL: textureURL, options: textureLoaderOptions)
        } catch {
            print("Texture not created.")
        }
    }
    return texture
}
Pretty basic code. I'm running it on an iPad Pro with an A9 chip (GPU family 3), which should handle textures this large. Should I manually tile it somehow if it doesn't accept this size? In that case, what's the best approach: using MTLRegionMake to copy bytes, slicing in Core Image, or a Core Graphics context?
I appreciate any help.
Following your helpful comments, I decided to load it manually, drawing to a CGContext and copying to a MTLTexture. I'm adding the solution code below. The context shouldn't be created each time a texture is created; it's better to put it outside the function and keep reusing it.
// Grab the CGImage, w = width, h = height...
// (bpc = bits per component, bpp = bits per pixel, both taken from the CGImage)
let context = CGContext(data: nil, width: w, height: h, bitsPerComponent: bpc, bytesPerRow: (bpp / 8) * w, space: colorSpace!, bitmapInfo: bitmapInfo.rawValue)
// Flip vertically: CG draws with the origin at the bottom-left,
// while the texture expects the first row at the top.
let flip = CGAffineTransform(a: 1, b: 0, c: 0, d: -1, tx: 0, ty: CGFloat(h))
context?.concatenate(flip)
context?.draw(cgImage, in: CGRect(x: 0, y: 0, width: CGFloat(w), height: CGFloat(h)))
let textureDescriptor = MTLTextureDescriptor()
textureDescriptor.pixelFormat = .rgba8Unorm
textureDescriptor.width = w
textureDescriptor.height = h
guard let data = context?.data else { print("No data in context."); return nil }
let texture = device.makeTexture(descriptor: textureDescriptor)
// Copy the rendered bytes straight from the context into the texture.
texture?.replace(region: MTLRegionMake2D(0, 0, w, h), mipmapLevel: 0, withBytes: data, bytesPerRow: 4 * w)
return texture
I had this issue before: a texture would load on one device and not on another. I think it is a bug in the texture loader.
You can load a texture manually using CGImage and a CGContext: draw the image into the context, create a MTLTexture buffer, then copy the bytes from the CGContext into the texture using a MTLRegion.
It's not foolproof; you have to make sure to use the correct pixel format for the Metal buffer or you'll get strange results, so either code for one specific format of image you're importing, or do a lot of checking. Apple's Basic Texturing sample shows how you can change the color order before writing the bytes to the texture using a MTLRegion. A rough sketch of that color-order fix follows.
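As an illustration (a sketch only; the right channel map depends on your source image's byte order), BGRA bytes can be permuted to RGBA in place with vImage before the replace(region:mipmapLevel:withBytes:bytesPerRow:) call, reusing data, w, and h from the snippet above:

import Accelerate

// Sketch: swap BGRA -> RGBA in place before uploading to a .rgba8Unorm texture.
var srcBuffer = vImage_Buffer(data: data,
                              height: vImagePixelCount(h),
                              width: vImagePixelCount(w),
                              rowBytes: 4 * w)
let channelMap: [UInt8] = [2, 1, 0, 3] // BGRA -> RGBA
vImagePermuteChannels_ARGB8888(&srcBuffer, &srcBuffer, channelMap, vImage_Flags(kvImageNoFlags))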
I am trying to use a MTKTextureLoader to load a CGImage as a texture. Here is the original image
However after I convert that CGImage into a MTLTexture and that texture back to a CGImage it looks horrible, like this:
Here is sorta what is going on in code.
The image is loaded in as a CGImage (I have checked and that image does appear to have the full visual quality)
I have a function view() that allows me to view an NSImage by using it in a CALayer, like so:
func view() {
    .....
    imageView!.layer = CALayer()
    imageView!.layer!.contentsGravity = kCAGravityResizeAspectFill
    imageView!.layer!.contents = img
    imageView!.wantsLayer = true
}
So I did the following
let cg = CoolImage()
let ns = NSImage(cgImage: cg, size: Size(width: cg.width, height: cg.height))
view(image: ns)
And checked; sure enough, it had the full visual fidelity.
So then I loaded the CGImage into a MTLTexture like so:
let textureLoader = MTKTextureLoader(device: metalState.sharedDevice!)
let options = [
    MTKTextureLoader.Option.textureUsage: NSNumber(value: MTLTextureUsage.shaderRead.rawValue | MTLTextureUsage.shaderWrite.rawValue | MTLTextureUsage.renderTarget.rawValue),
    MTKTextureLoader.Option.SRGB: false
]
return ensure(try textureLoader.newTexture(cgImage: cg, options: options))
I then converted the MTLTexture back to an NSImage like so:
let texture = self
let width = texture.width
let height = texture.height
let bytesPerRow = width * 4
let data = UnsafeMutableRawPointer.allocate(bytes: bytesPerRow * height, alignedTo: 4)
defer {
    data.deallocate(bytes: bytesPerRow * height, alignedTo: 4)
}
let region = MTLRegionMake2D(0, 0, width, height)
texture.getBytes(data, bytesPerRow: bytesPerRow, from: region, mipmapLevel: 0)
var buffer = vImage_Buffer(data: data, height: UInt(height), width: UInt(width), rowBytes: bytesPerRow)
var map: [UInt8] = [0, 1, 2, 3]
if (pixelFormat == .bgra8Unorm) {
    map = [2, 1, 0, 3]
}
vImagePermuteChannels_ARGB8888(&buffer, &buffer, map, 0)
guard let colorSpace = CGColorSpace(name: CGColorSpace.genericRGBLinear) else { return nil }
guard let context = CGContext(data: data, width: width, height: height, bitsPerComponent: 8, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipLast.rawValue) else { return nil }
guard let cgImage = context.makeImage() else { return nil }
return NSImage(cgImage: cgImage, size: Size(width: width, height: height))
And viewed it.
The resulting image was quite saturated, and I believe it was because of the CGImage-to-MTLTexture conversion, which I have been fairly successful with in the past.
Please note that this texture was never rendered, only converted.
You are probably wondering why I am using all of these conversions, and that is a great point. My actual pipeline does not work anything like this; however, it does require each of these conversion components to work smoothly. This is not my actual use case, just something to show the problem.
The problem here isn't the conversion from CGImage to MTLTexture. The problem is that you're assuming that the color space of the source image is linear. More likely than not, the image data is actually sRGB-encoded, so by creating a bitmap context with a generic linear color space, you're incorrectly telling CG that it should gamma-encode the image data before display, which leads to the desaturation you're seeing.
You can fix this by using the native color space of the original CGImage, or by otherwise accounting for the fact that your image data is sRGB-encoded.
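A minimal sketch of that fix against the bitmap-context line in the conversion code above, assuming the original CGImage is available as sourceImage (a hypothetical name):

// Use the source image's color space (typically sRGB) so CG interprets
// the texture bytes the same way the original image was encoded.
// Fall back to sRGB if the source image has no color space attached.
let colorSpace = sourceImage.colorSpace ?? CGColorSpace(name: CGColorSpace.sRGB)!
guard let context = CGContext(data: data, width: width, height: height,
                              bitsPerComponent: 8, bytesPerRow: bytesPerRow,
                              space: colorSpace,
                              bitmapInfo: CGImageAlphaInfo.noneSkipLast.rawValue) else { return nil }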
I am currently working on image processing on iOS 10 with an iPad Pro.
I have written a small Swift 3 image processing app to test the speed of image processing.
My decoder sends a new frame every ~33ms (about 30 FPS), which I need to process with some iOS Core Image filters without additional buffering. Every ~33ms the following function will be called:
func newFrame(_ player: MediaPlayer!, buffer: UnsafeMutableRawPointer!,
              size: Int32, format_fourcc: UnsafeMutablePointer<Int8>!,
              width: Int32, height: Int32, bytes_per_row: Int32,
              pts: Int, will_show: Int32) -> Int32 {
    if String(cString: format_fourcc) == "BGRA" && will_show == 1 {
        // START
        var pixelBuffer: CVPixelBuffer? = nil
        let ret = CVPixelBufferCreateWithBytes(kCFAllocatorSystemDefault,
                                               Int(width),
                                               Int(height),
                                               kCVPixelFormatType_32BGRA,
                                               buffer,
                                               Int(bytes_per_row),
                                               { (releaseContext: UnsafeMutableRawPointer?,
                                                  baseAddr: UnsafeRawPointer?) -> () in
                                                   // Does not need to do anything, since the
                                                   // created CVPixelBuffer will be destroyed
                                                   // automatically in the scope of this function
                                               },
                                               buffer,
                                               nil,
                                               &pixelBuffer)
        // END_1
        if ret != kCVReturnSuccess {
            NSLog("New Frame: Can't create the buffer")
            return -1
        }
        if let pBuff = pixelBuffer {
            let img = CIImage(cvPixelBuffer: pBuff)
                .applyingFilter("CIColorInvert", withInputParameters: [:])
        }
        // END_2
    }
    return 0
}
I need to solve one of the following problems:
Copying the CIImage img raw memory data back to the UnsafeMutableRawPointer buffer memory.
Somehow applying a GPU image filter to the CVPixelBuffer pixelBuffer or the UnsafeMutableRawPointer buffer directly.
The code block between // START and // END_2 needs to run in less than 5ms.
What I know:
The code between // START and // END_1 runs in less than 1.3ms.
Please help with your ideas.
Best regards,
Alex
I found a temporary solution:
1) Create a CIContext in your view:
imgContext = CIContext(eaglContext: eaglContext!)
2) Use the context to draw the filtered CIImage into the pointer's memory:
imgContext.render(img,
                  toBitmap: buffer,
                  rowBytes: Int(bytes_per_row),
                  bounds: CGRect(x: 0,
                                 y: 0,
                                 width: Int(width),
                                 height: Int(height)),
                  format: kCIFormatBGRA8,
                  colorSpace: CGColorSpaceCreateDeviceRGB())
This solution works well, as it uses the SIMD instructions of the iPad's CPU, but the CPU utilization for the copy operation alone is too high, ~30%, and that 30% is added to your program's overall CPU usage.
Does anybody have a better idea of how to let the GPU write directly to the UnsafeMutableRawPointer after the CIFilter?
I am using the Swift language and trying to take snapshot images from the camera viewfinder buffer. So far everything works well except for the image color: it seems incorrect, or the channels are swapped. Below are the code snippets where I set the video settings and capture the image frames.
func addVideoOutput() {
    videoDeviceOutput = AVCaptureVideoDataOutput()
    videoDeviceOutput.videoSettings = NSDictionary(objectsAndKeys: Int(kCVPixelFormatType_32BGRA), kCVPixelBufferPixelFormatTypeKey) as [NSObject: AnyObject]
    // kCVPixelFormatType_32ARGB tested and found not supported
    videoDeviceOutput.alwaysDiscardsLateVideoFrames = true
    videoDeviceOutput.setSampleBufferDelegate(self, queue: sessionQueue)
    if captureSession!.canAddOutput(videoDeviceOutput) {
        captureSession!.addOutput(videoDeviceOutput)
    }
}
/* AVCaptureVideoDataOutput Delegate
------------------------------------------*/
func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
    sessionDelegate?.cameraSessionDidOutputSampleBuffer?(sampleBuffer)
    // Extract a UIImage
    //var pixel_buffer : CVPixelBufferRef?
    let pixel_buffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    CVPixelBufferLockBaseAddress(pixel_buffer, 0)
    // Get the base address of the pixel buffer
    var baseAddress = CVPixelBufferGetBaseAddress(pixel_buffer)
    // Get the number of bytes per row for the pixel buffer
    var bytesPerRow = CVPixelBufferGetBytesPerRow(pixel_buffer)
    // Get the pixel buffer width and height
    let width: Int = CVPixelBufferGetWidth(pixel_buffer)
    let height: Int = CVPixelBufferGetHeight(pixel_buffer)
    /* Create a CGImageRef from the CVImageBufferRef */
    let colorSpace: CGColorSpace = CGColorSpaceCreateDeviceRGB()
    let bitmapInfo = CGBitmapInfo(CGImageAlphaInfo.PremultipliedLast.rawValue)
    var newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, bitmapInfo)
    CVPixelBufferUnlockBaseAddress(pixel_buffer, 0)
    // Get the image frame and save it to local storage
    var refImage: CGImageRef = CGBitmapContextCreateImage(newContext)
    var pixelData = CGDataProviderCopyData(CGImageGetDataProvider(refImage))
    var image: UIImage = UIImage(CGImage: refImage)!
    self.SaveImageToDocumentStorage(image)
}
As you can see from one of the comment lines in the addVideoOutput function, I tried the kCVPixelFormatType_32ARGB format, but it says it's not supported on iOS???
I kinda suspect the video format is 32BGRA while the color space for the image frame is set with CGColorSpaceCreateDeviceRGB(), but I could not find any other suitable RGB format for the video setting.
Any solutions or hints are much appreciated.
Thanks
I found the cause and a solution.
Just in case anyone experiences the same problem: just change the bitmapInfo as follows:
// let bitmapInfo = CGBitmapInfo(CGImageAlphaInfo.PremultipliedLast.rawValue)
var bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.PremultipliedFirst.rawValue) | CGBitmapInfo.ByteOrder32Little
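For reference, the reason this works: BGRA bytes read as a little-endian 32-bit value come out as ARGB, hence alpha-first plus ByteOrder32Little. In current Swift the same flags would be written roughly as:

// BGRA in memory == ARGB read as a little-endian 32-bit value,
// so tell CG: alpha first + 32-bit little-endian byte order.
let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedFirst.rawValue
                                      | CGBitmapInfo.byteOrder32Little.rawValue)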