How to create depth data and add it to an image? - ios

Sorry, I duplicated this question, How to build AVDepthData manually, because it doesn't have the answers I want and I don't have enough rep to comment there. If you don't mind, I could remove my question in the future and ask somebody to move future answers to that topic.
So, my goal is to create depth data and attach it to an arbitrary image. There is an article on how to do it, https://developer.apple.com/documentation/avfoundation/avdepthdata/creating_auxiliary_depth_data_manually, but I don't know how to implement any of its steps. I won't post all my questions at once and will start with the first one.
As a first step, a depth image must be converted per pixel from grayscale to depth or disparity values. I took this snippet from the aforementioned topic:
func buildDepth(image: UIImage) -> AVDepthData? {
    let width = Int(image.size.width)
    let height = Int(image.size.height)
    var maybeDepthMapPixelBuffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_DisparityFloat32, nil, &maybeDepthMapPixelBuffer)
    guard status == kCVReturnSuccess, let depthMapPixelBuffer = maybeDepthMapPixelBuffer else {
        return nil
    }
    CVPixelBufferLockBaseAddress(depthMapPixelBuffer, .init(rawValue: 0))
    guard let baseAddress = CVPixelBufferGetBaseAddress(depthMapPixelBuffer) else {
        return nil
    }
    let buffer = unsafeBitCast(baseAddress, to: UnsafeMutablePointer<Float32>.self)
    for i in 0..<width * height {
        buffer[i] = 0 // disparity must be calculated somehow, but set to 0 for testing purposes
    }
    CVPixelBufferUnlockBaseAddress(depthMapPixelBuffer, .init(rawValue: 0))
    let info: [AnyHashable: Any] = [kCGImagePropertyPixelFormat: kCVPixelFormatType_DisparityFloat32,
                                    kCGImagePropertyWidth: image.size.width,
                                    kCGImagePropertyHeight: image.size.height,
                                    kCGImagePropertyBytesPerRow: CVPixelBufferGetBytesPerRow(depthMapPixelBuffer)]
    let metadata = generateMetadata(image: image)
    let dic: [AnyHashable: Any] = [kCGImageAuxiliaryDataInfoDataDescription: info,
                                   // I get an error when converting baseAddress to CFData
                                   kCGImageAuxiliaryDataInfoData: baseAddress as! CFData,
                                   kCGImageAuxiliaryDataInfoMetadata: metadata]
    guard let depthData = try? AVDepthData(fromDictionaryRepresentation: dic) else {
        return nil
    }
    return depthData
}
Then the article says to load the base address of the pixel buffer (which holds the disparity map) as CFData and pass it as the kCGImageAuxiliaryDataInfoData value in a CFDictionary. But I get an error when converting baseAddress to CFData. I tried converting the pixel buffer itself too, but without luck. What do I have to pass as kCGImageAuxiliaryDataInfoData? And did I create the disparity buffer correctly in the first place?
Aside from this problem, it would be cool if somebody could direct me to some sample code on how to do the whole thing.

Your question really helped me get from CVPixelBuffer to AVDepthData, so thank you. It was about 95% of the way there.
To fix your (and my) issue I added the following:
let bytesPerRow = CVPixelBufferGetBytesPerRow(depthMapPixelBuffer)
let size = bytesPerRow * height
... code code code ...
CVPixelBufferLockBaseAddress(depthMapPixelBuffer!, .init(rawValue: 0))
let baseAddress = CVPixelBufferGetBaseAddressOfPlane(depthMapPixelBuffer!, 0)
let data = NSData(bytes: baseAddress, length: size)
... code code code ...
let dic: [AnyHashable: Any] = [kCGImageAuxiliaryDataInfoDataDescription: info,
                               kCGImageAuxiliaryDataInfoData: data,
                               kCGImageAuxiliaryDataInfoMetadata: metadata]
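To round out the "whole thing" part of the original question (attaching the depth data to an image), here is a minimal sketch of how the resulting AVDepthData could be written into a JPEG with ImageIO once buildDepth(image:) succeeds with the NSData fix above. It assumes iOS 11+; writeImage(_:with:to:) and outputURL are hypothetical names used only for this illustration:
import AVFoundation
import ImageIO
import MobileCoreServices
import UIKit

// Hypothetical helper: writes `image` plus `depthData` to `outputURL` as a JPEG.
func writeImage(_ image: UIImage, with depthData: AVDepthData, to outputURL: URL) -> Bool {
    guard let cgImage = image.cgImage,
          let destination = CGImageDestinationCreateWithURL(outputURL as CFURL,
                                                            kUTTypeJPEG, 1, nil) else {
        return false
    }
    CGImageDestinationAddImage(destination, cgImage, nil)

    // Ask AVDepthData for the auxiliary-data dictionary and its type
    // (kCGImageAuxiliaryDataTypeDisparity for a disparity map).
    var auxDataType: NSString?
    guard let auxData = depthData.dictionaryRepresentation(forAuxiliaryDataType: &auxDataType),
          let type = auxDataType else {
        return false
    }
    CGImageDestinationAddAuxiliaryDataInfo(destination, type as CFString, auxData as CFDictionary)

    return CGImageDestinationFinalize(destination)
}
The convenient part is that AVDepthData hands back the auxiliary dictionary itself via dictionaryRepresentation(forAuxiliaryDataType:), so on the writing side you don't have to rebuild the info/data/metadata dictionary by hand.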

Related

How to combine MTLTextures into the currentDrawable

I am new to using Metal, but I have been following the tutorial here that takes the camera output and renders it onto the screen using Metal.
Now I want to take an image, turn it into an MTLTexture, and position and render that texture on top of the camera output.
My current rendering code is as follows:
private func render(texture: MTLTexture, withCommandBuffer commandBuffer: MTLCommandBuffer, device: MTLDevice) {
    guard
        let currentRenderPassDescriptor = metalView.currentRenderPassDescriptor,
        let currentDrawable = metalView.currentDrawable,
        let renderPipelineState = renderPipelineState,
        let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: currentRenderPassDescriptor)
    else {
        semaphore.signal()
        return
    }

    encoder.pushDebugGroup("RenderFrame")
    encoder.setRenderPipelineState(renderPipelineState)
    encoder.setFragmentTexture(texture, index: 0)
    encoder.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4, instanceCount: 1)
    encoder.popDebugGroup()
    encoder.endEncoding()

    commandBuffer.addScheduledHandler { [weak self] (buffer) in
        guard let unwrappedSelf = self else { return }
        unwrappedSelf.didRenderTexture(texture, withCommandBuffer: buffer, device: device)
        unwrappedSelf.semaphore.signal()
    }
    commandBuffer.present(currentDrawable)
    commandBuffer.commit()
}
I know that I can convert a UIImage to an MTLTexture using the following code:
let textureLoader = MTKTextureLoader(device: device)
let cgImage = UIImage(named: "myImage")!.cgImage!
let imageTexture = try! textureLoader.newTexture(cgImage: cgImage, options: nil)
So now I have two MTLTextures. Is there a simple function that allows me to combine them? I've been trying to search online and someone mentioned a function called over, but I haven't actually been able to find that one. Any help would be greatly appreciated.
You can simply do this inside the shader by adding or multiplying color values. I guess that's what shaders are for.
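For example, here is a rough sketch of the Swift side of that idea (not from the original answer): bind both textures to the fragment stage and let the fragment function, which is written in the Metal Shading Language and not shown here, sample and blend them. encodeCombinedPass and overlayTexture are hypothetical names used only for illustration:
import Metal
import MetalKit

// Minimal sketch: draw the camera texture full-screen and let the fragment
// shader blend in the overlay texture. The fragment function would receive the
// two textures as [[texture(0)]] and [[texture(1)]], sample both at the same
// coordinate, and add or blend the colors.
func encodeCombinedPass(encoder: MTLRenderCommandEncoder,
                        pipelineState: MTLRenderPipelineState,
                        cameraTexture: MTLTexture,
                        overlayTexture: MTLTexture) {
    encoder.setRenderPipelineState(pipelineState)
    encoder.setFragmentTexture(cameraTexture, index: 0)
    encoder.setFragmentTexture(overlayTexture, index: 1)
    // e.g. in the shader: out.rgb = overlay.rgb * overlay.a + camera.rgb * (1.0 - overlay.a)
    encoder.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4, instanceCount: 1)
}
Simply adding the two samples gives an additive blend, while the commented formula gives a standard "over" composite using the overlay's alpha.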

Unexpected output converting CVPixelBuffer to MTLTexture

I am extracting sample buffers from an AVAsset using AVAssetReader and converting each CMSampleBuffer to an MTLTexture on every iteration using the code snippet below. However, while the CVPixelBuffer looks as expected, the converted texture does not match the expected output (screenshots omitted).
I already tried debugging the width and height, which are accurate; I tried a different video and the same issue occurred; I also tried creating a different textureCache. Same issue.
func convertToMTLTexture(sampleBuffer: CMSampleBuffer?) -> MTLTexture? {
    if let textureCache = textureCache,
        let sampleBuffer = sampleBuffer,
        let imageBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {

        let width = CVPixelBufferGetWidth(imageBuffer)
        let height = CVPixelBufferGetHeight(imageBuffer)

        var texture: CVMetalTexture?
        CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache,
                                                  imageBuffer, nil, .bgra8Unorm, width, height, 0, &texture)
        if let texture = texture {
            return CVMetalTextureGetTexture(texture)
        }
    }
    return nil
}

Fastest way to record video from SCNView

I have an SCNView with some object in the middle of the screen; the user can rotate it, scale it, etc.
I want to record all these movements to video and add some sound in real time. I also want to record only the middle part of the SCNView (e.g. the SCNView frame is 375x812, but I want only the middle 375x375, without the top and bottom borders). And I want to show it on screen simultaneously with the video capture.
My current variants are:
func renderer(_ renderer: SCNSceneRenderer, didRenderScene scene: SCNScene, atTime time: TimeInterval) {
    DispatchQueue.main.async {
        if let metalLayer = self.sceneView.layer as? CAMetalLayer, let texture = metalLayer.currentSceneDrawable?.texture, let pixelBufferPool = self.pixelBufferPool {
            //1
            var maybePixelBuffer: CVPixelBuffer? = nil
            let status = CVPixelBufferPoolCreatePixelBuffer(nil, pixelBufferPool, &maybePixelBuffer)
            guard let pixelBuffer = maybePixelBuffer else { return }
            CVPixelBufferLockBaseAddress(pixelBuffer, [])
            let bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer)
            let region = MTLRegionMake2D(Int(self.fieldOfView.origin.x * UIScreen.main.scale),
                                         Int(self.fieldOfView.origin.y * UIScreen.main.scale),
                                         Int(self.fieldOfView.width * UIScreen.main.scale),
                                         Int(self.fieldOfView.height * UIScreen.main.scale))
            let pixelBufferBytes = CVPixelBufferGetBaseAddress(pixelBuffer)!
            texture.getBytes(pixelBufferBytes, bytesPerRow: bytesPerRow, from: region, mipmapLevel: 0)
            let uiImage = self.image(from: pixelBuffer)
            CVPixelBufferUnlockBaseAddress(pixelBuffer, [])

            //2
            if #available(iOS 11.0, *) {
                var pixelBuffer: Unmanaged<CVPixelBuffer>? = nil
                CVPixelBufferCreateWithIOSurface(kCFAllocatorDefault, texture.iosurface!, nil, UnsafeMutablePointer<Unmanaged<CVPixelBuffer>?>(&pixelBuffer))
                let imageBuffer = pixelBuffer!.takeUnretainedValue()
            } else {
                // Fallback on earlier versions
            }

            //3
            var pb: CVPixelBuffer? = nil
            let result = CVPixelBufferCreate(kCFAllocatorDefault, texture.width, texture.height, kCVPixelFormatType_32BGRA, nil, &pb)
            print(result)
            let ciImage = CIImage(mtlTexture: texture, options: nil)
            let context = CIContext()
            context.render(ciImage!, to: pb!)
        }
    }
}
The obtained CVPixelBuffer will be added to an AVAssetWriter,
but all of these methods have some flaws.
1) The MTLTexture has colorPixelFormat == 555 (bgra10_XR_sRGB, if I recall correctly) and I don't know how to convert it to BGR (to append it to the assetWriter), nor how to change that colorPixelFormat, nor how to add bgra10_XR_sRGB to the assetWriter.
2) How do I implement a version for iOS 10?
2, 3) What is the fastest way to crop an image? Using these methods I can only grab the full image instead of a cropped one, and I don't want to convert it to a UIImage because that is too slow.
P.S. My previous viewer was OpenGL ES based (GLKView) and I successfully did it there using this technique (an overhead of 1 ms instead of 30 ms with the .screenshot method).

how to create a bitmap image from the byte array in swift

I have a byte array which comes from a fingerprint sensor device. I want to create a bitmap out of it. I have tried a few examples, but all I am getting is a nil UIImage.
If there are any steps to do that, please tell me.
Thanks.
This is what my func does:
func didFingerGrabDataReceived(data: [UInt8]) {
    if data[0] == 0 {
        let width1 = Int16(data[0]) << 8
        let finalWidth = Int(Int16(data[1]) | width1)
        let height1 = Int16(data[2]) << 8
        let finalHeight = Int(Int16(data[3]) | height1)
        var finalData: [UInt8] = [UInt8]()
        // I don't want the first 8 bytes, so I am removing them
        for i in 8 ..< data.count {
            finalData.append(data[i])
        }
        dispatch_async(dispatch_get_main_queue()) { () -> Void in
            let msgData = NSMutableData(bytes: finalData, length: finalData.count)
            let ptr = UnsafeMutablePointer<UInt8>(msgData.mutableBytes)
            let colorSpace = CGColorSpaceCreateDeviceGray()
            if colorSpace == nil {
                self.showToast("color space is nil")
                return
            }
            let bitmapContext = CGBitmapContextCreate(ptr, finalWidth, finalHeight, 8, 4 * finalWidth, colorSpace, CGImageAlphaInfo.Only.rawValue)
            if bitmapContext == nil {
                self.showToast("context is nil")
                return
            }
            let cgImage = CGBitmapContextCreateImage(bitmapContext)
            if cgImage == nil {
                self.showToast("image is nil")
                return
            }
            let newimage = UIImage(CGImage: cgImage!)
            self.imageViewFinger.image = newimage
        }
    }
}
I am getting a distorted image. Someone please help.
The significant issue here is that when you called CGBitmapContextCreate, you specified that you're building an alpha channel alone, and your data buffer clearly uses one byte per pixel, yet for the "bytes per row" parameter you specified 4 * width. It should just be width. You generally use 4x when you're capturing four bytes per pixel (e.g. RGBA), but since your buffer uses one byte per pixel, you should remove that 4x factor.
Personally, I'd also advise a range of other improvements, namely:
The only thing that should be dispatched to the main queue is the updating of the UIKit control
You can retire finalData, as you don't need to copy from one buffer to another, but rather you can build msgData directly.
You should probably bypass the creation of your own buffer completely, though, and call CGBitmapContextCreate with nil for the data parameter, in which case, it will create its own buffer which you can retrieve via CGBitmapContextGetData. If you pass it a buffer, it assumes you'll manage this buffer yourself, which we're not doing here.
If you create your own buffer and don't manage that memory properly, you'll experience difficult-to-reproduce errors where it looks like it works, but suddenly you'll see the buffer corrupted for no reason in seemingly similar situations. By letting Core Graphics manage the memory, these sorts of problems are prevented.
I might separate the conversion of this byte buffer to a UIImage from the updating of the UIImageView.
So that yields something like:
func mask(from data: [UInt8]) -> UIImage? {
    guard data.count >= 8 else {
        print("data too small")
        return nil
    }

    let width = Int(data[1]) | Int(data[0]) << 8
    let height = Int(data[3]) | Int(data[2]) << 8

    let colorSpace = CGColorSpaceCreateDeviceGray()

    guard
        data.count >= width * height + 8,
        let context = CGContext(data: nil, width: width, height: height, bitsPerComponent: 8, bytesPerRow: width, space: colorSpace, bitmapInfo: CGImageAlphaInfo.alphaOnly.rawValue),
        let buffer = context.data?.bindMemory(to: UInt8.self, capacity: width * height)
    else {
        return nil
    }

    for index in 0 ..< width * height {
        buffer[index] = data[index + 8]
    }

    return context.makeImage().flatMap { UIImage(cgImage: $0) }
}
And then
if let image = mask(from: data) {
    DispatchQueue.main.async {
        self.imageViewFinger.image = image
    }
}
For Swift 2 rendition, see previous revision of this answer.

Get image dimensions before downloading

Is there a way to check the image dimensions (i.e. height and width) before downloading (or partially downloading) the image from a URL? I have found ways to get the image size, but that doesn't help.
Basically I want to calculate the correct height of a UITableView row before the image is downloaded. Is this possible?
You can do a partial download of the image data and then extract the image size from that. You will have to get the data structure of the image format you are using and parse it to some extent. It is possible and not that hard if you are capable of lower level coding.
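As a rough sketch of that idea (assuming the server honors Range requests and the image header fits in the first 64 KB; fetchImageDimensions is just an illustrative name, not an existing API): request only the first chunk of the file with an HTTP Range header, feed the partial data to an incremental CGImageSource, and read the pixel dimensions from its properties.
import Foundation
import ImageIO

// Fetch roughly the first 64 KB and try to read the dimensions from the header.
func fetchImageDimensions(from url: URL,
                          completion: @escaping ((width: Int, height: Int)?) -> Void) {
    var request = URLRequest(url: url)
    request.setValue("bytes=0-65535", forHTTPHeaderField: "Range")

    URLSession.shared.dataTask(with: request) { data, _, _ in
        guard let data = data else { return completion(nil) }

        // Feed the partial data to an incremental image source.
        let source = CGImageSourceCreateIncremental(nil)
        CGImageSourceUpdateData(source, data as CFData, false) // false = not the full file yet

        guard let properties = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [CFString: Any],
              let width = properties[kCGImagePropertyPixelWidth] as? Int,
              let height = properties[kCGImagePropertyPixelHeight] as? Int else {
            return completion(nil)
        }
        completion((width, height))
    }.resume()
}
If the requested prefix isn't enough for the format in question (some JPEGs put large metadata blocks before the size markers), the function simply returns nil and you can fall back to a full download.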
You can do it by accessing its header details.
In Swift 3.0, the code below will help you:
if let imageSource = CGImageSourceCreateWithURL(url! as CFURL, nil) {
    if let imageProperties = CGImageSourceCopyPropertiesAtIndex(imageSource, 0, nil) as Dictionary? {
        let pixelWidth = imageProperties[kCGImagePropertyPixelWidth] as! Int
        let pixelHeight = imageProperties[kCGImagePropertyPixelHeight] as! Int
        print("the image width is: \(pixelWidth)")
        print("the image height is: \(pixelHeight)")
    }
}
Create an IBOutlet for the height constraint.
E.g. here the image from the server is in a 16:9 ratio.
This will automatically adjust the height for all screens; the image view's contentMode is aspectFit.
override func viewDidLoad() {
    super.viewDidLoad()
    cnstHeight.constant = (self.view.frame.width / 16) * 9
}
Swift 4 Method:
func getImageDimensions(from url: URL) -> (width: Int, height: Int) {
    if let imageSource = CGImageSourceCreateWithURL(url as CFURL, nil) {
        if let imageProperties = CGImageSourceCopyPropertiesAtIndex(imageSource, 0, nil) as Dictionary? {
            let pixelWidth = imageProperties[kCGImagePropertyPixelWidth] as! Int
            let pixelHeight = imageProperties[kCGImagePropertyPixelHeight] as! Int
            return (pixelWidth, pixelHeight)
        }
    }
    return (0, 0)
}
