How to create a gray 8-bit CGImage from a CIImage?

I'm currently working on an iOS application that uses the front camera to capture frames. I want to compare the current frame with the previous one, and to make that fast I want to create an 8-bit grayscale CGImage.
I'm using AVCaptureSession and have implemented the delegate method:
func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
    let image = CIImage(CVPixelBuffer: CMSampleBufferGetImageBuffer(sampleBuffer)).imageByApplyingOrientation(5)
    let img = context.createCGImage(image, fromRect: image.extent(), format: kCIFormatBGRA8, colorSpace: CGColorSpaceCreateDeviceGray())
}
But this crashes with a message saying that this color space is unsupported. Do you have any advice on how I can create an 8-bit grayscale CGImage fast?
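One direction that sidesteps the unsupported color-space combination entirely (a sketch, not from the question: it assumes the video data output's videoSettings request kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, so plane 0 of the pixel buffer is already 8-bit luma):
import CoreGraphics
import CoreVideo

// Sketch: wrap the 8-bit luma plane of a bi-planar YCbCr pixel buffer in a
// grayscale CGImage, with no BGRA-to-gray conversion involved.
func grayImage(from pixelBuffer: CVPixelBuffer) -> CGImage? {
    CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }

    guard let base = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0) else { return nil }
    let width = CVPixelBufferGetWidthOfPlane(pixelBuffer, 0)
    let height = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0)
    let bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0)

    let context = CGContext(data: base,
                            width: width,
                            height: height,
                            bitsPerComponent: 8,
                            bytesPerRow: bytesPerRow,
                            space: CGColorSpaceCreateDeviceGray(),
                            bitmapInfo: CGImageAlphaInfo.none.rawValue)
    return context?.makeImage()
}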

Sorry, I don't know the answer to your question, but are you aware there's a CIFilter that does difference blending:
https://developer.apple.com/library/mac/documentation/GraphicsImaging/Reference/CoreImageFilterReference/index.html#//apple_ref/doc/filter/ci/CIDifferenceBlendMode
It may be of use in your project.
Simon
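For reference, a minimal sketch of that filter (the function and parameter names here are placeholders, not from the answer); bright pixels in the output mark where the two frames differ:
import CoreImage

// Sketch: difference-blend the current frame against the previous one.
func difference(between currentFrame: CIImage, and previousFrame: CIImage) -> CIImage {
    return currentFrame.applyingFilter("CIDifferenceBlendMode",
                                       parameters: [kCIInputBackgroundImageKey: previousFrame])
}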

Related

Setting CVImageBuffer to all black

I am trying to modify some Apple Developer sample code for my own purposes (I am very new to iOS programming). I am trying to get images from the camera, run some detection on them, and show only the detections.
Currently I am using an AVCaptureVideoPreviewLayer, so the camera feed is displayed on the screen. What I actually want is to zero out the camera feed and draw only the detections, so I am trying to handle this in the captureOutput function. Something like:
extension ViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        // Grab the pixel buffer frame from the camera output
        guard let pixelBuffer = sampleBuffer.imageBuffer else { return }
        // Here I should now be able to set it to all zeros (all black)
    }
}
I am trying to do something basic, like setting this CVImageBuffer to all black, but I have not been able to figure it out in the last few hours!
EDIT
So, I discovered that I can do something like:
var image: CGImage?
// Create a Core Graphics bitmap image from the buffer.
VTCreateCGImageFromCVPixelBuffer(pixelBuffer, options: nil, imageOut: &image)
This copies the buffer data into a CGImage, which I can then use for my purposes. Now, is there an API that can make an all-black image with the same size as the one represented by the input image buffer?
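One way to get that (a sketch, not from the question) is to draw an all-black CGImage of the buffer's dimensions with a plain CGContext:
import CoreGraphics
import CoreVideo

// Sketch: produce an all-black CGImage matching the pixel buffer's size.
func blackImage(matching pixelBuffer: CVPixelBuffer) -> CGImage? {
    let width = CVPixelBufferGetWidth(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)
    guard let context = CGContext(data: nil,
                                  width: width,
                                  height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: 0,   // let Core Graphics pick a row stride
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue) else { return nil }
    context.setFillColor(gray: 0, alpha: 1)
    context.fill(CGRect(x: 0, y: 0, width: width, height: height))
    return context.makeImage()
}
Alternatively, if the goal is to black out the buffer itself rather than make a separate image, locking the pixel buffer and zeroing its base address with memset also works for a BGRA buffer.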

Why doesn't AVCaptureVideoDataOutput give me the highest supported resolution frame?

I am working on an iOS application which uses the camera. I am using the AVCaptureVideoDataOutput delegate method to get video frames, and I always get 1920 x 1080 frames regardless of the device I am using, which is an iPhone X.
I am using AVCaptureSession.Preset.high.
Here is my code snippet:
func captureOutput(_ captureOutput: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    let ciImage = CIImage(cvPixelBuffer: CMSampleBufferGetImageBuffer(sampleBuffer)!)
    let image = UIImage(ciImage: ciImage)
}
When I do
let device = AVCaptureDevice.devices(for: AVMediaType.video).first {
    ($0 as AVCaptureDevice).position == AVCaptureDevice.Position.back
}
print("resolutions supported: \(String(describing: device?.activeFormat.highResolutionStillImageDimensions))")
This always gives me 3840 x 2160 for the iPhone X, which has a 12-megapixel camera.
I am expecting the same kind of highest-possible-resolution video frame from AVCaptureVideoDataOutput.
I tried using AVCaptureSession.Preset.photo, but it also doesn't give me high-resolution frames.
I did try AVCaptureSession.Preset.hd4K3840x2160, which gives me the expected resolution, but it may not work on older iPhones.
I know AVCapturePhotoOutput can give me a higher-resolution image, but for my use case I want to create the image from a video frame.
What am I doing wrong here?
I agree with #adamfowlerphoto. The reason you need to check before applying the hd4K3840x2160 preset is that the available presets depend on the hardware: if an older phone doesn't have a sensor or lens good enough for that resolution, you cannot use it.
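A minimal sketch of that check (a hypothetical helper, not from the answer): request 4K only when the session reports it can be set on the current device, and fall back otherwise.
import AVFoundation

// Sketch: pick the highest video preset the current hardware supports.
func configureResolution(for session: AVCaptureSession) {
    session.beginConfiguration()
    if session.canSetSessionPreset(.hd4K3840x2160) {
        session.sessionPreset = .hd4K3840x2160
    } else if session.canSetSessionPreset(.hd1920x1080) {
        session.sessionPreset = .hd1920x1080
    } else {
        session.sessionPreset = .high
    }
    session.commitConfiguration()
}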

MTLTexture from CMSampleBuffer has 0 bytesPerRow

I am converting the CMSampleBuffer argument in the captureOutput function of my AVCaptureVideoDataOutput delegate into an MTLTexture like so (side note: I have set the pixel format of the video output to kCVPixelFormatType_32BGRA):
func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
    let width = CVPixelBufferGetWidth(imageBuffer)
    let height = CVPixelBufferGetHeight(imageBuffer)
    var outTexture: CVMetalTexture? = nil
    var textCache: CVMetalTextureCache?
    CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, metalDevice, nil, &textCache)
    var textureRef: CVMetalTexture?
    CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textCache!, imageBuffer, nil, MTLPixelFormat.bgra8Unorm, width, height, 0, &textureRef)
    let texture = CVMetalTextureGetTexture(textureRef!)!
    print(texture.bufferBytesPerRow)
}
The issue is that when I print the bytes per row of the texture, it always prints 0, which is problematic because I later try to convert the texture back into a UIImage using the methodology in this article: https://www.invasivecode.com/weblog/metal-image-processing. Why is the texture I receive seemingly empty? I know the CMSampleBuffer itself is fine because I can convert it into a UIImage and draw it like so:
let myPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
let myCIimage = CIImage(cvPixelBuffer: myPixelBuffer!)
let image = UIImage(ciImage: myCIimage)
self.imageView.image = image
The bufferBytesPerRow property is only meaningful for a texture that was created using the makeTexture(descriptor:offset:bytesPerRow:) method of an MTLBuffer. As you can see, the bytes-per-row is an input to that method to tell Metal how to interpret the data in the buffer. (The texture descriptor provides additional information, too, of course.) The property is only a means to get that value back out.
Note that textures created from buffers can also report which buffer they were created from and the offset supplied to the above method.
Textures created in other ways don't have that information. These textures have no intrinsic bytes-per-row. Their data is not necessarily organized internally in a simple raster buffer.
If/when you want to get the data from a texture to either a Metal buffer or a plain old byte array, you have the freedom to choose a bytes-per-row value that's useful for your purposes, so long as it's at least the bytes-per-pixel of the texture pixel format times the texture's width. (It's more complicated for compressed formats.) The docs for getBytes(_:bytesPerRow:from:mipmapLevel:) and copy(from:sourceSlice:sourceLevel:sourceOrigin:sourceSize:to:destinationOffset:destinationBytesPerRow:destinationBytesPerImage:) explain further.
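To illustrate that last point, a minimal sketch (a hypothetical helper, assuming a bgra8Unorm texture) that reads the pixels back out with a caller-chosen bytes-per-row:
import Metal

// Sketch: copy a BGRA texture's pixels into a byte array using our own row stride.
func pixelData(from texture: MTLTexture) -> [UInt8] {
    let bytesPerPixel = 4                               // bgra8Unorm: 4 bytes per pixel
    let bytesPerRow = bytesPerPixel * texture.width     // our choice, at least the minimum
    var bytes = [UInt8](repeating: 0, count: bytesPerRow * texture.height)
    let region = MTLRegionMake2D(0, 0, texture.width, texture.height)
    bytes.withUnsafeMutableBytes { buffer in
        texture.getBytes(buffer.baseAddress!,
                         bytesPerRow: bytesPerRow,
                         from: region,
                         mipmapLevel: 0)
    }
    return bytes
}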

Real time face detection/tracking with CIDetector

I have made a face-detection app that is "functional". However, the problem is that .featuresInImage() on the CIDetector does not detect faces every time captureOutput() is called. I have already tried setting CIDetectorAccuracyLow, which didn't improve anything significantly.
I have tried both my application and the native iPhone camera app; the latter can detect faces in an instant, even faces that are slightly blocked (e.g. by glasses). Why is this? Is Apple using a different face-detection algorithm from this one? Or is there some optimizing I should do before sending frames to the CIDetector?
Code below for reference:
private var faceDetector: CIDetector?
dynamic var faceFeature: CGRect
...
func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
    // This function is called whenever a new frame is available
    let pixelBuffer: CVPixelBufferRef = CMSampleBufferGetImageBuffer(sampleBuffer)!
    let inputImage = CIImage(CVPixelBuffer: pixelBuffer)
    let features = self.faceDetector!.featuresInImage(inputImage)
    for feature in features as! [CIFaceFeature] {
        faceFeature = feature.bounds
        print("\(faceFeature)")
    }
}
UPDATE:
I have further tested my "functional" code, and it seems that when my face is at certain angles and sizes, .featuresInImage() will detect my face at the full video frame rate.
Does this mean the CIDetector is working correctly, but I need to make some adjustments to the input sample to make it easier for the algorithm?
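One adjustment worth trying (a sketch in current Swift syntax, not from the question): create the detector once with tracking enabled, and pass the frame's EXIF orientation so the detector knows which way is up; both tend to make detection more consistent across angles.
import CoreImage

// Sketch: a face detector with tracking, fed the image orientation per frame.
let detector = CIDetector(ofType: CIDetectorTypeFace,
                          context: nil,
                          options: [CIDetectorAccuracy: CIDetectorAccuracyLow,
                                    CIDetectorTracking: true])

func faces(in image: CIImage, exifOrientation: Int) -> [CIFaceFeature] {
    let options: [String: Any] = [CIDetectorImageOrientation: exifOrientation]  // 1...8
    return detector?.features(in: image, options: options) as? [CIFaceFeature] ?? []
}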

Swift - color extracted from captureOutput frames always comes out near black

I am trying to process video frames and extract the dominant (most concentrated) color from each one. I was using AVCaptureStillImageOutput, but it made the shutter sound every time I took a frame for processing, so I switched to AVCaptureVideoDataOutput and now process each frame as it comes in.
Here is the code I am using:
func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
    currentFrame = self.convertImageFromCMSampleBufferRef(sampleBuffer)
    if let image = UIImage(CIImage: currentFrame) {
        if let color = self.extractColor(image) {
            // print the color code
        }
    }
}

func convertImageFromCMSampleBufferRef(sampleBuffer: CMSampleBuffer) -> CIImage {
    let pixelBuffer: CVPixelBufferRef = CMSampleBufferGetImageBuffer(sampleBuffer)!
    let ciImage: CIImage = CIImage(CVPixelBuffer: pixelBuffer)
    return ciImage
}
With AVCaptureStillImageOutput I was getting almost correct output, but with AVCaptureVideoDataOutput the values are always near black, even when the camera is pointed at bright light. I am guessing the problem is related to the frame rate or something similar, but I am not able to figure it out.
In the last few test runs, this is the only color code I am getting: #1b1f01.
I would love to use the original AVCaptureStillImageOutput code, but it must not make the shutter sound, and I am not able to disable it.
I had this same issue myself. It was just that I was grabbing frames very early; for whatever reason the camera sensor starts at zero and is willing to give you frames before what you'd think of as the first frame is fully exposed.
Solution: just wait a second before you expect any real images.
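A minimal sketch of that workaround (hypothetical helper, not from the answer): compare each sample buffer's presentation timestamp to the first one seen, and skip frames from the first second.
import AVFoundation

// Sketch: ignore frames delivered during the first second after capture starts.
var firstFrameTime: CMTime?

func shouldProcess(_ sampleBuffer: CMSampleBuffer) -> Bool {
    let time = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
    if firstFrameTime == nil { firstFrameTime = time }
    let elapsed = CMTimeSubtract(time, firstFrameTime!)
    return CMTimeGetSeconds(elapsed) >= 1.0
}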
