Raw Data from AVFoundation - image-processing

Hi, I'm working on getting raw image data from AVCaptureVideoDataOutput.
I have a lot of experience with AVFoundation and have worked on many projects, but this time it's an image-processing project, an area where I have no experience.
public func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
}
I know I'm receiving a CMSampleBuffer right here, in this delegate callback.
My questions:
Is the CMSampleBuffer 'RAW' data straight from the image sensor?
I want to understand the whole flow of how the CMSampleBuffer is produced before it reaches this callback, but I could not find any detailed description of it.
I found some possible leads searching Stack Overflow (Raw image data from camera like "645 PRO"). In that question the author tried OpenCV and got what he wanted, but what exactly is different about the OpenCV result compared to the CMSampleBuffer? (Why did the author say the OpenCV result is the real raw data?)
I also found (How to get the Y component from CMSampleBuffer resulted from the AVCaptureSession?).
If I set things up as below,
if self.session.canAddOutput(self.captureOutput) {
    self.session.addOutput(self.captureOutput)
    // The pixel format key must be bridged to String in Swift.
    captureOutput.videoSettings = [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
    ]
    captureOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "capture"))
    captureOutput.alwaysDiscardsLateVideoFrames = true
}
By setting the pixel format to kCVPixelFormatType_32BGRA, am I now getting raw data from the sample buffer?

It's not RAW in your case. All modern sensors are built on a Bayer filter, so what you get is an image already converted from the Bayer pattern. You can't get a raw image with this API. There is a pixel format called kCVPixelFormatType_14Bayer_BGGR, but the camera probably won't support it.
You might find an answer in WWDC session 419, but I'm not sure.
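To see what a given device actually offers, you can inspect the data output's list of supported pixel formats. A minimal Swift sketch (it just checks for the Bayer format mentioned above):

import AVFoundation

let videoOutput = AVCaptureVideoDataOutput()
// availableVideoPixelFormatTypes lists every OSType this output can deliver.
let bayerSupported = videoOutput.availableVideoPixelFormatTypes
    .contains(kCVPixelFormatType_14Bayer_BGGR)
print("14-bit Bayer BGGR supported: \(bayerSupported)")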
It's the same; cv::Mat is just a wrapper around the image data from the CMSampleBuffer. If you save your data as PNG, you will not lose any quality. TIFF saves without any compression, but you can also use PNG without compression.
If you use the RGBA format, the data is converted from Bayer to RGBA behind the scenes. To get the Y channel, you would additionally need to apply an RGBA-to-YUV conversion and take the Y channel. Alternatively, you can use the kCVPixelFormatType_420YpCbCr8BiPlanarFullRange format and read the Y channel directly from the first plane. Also note that the VideoRange variant has a different chroma output range.
// ObjC code: wrap the Y plane in a cv::Mat without copying.
CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
int width = (int)CVPixelBufferGetWidth(imageBuffer);
int height = (int)CVPixelBufferGetHeight(imageBuffer);
uint8_t *yBuffer = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
size_t yPitch = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);
cv::Mat yChannel(height, width, CV_8UC1, yBuffer, yPitch);
// ... use yChannel while the base address is locked ...
CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
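For comparison, a rough Swift sketch of the same idea, assuming the output was configured for kCVPixelFormatType_420YpCbCr8BiPlanarFullRange (the function name is just for illustration):

func lumaData(from sampleBuffer: CMSampleBuffer) -> Data? {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }
    CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }
    // Plane 0 of a bi-planar 420YpCbCr buffer is the Y (luma) plane.
    guard let base = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0) else { return nil }
    let bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0)
    let height = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0)
    return Data(bytes: base, count: bytesPerRow * height)
}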

Related

Analyze frames colors displayed in AVSampleBufferDisplayLayer. Image Buffer is nil

I want to analyze the colors displayed in an AVSampleBufferDisplayLayer, which gets its frames from a data source I don't control.
I've made my own subclass of this class and overridden func enqueue(_ sampleBuffer: CMSampleBuffer) to get my hands on the sample buffers. My plan was to create a CIImage from each buffer and then apply the CIAreaAverage filter.
Unfortunately, when I call CMSampleBufferGetImageBuffer(sampleBuffer), I get null.
From what I understand, this means I should use dataBuffer instead. But how can I convert that into a CIImage?
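For frames that do carry an image buffer, the plan described in the question would look roughly like this minimal sketch (CIAreaAverage reduces the frame to a single average-color pixel, which you can then read back with a CIContext):

override func enqueue(_ sampleBuffer: CMSampleBuffer) {
    super.enqueue(sampleBuffer)
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    let frame = CIImage(cvPixelBuffer: pixelBuffer)
    // CIAreaAverage outputs a 1x1 image whose only pixel is the average color.
    let filter = CIFilter(name: "CIAreaAverage", parameters: [
        kCIInputImageKey: frame,
        kCIInputExtentKey: CIVector(cgRect: frame.extent)
    ])
    let averagePixel = filter?.outputImage
    // Render `averagePixel` into a 4-byte buffer with a CIContext to read the color.
}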

Filtering a video stream using GPUImage2

I have access to the CVPixelBufferRef for each frame and want to apply the ChromaKey filter to it before rendering.
So far, the only solution I can think of is to first convert the pixel buffer to an image. Here is my bare-bones solution, just as a proof of concept:
var cgImage: CGImage?
VTCreateCGImageFromCVPixelBuffer(pixelBuffer, options: nil, imageOut: &cgImage)
let image = UIImage(cgImage: cgImage!).filterWithOperation(filter!)
Once I get the filtered image, I pass it to an MTKView to draw.
So my specific question is: can I avoid converting the pixel buffer to an image and still use GPUImage2 for the filter?
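If you can let GPUImage2 own the capture session, its Camera input keeps frames on the GPU the whole way, so no UIImage round-trip is needed. A sketch under that assumption, using GPUImage2's Camera, ChromaKeying operation, and RenderView (check the exact operation name and preset type in your GPUImage2 version):

import GPUImage

do {
    // Camera captures frames and uploads them directly as GPU textures.
    let camera = try Camera(sessionPreset: .hd1280x720)
    let chromaKey = ChromaKeying()
    let renderView = RenderView(frame: view.bounds)
    view.addSubview(renderView)
    // Chain: camera frames -> chroma key -> on-screen render, all on the GPU.
    camera --> chromaKey --> renderView
    camera.startCapture()
} catch {
    fatalError("Could not initialize rendering pipeline: \(error)")
}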

Raw camera output image is read incorrectly

I bought a camera sensor with raw data output; its data sheet is on this website:
https://www.leopardimaging.com/uploads/LI-USB30-IMX225C_datasheet.pdf
Its resolution is 1312x992, delivered as a 12-bit raw stream.
I have written an ISP program whose input is a raw picture, such as a .dng or .raw file. I want to use this program together with the camera sensor to do a live preview.
Simple code like this:
#include <opencv2/opencv.hpp>
using namespace cv;

int main() {
    VideoCapture cap(0);
    Mat src;
    if (cap.isOpened()) {
        namedWindow("capture", 0);
        while (true) {
            cap.read(src);
            imshow("capture", src);
            waitKey(10);
        }
    }
    return 0;
}
There are some problems: the color and resolution are incorrect.
First, the color is green, which is abnormal, and the resolution comes out as 320x240 instead of 1312x992.
Does anyone know why this is and how to fix it?
Thank you!
Example image

How to superimpose views over each captured frame inside CVImageBuffer, realtime not post process

I have managed to set up a basic AVCaptureSession which records a video and saves it on the device using AVCaptureFileOutputRecordingDelegate. I have been searching through the docs to understand how to add statistics overlays on top of the video being recorded.
As you can see in the image above, I have multiple overlays on top of the video preview layer. Now, when I save my video output, I would like to compose those views onto the video as well.
What have I tried so far?
Honestly, I have just been jumping around the internet looking for a reputable blog post explaining how one would do this, but I failed to find one.
I have read in a few places that one could render text overlays, as described in the following post, by creating a CALayer and adding it as a sublayer.
But what if I want to render a MapView on top of the video being recorded? Also, I am not looking for screen capture. Some of the content on the screen will not be part of the final recording, so I want to be able to cherry-pick the views that get composed.
What am I looking for?
Direction, not a straight-up solution.
Documentation links and the class names I should be reading about to create this.
Progress so far:
I have managed to understand that I need to get hold of the CVImageBuffer from the CMSampleBuffer and draw text over it. It is still unclear to me whether it is possible to somehow overlay a MapView onto the video being recorded.
The best way to achieve your goal is to use the Metal framework. Using a Metal camera is good for minimising the impact on the device's limited computational resources, and AVCaptureSession gives you the lowest-overhead access to the camera sensor, so it is a really good start.
You need to grab each frame's data from the CMSampleBuffer (you're right about that) and then convert the frame to an MTLTexture. AVCaptureSession will continuously send frames from the device's camera via a delegate callback.
All the overlays must be converted to MTLTextures too. Then you can composite all the MTLTexture layers with the 'over' operation.
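The 'over' operation itself is standard alpha blending; a minimal sketch of the blend state one might configure on the Metal render pipeline that draws each overlay texture (assuming straight, non-premultiplied alpha):

import Metal

let descriptor = MTLRenderPipelineDescriptor()
// Source-over: result = src.a * src + (1 - src.a) * dst
descriptor.colorAttachments[0].isBlendingEnabled = true
descriptor.colorAttachments[0].sourceRGBBlendFactor = .sourceAlpha
descriptor.colorAttachments[0].destinationRGBBlendFactor = .oneMinusSourceAlpha
descriptor.colorAttachments[0].sourceAlphaBlendFactor = .one
descriptor.colorAttachments[0].destinationAlphaBlendFactor = .oneMinusSourceAlpha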
You'll find all the necessary info in the four-part Metal Camera series, and here's a link to a blog post: About Compositing in Metal.
I'd also like to include a code excerpt (working with an AVCaptureSession in Metal):
import AVFoundation
import Metal

guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
    // Handle the error: this sample contains no image buffer.
    return
}

// Texture cache used to convert frame images to Metal textures
var textureCache: CVMetalTextureCache?

// `MTLDevice` needed to initialize the texture cache
guard
    let metalDevice = MTLCreateSystemDefaultDevice(),
    CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, metalDevice, nil, &textureCache) == kCVReturnSuccess,
    let textureCache = textureCache
else {
    // Handle the error: failed to create the texture cache.
    return
}

let width = CVPixelBufferGetWidth(imageBuffer)
let height = CVPixelBufferGetHeight(imageBuffer)

// `pixelFormat` and `planeIndex` depend on your buffer format,
// e.g. `.bgra8Unorm` and plane 0 for a BGRA pixel buffer.
var imageTexture: CVMetalTexture?
let result = CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, imageBuffer, nil, pixelFormat, width, height, planeIndex, &imageTexture)

guard
    let unwrappedImageTexture = imageTexture,
    let texture = CVMetalTextureGetTexture(unwrappedImageTexture),
    result == kCVReturnSuccess
else {
    throw MetalCameraSessionError.failedToCreateTextureFromImage
}
// `texture` now holds the camera frame as an `MTLTexture`.
And here you can find the final project on GitHub: MetalRenderCamera

How do I convert a CVPixelBuffer / CVImageBuffer to Data?

My camera app captures a photo, enhances it in a certain way, and saves it.
To do so, I get the input image from the camera in the form of a CVPixelBuffer (wrapped in a CMSampleBuffer). I perform some modifications on the pixel buffer, and I then want to convert it to a Data object. How do I do this?
Note that I don't want to convert the pixel buffer / image buffer to a UIImage or CGImage since those don't have metadata (like EXIF). I need a Data object. How do I get one from a CVPixelBuffer / CVImageBuffer?
PS: I tried calling AVCapturePhotoOutput.jpegPhotoDataRepresentation(), but that fails, saying "Not a JPEG sample buffer". Which makes sense, since the CMSampleBuffer contains a pixel buffer (a bitmap), not JPEG data.
Since you say you are able to get the CMSampleBuffer, you can obtain JPEG data from it using:
NSData *myData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:<your_cmsample_buffer_obj>];
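If what you need is the raw pixel bytes rather than a JPEG, a minimal Swift sketch (assuming a non-planar buffer such as BGRA; planar formats need one copy per plane, and the helper name is just for illustration):

func data(from pixelBuffer: CVPixelBuffer) -> Data? {
    CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }
    guard let base = CVPixelBufferGetBaseAddress(pixelBuffer) else { return nil }
    // Includes any per-row padding the buffer may have.
    let size = CVPixelBufferGetDataSize(pixelBuffer)
    return Data(bytes: base, count: size)
}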
