Image from CMSampleBufferRef is always white - iOS

I am trying to get each frame from ReplayKit using startCaptureWithHandler.
startCaptureWithHandler returns a CMSampleBufferRef, which I need to convert to an image.
I'm using this method to convert it to a UIImage, but the result is always white.
- (UIImage *) imageFromSampleBuffer3:(CMSampleBufferRef) sampleBuffer
{
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey, [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey, nil];
CVPixelBufferRef pxbuffer = NULL;
CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options, &pxbuffer);
CVPixelBufferLockFlags flags = (CVPixelBufferLockFlags)0;
CVPixelBufferLockBaseAddress(pxbuffer, flags);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, width, height, 8, CVPixelBufferGetBytesPerRow(pxbuffer), rgbColorSpace, kCGImageAlphaPremultipliedFirst);
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, flags);
UIImage *image = [UIImage imageWithCGImage:quartzImage scale:1.0f orientation:UIImageOrientationRight];
CGImageRelease(quartzImage);
return image;
}
Can anyone tell me where I'm going wrong?

The sampleBuffer is in '420f' (bi-planar YCbCr) format, so it has two planes.
Lock the memory with CVPixelBufferLockBaseAddress(imageBuffer, 0) before touching the data.
To get the Y-plane data, use CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0); for the UV-plane data, use CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 1).
Do not forget to unlock the memory afterwards. I am not sure how to convert YUV to RGB.
In your code, you never read any image data from imageBuffer: you only work on pxbuffer and pxdata, which contain no image data, so the resulting bitmap stays blank.
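If the goal is simply a UIImage per frame, one way to sidestep the manual YUV handling is to let Core Image do the conversion. This is a minimal sketch rather than a drop-in replacement for the code above; it assumes the CIContext would normally be created once and reused:
- (UIImage *) imageFromSampleBufferUsingCoreImage:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    if (imageBuffer == NULL) {
        return nil;
    }
    // Core Image reads the bi-planar YCbCr pixel buffer directly and converts it to RGB.
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:imageBuffer];
    CIContext *ciContext = [CIContext contextWithOptions:nil]; // ideally a cached instance
    CGImageRef cgImage = [ciContext createCGImage:ciImage fromRect:ciImage.extent];
    UIImage *image = [UIImage imageWithCGImage:cgImage scale:1.0f orientation:UIImageOrientationRight];
    CGImageRelease(cgImage);
    return image;
}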

Related

How do you scale an image using vImage in the Accelerate framework in iOS 8?

I am trying to resize a CMSampleBufferRef as quickly as possible on an iOS 8 device for use in image processing. From what I have found online, the way to do this seems to be by using the vImage API in the Accelerate framework. However, I haven't done much with the Accelerate framework and I can't quite figure out how to do this. Here is what I have so far to scale an image to 200x200:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
CVImageBufferRef cvimgRef = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(cvimgRef,0);
void *imageData = CVPixelBufferGetBaseAddress(cvimgRef);
NSInteger width = CVPixelBufferGetWidth(cvimgRef);
NSInteger height = CVPixelBufferGetHeight(cvimgRef);
unsigned char *newData= // NOT SURE WHAT THIS SHOULD BE...
vImage_Buffer inBuff = { imageData, height, width, 4*width };
vImage_Buffer outBuff = { newData, 200, 200, 4*200 };
// NOT SURE IF THIS IS THE CORRECT METHOD... video output settings for kCVPixelBufferPixelFormatTypeKey is set to kCVPixelFormatType_32BGRA
// This seems wrong since the image scale is ARGB, not BGRA.
vImageScale_ARGB8888(&inBuff, &outBuff, NULL, kvImageNoFlags);
CVPixelBufferUnlockBaseAddress(cvimgRef,0);
}
Where outBuff is the result. After that, I am also not sure how to convert outBuff back to a CVImageBufferRef for further image processing. Any suggestions would be appreciated!
vImageScale just fills a destination buffer, and pay attention that the buffers you allocate need to be freed.
I don't know if there is a faster way of using that output buffer directly, but I would convert the buffer into a CGImage. Something like the following, taken from here, so treat it as a reference:
vImage_CGImageFormat format = {
.bitsPerComponent = 8,
.bitsPerPixel = 32,
.colorSpace = NULL,
.bitmapInfo = (CGBitmapInfo)kCGImageAlphaFirst,
.version = 0,
.decode = NULL,
.renderingIntent = kCGRenderingIntentDefault,
};
vImage_Error ret = kvImageNoError;
CGImageRef destRef = vImageCreateCGImageFromBuffer(&dstBuffer, &format, NULL, NULL, kvImageNoFlags, &ret);
Later I will convert it into a CVPixelBuffer.
- (CVPixelBufferRef) pixelBufferFromCGImage: (CGImageRef) image
{
NSDictionary *options = @{
(NSString*)kCVPixelBufferCGImageCompatibilityKey : @YES,
(NSString*)kCVPixelBufferCGBitmapContextCompatibilityKey : @YES,
};
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, CGImageGetWidth(image),
CGImageGetHeight(image), kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options,
&pxbuffer);
if (status!=kCVReturnSuccess) {
DLog(#"Operation failed");
}
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, CGImageGetWidth(image),
CGImageGetHeight(image), 8, 4*CGImageGetWidth(image), rgbColorSpace,
kCGImageAlphaNoneSkipFirst);
NSParameterAssert(context);
CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}
I'm pretty sure that it is possible to avoid the conversion into a CGImage and use the buffer directly, but I never tried.
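For what it's worth, here is a hedged sketch of that direct route, under the assumptions that the capture output delivers 32-bit BGRA/ARGB data and that handing the malloc'd bytes to CVPixelBufferCreateWithBytes (with a release callback that frees them) is acceptable for the downstream processing:
static void ReleaseScaledBytes(void *releaseRefCon, const void *baseAddress)
{
    // Called when the CVPixelBuffer is released; frees the malloc'd scaled pixels.
    free((void *)baseAddress);
}

// ... inside captureOutput:didOutputSampleBuffer:fromConnection:, after building inBuff ...
size_t dstWidth = 200, dstHeight = 200, dstRowBytes = dstWidth * 4;
void *dstData = malloc(dstRowBytes * dstHeight);
vImage_Buffer dstBuff = { dstData, dstHeight, dstWidth, dstRowBytes };

vImage_Error err = vImageScale_ARGB8888(&inBuff, &dstBuff, NULL, kvImageNoFlags);
CVPixelBufferRef scaledBuffer = NULL;
if (err == kvImageNoError) {
    // Wrap the scaled bytes in a pixel buffer without copying them again.
    CVPixelBufferCreateWithBytes(kCFAllocatorDefault, dstWidth, dstHeight,
                                 kCVPixelFormatType_32BGRA, dstData, dstRowBytes,
                                 ReleaseScaledBytes, NULL, NULL, &scaledBuffer);
} else {
    free(dstData);
}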
You have to use a resampling filter in conjunction with any vImage operations that alter image geometry: page 32, vImage Programming Guide.
- (CVPixelBufferRef)copyRenderedPixelBuffer:(CVPixelBufferRef)pixelBuffer {
CVPixelBufferLockBaseAddress( pixelBuffer, 0 );
// vImage processing
vImage_Error err;
vImage_Buffer buffer;
buffer.data = (unsigned char *)CVPixelBufferGetBaseAddress( pixelBuffer );
buffer.rowBytes = CVPixelBufferGetBytesPerRow( pixelBuffer );
buffer.width = CVPixelBufferGetWidth( pixelBuffer );
buffer.height = CVPixelBufferGetHeight( pixelBuffer );
vImageCVImageFormatRef vformat = vImageCVImageFormat_CreateWithCVPixelBuffer( pixelBuffer );
vImage_CGImageFormat cgformat = {
.bitsPerComponent = 8,
.bitsPerPixel = 32,
.bitmapInfo = kCGBitmapByteOrderDefault,
.colorSpace = NULL, //sRGB
};
const CGFloat bgColor[3] = {0.0, 0.0, 0.0};
vImageBuffer_InitWithCVPixelBuffer(&buffer, &cgformat, pixelBuffer, vformat, bgColor, kvImageNoAllocate);
vImage_Buffer outbuffer;
void *tempBuffer;
tempBuffer = malloc(CVPixelBufferGetBytesPerRow( pixelBuffer ) * CVPixelBufferGetHeight( pixelBuffer ));
outbuffer.data = tempBuffer;
outbuffer.rowBytes = CVPixelBufferGetBytesPerRow( pixelBuffer );
outbuffer.width = CVPixelBufferGetWidth( pixelBuffer );
outbuffer.height = CVPixelBufferGetHeight( pixelBuffer );
// PROCESS vIMAGE HERE
err = vImageBuffer_CopyToCVPixelBuffer(&outbuffer, &cgformat, pixelBuffer, vformat, bgColor, kvImageNoFlags);
if(err != -1)
free(tempBuffer);
CVPixelBufferUnlockBaseAddress( pixelBuffer, 0 );
return (CVPixelBufferRef)CFRetain( pixelBuffer );
}
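As one concrete example of what might go in the "PROCESS vIMAGE HERE" placeholder, a simple box blur from the input buffer into the temporary one could look like this (a sketch only; it assumes the pixel buffer is 32-bit BGRA/ARGB and uses an arbitrary 5x5 kernel):
// Blur buffer into outbuffer; kvImageEdgeExtend handles pixels near the edges.
err = vImageBoxConvolve_ARGB8888(&buffer, &outbuffer, NULL, 0, 0, 5, 5, NULL, kvImageEdgeExtend);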

UIImages exported as movie error

Problem
My AVAssetWriter is failing after appending 5 or so images to it using an AVAssetWriterInputPixelBufferAdaptor, and I have no idea why.
Details
This popular question helped but isn't working for my needs:
How do I export UIImage array as a movie?
Everything works as planned; I even delay the assetWriterInput until it can handle more media.
But for some reason, it always fails after 5 or so images. The images I'm using are frames extracted from a GIF.
Code
Here is my iteration code:
-(void)writeImageData
{
__block int i = 0;
videoQueue = dispatch_queue_create("com.videoQueue", DISPATCH_QUEUE_SERIAL);
[self.writerInput requestMediaDataWhenReadyOnQueue:dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0) usingBlock:^{
while (self.writerInput.readyForMoreMediaData) {
if (i >= self.imageRefs.count){
[self endSession];
videoQueue = nil;
[self saveToLibraryWithCompletion:^{
NSLog(#"Saved");
}];
break;
}
if (self.writerInput.readyForMoreMediaData){
CGImageRef imageRef = (__bridge CGImageRef)self.imageRefs[i];
CVPixelBufferRef buffer = [self pixelBufferFromCGImageRef:imageRef];
CGFloat timeScale = (CGFloat)self.imageRefs.count / self.originalDuration;
BOOL accepted = [self.adaptor appendPixelBuffer:buffer withPresentationTime:CMTimeMake(i, timeScale)];
CVBufferRelease(buffer);
if (!accepted){
NSLog(#"Buffer did not add %#, index %d, timescale %f", self.writer.error, i, timeScale);
}else{
NSLog(#"Buffer did nothing wrong");
}
i++;
}
}
}];
}
My other bits of code match the code from the link above. This is only slightly different:
-(CVPixelBufferRef)pixelBufferFromCGImageRef:(CGImageRef)image
{
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
nil];
CVPixelBufferRef pxbuffer = NULL;
CGFloat width = 640;
CGFloat height = 640;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, width,
height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options,
&pxbuffer);
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
NSParameterAssert(pxdata != NULL);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, width,
height, 8, 4*width, rgbColorSpace,
kCGImageAlphaNoneSkipFirst);
NSParameterAssert(context);
CGContextDrawImage(context, CGRectMake(0, 0, width,
height), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}
One thing that stands out to me is your use of CMTimeMake(i, timeScale) with a floating-point time scale.
You need to calculate the time of each frame properly. Note that CMTimeMake takes two integers, and passing floating-point values truncates them.
The second issue is that you aren't using the serial dispatch queue (videoQueue) you created :)
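A hedged sketch of both fixes, assuming self.originalDuration holds the GIF's total duration in seconds:
// Use a fixed integer timescale; 600 is a common choice for video.
int32_t timescale = 600;
CMTime frameDuration = CMTimeMakeWithSeconds(self.originalDuration / self.imageRefs.count, timescale);

// Request media data on the serial queue created for this purpose.
[self.writerInput requestMediaDataWhenReadyOnQueue:videoQueue usingBlock:^{
    // ... same loop as above, but compute the presentation time from integers:
    CMTime presentationTime = CMTimeMultiply(frameDuration, i);
    BOOL accepted = [self.adaptor appendPixelBuffer:buffer withPresentationTime:presentationTime];
    // ...
}];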

yuv to jpeg iOS

I want to convert a YUV 420SP image (captured directly from the camera, YCbCr format) to JPEG in iOS. What I have found is the CGImageCreate() function (https://developer.apple.com/library/mac/documentation/graphicsimaging/reference/CGImage/Reference/reference.html#//apple_ref/doc/uid/TP30000956-CH1g-F17167), which takes a few parameters, including the byte array containing the pixel data, and should return a CGImage. Its UIImage, when passed to UIImageJPEGRepresentation(), should return JPEG data, but that is not really happening.
The output image data is far from what is required. At least the output is not nil.
As input to CGImageCreate(), I am setting bits per component to 4 and bits per pixel to 12, with default values for the rest.
Can it really convert a YUV (YCbCr) image and not only RGB? If yes, then I think I am doing something wrong with the input values to the CGImageCreate function.
From what I can see here, the CGColorSpaceRef colorspace parameter can refer to RGB, CMYK, or grayscale only.
So I think you first need to convert your YCbCr420 image to RGB, for example using the IPP function YCbCr420toRGB (doc). Alternatively, you can write your own conversion routine; it's not that hard.
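For reference, a per-pixel conversion routine could look roughly like the following sketch. It uses the BT.601 full-range coefficients and assumes NV12-style data (a full-resolution Y plane plus an interleaved CbCr plane shared by each 2x2 block of luma samples); the function names are illustrative, not an existing API:
static inline uint8_t ClampToByte(int v)
{
    return (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v));
}

static void YCbCrToRGB(uint8_t y, uint8_t cb, uint8_t cr,
                       uint8_t *r, uint8_t *g, uint8_t *b)
{
    // Center the chroma samples around zero, then apply the BT.601 full-range matrix.
    int d = cb - 128;
    int e = cr - 128;
    *r = ClampToByte((int)(y + 1.402 * e));
    *g = ClampToByte((int)(y - 0.344136 * d - 0.714136 * e));
    *b = ClampToByte((int)(y + 1.772 * d));
}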
Here's the code for converting a sample buffer delivered to the captureOutput:didOutputSampleBuffer:fromConnection: delegate method of AVCaptureVideoDataOutput:
- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
GLubyte *rawImageBytes = CVPixelBufferGetBaseAddress(pixelBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer); //2560 == (640 * 4)
size_t bufferWidth = CVPixelBufferGetWidth(pixelBuffer);
size_t bufferHeight = CVPixelBufferGetHeight(pixelBuffer); //480
size_t dataSize = CVPixelBufferGetDataSize(pixelBuffer); //1_228_808 = (2560 * 480) + 8
CGColorSpaceRef defaultRGBColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(rawImageBytes, bufferWidth, bufferHeight, 8, bytesPerRow, defaultRGBColorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef image = CGBitmapContextCreateImage(context);
CFMutableDataRef imageData = CFDataCreateMutable(NULL, 0);
CGImageDestinationRef destination = CGImageDestinationCreateWithData(imageData, kUTTypeJPEG, 1, NULL);
NSDictionary *properties = @{(__bridge id)kCGImageDestinationLossyCompressionQuality: @(0.25),
(__bridge id)kCGImageDestinationBackgroundColor: (__bridge id)CLEAR_COLOR,
(__bridge id)kCGImageDestinationOptimizeColorForSharing : @(TRUE)
};
CGImageDestinationAddImage(destination, image, (__bridge CFDictionaryRef)properties);
if (!CGImageDestinationFinalize(destination))
{
CFRelease(imageData);
imageData = NULL;
}
CFRelease(destination);
UIImage *frame = [[UIImage alloc] initWithCGImage:image];
CGContextRelease(context);
CGImageRelease(image);
renderFrame([self.childViewControllers.lastObject.view viewWithTag:1].layer, frame);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}
Here are your three options for pixel format types:
kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange
kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
kCVPixelFormatType_32BGRA
If _captureOutput is a reference to my instance of AVCaptureVideoDataOutput, this is how you set the pixel format type:
[_captureOutput setVideoSettings:@{(id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32BGRA)}];

Why AVAssetWriter inflates video file?

Strange problem. I take frames from a video file (.mov) and write them with AVAssetWriter to another file without any explicit processing. Actually, I just copy the frame from one memory buffer to another and then flush them through the pixel buffer adaptor. Then I take the resulting file, delete the original file, put the resulting file in place of the original, and do the same operation. The interesting thing is that the size of the file constantly grows! Can somebody explain why?
if(adaptor.assetWriterInput.readyForMoreMediaData==YES) {
CVImageBufferRef cvimgRef=nil;
CMTime lastTime=CMTimeMake(fcounter++, 30);
CMTime presentTime=CMTimeAdd(lastTime, frameTime);
CMSampleBufferRef framebuffer=nil;
CGImageRef frameImg=nil;
if ( [asr status]==AVAssetReaderStatusReading ){
framebuffer = [asset_reader_output copyNextSampleBuffer];
frameImg = [self imageFromSampleBuffer:framebuffer withColorSpace:rgbColorSpace];
}
if(frameImg && screenshot){
//CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(framebuffer);
CVReturn stat= CVPixelBufferLockBaseAddress(screenshot, 0);
pxdata=CVPixelBufferGetBaseAddress(screenshot);
bufferSize = CVPixelBufferGetDataSize(screenshot);
// Get the number of bytes per row for the pixel buffer.
bytesPerRow = CVPixelBufferGetBytesPerRow(screenshot);
// Get the pixel buffer width and height.
width = CVPixelBufferGetWidth(screenshot);
height = CVPixelBufferGetHeight(screenshot);
// Create a Quartz direct-access data provider that uses data we supply.
CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, pxdata, bufferSize, NULL);
CGImageAlphaInfo ai=CGImageGetAlphaInfo(frameImg);
size_t bpx=CGImageGetBitsPerPixel(frameImg);
CGColorSpaceRef fclr=CGImageGetColorSpace(frameImg);
// Create a bitmap image from data supplied by the data provider.
CGImageRef cgImage = CGImageCreate(width, height, 8, 32, bytesPerRow,rgbColorSpace, kCGImageAlphaNoneSkipLast | kCGBitmapByteOrder32Big,dataProvider, NULL, true, kCGRenderingIntentDefault);
CGDataProviderRelease(dataProvider);
stat= CVPixelBufferLockBaseAddress(finalPixelBuffer, 0);
pxdata=CVPixelBufferGetBaseAddress(finalPixelBuffer);
bytesPerRow = CVPixelBufferGetBytesPerRow(finalPixelBuffer);
CGContextRef context = CGBitmapContextCreate(pxdata, imgsize.width,imgsize.height, 8, bytesPerRow, rgbColorSpace, kCGImageAlphaNoneSkipLast);
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(frameImg), CGImageGetHeight(frameImg)), frameImg);
//CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
//CGImageRef myMaskedImage;
const float myMaskingColors[6] = { 0, 0, 0, 1, 0, 0 };
CGImageRef myColorMaskedImage = CGImageCreateWithMaskingColors (cgImage, myMaskingColors);
//CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(myColorMaskedImage), CGImageGetHeight(myColorMaskedImage)), myColorMaskedImage);
[adaptor appendPixelBuffer:finalPixelBuffer withPresentationTime:presentTime];}
Well, the mystery seems to be solved. The problem was an inappropriate codec configuration.
This is the set of configuration options I use now, and it seems to do the job:
NSDictionary *codecSettings = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:1100000], AVVideoAverageBitRateKey,
[NSNumber numberWithInt:5],AVVideoMaxKeyFrameIntervalKey,
nil];
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:[SharedApplicationData sharedData].overlayView.frame.size.width], AVVideoWidthKey,
[NSNumber numberWithInt:[SharedApplicationData sharedData].overlayView.frame.size.height], AVVideoHeightKey,
codecSettings,AVVideoCompressionPropertiesKey,
nil];
AVAssetWriterInput* writerInput = [AVAssetWriterInput
assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings];
Now the file size still grows, but at a much slower pace. There is a tradeoff between file size and video quality - size reduction affects the quality.
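For readers on newer SDKs, the same configuration can be written with dictionary literals; the sketch below is equivalent to the settings above (the width and height variables are placeholders for the overlay view's dimensions):
// The average bit rate is the main lever for the file size / quality tradeoff.
NSDictionary *videoSettings = @{
    AVVideoCodecKey: AVVideoCodecH264,
    AVVideoWidthKey: @(width),
    AVVideoHeightKey: @(height),
    AVVideoCompressionPropertiesKey: @{
        AVVideoAverageBitRateKey: @1100000,
        AVVideoMaxKeyFrameIntervalKey: @5
    }
};
AVAssetWriterInput *writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                                     outputSettings:videoSettings];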

iOS 5: UIView converted to video and the resulting video is corrupted

I want to grab the UIView, convert it to an image, and then store that image in a video file (.mp4).
I use the following portion of code, which grabs the image and puts it into a pixel buffer:
BOOL appended;
if(input.readyForMoreMediaData==YES){
//grab the view and convert it into image
CGSize imgsize=self.imageSource.frame.size;
UIGraphicsBeginImageContext(imgsize);
[self.imageSource.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage* grabbedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CVReturn cvErr = kCVReturnSuccess;
CGImageRef image = (CGImageRef) [grabbedImage CGImage];
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
nil];
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, imgsize.width,
imgsize.height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options,
&pxbuffer);
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
NSParameterAssert(pxdata != NULL);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, imgsize.width,
imgsize.height, 8, 4*imgsize.width, rgbColorSpace,
kCGImageAlphaNoneSkipFirst);
NSParameterAssert(context);
CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
appended = [pxlBufAdaptor appendPixelBuffer:pxbuffer withPresentationTime:presentationTime];
CVBufferRelease(pxbuffer );
}
The problem is that the resulting video contains a corrupted image - all pixels are offset. It looks like the memory is filled with bytes at some offset, and that offset corrupts the presentation.
How can this be fixed?
I would appreciate any clue or direction.
Thanks in advance.
This looks suspicious:
CGContextRef context = CGBitmapContextCreate(pxdata, imgsize.width,
imgsize.height, 8, 4*imgsize.width, rgbColorSpace,
kCGImageAlphaNoneSkipFirst);
You are computing the bytesPerRow parameter based on the image width, instead of asking pxbuffer for its bytes-per-row. Try this:
CGContextRef context = CGBitmapContextCreate(pxdata, imgsize.width,
imgsize.height, 8, CVPixelBufferGetBytesPerRow(pxbuffer),
rgbColorSpace, kCGImageAlphaNoneSkipFirst);
Also, it seems inefficient to create a bitmap graphics context with UIGraphicsBeginImageContext, render the layer into the context, get an image from the context, destroy the context, create a pixel buffer, create a bitmap graphics context using the pixel buffer, and draw the image of the layer into the new context. Why not replace your CGContextDrawImage call with [self.imageSource.layer renderInContext:context]?
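A hedged sketch of that streamlined version, assuming the layer's size matches the pixel buffer's dimensions:
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, imgsize.width, imgsize.height, 8,
                                             CVPixelBufferGetBytesPerRow(pxbuffer),
                                             rgbColorSpace, kCGImageAlphaNoneSkipFirst);
// Flip the context so the layer is not rendered upside-down
// (Core Graphics puts the origin at the bottom-left).
CGContextTranslateCTM(context, 0, imgsize.height);
CGContextScaleCTM(context, 1.0, -1.0);
[self.imageSource.layer renderInContext:context];
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
appended = [pxlBufAdaptor appendPixelBuffer:pxbuffer withPresentationTime:presentationTime];
CVBufferRelease(pxbuffer);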
