I'm recording live video in my iOS app. On another Stack Overflow page I found that you can use vImage_Buffer to work on the frames.
The problem is that I have no idea how to get from the resulting vImage_Buffer back to a CVPixelBufferRef.
Here is the code given in the other answer:
NSInteger cropX0 = 100,
cropY0 = 100,
cropHeight = 100,
cropWidth = 100,
outWidth = 480,
outHeight = 480;
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer,0);
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
vImage_Buffer inBuff;
inBuff.height = cropHeight;
inBuff.width = cropWidth;
inBuff.rowBytes = bytesPerRow;
int startpos = cropY0 * bytesPerRow + 4 * cropX0;
inBuff.data = baseAddress + startpos;
unsigned char *outImg = (unsigned char*)malloc(4 * outWidth * outHeight);
vImage_Buffer outBuff = {outImg, outHeight, outWidth, 4 * outWidth};
vImage_Error err = vImageScale_ARGB8888(&inBuff, &outBuff, NULL, 0);
if (err != kvImageNoError) NSLog(@"error %ld", err);
And now I need to convert outBuff to a CVPixelBufferRef.
I assume I need to use vImageBuffer_CopyToCVPixelBuffer, but I'm not sure how.
My first attempts failed with an EXC_BAD_ACCESS:
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
CVPixelBufferRef pixelBuffer;
CVPixelBufferCreate(kCFAllocatorSystemDefault, 480, 480, kCVPixelFormatType_32BGRA, NULL, &pixelBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
vImage_CGImageFormat format = {
.bitsPerComponent = 8,
.bitsPerPixel = 32,
.bitmapInfo = kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst, //BGRX8888
.colorSpace = NULL, //sRGB
};
vImageBuffer_CopyToCVPixelBuffer(&outBuff,
&format,
pixelBuffer,
NULL,
NULL,
kvImageNoFlags); // Here is the crash!
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
Any idea?
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
[NSNumber numberWithInt:480], kCVPixelBufferWidthKey,
[NSNumber numberWithInt:480], kCVPixelBufferHeightKey,
nil];
CVPixelBufferRef pixbuffer = NULL;
CVReturn status = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
480,
480,
kCVPixelFormatType_32BGRA,
outImg,
4 * outWidth, // bytes per row of the scaled output, not of the source frame
NULL,
NULL,
(__bridge CFDictionaryRef)options,
&pixbuffer);
You should generate a new pixel buffer like this, wrapping the scaled outImg data.
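One thing to watch (my addition, not part of the original answer): CVPixelBufferCreateWithBytes wraps outImg rather than copying it, so the malloc'd memory has to outlive the pixel buffer and eventually be freed. One way is to pass a release callback (the 7th parameter, instead of the first NULL above), for example:
// Hypothetical callback name; Core Video calls it when the pixel buffer is released.
void releaseScaledBytes(void *releaseRefCon, const void *baseAddress) {
free((void *)baseAddress);
}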
Just in case: if all you want is a cropped live video feed in your interface, use an AVPlayerLayer, AVCaptureVideoPreviewLayer, and/or other CALayer subclasses, and use the layer's bounds, frame, and position to map your 100x100 pixel area into the 480x480 area.
Notes on vImage for your question (different circumstances may differ); a sketch that puts these notes together for the BGRA case follows the ProRes example below:
CVPixelBufferCreateWithBytes will not work with vImageBuffer_CopyToCVPixelBuffer(), because you need to copy the vImage_Buffer data into a "clean" or "empty" CVPixelBuffer.
No need for locking/unlocking - make sure you know when to lock and when not to lock pixel buffers.
Your inBuff vImage_Buffer just needs to be initialized from the pixel buffer data, not manually (unless you know how to use CGContexts etc. to init the pixel grid); use vImageBuffer_InitWithCVPixelBuffer().
vImageScale_ARGB8888 scales the entire CVPixelBuffer data to a smaller/larger rectangle; it won't scale only a portion/crop area of the buffer into another buffer.
When you use vImageBuffer_CopyToCVPixelBuffer(), the vImageCVImageFormatRef and vImage_CGImageFormat need to be filled out correctly:
CGColorSpaceRef dstColorSpace = CGColorSpaceCreateWithName(kCGColorSpaceITUR_709);
vImage_CGImageFormat format = {
.bitsPerComponent = 16,
.bitsPerPixel = 64,
.bitmapInfo = (CGBitmapInfo)kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder16Big ,
.colorSpace = dstColorSpace
};
vImageCVImageFormatRef vformat = vImageCVImageFormat_Create(kCVPixelFormatType_4444AYpCbCr16,
kvImage_ARGBToYpCbCrMatrix_ITU_R_709_2,
kCVImageBufferChromaLocation_Center,
format.colorSpace,
0);
CVPixelBufferRef destBuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
480,
480,
kCVPixelFormatType_4444AYpCbCr16,
NULL,
&destBuffer);
NSParameterAssert(status == kCVReturnSuccess && destBuffer != NULL);
vImage_Error err = vImageBuffer_CopyToCVPixelBuffer(&sourceBuffer, &format, destBuffer, vformat, 0, kvImagePrintDiagnosticsToConsole);
NOTE: these are settings for 64-bit ProRes with alpha - adjust for 32-bit formats.
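Here is the sketch mentioned above for the original 32BGRA question (crop 100x100 at (100,100), scale to 480x480). It is an untested outline, not drop-in code; the variable names are illustrative, and you may need to attach a color space to the CV format (vImageCVImageFormat_SetColorSpace) if the copy reports a format error:
// Sketch only - assumes the capture output delivers kCVPixelFormatType_32BGRA frames.
CVImageBufferRef srcImageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(srcImageBuffer, kCVPixelBufferLock_ReadOnly);
size_t srcBytesPerRow = CVPixelBufferGetBytesPerRow(srcImageBuffer);
unsigned char *srcBase = CVPixelBufferGetBaseAddress(srcImageBuffer);
// Point the source vImage_Buffer at the 100x100 crop region, as in the question.
vImage_Buffer inBuff = { srcBase + 100 * srcBytesPerRow + 4 * 100, 100, 100, srcBytesPerRow };
// Scale the crop into a malloc'd 480x480 BGRA buffer.
void *outImg = malloc(4 * 480 * 480);
vImage_Buffer outBuff = { outImg, 480, 480, 4 * 480 };
vImage_Error err = vImageScale_ARGB8888(&inBuff, &outBuff, NULL, kvImageNoFlags);
CVPixelBufferUnlockBaseAddress(srcImageBuffer, kCVPixelBufferLock_ReadOnly);
// Create a clean destination pixel buffer and describe both sides of the copy.
CVPixelBufferRef dstPixelBuffer = NULL;
CVPixelBufferCreate(kCFAllocatorDefault, 480, 480, kCVPixelFormatType_32BGRA, NULL, &dstPixelBuffer);
vImage_CGImageFormat cgFormat = {
.bitsPerComponent = 8,
.bitsPerPixel = 32,
.bitmapInfo = kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst, // BGRX8888
.colorSpace = NULL, // sRGB
};
vImageCVImageFormatRef cvFormat = vImageCVImageFormat_CreateWithCVPixelBuffer(dstPixelBuffer);
const CGFloat backColor[4] = {0, 0, 0, 0};
err = vImageBuffer_CopyToCVPixelBuffer(&outBuff, &cgFormat, dstPixelBuffer, cvFormat, backColor, kvImagePrintDiagnosticsToConsole);
vImageCVImageFormat_Release(cvFormat);
free(outImg);
// dstPixelBuffer now holds the cropped, scaled frame; release it when you are done with it.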
Related
I am currently attempting to change the orientation of a CMSampleBuffer by first converting it to a CVPixelBuffer and then using vImageRotate90_ARGB8888 to convert the buffer. The problem with my code is that when vImageRotate90_ARGB8888 executes, it crashes immediately. I know there are answers (like this one or this one), but all of these solutions fail to work in my case, and I really cannot find any type of error, or think of anything that would cause this behavior. My current code is below:
- (CVPixelBufferRef)rotateBuffer:(CMSampleBufferRef)sampleBuffer {
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
size_t width = CVPixelBufferGetWidth(pixelBuffer);
size_t height = CVPixelBufferGetHeight(pixelBuffer);
size_t currSize = bytesPerRow * height * sizeof(unsigned char);
size_t bytesPerRowOut = 4 * height * sizeof(unsigned char);
OSType pixelFormat = CVPixelBufferGetPixelFormatType(pixelBuffer);
void *baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer);
unsigned char *outPixelData = (unsigned char *)malloc(currSize);
vImage_Buffer sourceBuffer = {baseAddress, height, width, bytesPerRow};
vImage_Buffer destinationBuffer = {outPixelData, width, height, bytesPerRowOut};
uint8_t rotation = kRotate90DegreesClockwise;
Pixel_8888 bgColor = {0, 0, 0, 0};
vImageRotate90_ARGB8888(&sourceBuffer, &destinationBuffer, rotation, bgColor, kvImageNoFlags); // Crash!
CVPixelBufferRef rotatedBuffer = NULL;
CVPixelBufferCreateWithBytes(kCFAllocatorDefault, destinationBuffer.width, destinationBuffer.height, pixelFormat, destinationBuffer.data, destinationBuffer.rowBytes, freePixelBufferData, NULL, NULL, &rotatedBuffer);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
return rotatedBuffer;
}
void freePixelBufferData(void *releaseRefCon, const void *baseAddress) {
free((void *)baseAddress);
}
I am trying to resize a CMSampleBufferRef as quickly as possible on an iOS 8 device for use in image processing. From what I have found online, the way to do this seems to be by using the vImage API in the Accelerate framework. However, I haven't done much with the Accelerate framework and I can't quite figure out how to do this. Here is what I have so far to scale an image to 200x200:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
CVImageBufferRef cvimgRef = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(cvimgRef,0);
void *imageData = CVPixelBufferGetBaseAddress(cvimgRef);
NSInteger width = CVPixelBufferGetWidth(cvimgRef);
NSInteger height = CVPixelBufferGetHeight(cvimgRef);
unsigned char *newData= // NOT SURE WHAT THIS SHOULD BE...
vImage_Buffer inBuff = { imageData, height, width, 4*width };
vImage_Buffer outBuff = { newData, 200, 200, 4*200 };
// NOT SURE IF THIS IS THE CORRECT METHOD... video output settings for kCVPixelBufferPixelFormatTypeKey is set to kCVPixelFormatType_32BGRA
// This seems wrong since the image scale is ARGB, not BGRA.
vImageScale_ARGB8888(&inBuff, &outBuff, NULL, kvImageNoFlags);
CVPixelBufferUnlockBaseAddress(cvimgRef,0);
}
Where outBuff is the result. After that, I am also not sure how to convert outBuff back to a CVImageBufferRef for further image processing. Any suggestions would be appreciated!
vImageScale just fills the destination buffer with data; pay attention that the buffers you allocate need to be freed.
I don't know if there is a faster way using just that output buffer, but I would convert the buffer into a CGImage. Something like this, taken from here, so take it as a reference:
vImage_CGImageFormat format = {
.bitsPerComponent = 8,
.bitsPerPixel = 32,
.colorSpace = NULL,
.bitmapInfo = (CGBitmapInfo)kCGImageAlphaFirst,
.version = 0,
.decode = NULL,
.renderingIntent = kCGRenderingIntentDefault,
};
vImage_Error ret = kvImageNoError;
CGImageRef destRef = vImageCreateCGImageFromBuffer(&dstBuffer, &format, NULL, NULL, kvImageNoFlags, &ret);
Later I convert it into a CVPixelBuffer:
- (CVPixelBufferRef) pixelBufferFromCGImage: (CGImageRef) image
{
NSDictionary *options = @{
(NSString*)kCVPixelBufferCGImageCompatibilityKey : @YES,
(NSString*)kCVPixelBufferCGBitmapContextCompatibilityKey : @YES,
};
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, CGImageGetWidth(image),
CGImageGetHeight(image), kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options,
&pxbuffer);
if (status!=kCVReturnSuccess) {
DLog(#"Operation failed");
}
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, CGImageGetWidth(image),
CGImageGetHeight(image), 8, 4*CGImageGetWidth(image), rgbColorSpace,
kCGImageAlphaNoneSkipFirst);
NSParameterAssert(context);
CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}
I'm pretty sure it is possible to avoid the conversion into a CGImage and work directly with the buffer, but I have never tried it.
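If you do want to try the direct route, here is a rough, untested sketch (my guess, reusing the dstBuffer and format variables from the snippet above): create an empty CVPixelBuffer of the output size and copy the vImage buffer into it, skipping the CGImage/CGBitmapContext round trip entirely.
// Sketch only: dstBuffer and format are the variables from the snippet above.
CVPixelBufferRef pxbuffer = NULL;
CVPixelBufferCreate(kCFAllocatorDefault, dstBuffer.width, dstBuffer.height,
kCVPixelFormatType_32ARGB, NULL, &pxbuffer);
vImageCVImageFormatRef cvFormat = vImageCVImageFormat_CreateWithCVPixelBuffer(pxbuffer);
const CGFloat background[4] = {0, 0, 0, 0};
vImage_Error copyErr = vImageBuffer_CopyToCVPixelBuffer(&dstBuffer, &format, pxbuffer, cvFormat, background, kvImageNoFlags);
if (copyErr != kvImageNoError) NSLog(@"vImageBuffer_CopyToCVPixelBuffer failed: %ld", copyErr);
vImageCVImageFormat_Release(cvFormat);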
You have to use a resampling filter in conjunction with any vImage operations that alter image geometry (see page 32 of the vImage Programming Guide). Here is an example that initializes a vImage buffer from a pixel buffer, processes it, and copies the result back:
- (CVPixelBufferRef)copyRenderedPixelBuffer:(CVPixelBufferRef)pixelBuffer {
CVPixelBufferLockBaseAddress( pixelBuffer, 0 );
// vImage processing
vImage_Error err;
vImage_Buffer buffer;
buffer.data = (unsigned char *)CVPixelBufferGetBaseAddress( pixelBuffer );
buffer.rowBytes = CVPixelBufferGetBytesPerRow( pixelBuffer );
buffer.width = CVPixelBufferGetWidth( pixelBuffer );
buffer.height = CVPixelBufferGetHeight( pixelBuffer );
vImageCVImageFormatRef vformat = vImageCVImageFormat_CreateWithCVPixelBuffer( pixelBuffer );
vImage_CGImageFormat cgformat = {
.bitsPerComponent = 8,
.bitsPerPixel = 32,
.bitmapInfo = kCGBitmapByteOrderDefault,
.colorSpace = NULL, //sRGB
};
const CGFloat bgColor[3] = {0.0, 0.0, 0.0};
vImageBuffer_InitWithCVPixelBuffer(&buffer, &cgformat, pixelBuffer, vformat, bgColor, kvImageNoAllocate);
vImage_Buffer outbuffer;
void *tempBuffer;
tempBuffer = malloc(CVPixelBufferGetBytesPerRow( pixelBuffer ) * CVPixelBufferGetHeight( pixelBuffer ));
outbuffer.data = tempBuffer;
outbuffer.rowBytes = CVPixelBufferGetBytesPerRow( pixelBuffer );
outbuffer.width = CVPixelBufferGetWidth( pixelBuffer );
outbuffer.height = CVPixelBufferGetHeight( pixelBuffer );
// PROCESS vIMAGE HERE
err = vImageBuffer_CopyToCVPixelBuffer(&outbuffer, &cgformat, pixelBuffer, vformat, bgColor, kvImageNoFlags);
if(err != -1)
free(tempBuffer);
CVPixelBufferUnlockBaseAddress( pixelBuffer, 0 );
return (CVPixelBufferRef)CFRetain( pixelBuffer );
}
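For reference, a call site could look like this (illustrative only - the method processes the same pixel buffer in place and returns it retained, so release the result when you are done):
// Hypothetical call site inside captureOutput:didOutputSampleBuffer:fromConnection:
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferRef rendered = [self copyRenderedPixelBuffer:pixelBuffer];
// ... hand rendered off to your encoder or preview ...
CVPixelBufferRelease(rendered);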
I am trying to convert a YUV image to a CIImage and ultimately a UIImage. I am fairly new to this and trying to figure out an easy way to do it. From what I have learned, since iOS 6 YUV can be used directly to create a CIImage, but as I try to create it the CIImage only holds a nil value. My code is like this:
NSLog(#"Started DrawVideoFrame\n");
CVPixelBufferRef pixelBuffer = NULL;
CVReturn ret = CVPixelBufferCreateWithBytes(
kCFAllocatorDefault, iWidth, iHeight, kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,
lpData, bytesPerRow, 0, 0, 0, &pixelBuffer
);
if (ret != kCVReturnSuccess)
{
NSLog(@"CVPixelBufferCreateWithBytes failed");
CVPixelBufferRelease(pixelBuffer);
}
NSDictionary *opt = @{ (id)kCVPixelBufferPixelFormatTypeKey :
@(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) };
CIImage *cimage = [CIImage imageWithCVPixelBuffer:pixelBuffer options:opt];
NSLog(#"CURRENT CIImage -> %p\n", cimage);
UIImage *image = [UIImage imageWithCIImage:cimage scale:1.0 orientation:UIImageOrientationUp];
NSLog(#"CURRENT UIImage -> %p\n", image);
Here lpData is the YUV data, an array of unsigned char.
This also looks interesting: vImageMatrixMultiply, but I can't find any example of it. Can anyone help me with this?
I have also faced this problem. I was trying to display YUV (NV12) formatted data on the screen. This solution is working in my project...
//YUV(NV12)-->CIImage--->UIImage Conversion
NSDictionary *pixelAttributes = @{(id)kCVPixelBufferIOSurfacePropertiesKey : @{}};
CVPixelBufferRef pixelBuffer = NULL;
CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault,
640,
480,
kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
(__bridge CFDictionaryRef)(pixelAttributes),
&pixelBuffer);
if (result != kCVReturnSuccess) {
NSLog(@"Unable to create cvpixelbuffer %d", result);
}
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
unsigned char *yDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
// Here y_ch0 is the Y-plane of the YUV(NV12) data.
memcpy(yDestPlane, y_ch0, 640 * 480);
unsigned char *uvDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
// Here y_ch1 is the interleaved UV-plane of the YUV(NV12) data.
memcpy(uvDestPlane, y_ch1, 640 * 480 / 2);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
// CIImage Conversion
CIImage *coreImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
CIContext *MytemporaryContext = [CIContext contextWithOptions:nil];
CGImageRef MyvideoImage = [MytemporaryContext createCGImage:coreImage
fromRect:CGRectMake(0, 0, 640, 480)];
// UIImage Conversion
UIImage *Mynnnimage = [[UIImage alloc] initWithCGImage:MyvideoImage
scale:1.0
orientation:UIImageOrientationRight];
CVPixelBufferRelease(pixelBuffer);
CGImageRelease(MyvideoImage);
Here is the data layout of YUV (NV12) and how we get the Y-plane (y_ch0) and UV-plane (y_ch1) that are used to create the CVPixelBufferRef. For an NV12 frame:
Total frame size = width * height * 3/2,
Y-plane size = frame size * 2/3,
UV-plane size = frame size * 1/3,
Data stored in the Y-plane --> {Y1, Y2, Y3, Y4, Y5, ...},
Data stored in the UV-plane (interleaved) --> {U1, V1, U2, V2, U3, V3, ...}.
I hope it will be helpful to all. :) Have fun with iOS development :D
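One caveat about the memcpy calls above (my note, not from the original answer): they assume each destination plane is tightly packed, i.e. its bytes-per-row equals the image width. Core Video often pads rows for alignment, so a safer copy goes row by row using the plane's actual stride, roughly like this:
// Row-by-row copy that respects the destination plane stride (untested sketch for the 640x480 case).
size_t yStride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
for (int row = 0; row < 480; row++) {
memcpy(yDestPlane + row * yStride, y_ch0 + row * 640, 640);
}
size_t uvStride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);
for (int row = 0; row < 480 / 2; row++) {
memcpy(uvDestPlane + row * uvStride, y_ch1 + row * 640, 640);
}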
If you have a video frame object that looks like this:
int width;
int height;
unsigned long long time_stamp;
unsigned char *yData;
unsigned char *uData;
unsigned char *vData;
int yStride;
int uStride;
int vStride;
You can use the following to fill up a pixelBuffer:
NSDictionary *pixelAttributes = @{(NSString *)kCVPixelBufferIOSurfacePropertiesKey : @{}};
CVPixelBufferRef pixelBuffer = NULL;
CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault,
width,
height,
kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, // NV12
(__bridge CFDictionaryRef)(pixelAttributes),
&pixelBuffer);
if (result != kCVReturnSuccess) {
NSLog(#"Unable to create cvpixelbuffer %d", result);
}
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
unsigned char *yDestPlane = (unsigned char *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
size_t yDestStride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0); // respect the plane's padded stride
for (int i = 0; i < height; i++) {
for (int j = 0; j < width; j++) {
yDestPlane[j + i * yDestStride] = yData[j + i * yStride];
}
}
unsigned char *uvDestPlane = (unsigned char *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
size_t uvDestStride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);
for (int i = 0; i < height / 2; i++) {
for (int j = 0; j < width / 2; j++) {
uvDestPlane[2 * j + i * uvDestStride] = uData[j + i * uStride];
uvDestPlane[2 * j + 1 + i * uvDestStride] = vData[j + i * vStride];
}
}
Now you can convert it to CIImage:
CIImage *coreImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
CIContext *tempContext = [CIContext contextWithOptions:nil];
CGImageRef coreImageRef = [tempContext createCGImage:coreImage
fromRect:CGRectMake(0, 0, width, height)];
And a UIImage if you need one (the image orientation can vary depending on your input):
UIImage *myUIImage = [[UIImage alloc] initWithCGImage:coreImageRef
scale:1.0
orientation:UIImageOrientationUp];
Don't forget to release the variables:
CVPixelBufferRelease(pixelBuffer);
CGImageRelease(coreImageRef);
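Also note (my addition): [CIContext contextWithOptions:nil] creates a new context on each call, which is relatively expensive. If you convert frames continuously, it may be worth creating one context and reusing it, for example:
// Reuse a single CIContext for all frames instead of creating one per conversion.
static CIContext *sharedContext = nil;
static dispatch_once_t onceToken;
dispatch_once(&onceToken, ^{
sharedContext = [CIContext contextWithOptions:nil];
});
CGImageRef coreImageRef = [sharedContext createCGImage:coreImage
fromRect:CGRectMake(0, 0, width, height)];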
Here is how I implement the AVCaptureVideoDataOutputSampleBufferDelegate:
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
OSType format = CVPixelBufferGetPixelFormatType(pixelBuffer);
CGRect videoRect = CGRectMake(0.0f, 0.0f, CVPixelBufferGetWidth(pixelBuffer), CVPixelBufferGetHeight(pixelBuffer));
AVCaptureVideoOrientation videoOrientation = [[[_captureOutput connections] objectAtIndex:0] videoOrientation];
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
void *baseaddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
cv::Mat my_mat = cv::Mat(videoRect.size.height, videoRect.size.width, NULL, baseaddress, 0); //<<<<----HERE
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
Here is how I set the capture format:
OSType format = kCVPixelFormatType_32BGRA;
// Check YUV format is available before selecting it (iPhone 3 does not support it)
if ([_captureOutput.availableVideoCVPixelFormatTypes containsObject:
[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarFullRange]]) {
format = kCVPixelFormatType_420YpCbCr8BiPlanarFullRange;
}
_captureOutput.videoSettings = [NSDictionary dictionaryWithObject:[NSNumber numberWithUnsignedInt:format]
forKey:(id)kCVPixelBufferPixelFormatTypeKey];
The problem happens because NULL is passed as the 3rd parameter. It should be CV_8UC4 for a 4-channel image:
cv::Mat my_mat = cv::Mat(videoRect.size.height, videoRect.size.width, CV_8UC4, baseaddress);
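One additional caveat (my note, not part of the original answer): the capture-format code above prefers kCVPixelFormatType_420YpCbCr8BiPlanarFullRange when it is available. In that case plane 0 of the buffer is the 8-bit luma plane, so a single-channel Mat is the right match, for example:
// When the buffer is biplanar YUV, plane 0 holds luma only (one byte per pixel).
size_t lumaStride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
cv::Mat gray_mat((int)videoRect.size.height, (int)videoRect.size.width, CV_8UC1, baseaddress, lumaStride);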
Strange problem. I take frames from a video file (.mov) and write them with AVAssetWriter to another file without any explicit processing. Actually I just copy each frame from one memory buffer to another and then flush them through the pixel buffer adaptor. Then I take the resulting file, delete the original file, put the resulting file in place of the original, and do the same operation again. The interesting thing is that the size of the file constantly grows! Can somebody explain why?
if(adaptor.assetWriterInput.readyForMoreMediaData==YES) {
CVImageBufferRef cvimgRef=nil;
CMTime lastTime=CMTimeMake(fcounter++, 30);
CMTime presentTime=CMTimeAdd(lastTime, frameTime);
CMSampleBufferRef framebuffer=nil;
CGImageRef frameImg=nil;
if ( [asr status]==AVAssetReaderStatusReading ){
framebuffer = [asset_reader_output copyNextSampleBuffer];
frameImg = [self imageFromSampleBuffer:framebuffer withColorSpace:rgbColorSpace];
}
if(frameImg && screenshot){
//CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(framebuffer);
CVReturn stat= CVPixelBufferLockBaseAddress(screenshot, 0);
pxdata=CVPixelBufferGetBaseAddress(screenshot);
bufferSize = CVPixelBufferGetDataSize(screenshot);
// Get the number of bytes per row for the pixel buffer.
bytesPerRow = CVPixelBufferGetBytesPerRow(screenshot);
// Get the pixel buffer width and height.
width = CVPixelBufferGetWidth(screenshot);
height = CVPixelBufferGetHeight(screenshot);
// Create a Quartz direct-access data provider that uses data we supply.
CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, pxdata, bufferSize, NULL);
CGImageAlphaInfo ai=CGImageGetAlphaInfo(frameImg);
size_t bpx=CGImageGetBitsPerPixel(frameImg);
CGColorSpaceRef fclr=CGImageGetColorSpace(frameImg);
// Create a bitmap image from data supplied by the data provider.
CGImageRef cgImage = CGImageCreate(width, height, 8, 32, bytesPerRow,rgbColorSpace, kCGImageAlphaNoneSkipLast | kCGBitmapByteOrder32Big,dataProvider, NULL, true, kCGRenderingIntentDefault);
CGDataProviderRelease(dataProvider);
stat= CVPixelBufferLockBaseAddress(finalPixelBuffer, 0);
pxdata=CVPixelBufferGetBaseAddress(finalPixelBuffer);
bytesPerRow = CVPixelBufferGetBytesPerRow(finalPixelBuffer);
CGContextRef context = CGBitmapContextCreate(pxdata, imgsize.width,imgsize.height, 8, bytesPerRow, rgbColorSpace, kCGImageAlphaNoneSkipLast);
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(frameImg), CGImageGetHeight(frameImg)), frameImg);
//CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
//CGImageRef myMaskedImage;
const float myMaskingColors[6] = { 0, 0, 0, 1, 0, 0 };
CGImageRef myColorMaskedImage = CGImageCreateWithMaskingColors (cgImage, myMaskingColors);
//CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(myColorMaskedImage), CGImageGetHeight(myColorMaskedImage)), myColorMaskedImage);
[adaptor appendPixelBuffer:finalPixelBuffer withPresentationTime:presentTime];}
Well, the mystery seems to be solved. The problem was an inappropriate codec configuration.
This is the set of configuration options I use now, and it seems to do the job:
NSDictionary *codecSettings = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:1100000], AVVideoAverageBitRateKey,
[NSNumber numberWithInt:5],AVVideoMaxKeyFrameIntervalKey,
nil];
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:[SharedApplicationData sharedData].overlayView.frame.size.width], AVVideoWidthKey,
[NSNumber numberWithInt:[SharedApplicationData sharedData].overlayView.frame.size.height], AVVideoHeightKey,
codecSettings,AVVideoCompressionPropertiesKey,
nil];
AVAssetWriterInput* writerInput = [AVAssetWriterInput
assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings];
Now the file size still grows, but at a much slower pace. There is a tradeoff between file size and video quality - reducing the size affects the quality.
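If it helps to reason about that tradeoff, a common rule of thumb (my note, not from the original answer) is to budget a fixed number of bits per pixel per frame when choosing AVVideoAverageBitRateKey, for example:
// Rough bitrate heuristic: bits-per-pixel * width * height * frames-per-second.
// 0.1 bits per pixel is only an illustrative starting point; tune it for your quality target.
CGFloat bitsPerPixel = 0.1;
NSInteger fps = 30;
NSInteger width = [SharedApplicationData sharedData].overlayView.frame.size.width;
NSInteger height = [SharedApplicationData sharedData].overlayView.frame.size.height;
NSInteger averageBitRate = (NSInteger)(bitsPerPixel * width * height * fps);
NSDictionary *codecSettings = @{ AVVideoAverageBitRateKey : @(averageBitRate),
AVVideoMaxKeyFrameIntervalKey : @5 };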