iOS zoom for recording movies

I am using an AVAssetWriter to record movie frames.
I was wondering how I would go about zooming/scaling a frame (the zoom can be set at the beginning of the movie; it does not have to change on a per-frame basis).
I assume I should be able to do that with the CMSampleBuffer that I append to my assetWriterInput.
Pseudocode:
CMSampleBufferRef scaledSampleBuffer = [self scale:sampleBuffer];
[_videoWriterInput appendSampleBuffer:scaledSampleBuffer];
I have found some Stack Overflow questions on this.
One answer shows how to create a vImage_Buffer, but I have no idea how to get that into my _videoWriterInput.
I tried to convert it back with something like this:
NSInteger cropX0 = 100,
cropY0 = 100,
cropHeight = 100,
cropWidth = 100,
outWidth = 480,
outHeight = 480;
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer,0);
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
vImage_Buffer inBuff;
inBuff.height = cropHeight;
inBuff.width = cropWidth;
inBuff.rowBytes = bytesPerRow;
NSInteger startpos = cropY0*bytesPerRow+4*cropX0;
inBuff.data = baseAddress+startpos;
unsigned char *outImg= (unsigned char*)malloc(4*outWidth*outHeight);
vImage_Buffer outBuff = {outImg, outHeight, outWidth, 4*outWidth};
vImage_Error err = vImageScale_ARGB8888(&inBuff, &outBuff, NULL, 0);
if (err != kvImageNoError) NSLog(@"error %ld", err);
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
CVPixelBufferRef pixelBuffer;
CVPixelBufferCreate(kCFAllocatorSystemDefault, 480, 480, kCVPixelFormatType_32BGRA, NULL, &pixelBuffer);
CVPixelBufferLockBaseAddress( pixelBuffer, 0 );
vImage_CGImageFormat format = {
.bitsPerComponent = 8,
.bitsPerPixel = 32,
.bitmapInfo = kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst, //BGRX8888
.colorSpace = NULL, //sRGB
};
vImageBuffer_CopyToCVPixelBuffer(&outBuff,
&format,
pixelBuffer,
NULL,
NULL,
kvImageNoFlags);
CVPixelBufferUnlockBaseAddress( pixelBuffer, 0 );
CMSampleTimingInfo sampleTime = {
.duration = CMSampleBufferGetDuration(sampleBuffer),
.presentationTimeStamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer),
.decodeTimeStamp = CMSampleBufferGetDecodeTimeStamp(sampleBuffer)
};
CMVideoFormatDescriptionRef videoInfo = NULL;
CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, pixelBuffer, &videoInfo);
CMSampleBufferRef oBuf;
CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, pixelBuffer, true, NULL, NULL, videoInfo, &sampleTime, &oBuf);
But that doesn't work either (it produces corrupted frames).
Pieced together from other Stack Overflow articles, I came up with this code, but it results in a broken video:
CGAffineTransform transform = CGAffineTransformMakeScale(2, 2); // zoom 2x
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:CMSampleBufferGetImageBuffer(sampleBuffer)
options:[NSDictionary dictionaryWithObjectsAndKeys:[NSNull null], kCIImageColorSpace, nil]];
ciImage = [[ciImage imageByApplyingTransform:transform] imageByCroppingToRect:CGRectMake(0, 0, 480, 480)];
CVPixelBufferRef pixelBuffer;
CVPixelBufferCreate(kCFAllocatorSystemDefault, 480, 480, kCVPixelFormatType_32BGRA, NULL, &pixelBuffer);
CVPixelBufferLockBaseAddress( pixelBuffer, 0 );
CIContext * ciContext = [CIContext contextWithOptions: nil];
[ciContext render:ciImage toCVPixelBuffer:pixelBuffer];
CVPixelBufferUnlockBaseAddress( pixelBuffer, 0 );
CMSampleTimingInfo sampleTime = {
.duration = CMSampleBufferGetDuration(sampleBuffer),
.presentationTimeStamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer),
.decodeTimeStamp = CMSampleBufferGetDecodeTimeStamp(sampleBuffer)
};
CMVideoFormatDescriptionRef videoInfo = NULL;
CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, pixelBuffer, &videoInfo);
CMSampleBufferRef oBuf;
CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, pixelBuffer, true, NULL, NULL, videoInfo, &sampleTime, &oBuf);
Any ideas?
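For what it's worth, a commonly suggested direction (offered here only as a sketch, not a verified fix for this exact pipeline) is to render the transformed CIImage into a buffer drawn from an AVAssetWriterInputPixelBufferAdaptor's pool and append that buffer directly, instead of re-wrapping it in a CMSampleBuffer. The _pixelBufferAdaptor and _ciContext instance variables and the 480x480 output size below are assumptions:
// Sketch: scale/crop a captured frame and append it through a pixel buffer adaptor.
// Assumes _pixelBufferAdaptor is an AVAssetWriterInputPixelBufferAdaptor attached to
// _videoWriterInput, and _ciContext is a CIContext created once and reused.
- (void)appendScaledFrameFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVPixelBufferRef srcBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *image = [CIImage imageWithCVPixelBuffer:srcBuffer];
    image = [image imageByApplyingTransform:CGAffineTransformMakeScale(2, 2)]; // 2x zoom
    image = [image imageByCroppingToRect:CGRectMake(0, 0, 480, 480)];

    CVPixelBufferRef dstBuffer = NULL;
    // Buffers from the adaptor's pool are already compatible with the writer input.
    CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault,
                                       _pixelBufferAdaptor.pixelBufferPool,
                                       &dstBuffer);
    if (dstBuffer == NULL) { return; }

    [_ciContext render:image toCVPixelBuffer:dstBuffer];

    CMTime pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    [_pixelBufferAdaptor appendPixelBuffer:dstBuffer withPresentationTime:pts];
    CVPixelBufferRelease(dstBuffer);
}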

Related

Convert YUV data to CVPixelBufferRef and play in AVSampleBufferDisplayLayer

I have a video stream in IYUV (4:2:0) format and am trying to convert it into a CVPixelBufferRef, then into a CMSampleBufferRef, and play it in an AVSampleBufferDisplayLayer (required for AVPictureInPictureController). I've tried several versions of a solution, but none works well; I hope someone with video processing experience can tell me what I've done wrong here.
Full function:
- (CMSampleBufferRef)makeSampleBufferFromTexturesWithY:(void *)yPtr U:(void *)uPtr V:(void *)vPtr yStride:(int)yStride uStride:(int)uStride vStride:(int)vStride width:(int)width height:(int)height doMirror:(BOOL)doMirror doMirrorVertical:(BOOL)doMirrorVertical
{
NSDictionary *pixelAttributes = @{(NSString *)kCVPixelBufferIOSurfacePropertiesKey: @{}}; // For 1,2,3
CVPixelBufferRef pixelBuffer = NULL;
CVReturn result;
result = CVPixelBufferCreate(kCFAllocatorDefault,
width,
height,
kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange, // For 1,2,3
// kCVPixelFormatType_32BGRA, // For 4.
(__bridge CFDictionaryRef)(pixelAttributes),
&pixelBuffer);
if (result != kCVReturnSuccess) {
NSLog(@"PIP: Unable to create cvpixelbuffer %d", result);
return nil;
}
/// Converter code below...
CMFormatDescriptionRef formatDesc;
result = CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, pixelBuffer, &formatDesc);
if (result != kCVReturnSuccess) {
NSAssert(NO, @"PIP: Failed to create CMFormatDescription: %d", result);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
return nil;
}
CMTime now = CMTimeMakeWithSeconds(CACurrentMediaTime(), 1000);
CMSampleTimingInfo timingInfo;
timingInfo.duration = CMTimeMakeWithSeconds(1, 1000);
timingInfo.presentationTimeStamp = now;
timingInfo.decodeTimeStamp = now;
@try {
if (@available(iOS 13.0, *)) {
CMSampleBufferRef sampleBuffer;
CMSampleBufferCreateReadyWithImageBuffer(kCFAllocatorDefault, pixelBuffer, formatDesc, &timingInfo, &sampleBuffer);
// CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
CVPixelBufferRelease(pixelBuffer);
pixelBuffer = nil;
// free(dest.data);
// free(uvPlane);
return sampleBuffer;
} else {
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
return nil;
}
} @catch (NSException *exception) {
NSAssert(NO, @"PIP: Failed to create CVSampleBuffer: %@", exception);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
return nil;
}
}
Here are some solutions that I found:
1. Combine UV, but the bottom half is green:
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
uint8_t *yDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
memcpy(yDestPlane, yPtr, width * height);
CGFloat uPlaneSize = width * height / 4;
CGFloat vPlaneSize = width * height / 4;
CGFloat numberOfElementsForChroma = uPlaneSize + vPlaneSize;
// for simplicity and speed create a combined UV panel to hold the pixels
uint8_t *uvPlane = calloc(numberOfElementsForChroma, sizeof(uint8_t));
memcpy(uvPlane, uPtr, uPlaneSize);
memcpy(uvPlane += (uint8_t)(uPlaneSize), vPtr, vPlaneSize);
uint8_t *uvDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
memcpy(uvDestPlane, uvPlane, numberOfElementsForChroma);
2. Interleave U and V; the image is still distorted:
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
uint8_t *yDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
for (int i = 0, k = 0; i < height; i ++) {
for (int j = 0; j < width; j ++) {
yDestPlane[k++] = ((unsigned char *)yPtr)[j + i * yStride];
}
}
uint8_t *uvDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
for (int row = 0, index = 0; row < height / 2; row++) {
for (int col = 0; col < width / 2; col++) {
uvDestPlane[index++] = ((unsigned char *)uPtr)[col + row * uStride];
uvDestPlane[index++] = ((unsigned char *)vPtr)[col + row * vStride];
}
}
3. Somewhat similar to 1:
int yPixels = yStride * height;
int uPixels = uStride * height/2;
int vPixels = vStride * height/2;
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
uint8_t *yDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
memcpy(yDestPlane, yPtr, yPixels);
uint8_t *uvDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
memcpy(uvDestPlane , uPtr, uPixels);
memcpy(uvDestPlane + uPixels, vPtr, vPixels);
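A common pitfall with approaches 1 to 3 (offered as a possible explanation rather than a verified fix) is that they treat the destination planes as tightly packed: a CVPixelBuffer plane usually has a bytesPerRow larger than the frame width, and the source planes have their own strides, so block memcpys shear or discolour the image; approach 3 additionally copies U and V as separate blocks into a plane that CoreVideo expects to be interleaved CbCr. A row-by-row copy that respects both strides, assuming the bi-planar 4:2:0 buffer created at the top of the function, could look like this:
// Sketch: copy I420 planes into the NV12 (bi-planar) CVPixelBuffer row by row,
// honouring the source strides (yStride/uStride/vStride) and the destination
// strides reported by CoreVideo.
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
uint8_t *yDest = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
size_t yDestStride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
for (int row = 0; row < height; row++) {
    memcpy(yDest + row * yDestStride, (uint8_t *)yPtr + row * yStride, width);
}
uint8_t *uvDest = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
size_t uvDestStride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);
for (int row = 0; row < height / 2; row++) {
    uint8_t *dstRow = uvDest + row * uvDestStride;
    const uint8_t *uRow = (const uint8_t *)uPtr + row * uStride;
    const uint8_t *vRow = (const uint8_t *)vPtr + row * vStride;
    for (int col = 0; col < width / 2; col++) {
        dstRow[2 * col]     = uRow[col]; // Cb
        dstRow[2 * col + 1] = vRow[col]; // Cr
    }
}
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);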
4. Use Accelerate to convert YUV to BGRA and then convert that to a CVPixelBuffer; no error, but no video is rendered:
vImage_Buffer srcYp = {
.width = width,
.height = height,
.rowBytes = yStride,
.data = yPtr,
};
vImage_Buffer srcCb = {
.width = width / 2,
.height = height / 2,
.rowBytes = uStride,
.data = uPtr,
};
vImage_Buffer srcCr = {
.width = width / 2,
.height = height / 2,
.rowBytes = vStride,
.data = vPtr,
};
vImage_Buffer dest;
dest.data = NULL;
dest.width = width;
dest.height = height;
vImage_Error error = kvImageNoError;
error = vImageBuffer_Init(&dest, height, width, 32, kvImagePrintDiagnosticsToConsole);
// vImage_YpCbCrPixelRange pixelRange = (vImage_YpCbCrPixelRange){ 0, 128, 255, 255, 255, 1, 255, 0 };
vImage_YpCbCrPixelRange pixelRange = { 16, 128, 235, 240, 255, 0, 255, 0 };
vImage_YpCbCrToARGB info;
error = kvImageNoError;
error = vImageConvert_YpCbCrToARGB_GenerateConversion(kvImage_YpCbCrToARGBMatrix_ITU_R_601_4,
&pixelRange,
&info,
kvImage420Yp8_Cb8_Cr8,
kvImageARGB8888,
kvImagePrintDiagnosticsToConsole);
error = kvImageNoError;
uint8_t permuteMap[4] = {3, 2, 1, 0}; // BGRA (iOS only supports BGRA)
error = vImageConvert_420Yp8_Cb8_Cr8ToARGB8888(&srcYp,
&srcCb,
&srcCr,
&dest,
&info,
permuteMap, // must not be NULL on iOS (macOS accepts NULL); iOS only supports BGRA
255,
kvImagePrintDiagnosticsToConsole);
if (error != kvImageNoError) {
NSAssert(NO, @"PIP: vImageConvert error %ld", error);
return nil;
}
// vImageBuffer_CopyToCVPixelBuffer will give out error destFormat bitsPerComponent = 0 is not supported
// vImage_CGImageFormat format = {
// .bitsPerComponent = 8,
// .bitsPerPixel = 32,
// .bitmapInfo = (CGBitmapInfo)kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst,
// .colorSpace = CGColorSpaceCreateDeviceRGB()
// };
// vImageCVImageFormatRef vformat = vImageCVImageFormat_CreateWithCVPixelBuffer(pixelBuffer);
//
// error = vImageBuffer_CopyToCVPixelBuffer(&dest, &format, pixelBuffer, vformat, 0, kvImagePrintDiagnosticsToConsole);
result = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
width,
height,
kCVPixelFormatType_32BGRA,
dest.data,
dest.rowBytes,
NULL,
NULL,
(__bridge CFDictionaryRef)pixelAttributes,
&pixelBuffer);
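A plausible reason approach 4 renders nothing (a guess, not a verified diagnosis) is that CVPixelBufferCreateWithBytes wraps external memory and cannot produce an IOSurface-backed buffer, which AVSampleBufferDisplayLayer generally needs. An alternative is to keep the IOSurface-backed buffer created at the top of the function (using the kCVPixelFormatType_32BGRA variant marked "For 4") and copy the vImage output into it row by row instead of calling CVPixelBufferCreateWithBytes:
// Sketch: copy the BGRA output of vImageConvert into the IOSurface-backed buffer.
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
uint8_t *bgraDest = CVPixelBufferGetBaseAddress(pixelBuffer);
size_t bgraDestStride = CVPixelBufferGetBytesPerRow(pixelBuffer);
for (int row = 0; row < height; row++) {
    memcpy(bgraDest + row * bgraDestStride,
           (uint8_t *)dest.data + row * dest.rowBytes,
           width * 4);
}
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
free(dest.data); // allocated by vImageBuffer_Init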
I had to resort to a third-party library, OGVKit, to make this work with some minor tweaks. Its decoder function, (void)updatePixelBuffer420:pixelBuffer, handles the YUV420 data with a very fast decoding time.

iOS: rotate, filter video stream in iOS

Hello there. I am rotating and applying image filters with GPUImage to a live video stream.
The task is consuming more time than expected, causing the iPhone to overheat.
Can anybody help me optimise my code?
Following is my used code:
- (void)willOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer{
//return if invalid sample buffer
if (!CMSampleBufferIsValid(sampleBuffer)) {
return;
}
//Get CG Image from sample buffer
CGImageRef cgImageFromBuffer = [self cgImageFromSampleBuffer:sampleBuffer];
if(!cgImageFromBuffer || (cgImageFromBuffer == NULL)){
return;
}
//We need rotation to perform
UIImage *rotatedPlainImage = [UIUtils rotateImage:[UIImage imageWithCGImage:cgImageFromBuffer] byDegree:90];
if (rotatedPlainImage == nil) {
CFRelease(cgImageFromBuffer);
return;
}
//Apply image filter using GPU Image on CGImage
CGImageRef filteredCGImage = [self.selectedPublishFilter newCGImageByFilteringCGImage:rotatedPlainImage.CGImage];
//Convert back in CMSamplbuffer
CMSampleBufferRef outputBufffer = [self getSampleBufferUsingCIByCGInput:filteredCGImage andProvidedSampleBuffer:sampleBuffer];
//Pass to custom encode of Red5Pro to server for live stream
[self.encoder encodeFrame:outputBufffer ofType:r5_media_type_video_custom];
//Release data if needed
CFRelease(outputBufffer);
CFRelease(filteredCGImage);
CFRelease(cgImageFromBuffer);
}
- (CGImageRef)cgImageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer // Create a CGImageRef from sample buffer data
{
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer,0); // Lock the image buffer
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0); // Get information of the image
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef newImage = CGBitmapContextCreateImage(newContext);
CGContextRelease(newContext);
CGColorSpaceRelease(colorSpace);
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
/* CVBufferRelease(imageBuffer); */ // do not call this!
return newImage;
}
- (CMSampleBufferRef)getSampleBufferUsingCIByCGInput:(CGImageRef)imageRef andProvidedSampleBuffer:(CMSampleBufferRef)sampleBuffer{
CIImage *nm = [CIImage imageWithCGImage:imageRef];
CVPixelBufferRef pixelBuffer;
CVPixelBufferCreate(kCFAllocatorSystemDefault, (size_t)nm.extent.size.width, (size_t)nm.extent.size.height, kCVPixelFormatType_32BGRA, NULL, &pixelBuffer);
CVPixelBufferLockBaseAddress( pixelBuffer, 0 );
CIContext *ciContext = [CIContext contextWithOptions: nil];
[ciContext render:nm toCVPixelBuffer:pixelBuffer];
CVPixelBufferUnlockBaseAddress( pixelBuffer, 0 );
CMSampleTimingInfo sampleTime = {
.duration = CMSampleBufferGetDuration(sampleBuffer),
.presentationTimeStamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer),
.decodeTimeStamp = CMSampleBufferGetDecodeTimeStamp(sampleBuffer)
};
CMVideoFormatDescriptionRef videoInfo = NULL;
CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, pixelBuffer, &videoInfo);
CMSampleBufferRef oBuf;
CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, pixelBuffer, true, NULL, NULL, videoInfo, &sampleTime, &oBuf);
CVPixelBufferRelease(pixelBuffer);
CFRelease(videoInfo);
return oBuf;
}
I used OpenGL ES 2.0 and the Accelerate framework: the Accelerate framework to rotate the CMSampleBuffer, and OpenGL to make the CIImage render quickly into the CVPixelBuffer.
Without filters the time is now 3 to 8 milliseconds; with filters it is 7 to 21 milliseconds.
Here is the OpenGL/CIContext part:
@implementation ColorsVideoSource {
CIContext *coreImageContext;
}
- (instancetype)init{
if((self = [super init]) != nil){
EAGLContext *glContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
GLKView *glView = [[GLKView alloc] initWithFrame:CGRectMake(0.0, 0.0, 360.0, 480.0) context:glContext];
coreImageContext = [CIContext contextWithEAGLContext:glView.context];
}
return self;
}
- (void)willOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer{
if (!CMSampleBufferIsValid(sampleBuffer)) {
return;
}
CVPixelBufferRef rotateBuffer = [self correctBufferOrientation:sampleBuffer];
CGImageRef cgImageFromBuffer = [self cgImageFromImageBuffer:rotateBuffer];
if(!cgImageFromBuffer || (cgImageFromBuffer == NULL)){
return;
}
UIImage *rotatedPlainImage = [UIImage imageWithCGImage:cgImageFromBuffer];
if (rotatedPlainImage == nil) {
CFRelease(rotateBuffer);
CFRelease(cgImageFromBuffer);
return;
}
if (_currentFilterType == SWPublisherFilterNone) {
if (_needPreviewImage) {
_previewImage = rotatedPlainImage;
}
CMSampleTimingInfo sampleTime = {
.duration = CMSampleBufferGetDuration(sampleBuffer),
.presentationTimeStamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer),
.decodeTimeStamp = CMSampleBufferGetDecodeTimeStamp(sampleBuffer)
};
CMVideoFormatDescriptionRef videoInfo = NULL;
CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, rotateBuffer, &videoInfo);
CMSampleBufferRef oBuf;
CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, rotateBuffer, true, NULL, NULL, videoInfo, &sampleTime, &oBuf);
CFRelease(videoInfo);
if(!self.pauseEncoding){
@try {
[self.encoder encodeFrame:oBuf ofType:r5_media_type_video_custom];
} @catch (NSException *exception) {
NSLog(@"Encoder error: %@", exception);
}
}
CFRelease(oBuf);
}
else {
CGImageRef filteredCGImage = [self.selectedPublishFilter newCGImageByFilteringCGImage:rotatedPlainImage.CGImage];
if (_needPreviewImage) {
_previewImage = [UIImage imageWithCGImage:filteredCGImage];
}
CMSampleBufferRef outputBuffer = [self getSampleBufferUsingCIByCGInput:filteredCGImage andProvidedSampleBuffer:sampleBuffer];
if(!self.pauseEncoding){
@try {
[self.encoder encodeFrame:outputBuffer ofType:r5_media_type_video_custom];
} @catch (NSException *exception) {
NSLog(@"Encoder error: %@", exception);
}
}
CFRelease(outputBuffer);
CFRelease(filteredCGImage);
}
CFRelease(rotateBuffer);
CFRelease(cgImageFromBuffer);
}
#pragma mark - Methods Refactored GPUImage - Devanshu
- (CVPixelBufferRef)correctBufferOrientation:(CMSampleBufferRef)sampleBuffer
{
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
size_t currSize = bytesPerRow * height * sizeof(unsigned char);
size_t bytesPerRowOut = 4 * height * sizeof(unsigned char);
void *srcBuff = CVPixelBufferGetBaseAddress(imageBuffer);
/* rotationConstant:
* 0 -- rotate 0 degrees (simply copy the data from src to dest)
* 1 -- rotate 90 degrees counterclockwise
* 2 -- rotate 180 degrees
* 3 -- rotate 270 degrees counterclockwise
*/
uint8_t rotationConstant = 3;
unsigned char *dstBuff = (unsigned char *)malloc(currSize);
vImage_Buffer inbuff = {srcBuff, height, width, bytesPerRow};
vImage_Buffer outbuff = {dstBuff, width, height, bytesPerRowOut};
uint8_t bgColor[4] = {0, 0, 0, 0};
vImage_Error err = vImageRotate90_ARGB8888(&inbuff, &outbuff, rotationConstant, bgColor, 0);
if (err != kvImageNoError) NSLog(@"%ld", err);
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
CVPixelBufferRef rotatedBuffer = NULL;
CVPixelBufferCreateWithBytes(NULL,
height,
width,
kCVPixelFormatType_32BGRA,
outbuff.data,
bytesPerRowOut,
freePixelBufferDataAfterRelease,
NULL,
NULL,
&rotatedBuffer);
return rotatedBuffer;
}
void freePixelBufferDataAfterRelease(void *releaseRefCon, const void *baseAddress)
{
// Free the memory we malloced for the vImage rotation
free((void *)baseAddress);
}
- (CGImageRef)cgImageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer // Create a CGImageRef from sample buffer data
{
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
return [self cgImageFromImageBuffer:imageBuffer];
}
- (CGImageRef)cgImageFromImageBuffer:(CVImageBufferRef) imageBuffer // Create a CGImageRef from sample buffer data
{
CVPixelBufferLockBaseAddress(imageBuffer,0); // Lock the image buffer
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0); // Get information of the image
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef newImage = CGBitmapContextCreateImage(newContext);
CGContextRelease(newContext);
CGColorSpaceRelease(colorSpace);
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
return newImage;
}
- (CMSampleBufferRef)getSampleBufferUsingCIByCGInput:(CGImageRef)imageRef andProvidedSampleBuffer:(CMSampleBufferRef)sampleBuffer{
CIImage *theCoreImage = [CIImage imageWithCGImage:imageRef];
CVPixelBufferRef pixelBuffer;
CVPixelBufferCreate(kCFAllocatorSystemDefault, (size_t)theCoreImage.extent.size.width, (size_t)theCoreImage.extent.size.height, kCVPixelFormatType_32BGRA, NULL, &pixelBuffer);
CVPixelBufferLockBaseAddress( pixelBuffer, 0 );
[coreImageContext render:theCoreImage toCVPixelBuffer:pixelBuffer];
CVPixelBufferUnlockBaseAddress( pixelBuffer, 0 );
CMSampleTimingInfo sampleTime = {
.duration = CMSampleBufferGetDuration(sampleBuffer),
.presentationTimeStamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer),
.decodeTimeStamp = CMSampleBufferGetDecodeTimeStamp(sampleBuffer)
};
CMVideoFormatDescriptionRef videoInfo = NULL;
CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, pixelBuffer, &videoInfo);
CMSampleBufferRef oBuf;
CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, pixelBuffer, true, NULL, NULL, videoInfo, &sampleTime, &oBuf);
CVPixelBufferRelease(pixelBuffer);
CFRelease(videoInfo);
return oBuf;
}
For reference, here is the CIImage-based rotation path with a timing log after each step:
NSLog(@"start rotate");
CFAbsoluteTime t0 = CFAbsoluteTimeGetCurrent();
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CIImage *ciimage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
CFAbsoluteTime t1 = CFAbsoluteTimeGetCurrent();
NSLog(@"dur to ciimage: %@", @(t1-t0));
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
CIImage *newImage = [ciimage imageByApplyingCGOrientation:kCGImagePropertyOrientationRight];
CFAbsoluteTime t2 = CFAbsoluteTimeGetCurrent();
NSLog(@"dur rotate ciimage: %@", @(t2-t1));
CVPixelBufferRef newPixcelBuffer = nil;
size_t width = CVPixelBufferGetWidth(pixelBuffer);
size_t height = CVPixelBufferGetHeight(pixelBuffer);
CVPixelBufferCreate(kCFAllocatorDefault, height, width, kCVPixelFormatType_32BGRA, nil, &newPixcelBuffer);
CFAbsoluteTime t3 = CFAbsoluteTimeGetCurrent();
NSLog(@"dur alloc pixel: %@", @(t3-t2));
[_ciContext render:newImage toCVPixelBuffer:newPixcelBuffer];
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
CFAbsoluteTime t4 = CFAbsoluteTimeGetCurrent();
NSLog(@"dur render pixel: %@", @(t4-t3));
//
CMSampleTimingInfo sampleTimingInfo = {
.duration = CMSampleBufferGetDuration(sampleBuffer),
.presentationTimeStamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer),
.decodeTimeStamp = CMSampleBufferGetDecodeTimeStamp(sampleBuffer)
};
//
CMVideoFormatDescriptionRef videoInfo = nil;
CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, newPixcelBuffer, &videoInfo);
CMSampleBufferRef newSampleBuffer = nil;
CMSampleBufferCreateForImageBuffer(kCFAllocatorMalloc, newPixcelBuffer, true, nil, nil, videoInfo, &sampleTimingInfo, &newSampleBuffer);
CFAbsoluteTime t5 = CFAbsoluteTimeGetCurrent();
NSLog(@"dur create CMSample: %@", @(t5-t4));
// release
CVPixelBufferRelease(newPixcelBuffer);
CFAbsoluteTime t6 = CFAbsoluteTimeGetCurrent();
NSLog(@"dur end rotate: %@", @(t6-t0));
return newSampleBuffer;
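One further optimisation worth trying (a sketch based on general CoreVideo practice, not taken from the answers above): allocate the destination buffers from a CVPixelBufferPool created once, and reuse a single CIContext, instead of calling CVPixelBufferCreate and contextWithOptions: for every frame. The instance variables and the 480x640 BGRA output size are assumptions:
// Sketch: reuse a pixel buffer pool and one CIContext across frames.
// _bufferPool (CVPixelBufferPoolRef) and _reusableCIContext (CIContext *) are
// assumed to be instance variables set up once, e.g. in init.
- (void)setUpRenderingResources
{
    NSDictionary *attrs = @{
        (id)kCVPixelBufferPixelFormatTypeKey     : @(kCVPixelFormatType_32BGRA),
        (id)kCVPixelBufferWidthKey               : @480,
        (id)kCVPixelBufferHeightKey              : @640,
        (id)kCVPixelBufferIOSurfacePropertiesKey : @{}
    };
    CVPixelBufferPoolCreate(kCFAllocatorDefault, NULL,
                            (__bridge CFDictionaryRef)attrs, &_bufferPool);
    _reusableCIContext = [CIContext contextWithOptions:nil];
}

// Per frame: take a recycled buffer from the pool instead of CVPixelBufferCreate.
- (CVPixelBufferRef)newOutputBufferForImage:(CIImage *)image
{
    CVPixelBufferRef buffer = NULL;
    CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, _bufferPool, &buffer);
    if (buffer != NULL) {
        [_reusableCIContext render:image toCVPixelBuffer:buffer];
    }
    return buffer; // caller is responsible for CVPixelBufferRelease
}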

UIImage to CVPixelBufferRef empty

I am using this code to create CVPixelBufferRef:
NSDictionary *videoSettings = @{AVVideoCodecKey: AVVideoCodecH264,
AVVideoWidthKey: [NSNumber numberWithInt:size.width],
AVVideoHeightKey: [NSNumber numberWithInt:size.height]};
self.writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings];
self.adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:self.writerInput sourcePixelBufferAttributes:nil];
CVPixelBufferRef buffer;
CVPixelBufferPoolCreatePixelBuffer(NULL, self.adaptor.pixelBufferPool, &buffer);
buffer = [self pixelBufferFromCGImage:[frame CGImage] size:self.videoSize];
this is the pixelBufferFromCGImage function:
+ (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image
size:(CGSize)imageSize
{
NSDictionary *options = @{(id)kCVPixelBufferCGImageCompatibilityKey: @YES,
(id)kCVPixelBufferCGBitmapContextCompatibilityKey: @YES};
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, imageSize.width,
imageSize.height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options,
&pxbuffer);
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
NSParameterAssert(pxdata != NULL);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, imageSize.width,
imageSize.height, 8, 4*imageSize.width, rgbColorSpace,
kCGImageAlphaNoneSkipFirst);
NSParameterAssert(context);
CGContextDrawImage(context, CGRectMake(0 + (imageSize.width-CGImageGetWidth(image))/2,
(imageSize.height-CGImageGetHeight(image))/2,
CGImageGetWidth(image),
CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}
The problem is that the buffer is always empty. Any idea why this happens?
I guess that the buffer is always nil because the "videoSize" is not valid. I have tested your code in 3 scenarios and here are the results.
In the first one I passed valid parameters, with an image and a size of 200x200. The buffer was not nil.
In the second one I passed nil for the image and a video size of 200x200. The buffer was not nil.
In the last one I passed a valid image but an invalid video size of 0x0, and the buffer was nil. The return status is kCVReturnInvalidArgument (value -6661): "Invalid function parameter. For example, out of range or the wrong type."
Hope this will help you.
This is the code that I tested.
- (void)viewDidLoad
{
[super viewDidLoad];
UIImage *image = [UIImage imageNamed:@"Doge-Meme.jpg"];
CVPixelBufferRef bufferRef = [[self class] pixelBufferFromCGImage:image.CGImage size:CGSizeMake(200, 200)];
NSLog(@"## %@", bufferRef); // not nil
CVPixelBufferRef bufferRef1 = [[self class] pixelBufferFromCGImage:nil size:CGSizeMake(200, 200)];
NSLog(@"## %@", bufferRef1); // not nil
CVPixelBufferRef bufferRef2 = [[self class] pixelBufferFromCGImage:image.CGImage size:CGSizeMake(0, 0)];
NSLog(@"## %@", bufferRef2); // nil
}
+ (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image
size:(CGSize)imageSize
{
NSDictionary *options = @{(id)kCVPixelBufferCGImageCompatibilityKey: @YES,
(id)kCVPixelBufferCGBitmapContextCompatibilityKey: @YES};
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, imageSize.width,
imageSize.height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options,
&pxbuffer);
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
NSParameterAssert(pxdata != NULL);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, imageSize.width,
imageSize.height, 8, 4*imageSize.width, rgbColorSpace,
kCGImageAlphaNoneSkipFirst);
NSParameterAssert(context);
CGContextDrawImage(context, CGRectMake(0 + (imageSize.width-CGImageGetWidth(image))/2,
(imageSize.height-CGImageGetHeight(image))/2,
CGImageGetWidth(image),
CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}

AVFoundation: add text to the CMSampleBufferRef video frame

I'm building an app using AVFoundation.
Just before I call [assetWriterInput appendSampleBuffer:sampleBuffer] in the
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
method, I manipulate the pixels in the sample buffer (using a pixel buffer to apply an effect).
The client wants me to put text (a timestamp & frame counter) on the frames as well, but I haven't found a way to do this yet.
I tried to convert the sample buffer to an image, draw the text on the image, and convert the image back to a sample buffer, but then
CMSampleBufferDataIsReady(sampleBuffer)
fails.
Here are my UIImage category methods:
+ (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer
{
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer,0);
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef newImage = CGBitmapContextCreateImage(newContext);
CGContextRelease(newContext);
CGColorSpaceRelease(colorSpace);
UIImage *newUIImage = [UIImage imageWithCGImage:newImage];
CFRelease(newImage);
return newUIImage;
}
And
- (CMSampleBufferRef) cmSampleBuffer
{
CGImageRef image = self.CGImage;
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
nil];
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
self.size.width,
self.size.height,
kCVPixelFormatType_32ARGB,
(__bridge CFDictionaryRef) options,
&pxbuffer);
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
NSParameterAssert(pxdata != NULL);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, self.size.width,
self.size.height, 8, 4*self.size.width, rgbColorSpace,
kCGImageAlphaNoneSkipFirst);
NSParameterAssert(context);
CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
CMVideoFormatDescriptionRef videoInfo = NULL;
CMSampleBufferRef sampleBuffer = NULL;
CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault,
pxbuffer, true, NULL, NULL, videoInfo, NULL, &sampleBuffer);
return sampleBuffer;
}
Any ideas?
EDIT:
I changed my code following Tony's answer. (Thank you!)
This code works:
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress( pixelBuffer, 0 );
EAGLContext *eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
CIContext *ciContext = [CIContext contextWithEAGLContext:eaglContext options:@{kCIContextWorkingColorSpace : [NSNull null]}];
UIFont *font = [UIFont fontWithName:@"Helvetica" size:40];
NSDictionary *attributes = @{NSFontAttributeName: font,
NSForegroundColorAttributeName: [UIColor lightTextColor]};
UIImage *img = [UIImage imageFromText:@"01 - 13/02/2014 15:18:21:654" withAttributes:attributes];
CIImage *filteredImage = [[CIImage alloc] initWithCGImage:img.CGImage];
[ciContext render:filteredImage toCVPixelBuffer:pixelBuffer bounds:[filteredImage extent] colorSpace:CGColorSpaceCreateDeviceRGB()];
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
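The imageFromText:withAttributes: call above is a UIImage category method that isn't shown in the post; a minimal sketch of what such a helper could look like (hypothetical implementation, based only on how it is used here):
// Hypothetical UIImage category method: render a string into an image using UIKit.
+ (UIImage *)imageFromText:(NSString *)text withAttributes:(NSDictionary *)attributes
{
    CGSize size = [text sizeWithAttributes:attributes];
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
    [text drawAtPoint:CGPointZero withAttributes:attributes];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}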
You should refer to the CIFunHouse sample from Apple; you can use this API to draw directly to the buffer:
- (void)render:(CIImage *)image toCVPixelBuffer:(CVPixelBufferRef)buffer bounds:(CGRect)r colorSpace:(CGColorSpaceRef)cs
You can download it here: WWDC2013
Create the context
_eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
_ciContext = [CIContext contextWithEAGLContext:_eaglContext options:@{kCIContextWorkingColorSpace : [NSNull null]}];
Now render the image
CVPixelBufferRef renderedOutputPixelBuffer = NULL;
OSStatus err = CVPixelBufferPoolCreatePixelBuffer(nil, self.pixelBufferAdaptor.pixelBufferPool, &renderedOutputPixelBuffer);
[_ciContext render:filteredImage toCVPixelBuffer:renderedOutputPixelBuffer bounds:[filteredImage extent] colorSpace:CGColorSpaceCreateDeviceRGB()];
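From there the rendered buffer would typically be appended through the same pixel buffer adaptor; a short sketch, assuming the adaptor from the code above and the presentation time of the original sample buffer:
// Sketch: hand the rendered buffer to the writer with the source frame's timestamp.
if (err == kCVReturnSuccess && renderedOutputPixelBuffer != NULL) {
    CMTime pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    [self.pixelBufferAdaptor appendPixelBuffer:renderedOutputPixelBuffer
                          withPresentationTime:pts];
    CVPixelBufferRelease(renderedOutputPixelBuffer);
}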

Memory leak in CoreImage/CoreVideo

I'm building an iOS app that does some basic detection.
I get the raw frames from AVCaptureVideoDataOutput, convert the CMSampleBufferRef to a UIImage, resize the UIImage, then convert it back to a CVPixelBufferRef.
As far as I can tell with Instruments, the leak is in the last part, where I convert the CGImage to a CVPixelBufferRef.
Here's the code I use:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
videof = [[ASMotionDetect alloc] initWithSampleImage:[self resizeSampleBuffer:sampleBuffer]];
// ASMotionDetect is my class for detection and I use videof to calculate the movement
}
-(UIImage*)resizeSampleBuffer:(CMSampleBufferRef) sampleBuffer {
UIImage *img;
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer,0); // Lock the image buffer
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0); // Get information of the image
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef newImage = CGBitmapContextCreateImage(newContext);
CGContextRelease(newContext);
CGColorSpaceRelease(colorSpace);
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
/* CVBufferRelease(imageBuffer); */ // do not call this!
img = [UIImage imageWithCGImage:newImage];
CGImageRelease(newImage);
newContext = nil;
img = [self resizeImageToSquare:img];
return img;
}
-(UIImage*)resizeImageToSquare:(UIImage*)_temp {
UIImage *img;
int w = _temp.size.width;
int h = _temp.size.height;
CGRect rect;
if (w>h) {
rect = CGRectMake((w-h)/2,0,h,h);
} else {
rect = CGRectMake(0, (h-w)/2, w, w);
}
//
img = [self crop:_temp inRect:rect];
return img;
}
-(UIImage*) crop:(UIImage*)image inRect:(CGRect)rect{
UIImage *sourceImage = image;
CGRect selectionRect = rect;
CGRect transformedRect = TransformCGRectForUIImageOrientation(selectionRect, sourceImage.imageOrientation, sourceImage.size);
CGImageRef resultImageRef = CGImageCreateWithImageInRect(sourceImage.CGImage, transformedRect);
UIImage *resultImage = [[UIImage alloc] initWithCGImage:resultImageRef scale:1.0 orientation:image.imageOrientation];
CGImageRelease(resultImageRef);
return resultImage;
}
And in my detection class I have:
- (id)initWithSampleImage:(UIImage*)sampleImage {
if ((self = [super init])) {
_frame = new CVMatOpaque();
_histograms = new CVMatNDOpaque[kGridSize *
kGridSize];
[self extractFrameFromImage:sampleImage];
}
return self;
}
- (void)extractFrameFromImage:(UIImage*)sampleImage {
CGImageRef imageRef = [sampleImage CGImage];
CVImageBufferRef imageBuffer = [self pixelBufferFromCGImage:imageRef];
CVPixelBufferLockBaseAddress(imageBuffer, 0);
// Collect some information required to extract the frame.
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
// Extract the frame, convert it to grayscale, and shove it in _frame.
cv::Mat frame(height, width, CV_8UC4, baseAddress, bytesPerRow);
cv::cvtColor(frame, frame, CV_BGR2GRAY);
_frame->matrix = frame;
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
CGImageRelease(imageRef);
}
- (CVPixelBufferRef) pixelBufferFromCGImage: (CGImageRef) image
{
CVPixelBufferRef pxbuffer = NULL;
int width = CGImageGetWidth(image)*2;
int height = CGImageGetHeight(image)*2;
NSMutableDictionary *attributes = [NSMutableDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithInt:kCVPixelFormatType_32ARGB], kCVPixelBufferPixelFormatTypeKey, [NSNumber numberWithInt:width], kCVPixelBufferWidthKey, [NSNumber numberWithInt:height], kCVPixelBufferHeightKey, nil];
CVPixelBufferPoolRef pixelBufferPool;
CVReturn theError = CVPixelBufferPoolCreate(kCFAllocatorDefault, NULL, (__bridge CFDictionaryRef) attributes, &pixelBufferPool);
NSParameterAssert(theError == kCVReturnSuccess);
CVReturn status = CVPixelBufferPoolCreatePixelBuffer(NULL, pixelBufferPool, &pxbuffer);
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
NSParameterAssert(pxdata != NULL);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, width,
height, 8, width*4, rgbColorSpace,
kCGImageAlphaNoneSkipFirst);
NSParameterAssert(context);
/* here is the problem: */
CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}
With Instruments I found out that the problem is with the CVPixelBufferRef allocations, but I don't understand why. Can someone see the problem?
Thank you
In -pixelBufferFromCGImage:, neither pxbuffer nor pixelBufferPool is released. That makes sense for pxbuffer, as it is the return value, but not for pixelBufferPool: you create and leak one per call of the method.
A quick fix would be to:
1. Release pixelBufferPool in -pixelBufferFromCGImage:
2. Release pxbuffer (the return value of -pixelBufferFromCGImage:) in -extractFrameFromImage:
You should also rename -pixelBufferFromCGImage: to -createPixelBufferFromCGImage: to make clear that it returns a retained object.
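Putting those three points together, a corrected version of the method could look roughly like this (a sketch of the suggested fix, not tested against the original project):
// Sketch: the pool is released before returning, and the name signals that the
// caller owns the returned buffer (and must CVPixelBufferRelease it).
- (CVPixelBufferRef)createPixelBufferFromCGImage:(CGImageRef)image
{
    int width  = (int)CGImageGetWidth(image) * 2;
    int height = (int)CGImageGetHeight(image) * 2;
    NSDictionary *attributes = @{
        (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32ARGB),
        (id)kCVPixelBufferWidthKey           : @(width),
        (id)kCVPixelBufferHeightKey          : @(height)
    };

    CVPixelBufferPoolRef pixelBufferPool = NULL;
    CVPixelBufferPoolCreate(kCFAllocatorDefault, NULL,
                            (__bridge CFDictionaryRef)attributes, &pixelBufferPool);
    CVPixelBufferRef pxbuffer = NULL;
    CVPixelBufferPoolCreatePixelBuffer(NULL, pixelBufferPool, &pxbuffer);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pxbuffer),
                                                 width, height, 8,
                                                 CVPixelBufferGetBytesPerRow(pxbuffer),
                                                 rgbColorSpace, kCGImageAlphaNoneSkipFirst);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
    CGContextRelease(context);
    CGColorSpaceRelease(rgbColorSpace);
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

    CVPixelBufferPoolRelease(pixelBufferPool); // this is the leak in the original version
    return pxbuffer; // release in -extractFrameFromImage: with CVPixelBufferRelease
}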

Resources