Memory leak in CoreImage/CoreVideo - iOS

I'm building an iOS app that does some basic motion detection.
I get the raw frames from AVCaptureVideoDataOutput, convert the CMSampleBufferRef to a UIImage, resize the UIImage, then convert it to a CVPixelBufferRef.
As far as I can tell with Instruments, the leak is in the last part, where I convert the CGImage to a CVPixelBufferRef.
Here's the code I use:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    // ASMotionDetect is my class for detection; I use videof to calculate the movement
    videof = [[ASMotionDetect alloc] initWithSampleImage:[self resizeSampleBuffer:sampleBuffer]];
}
-(UIImage*)resizeSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    UIImage *img;
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0); // Lock the image buffer
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0); // Get information of the image
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    /* CVBufferRelease(imageBuffer); */ // do not call this!
    img = [UIImage imageWithCGImage:newImage];
    CGImageRelease(newImage);
    newContext = nil;
    img = [self resizeImageToSquare:img];
    return img;
}
-(UIImage*)resizeImageToSquare:(UIImage*)_temp {
    UIImage *img;
    int w = _temp.size.width;
    int h = _temp.size.height;
    CGRect rect;
    if (w > h) {
        rect = CGRectMake((w-h)/2, 0, h, h);
    } else {
        rect = CGRectMake(0, (h-w)/2, w, w);
    }
    img = [self crop:_temp inRect:rect];
    return img;
}
-(UIImage*)crop:(UIImage*)image inRect:(CGRect)rect {
    UIImage *sourceImage = image;
    CGRect selectionRect = rect;
    CGRect transformedRect = TransformCGRectForUIImageOrientation(selectionRect, sourceImage.imageOrientation, sourceImage.size);
    CGImageRef resultImageRef = CGImageCreateWithImageInRect(sourceImage.CGImage, transformedRect);
    UIImage *resultImage = [[UIImage alloc] initWithCGImage:resultImageRef scale:1.0 orientation:image.imageOrientation];
    CGImageRelease(resultImageRef);
    return resultImage;
}
And in my detection class I have:
- (id)initWithSampleImage:(UIImage*)sampleImage {
    if ((self = [super init])) {
        _frame = new CVMatOpaque();
        _histograms = new CVMatNDOpaque[kGridSize * kGridSize];
        [self extractFrameFromImage:sampleImage];
    }
    return self;
}
- (void)extractFrameFromImage:(UIImage*)sampleImage {
    CGImageRef imageRef = [sampleImage CGImage];
    CVImageBufferRef imageBuffer = [self pixelBufferFromCGImage:imageRef];
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    // Collect some information required to extract the frame.
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    // Extract the frame, convert it to grayscale, and shove it in _frame.
    cv::Mat frame(height, width, CV_8UC4, baseAddress, bytesPerRow);
    cv::cvtColor(frame, frame, CV_BGR2GRAY);
    _frame->matrix = frame;
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    CGImageRelease(imageRef);
}
- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image
{
    CVPixelBufferRef pxbuffer = NULL;
    int width = CGImageGetWidth(image)*2;
    int height = CGImageGetHeight(image)*2;
    NSMutableDictionary *attributes = [NSMutableDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithInt:kCVPixelFormatType_32ARGB], kCVPixelBufferPixelFormatTypeKey, [NSNumber numberWithInt:width], kCVPixelBufferWidthKey, [NSNumber numberWithInt:height], kCVPixelBufferHeightKey, nil];
    CVPixelBufferPoolRef pixelBufferPool;
    CVReturn theError = CVPixelBufferPoolCreate(kCFAllocatorDefault, NULL, (__bridge CFDictionaryRef)attributes, &pixelBufferPool);
    NSParameterAssert(theError == kCVReturnSuccess);
    CVReturn status = CVPixelBufferPoolCreatePixelBuffer(NULL, pixelBufferPool, &pxbuffer);
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);
    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, width, height, 8, width*4, rgbColorSpace, kCGImageAlphaNoneSkipFirst);
    NSParameterAssert(context);
    /* here is the problem: */
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    return pxbuffer;
}
With Instruments I found out that the problem is with the CVPixelBufferRef allocations, but I don't understand why - can someone see the problem?
Thank you

In -pixelBufferFromCGImage:, neither pxbuffer nor pixelBufferPool is released. That makes sense for pxbuffer, as it is the return value, but not for pixelBufferPool - you create and leak one per call of the method.
A quick fix would be to:
1. release pixelBufferPool in -pixelBufferFromCGImage:
2. release pxbuffer (the return value of -pixelBufferFromCGImage:) in -extractFrameFromImage:
You should also rename -pixelBufferFromCGImage: to -createPixelBufferFromCGImage: to make clear that it returns a retained object.
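A minimal sketch of both fixes, keeping the names from the question (only the changed lines are shown; the rest of each method body stays as posted above):

- (CVPixelBufferRef)createPixelBufferFromCGImage:(CGImageRef)image
{
    // ... same body as -pixelBufferFromCGImage: above, up to the unlock ...
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    CVPixelBufferPoolRelease(pixelBufferPool); // fix 1: the pool was leaked once per call
    return pxbuffer; // still retained; the caller is responsible for releasing it
}

- (void)extractFrameFromImage:(UIImage*)sampleImage {
    CGImageRef imageRef = [sampleImage CGImage];
    CVImageBufferRef imageBuffer = [self createPixelBufferFromCGImage:imageRef];
    // ... lock, wrap in cv::Mat, convert to grayscale, unlock, as above ...
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    CVPixelBufferRelease(imageBuffer); // fix 2: balance the create
    // Note: the original also calls CGImageRelease(imageRef), but -[UIImage CGImage]
    // does not transfer ownership, so that release is itself an over-release.
}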

Related

Rotate and filter video stream in iOS

Hello there, I am rotating and applying image filters with GPUImage to a live video stream.
The task is consuming more time than expected, resulting in overheating of the iPhone.
Can anybody help me optimise my code?
Here is the code I use:
- (void)willOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    // return if invalid sample buffer
    if (!CMSampleBufferIsValid(sampleBuffer)) {
        return;
    }
    // Get a CGImage from the sample buffer
    CGImageRef cgImageFromBuffer = [self cgImageFromSampleBuffer:sampleBuffer];
    if (cgImageFromBuffer == NULL) {
        return;
    }
    // We need to perform a rotation
    UIImage *rotatedPlainImage = [UIUtils rotateImage:[UIImage imageWithCGImage:cgImageFromBuffer] byDegree:90];
    if (rotatedPlainImage == nil) {
        CFRelease(cgImageFromBuffer);
        return;
    }
    // Apply the image filter using GPUImage on the CGImage
    CGImageRef filteredCGImage = [self.selectedPublishFilter newCGImageByFilteringCGImage:rotatedPlainImage.CGImage];
    // Convert back into a CMSampleBuffer
    CMSampleBufferRef outputBuffer = [self getSampleBufferUsingCIByCGInput:filteredCGImage andProvidedSampleBuffer:sampleBuffer];
    // Pass to the custom encoder of Red5Pro for the live stream
    [self.encoder encodeFrame:outputBuffer ofType:r5_media_type_video_custom];
    // Release data
    CFRelease(outputBuffer);
    CFRelease(filteredCGImage);
    CFRelease(cgImageFromBuffer);
}
- (CGImageRef)cgImageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer // Create a CGImageRef from sample buffer data
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0); // Lock the image buffer
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0); // Get information of the image
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    /* CVBufferRelease(imageBuffer); */ // do not call this!
    return newImage;
}
- (CMSampleBufferRef)getSampleBufferUsingCIByCGInput:(CGImageRef)imageRef andProvidedSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    CIImage *nm = [CIImage imageWithCGImage:imageRef];
    CVPixelBufferRef pixelBuffer;
    CVPixelBufferCreate(kCFAllocatorSystemDefault, (size_t)nm.extent.size.width, (size_t)nm.extent.size.height, kCVPixelFormatType_32BGRA, NULL, &pixelBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    CIContext *ciContext = [CIContext contextWithOptions:nil];
    [ciContext render:nm toCVPixelBuffer:pixelBuffer];
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    CMSampleTimingInfo sampleTime = {
        .duration = CMSampleBufferGetDuration(sampleBuffer),
        .presentationTimeStamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer),
        .decodeTimeStamp = CMSampleBufferGetDecodeTimeStamp(sampleBuffer)
    };
    CMVideoFormatDescriptionRef videoInfo = NULL;
    CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, pixelBuffer, &videoInfo);
    CMSampleBufferRef oBuf;
    CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, pixelBuffer, true, NULL, NULL, videoInfo, &sampleTime, &oBuf);
    CVPixelBufferRelease(pixelBuffer);
    CFRelease(videoInfo);
    return oBuf;
}
I ended up using OpenGL ES 2.0 and the Accelerate framework: Accelerate (vImage) rotates the CMSampleBuffer, and an OpenGL-backed CIContext makes the CIImage render fast onto the CVPixelBuffer. Without a filter a frame now takes 3-8 milliseconds; with filters, 7-21 milliseconds.
@implementation ColorsVideoSource {
    CIContext *coreImageContext;
}

- (instancetype)init {
    if ((self = [super init]) != nil) {
        EAGLContext *glContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
        GLKView *glView = [[GLKView alloc] initWithFrame:CGRectMake(0.0, 0.0, 360.0, 480.0) context:glContext];
        coreImageContext = [CIContext contextWithEAGLContext:glView.context];
    }
    return self;
}
- (void)willOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    if (!CMSampleBufferIsValid(sampleBuffer)) {
        return;
    }
    CVPixelBufferRef rotateBuffer = [self correctBufferOrientation:sampleBuffer];
    CGImageRef cgImageFromBuffer = [self cgImageFromImageBuffer:rotateBuffer];
    if (cgImageFromBuffer == NULL) {
        CFRelease(rotateBuffer); // avoid leaking the rotated buffer on early return
        return;
    }
    UIImage *rotatedPlainImage = [UIImage imageWithCGImage:cgImageFromBuffer];
    if (rotatedPlainImage == nil) {
        CFRelease(rotateBuffer);
        CFRelease(cgImageFromBuffer);
        return;
    }
    if (_currentFilterType == SWPublisherFilterNone) {
        if (_needPreviewImage) {
            _previewImage = rotatedPlainImage;
        }
        CMSampleTimingInfo sampleTime = {
            .duration = CMSampleBufferGetDuration(sampleBuffer),
            .presentationTimeStamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer),
            .decodeTimeStamp = CMSampleBufferGetDecodeTimeStamp(sampleBuffer)
        };
        CMVideoFormatDescriptionRef videoInfo = NULL;
        CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, rotateBuffer, &videoInfo);
        CMSampleBufferRef oBuf;
        CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, rotateBuffer, true, NULL, NULL, videoInfo, &sampleTime, &oBuf);
        CFRelease(videoInfo);
        if (!self.pauseEncoding) {
            @try {
                [self.encoder encodeFrame:oBuf ofType:r5_media_type_video_custom];
            } @catch (NSException *exception) {
                NSLog(@"Encoder error: %@", exception);
            }
        }
        CFRelease(oBuf);
    }
    else {
        CGImageRef filteredCGImage = [self.selectedPublishFilter newCGImageByFilteringCGImage:rotatedPlainImage.CGImage];
        if (_needPreviewImage) {
            _previewImage = [UIImage imageWithCGImage:filteredCGImage];
        }
        CMSampleBufferRef outputBuffer = [self getSampleBufferUsingCIByCGInput:filteredCGImage andProvidedSampleBuffer:sampleBuffer];
        if (!self.pauseEncoding) {
            @try {
                [self.encoder encodeFrame:outputBuffer ofType:r5_media_type_video_custom];
            } @catch (NSException *exception) {
                NSLog(@"Encoder error: %@", exception);
            }
        }
        CFRelease(outputBuffer);
        CFRelease(filteredCGImage);
    }
    CFRelease(rotateBuffer);
    CFRelease(cgImageFromBuffer);
}
#pragma mark - Methods Refactored GPUImage - Devanshu
- (CVPixelBufferRef)correctBufferOrientation:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    size_t currSize = bytesPerRow * height * sizeof(unsigned char);
    size_t bytesPerRowOut = 4 * height * sizeof(unsigned char);
    void *srcBuff = CVPixelBufferGetBaseAddress(imageBuffer);
    /* rotationConstant:
     * 0 -- rotate 0 degrees (simply copy the data from src to dest)
     * 1 -- rotate 90 degrees counterclockwise
     * 2 -- rotate 180 degrees
     * 3 -- rotate 270 degrees counterclockwise
     */
    uint8_t rotationConstant = 3;
    unsigned char *dstBuff = (unsigned char *)malloc(currSize);
    vImage_Buffer inbuff = {srcBuff, height, width, bytesPerRow};
    vImage_Buffer outbuff = {dstBuff, width, height, bytesPerRowOut};
    uint8_t bgColor[4] = {0, 0, 0, 0};
    vImage_Error err = vImageRotate90_ARGB8888(&inbuff, &outbuff, rotationConstant, bgColor, 0);
    if (err != kvImageNoError) NSLog(@"%ld", err);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    CVPixelBufferRef rotatedBuffer = NULL;
    CVPixelBufferCreateWithBytes(NULL,
                                 height,
                                 width,
                                 kCVPixelFormatType_32BGRA,
                                 outbuff.data,
                                 bytesPerRowOut,
                                 freePixelBufferDataAfterRelease,
                                 NULL,
                                 NULL,
                                 &rotatedBuffer);
    return rotatedBuffer;
}
void freePixelBufferDataAfterRelease(void *releaseRefCon, const void *baseAddress)
{
    // Free the memory we malloced for the vImage rotation
    free((void *)baseAddress);
}
- (CGImageRef)cgImageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer // Create a CGImageRef from sample buffer data
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    return [self cgImageFromImageBuffer:imageBuffer];
}

- (CGImageRef)cgImageFromImageBuffer:(CVImageBufferRef)imageBuffer // Create a CGImageRef from image buffer data
{
    CVPixelBufferLockBaseAddress(imageBuffer, 0); // Lock the image buffer
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0); // Get information of the image
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    return newImage;
}
- (CMSampleBufferRef)getSampleBufferUsingCIByCGInput:(CGImageRef)imageRef andProvidedSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    CIImage *theCoreImage = [CIImage imageWithCGImage:imageRef];
    CVPixelBufferRef pixelBuffer;
    CVPixelBufferCreate(kCFAllocatorSystemDefault, (size_t)theCoreImage.extent.size.width, (size_t)theCoreImage.extent.size.height, kCVPixelFormatType_32BGRA, NULL, &pixelBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    [coreImageContext render:theCoreImage toCVPixelBuffer:pixelBuffer];
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    CMSampleTimingInfo sampleTime = {
        .duration = CMSampleBufferGetDuration(sampleBuffer),
        .presentationTimeStamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer),
        .decodeTimeStamp = CMSampleBufferGetDecodeTimeStamp(sampleBuffer)
    };
    CMVideoFormatDescriptionRef videoInfo = NULL;
    CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, pixelBuffer, &videoInfo);
    CMSampleBufferRef oBuf;
    CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, pixelBuffer, true, NULL, NULL, videoInfo, &sampleTime, &oBuf);
    CVPixelBufferRelease(pixelBuffer);
    CFRelease(videoInfo);
    return oBuf;
}
NSLog(#"start rotate");
CFAbsoluteTime t0 = CFAbsoluteTimeGetCurrent();
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CIImage *ciimage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
CFAbsoluteTime t1 = CFAbsoluteTimeGetCurrent();
NSLog(#"dur to ciimage: %#", #(t1-t0));
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
CIImage *newImage = [ciimage imageByApplyingCGOrientation:kCGImagePropertyOrientationRight];
CFAbsoluteTime t2 = CFAbsoluteTimeGetCurrent();
NSLog(#"dur rotate ciimage: %#", #(t2-t1));
CVPixelBufferRef newPixcelBuffer = nil;
size_t width = CVPixelBufferGetWidth(pixelBuffer);
size_t height = CVPixelBufferGetHeight(pixelBuffer);
CVPixelBufferCreate(kCFAllocatorDefault, height, width, kCVPixelFormatType_32BGRA, nil, &newPixcelBuffer);
CFAbsoluteTime t3 = CFAbsoluteTimeGetCurrent();
NSLog(#"dur alloc pixel: %#", #(t3-t2));
[_ciContext render:newImage toCVPixelBuffer:newPixcelBuffer];
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
CFAbsoluteTime t4 = CFAbsoluteTimeGetCurrent();
NSLog(#"dur render pixel: %#", #(t4-t3));
//
CMSampleTimingInfo sampleTimingInfo = {
.duration = CMSampleBufferGetDuration(sampleBuffer),
.presentationTimeStamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer),
.decodeTimeStamp = CMSampleBufferGetDecodeTimeStamp(sampleBuffer)
};
//
CMVideoFormatDescriptionRef videoInfo = nil;
CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, newPixcelBuffer, &videoInfo);
CMSampleBufferRef newSampleBuffer = nil;
CMSampleBufferCreateForImageBuffer(kCFAllocatorMalloc, newPixcelBuffer, true, nil, nil, videoInfo, &sampleTimingInfo, &newSampleBuffer);
CFAbsoluteTime t5 = CFAbsoluteTimeGetCurrent();
NSLog(#"dur create CMSample: %#", #(t5-t4));
// release
CVPixelBufferRelease(newPixcelBuffer);
CFAbsoluteTime t6 = CFAbsoluteTimeGetCurrent();
NSLog(#"dur end rotate: %#", #(t6-t0));
return newSampleBuffer;
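One further optimisation worth trying (a sketch, not from the original answers): both render paths above call CVPixelBufferCreate once per frame; drawing buffers from a CVPixelBufferPool created once up front avoids that per-frame allocation. The names here (_bufferPool, outputWidth, outputHeight) are assumptions:

// Created once, e.g. in -init (assumed ivar: CVPixelBufferPoolRef _bufferPool):
NSDictionary *poolAttributes = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA),
                                  (id)kCVPixelBufferWidthKey  : @(outputWidth),
                                  (id)kCVPixelBufferHeightKey : @(outputHeight) };
CVPixelBufferPoolCreate(kCFAllocatorDefault, NULL,
                        (__bridge CFDictionaryRef)poolAttributes, &_bufferPool);

// Per frame, instead of CVPixelBufferCreate(...):
CVPixelBufferRef pixelBuffer = NULL;
CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, _bufferPool, &pixelBuffer);
// ... render into pixelBuffer and wrap it in a CMSampleBuffer as above, then:
CVPixelBufferRelease(pixelBuffer); // the pool recycles the backing memory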

AVFoundation: add text to the CMSampleBufferRef video frame

I'm building an app using AVFoundation.
Just before I call [assetWriterInput appendSampleBuffer:sampleBuffer] in the
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection method,
I manipulate the pixels in the sample buffer (using a pixel buffer to apply an effect).
But the client also wants me to put text (a timestamp & frame counter) on the frames, and I haven't found a way to do this yet.
I tried to convert the sample buffer to an image, draw text on the image, and convert the image back to a sample buffer, but then
CMSampleBufferDataIsReady(sampleBuffer)
fails.
Here are my UIImage category methods:
+ (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);
    UIImage *newUIImage = [UIImage imageWithCGImage:newImage];
    CFRelease(newImage);
    return newUIImage;
}
And
- (CMSampleBufferRef)cmSampleBuffer
{
    CGImageRef image = self.CGImage;
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];
    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
                                          self.size.width,
                                          self.size.height,
                                          kCVPixelFormatType_32ARGB,
                                          (__bridge CFDictionaryRef)options,
                                          &pxbuffer);
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);
    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, self.size.width,
                                                 self.size.height, 8, 4*self.size.width, rgbColorSpace,
                                                 kCGImageAlphaNoneSkipFirst);
    NSParameterAssert(context);
    CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
                                           CGImageGetHeight(image)), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    CMVideoFormatDescriptionRef videoInfo = NULL;
    CMSampleBufferRef sampleBuffer = NULL;
    CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault,
                                       pxbuffer, true, NULL, NULL, videoInfo, NULL, &sampleBuffer);
    return sampleBuffer;
}
Any ideas?
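A side note (an observation, not part of the original thread): in -cmSampleBuffer above, videoInfo is still NULL when it is passed to CMSampleBufferCreateForImageBuffer, and no timing info is supplied, so the creation fails and the resulting buffer can never report its data as ready. Creating a format description first, as the other snippets on this page do, gives the sample buffer valid format data; a sketch:

CMVideoFormatDescriptionRef videoInfo = NULL;
CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, pxbuffer, &videoInfo);
CMSampleTimingInfo timing = {kCMTimeInvalid, kCMTimeInvalid, kCMTimeInvalid}; // better: copy from the source buffer
CMSampleBufferRef sampleBuffer = NULL;
CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, pxbuffer, true, NULL, NULL,
                                   videoInfo, &timing, &sampleBuffer);
CFRelease(videoInfo);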
EDIT:
I changed my code following Tony's answer (thank you!).
This code works:
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
EAGLContext *eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
CIContext *ciContext = [CIContext contextWithEAGLContext:eaglContext options:@{kCIContextWorkingColorSpace : [NSNull null]}];
UIFont *font = [UIFont fontWithName:@"Helvetica" size:40];
NSDictionary *attributes = @{NSFontAttributeName: font,
                             NSForegroundColorAttributeName: [UIColor lightTextColor]};
UIImage *img = [UIImage imageFromText:@"01 - 13/02/2014 15:18:21:654" withAttributes:attributes];
CIImage *filteredImage = [[CIImage alloc] initWithCGImage:img.CGImage];
[ciContext render:filteredImage toCVPixelBuffer:pixelBuffer bounds:[filteredImage extent] colorSpace:CGColorSpaceCreateDeviceRGB()];
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
You should refer to Apple's CIFunHouse sample; you can use this API to draw directly into the buffer:
- (void)render:(CIImage *)image toCVPixelBuffer:(CVPixelBufferRef)buffer bounds:(CGRect)r colorSpace:(CGColorSpaceRef)cs
You can download it here: WWDC2013
Create the context:
_eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
_ciContext = [CIContext contextWithEAGLContext:_eaglContext options:@{kCIContextWorkingColorSpace : [NSNull null]}];
Now render the image:
CVPixelBufferRef renderedOutputPixelBuffer = NULL;
OSStatus err = CVPixelBufferPoolCreatePixelBuffer(nil, self.pixelBufferAdaptor.pixelBufferPool, &renderedOutputPixelBuffer);
[_ciContext render:filteredImage toCVPixelBuffer:renderedOutputPixelBuffer bounds:[filteredImage extent] colorSpace:CGColorSpaceCreateDeviceRGB()];
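From there the rendered buffer can be appended to the writer (a sketch; the presentation time is assumed to come from the source sample buffer):

CMTime presentationTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
[self.pixelBufferAdaptor appendPixelBuffer:renderedOutputPixelBuffer
                      withPresentationTime:presentationTime];
CVPixelBufferRelease(renderedOutputPixelBuffer); // balance the pool create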

Screen capture including AVCaptureVideoPreviewLayer with overlay buttons

I am using a screen recorder to capture the screen. It works perfectly when a view fills the iPhone screen. But when the AVCaptureVideoPreviewLayer is displayed with overlay buttons, the saved screen-capture video shows the overlay buttons without the AVCaptureVideoPreviewLayer. I have used this tutorial for adding the overlays. How can I fix this?
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    @autoreleasepool {
        if ([connection isVideoOrientationSupported])
            [connection setVideoOrientation:[self cameraOrientation]];
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        /* Lock the image buffer */
        CVPixelBufferLockBaseAddress(imageBuffer, 0);
        /* Get information about the image */
        uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
        size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
        size_t width = CVPixelBufferGetWidth(imageBuffer);
        size_t height = CVPixelBufferGetHeight(imageBuffer);
        /* Create a CGImageRef from the CVImageBufferRef */
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
        CGImageRef newImage = CGBitmapContextCreateImage(newContext);
        /* We release some components */
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
        CGContextRelease(newContext);
        CGColorSpaceRelease(colorSpace);
        UIImage *image = [UIImage imageWithCGImage:newImage scale:1.0 orientation:UIImageOrientationRight];
        image1 = [UIImage imageWithCGImage:newImage];
        /* We release the CGImageRef */
        CGImageRelease(newImage);
        dispatch_sync(dispatch_get_main_queue(), ^{
            [self.imageView setImage:image1];
        });
    }
}
writeSample: is run with an NSTimer:
-(void)writeSample:(NSTimer*)_timer {
    if (assetWriterInput.readyForMoreMediaData) {
        // CMSampleBufferRef sample = nil;
        @autoreleasepool {
            CVReturn cvErr = kCVReturnSuccess;
            // get screenshot image!
            UIGraphicsBeginImageContext(baseViewOne.frame.size);
            [[baseViewOne layer] renderInContext:UIGraphicsGetCurrentContext()];
            screenshota = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();
            //CGImageRef image = (CGImageRef) [[self screenshot] CGImage];
            CGImageRef image = (CGImageRef)[screenshota CGImage];
            //NSLog(@"made screenshot");
            // prepare the pixel buffer
            CVPixelBufferRef pixelBuffer = NULL;
            CFDataRef imageData = CGDataProviderCopyData(CGImageGetDataProvider(image));
            //NSLog(@"copied image data");
            cvErr = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                                 baseViewOne.frame.size.width, baseViewOne.frame.size.height,
                                                 kCVPixelFormatType_32BGRA,
                                                 (void*)CFDataGetBytePtr(imageData),
                                                 CGImageGetBytesPerRow(image),
                                                 NULL,
                                                 NULL,
                                                 NULL,
                                                 &pixelBuffer);
            //NSLog(@"CVPixelBufferCreateWithBytes returned %d", cvErr);
            // calculate the time
            CMTime presentationTime;
            CFAbsoluteTime thisFrameWallClockTime = CFAbsoluteTimeGetCurrent();
            elapsedTime = thisFrameWallClockTime - (firstFrameWallClockTime + pausedFrameTime);
            //NSLog(@"elapsedTime: %f", elapsedTime);
            presentationTime = CMTimeMake(elapsedTime * TIME_SCALE, TIME_SCALE);
            BOOL appended = [assetWriterPixelBufferAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:presentationTime];
            if (appended) {
                CVPixelBufferRelease(pixelBuffer);
                CFRelease(imageData);
                pixelBuffer = nil;
                //NSLog(@"appended sample at time %lf", CMTimeGetSeconds(presentationTime));
            } else {
                [self stopRecording];
            }
        }
    }
}
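A note on the likely cause (not from the original post): AVCaptureVideoPreviewLayer is rendered by the GPU and is not drawn by -renderInContext:, so the recorded screenshot only contains the overlay views. One hedged workaround sketch: draw the latest camera frame (image1, which the capture callback above already produces) underneath the overlay snapshot before appending it:

UIGraphicsBeginImageContext(baseViewOne.frame.size);
[image1 drawInRect:baseViewOne.bounds];                               // camera frame first
[[baseViewOne layer] renderInContext:UIGraphicsGetCurrentContext()];  // overlays on top
screenshota = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();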

iOS/Objective-C: Converting RGB data to UIImage

I need help converting a 24/32-bit raw RGB image to a UIImage.
I've tried the examples here from Paul Solt and others, but nothing works. Could anybody please show an example or tutorial?
The image data is held in an NSData, and I would like to get a JPG or PNG image.
Thx
Thorsten
I'm using the code by Paul Solt; it does something, but the image looks like it has the image information four times in one image. I can't post an image here.
EDIT: I added the lines at the beginning of the method, between the comments - now it works :-)
+ (UIImage *)convertBitmapRGBA8ToUIImage:(unsigned char *)buffer
                               withWidth:(int)width
                              withHeight:(int)height {
    // added code
    char *rgba = (char *)malloc(width*height*4);
    for (int i = 0; i < width*height; ++i) {
        rgba[4*i] = buffer[3*i];
        rgba[4*i+1] = buffer[3*i+1];
        rgba[4*i+2] = buffer[3*i+2];
        rgba[4*i+3] = 255;
    }
    //
    size_t bufferLength = width * height * 4;
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, rgba, bufferLength, NULL);
    size_t bitsPerComponent = 8;
    size_t bitsPerPixel = 32;
    size_t bytesPerRow = 4 * width;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    if (colorSpaceRef == NULL) {
        NSLog(@"Error allocating color space");
        CGDataProviderRelease(provider);
        return nil;
    }
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedLast;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    CGImageRef iref = CGImageCreate(width,
                                    height,
                                    bitsPerComponent,
                                    bitsPerPixel,
                                    bytesPerRow,
                                    colorSpaceRef,
                                    bitmapInfo,
                                    provider, // data provider
                                    NULL,     // decode
                                    YES,      // should interpolate
                                    renderingIntent);
    uint32_t *pixels = (uint32_t *)malloc(bufferLength);
    if (pixels == NULL) {
        NSLog(@"Error: Memory not allocated for bitmap");
        CGDataProviderRelease(provider);
        CGColorSpaceRelease(colorSpaceRef);
        CGImageRelease(iref);
        return nil;
    }
    CGContextRef context = CGBitmapContextCreate(pixels,
                                                 width,
                                                 height,
                                                 bitsPerComponent,
                                                 bytesPerRow,
                                                 colorSpaceRef,
                                                 bitmapInfo);
    if (context == NULL) {
        NSLog(@"Error context not created");
        free(pixels);
    }
    UIImage *image = nil;
    if (context) {
        CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, width, height), iref);
        CGImageRef imageRef = CGBitmapContextCreateImage(context);
        // Support both iPad 3.2 and iPhone 4 Retina displays with the correct scale
        if ([UIImage respondsToSelector:@selector(imageWithCGImage:scale:orientation:)]) {
            float scale = [[UIScreen mainScreen] scale];
            image = [UIImage imageWithCGImage:imageRef scale:scale orientation:UIImageOrientationUp];
        } else {
            image = [UIImage imageWithCGImage:imageRef];
        }
        CGImageRelease(imageRef);
        CGContextRelease(context);
    }
    CGColorSpaceRelease(colorSpaceRef);
    CGImageRelease(iref);
    CGDataProviderRelease(provider);
    if (pixels) {
        free(pixels);
    }
    return image;
}
Solution: my bitmap image data has only 3 bytes per pixel, but iOS wants 4 bytes per pixel - the fourth is for alpha. So inserting the following code, which adds a 4th byte, fixed the problem:
char* rgba = (char*)malloc(width*height*4);
for (int i = 0; i < width*height; ++i) {
    rgba[4*i] = buffer[3*i];
    rgba[4*i+1] = buffer[3*i+1];
    rgba[4*i+2] = buffer[3*i+2];
    rgba[4*i+3] = 255;
}
Here is the conversion of NSData to a UIImage:
UIImage *image = [UIImage imageWithData:imageData];
and, to get PNG data back out of a UIImage:
NSData *imageData = UIImagePNGRepresentation(image);
I have created a PNG image using this code; hope it works for you too.
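For the original question (raw 24-bit RGB held in an NSData), a hedged usage sketch - rgbData, imageWidth and imageHeight are placeholders for the asker's actual data:

unsigned char *buffer = (unsigned char *)[rgbData bytes]; // 3 bytes per pixel, RGB
UIImage *image = [UIImage convertBitmapRGBA8ToUIImage:buffer
                                            withWidth:imageWidth
                                           withHeight:imageHeight];
NSData *pngData = UIImagePNGRepresentation(image);
// or: NSData *jpgData = UIImageJPEGRepresentation(image, 0.9);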

textureWithCGImage crashing on iOS

I'm trying to texture an OpenGL object with a video. It's almost done, but I have a crash in my textureWithCGImage call and I don't know why.
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// Lock the image buffer
CVPixelBufferLockBaseAddress(imageBuffer, 0);
// Get information about the image
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef newImage = CGBitmapContextCreateImage(newContext);
NSDictionary *options = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES] forKey:GLKTextureLoaderOriginBottomLeft];
NSError *error = nil; // was missing in the original snippet but is used below
self.texture = [GLKTextureLoader textureWithCGImage:newImage options:options error:&error];
if (self.texture == nil) NSLog(@"Error loading texture: %@", [error localizedDescription]);
else
{
    GLKEffectPropertyTexture *tex = [[[GLKEffectPropertyTexture alloc] init] autorelease];
    tex.enabled = GL_TRUE;
    tex.envMode = GLKTextureEnvModeDecal;
    tex.name = self.texture.name;
    self.effect.texture2d0.name = tex.name;
}
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
CGImageRelease(newImage);
CGContextRelease(newContext);
CGColorSpaceRelease(colorSpace);
CFRelease(sampleBuffer);
This code is called on every update. Do you have an idea about what's causing my crash?
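Two hedged observations (not from the original thread): the CFRelease(sampleBuffer) at the end over-releases the buffer if it came from a capture delegate callback, since the delegate does not own it; and loading a new GLKTextureInfo on every update leaks a GL texture name per frame unless the previous one is deleted first, for example:

if (self.texture) {
    GLuint name = self.texture.name;
    glDeleteTextures(1, &name); // free the previous frame's texture before loading the next
}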
