Objective-C: cv::Mat to CVPixelBuffer - iOS

The code below converts a cv::Mat to a CVPixelBufferRef:
CVPixelBufferRef getImageBufferFromMat(cv::Mat matimg) {
    cv::cvtColor(matimg, matimg, CV_BGR2BGRA);
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferMetalCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             [NSNumber numberWithInt:matimg.cols], kCVPixelBufferWidthKey,
                             [NSNumber numberWithInt:matimg.rows], kCVPixelBufferHeightKey,
                             [NSNumber numberWithInt:matimg.step[0]], kCVPixelBufferBytesPerRowAlignmentKey,
                             nil];
    CVPixelBufferRef imageBuffer;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorMalloc, matimg.cols, matimg.rows, kCVPixelFormatType_32BGRA, (CFDictionaryRef)CFBridgingRetain(options), &imageBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    void *base = CVPixelBufferGetBaseAddress(imageBuffer);
    memcpy(base, matimg.data, matimg.total() * matimg.elemSize());
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    return imageBuffer;
}
The problem is that I am getting only half the image.
Original Image
After conversion (I convert the CVPixelBufferRef back to a UIImage and save it with UIImageWriteToSavedPhotosAlbum just for checking)
Interestingly, the reported sizes of the Mat and the CVPixelBufferRef are the same.
Next, I tried resizing the image just before the memcpy, doubling the height:
CVPixelBufferLockBaseAddress(imageBuffer, 0);
void *base = CVPixelBufferGetBaseAddress(imageBuffer);
cv::resize(matimg, matimg, cv::Size(), 1, 2);
memcpy(base, matimg.data, matimg.total() * matimg.elemSize());
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
The resulting image size is still the same...
I badly want to know what's causing this behavior; I am sure I am missing something...

I found a solution to this problem after reading this.
The system prefers each image row to be a multiple of 64 bytes, presumably for better performance through cache-line alignment. Since the image resolution is [1000 × 1000], the width is not a multiple of 64, so the bytes-per-row value defaulted to 27840 (I don't know why), and that row padding was causing the problem.
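One quick way to confirm the padding (my addition, not from the original post) is to compare the stride Core Video actually allocated against the Mat's row stride:

CVPixelBufferLockBaseAddress(imageBuffer, 0);
size_t cvBytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
NSLog(@"CVPixelBuffer bytes per row: %zu, cv::Mat step: %zu", cvBytesPerRow, matimg.step[0]);
// If these differ, a single flat memcpy drifts by (cvBytesPerRow - step) bytes
// on every row, which shears/truncates the image exactly as described above.
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);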
Anyway, in case anyone is looking for the solution:
CVPixelBufferRef getImageBufferFromMat(cv::Mat matimg) {
    cv::cvtColor(matimg, matimg, CV_BGR2BGRA);
    // Pad both dimensions up to the next multiple of 64 so each row ends up aligned.
    int widthRemainder = matimg.cols % 64, heightRemainder = matimg.rows % 64;
    if (widthRemainder != 0 || heightRemainder != 0) {
        cv::resize(matimg, matimg, cv::Size(matimg.cols + (64 - widthRemainder), matimg.rows + (64 - heightRemainder)));
    }
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferMetalCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             [NSNumber numberWithInt:matimg.cols], kCVPixelBufferWidthKey,
                             [NSNumber numberWithInt:matimg.rows], kCVPixelBufferHeightKey,
                             [NSNumber numberWithInt:matimg.step[0]], kCVPixelBufferBytesPerRowAlignmentKey,
                             nil];
    CVPixelBufferRef imageBuffer;
    // Note: __bridge instead of CFBridgingRetain, which would leak the options dictionary.
    CVReturn status = CVPixelBufferCreate(kCFAllocatorMalloc, matimg.cols, matimg.rows, kCVPixelFormatType_32BGRA, (__bridge CFDictionaryRef)options, &imageBuffer);
    NSParameterAssert(status == kCVReturnSuccess && imageBuffer != NULL);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    void *base = CVPixelBufferGetBaseAddress(imageBuffer);
    memcpy(base, matimg.data, matimg.total() * matimg.elemSize());
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    // UIImageWriteToSavedPhotosAlbum(converts(imageBuffer), nil, nil, nil);
    return imageBuffer;
}
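For what it's worth, a row-by-row copy that respects CVPixelBufferGetBytesPerRow avoids the resize entirely. This is my sketch, not part of the original answer; it drops into the function in place of the single memcpy and works for any resolution:

CVPixelBufferLockBaseAddress(imageBuffer, 0);
uint8_t *dst = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
size_t dstBytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); // may be padded beyond cols * 4
size_t srcBytesPerRow = matimg.step[0];
size_t copyBytes = MIN(srcBytesPerRow, dstBytesPerRow);
for (int row = 0; row < matimg.rows; row++) {
    // Copy one row at a time so the padding at the end of each destination row is skipped.
    memcpy(dst + row * dstBytesPerRow, matimg.data + row * srcBytesPerRow, copyBytes);
}
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);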

Related

AVPlayerItem that consists of an image

I need to create a variable-length silent "video" (i.e. it's just an image) that I can use in an AVPlayer on iOS.
Does anyone know of a way that I can create an AVPlayerItem which simply consists of an image that lasts for n seconds?
If I have to generate a .mov file I would need that file to be very small.
OK, I've gone with writing my own video. It turns out that if you write a video with the image you want at the first and last key frames (and those are the only key frames), then you get a nice compact video that doesn't take "too" long to write.
My code is as follows:
- (CVPixelBufferRef) createPixelBufferOfSize: (CGSize) size fromUIImage: (UIImage*) pImage
{
    NSNumber* numYes = [NSNumber numberWithBool: YES];
    NSDictionary* pOptions = [NSDictionary dictionaryWithObjectsAndKeys: numYes, kCVPixelBufferCGImageCompatibilityKey,
                              numYes, kCVPixelBufferCGBitmapContextCompatibilityKey,
                              nil];
    CVPixelBufferRef retBuffer = NULL;
    CVReturn status = CVPixelBufferCreate( kCFAllocatorDefault, size.width, size.height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef)pOptions, &retBuffer );
    CVPixelBufferLockBaseAddress( retBuffer, 0 );
    void* pPixelData = CVPixelBufferGetBaseAddress( retBuffer );
    CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate( pPixelData, size.width, size.height, 8, 4 * size.width, colourSpace, (CGBitmapInfo)kCGImageAlphaNoneSkipFirst );
    // Letterbox/pillarbox the source image so its aspect ratio is preserved.
    CGSize inSize = pImage.size;
    float inAspect = inSize.width / inSize.height;
    float outAspect = size.width / size.height;
    CGRect drawRect;
    if ( inAspect > outAspect )
    {
        // Source is wider: fit to width, centre vertically.
        float scale = inSize.width / size.width;
        CGSize outSize = CGSizeMake( size.width, inSize.height / scale );
        drawRect = CGRectMake( 0, (size.height / 2) - (outSize.height / 2), outSize.width, outSize.height );
    }
    else
    {
        // Source is taller: fit to height, centre horizontally.
        float scale = inSize.height / size.height;
        CGSize outSize = CGSizeMake( inSize.width / scale, size.height );
        drawRect = CGRectMake( (size.width / 2) - (outSize.width / 2), 0, outSize.width, outSize.height );
    }
    CGContextDrawImage( context, drawRect, [pImage CGImage] );
    CGColorSpaceRelease( colourSpace );
    CGContextRelease( context );
    CVPixelBufferUnlockBaseAddress( retBuffer, 0 );
    return retBuffer;
}
- (void) writeVideo: (NSURL*) pURL withImage: (UIImage*) pImage ofLength: (NSTimeInterval) length
{
    [[NSFileManager defaultManager] removeItemAtURL: pURL error: nil];
    NSError* pError = nil;
    AVAssetWriter* pAssetWriter = [AVAssetWriter assetWriterWithURL: pURL fileType: AVFileTypeQuickTimeMovie error: &pError];
    const int kVidWidth = 1920; // pImage.size.width;
    const int kVidHeight = 1080; // pImage.size.height;
    NSNumber* numVidWidth = [NSNumber numberWithInt: kVidWidth];
    NSNumber* numVidHeight = [NSNumber numberWithInt: kVidHeight];
    NSDictionary* pVideoSettings = [NSDictionary dictionaryWithObjectsAndKeys: AVVideoCodecH264, AVVideoCodecKey,
                                    numVidWidth, AVVideoWidthKey,
                                    numVidHeight, AVVideoHeightKey,
                                    nil];
    AVAssetWriterInput* pAssetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType: AVMediaTypeVideo
                                                                               outputSettings: pVideoSettings];
    [pAssetWriter addInput: pAssetWriterInput];
    AVAssetWriterInputPixelBufferAdaptor* pAssetWriterInputPixelBufferAdaptor =
        [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput: pAssetWriterInput
                                                                         sourcePixelBufferAttributes: pVideoSettings];
    __block volatile int finished = 0;
    [pAssetWriter startWriting];
    [pAssetWriter startSessionAtSourceTime: kCMTimeZero];
    // Write the image.
    CVPixelBufferRef pixelBuffer = [self createPixelBufferOfSize: CGSizeMake( kVidWidth, kVidHeight ) fromUIImage: pImage];
    [pAssetWriterInputPixelBufferAdaptor appendPixelBuffer: pixelBuffer withPresentationTime: kCMTimeZero];
    [pAssetWriterInputPixelBufferAdaptor appendPixelBuffer: pixelBuffer withPresentationTime: CMTimeMake( length * 1000000, 1000000 )];
    CVPixelBufferRelease( pixelBuffer );
    [pAssetWriterInput markAsFinished];
    // Set end time accurate to micro-seconds.
    [pAssetWriter endSessionAtSourceTime: CMTimeMake( length * 1000000, 1000000 )];
    [pAssetWriter finishWritingWithCompletionHandler: ^
    {
        OSAtomicIncrement32( &finished );
    }];
    // Wait for the writing to complete.
    while( finished == 0 )
    {
        [NSThread sleepForTimeInterval: 0.01];
    }
}
You may note that I am setting the video to always be 1920x1080 and letterboxing the image in place.
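For completeness, here is a minimal usage sketch (my addition; the file name and the 5-second duration are placeholders). Since writeVideo blocks until the file is finished, the item can be created straight afterwards:

NSURL *movieURL = [NSURL fileURLWithPath:[NSTemporaryDirectory() stringByAppendingPathComponent:@"still.mov"]];
[self writeVideo:movieURL withImage:image ofLength:5.0]; // blocks until the file is written
AVPlayerItem *item = [AVPlayerItem playerItemWithURL:movieURL];
AVPlayer *player = [AVPlayer playerWithPlayerItem:item];
[player play];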
You can create a .mov video from that image which plays for a very short time, say a second, and loop this video with:
yourplayer.actionAtItemEnd = AVPlayerActionAtItemEndNone;
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(playerItemDidReachEnd:)
                                             name:AVPlayerItemDidPlayToEndTimeNotification
                                           object:[yourplayer currentItem]];

- (void)playerItemDidReachEnd:(NSNotification *)notification {
    [[yourplayer currentItem] seekToTime:kCMTimeZero];
}
If the video has a duration of n seconds, you can keep a counter in your playerItemDidReachEnd method and stop seeking once it reaches a limit.
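A minimal sketch of that counter (my addition; loopCount is an assumed ivar and kMaxLoops an assumed limit):

static const int kMaxLoops = 3; // assumed limit

- (void)playerItemDidReachEnd:(NSNotification *)notification {
    loopCount++;
    if (loopCount < kMaxLoops) {
        [[yourplayer currentItem] seekToTime:kCMTimeZero]; // restart the one-second clip
    } else {
        [yourplayer pause]; // reached the desired total duration
    }
}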

Render to CVPixelBuffer produces black image

I am grabbing CIImages from CVPixelBufferRefs and then rendering those CIImages back into CVPixelBufferRefs. The result is a black movie. I have tried several variations of creating the new CVPixelBufferRef, but the result is always the same.
CIContext *temporaryContext = [CIContext contextWithOptions:nil];
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
CVPixelBufferRef pbuff = NULL;
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                         [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                         [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                         nil];
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
                                      640,
                                      480,
                                      kCVPixelFormatType_32BGRA,
                                      (__bridge CFDictionaryRef)(options),
                                      &pbuff);
if (status == kCVReturnSuccess) {
    [temporaryContext render:ciImage
             toCVPixelBuffer:pbuff
                      bounds:ciImage.extent
                  colorSpace:CGColorSpaceCreateDeviceRGB()];
} else {
    NSLog(@"Failed to create pbuff");
}
What am I doing wrong?
It turns out that the CIImage becomes nil right after creating it in the simulator. I did find that if I run the same code on a device, it works.
You need to use glReadPixels to manually read the pixels into the buffer. You can find more about this here.
A link with an implementation is here.
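Another variation that may be worth trying (my suggestion, not from the original answers): create the destination buffer with IOSurface backing, the same attribute pattern used in the AVAssetWriter answer further down this page, so a GPU-backed CIContext has a surface it can actually render into:

// Attributes dictionary requesting an IOSurface-backed pixel buffer.
CFDictionaryRef empty = CFDictionaryCreate(kCFAllocatorDefault, NULL, NULL, 0,
                                           &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
CFMutableDictionaryRef attrs = CFDictionaryCreateMutable(kCFAllocatorDefault, 1,
                                                         &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
CFDictionarySetValue(attrs, kCVPixelBufferIOSurfacePropertiesKey, empty);
CVPixelBufferRef pbuff = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, 640, 480,
                                      kCVPixelFormatType_32BGRA, attrs, &pbuff);
CFRelease(attrs);
CFRelease(empty);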

AVAssetWriter memory error

I have a few methods that are supposed to write video to a .mov file in the temp directory, but after ~15 seconds I start getting errors:
Received memory warning.
Received memory warning.
Received memory warning.
Received memory warning.
Then the app crashes. I'm stuck and have no idea what is wrong...
- (void) saveVideoToFileFromBuffer:(CMSampleBufferRef) buffer {
    if (!movieWriter) {
        NSString *moviePath = [NSString stringWithFormat:@"%@tmpMovie", NSTemporaryDirectory()];
        if ([[NSFileManager defaultManager] fileExistsAtPath:moviePath])
            [self removeMovieAtPath:moviePath];
        NSError *error = nil;
        movieWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:moviePath] fileType:AVFileTypeQuickTimeMovie error:&error];
        if (error) {
            m_log(@"Error allocating AssetWriter: %@", [error localizedDescription]);
        } else {
            CMFormatDescriptionRef description = CMSampleBufferGetFormatDescription(buffer);
            if (![self setUpMovieWriterObjectWithDescriptor:description])
                m_log(@"ET go home, no video recording!!");
        }
    }
    if (movieWriter.status != AVAssetWriterStatusWriting) {
        [movieWriter startWriting];
        [movieWriter startSessionAtSourceTime:kCMTimeZero];
        apiStatusChangeIndicator = NO;
    }
    if (movieWriter.status == AVAssetWriterStatusWriting) {
        if (![movieInput appendSampleBuffer:buffer]) m_log(@"Failed to append sample buffer!");
    }
}
Rest of code:
- (BOOL) setUpMovieWriterObjectWithDescriptor:(CMFormatDescriptionRef) descriptor {
    CMVideoDimensions dimensions = CMVideoFormatDescriptionGetDimensions(descriptor);
    NSDictionary *compressionSettings = [NSDictionary dictionaryWithObjectsAndKeys:AVVideoProfileLevelH264Baseline31, AVVideoProfileLevelKey,
                                         [NSNumber numberWithInteger:30], AVVideoMaxKeyFrameIntervalKey, nil];
    // AVVideoProfileLevelKey set because of errors
    NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:AVVideoCodecH264, AVVideoCodecKey,
                                   [NSNumber numberWithInt:dimensions.width], AVVideoWidthKey,
                                   [NSNumber numberWithInt:dimensions.height], AVVideoHeightKey,
                                   compressionSettings, AVVideoCompressionPropertiesKey, nil];
    if ([movieWriter canApplyOutputSettings:videoSettings forMediaType:AVMediaTypeVideo]) {
        movieInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];
        movieInput.expectsMediaDataInRealTime = YES;
        if ([movieWriter canAddInput:movieInput]) {
            [movieWriter addInput:movieInput];
        } else {
            m_log(@"Couldn't apply video input to Asset Writer!");
            return NO;
        }
    } else {
        m_log(@"Couldn't apply video settings to AVAssetWriter!");
        return NO;
    }
    return YES;
}
It would be great if someone could point out my mistake! I can share more code if needed. The sample buffer comes from a CIImage with filters applied.
A new development: I can record a few seconds of movie and save it, but it's all black screen...
UPDATE
Saving the video works, but creating the CMSampleBufferRef from the CIImage fails. That is the reason I got a green or black screen. Here's the code:
- (CMSampleBufferRef) processCIImageToPixelBuffer:(CIImage*) image andSampleBuffer:(CMSampleTimingInfo) info {
    CVPixelBufferRef renderTargetPixelBuffer;
    CFDictionaryRef empty;
    CFMutableDictionaryRef attrs;
    empty = CFDictionaryCreate(kCFAllocatorDefault,
                               NULL,
                               NULL,
                               0,
                               &kCFTypeDictionaryKeyCallBacks,
                               &kCFTypeDictionaryValueCallBacks);
    attrs = CFDictionaryCreateMutable(kCFAllocatorDefault,
                                      1,
                                      &kCFTypeDictionaryKeyCallBacks,
                                      &kCFTypeDictionaryValueCallBacks);
    CFDictionarySetValue(attrs,
                         kCVPixelBufferIOSurfacePropertiesKey,
                         empty);
    CVReturn cvError = CVPixelBufferCreate(kCFAllocatorSystemDefault,
                                           [image extent].size.width,
                                           [image extent].size.height,
                                           kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,
                                           attrs,
                                           &renderTargetPixelBuffer);
    if (cvError != 0) {
        m_log(@"Error when init Pixel buffer: %i", cvError);
    }
    CFRelease(empty);
    CFRelease(attrs);
    CVPixelBufferLockBaseAddress(renderTargetPixelBuffer, 0);
    [_coreImageContext render:image toCVPixelBuffer:renderTargetPixelBuffer];
    CVPixelBufferUnlockBaseAddress(renderTargetPixelBuffer, 0);
    CMVideoFormatDescriptionRef videoInfo = NULL;
    CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, renderTargetPixelBuffer, &videoInfo);
    CMSampleBufferRef recordingBuffer;
    OSStatus cmError = CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, renderTargetPixelBuffer, true, NULL, NULL, videoInfo, &info, &recordingBuffer);
    if (cmError != 0) {
        m_log(@"Error creating sample buffer: %i", (int)cmError);
    }
    CVPixelBufferRelease(renderTargetPixelBuffer);
    renderTargetPixelBuffer = NULL;
    CFRelease(videoInfo);
    videoInfo = NULL;
    return recordingBuffer;
}
You should check your code with the Instruments profiling tool, especially for memory leaks. Maybe you are not releasing the sample buffer:
CMSampleBufferInvalidate(buffer);
CFRelease(buffer);
buffer = NULL;
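A sketch of where that release could go, assuming (my assumption, matching the code above) the buffer returned by processCIImageToPixelBuffer is owned by the caller and the frames arrive through an AVCaptureVideoDataOutput delegate; filteredImage here is a hypothetical CIImage with the filters applied:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    CMSampleTimingInfo timing;
    CMSampleBufferGetSampleTimingInfo(sampleBuffer, 0, &timing);
    CMSampleBufferRef processed = [self processCIImageToPixelBuffer:filteredImage andSampleBuffer:timing];
    [self saveVideoToFileFromBuffer:processed];
    // Balance the +1 ownership from CMSampleBufferCreateForImageBuffer.
    CMSampleBufferInvalidate(processed);
    CFRelease(processed);
}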

UIImages exported as movie error

Problem
My AVAssetWriter is failing after appending 5 or so images to it using an AVAssetWriterInputPixelBufferAdaptor, and I have no idea why.
Details
This popular question helped but isn't working for my needs:
How do I export UIImage array as a movie?
Everything works as planned; I even delay the assetWriterInput until it can handle more media.
But for some reason, it always fails after 5 or so images. The images I'm using are frames extracted from a GIF.
Code
Here is my iteration code:
-(void)writeImageData
{
    __block int i = 0;
    videoQueue = dispatch_queue_create("com.videoQueue", DISPATCH_QUEUE_SERIAL);
    [self.writerInput requestMediaDataWhenReadyOnQueue:dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0) usingBlock:^{
        while (self.writerInput.readyForMoreMediaData) {
            if (i >= self.imageRefs.count) {
                [self endSession];
                videoQueue = nil;
                [self saveToLibraryWithCompletion:^{
                    NSLog(@"Saved");
                }];
                break;
            }
            if (self.writerInput.readyForMoreMediaData) {
                CGImageRef imageRef = (__bridge CGImageRef)self.imageRefs[i];
                CVPixelBufferRef buffer = [self pixelBufferFromCGImageRef:imageRef];
                CGFloat timeScale = (CGFloat)self.imageRefs.count / self.originalDuration;
                BOOL accepted = [self.adaptor appendPixelBuffer:buffer withPresentationTime:CMTimeMake(i, timeScale)];
                CVBufferRelease(buffer);
                if (!accepted) {
                    NSLog(@"Buffer did not add %@, index %d, timescale %f", self.writer.error, i, timeScale);
                } else {
                    NSLog(@"Buffer did nothing wrong");
                }
                i++;
            }
        }
    }];
}
My other bits of code match the code from the link above. This one is only slightly different:
-(CVPixelBufferRef)pixelBufferFromCGImageRef:(CGImageRef)image
{
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];
    CVPixelBufferRef pxbuffer = NULL;
    CGFloat width = 640;
    CGFloat height = 640;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, width,
                                          height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options,
                                          &pxbuffer);
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);
    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, width,
                                                 height, 8, 4*width, rgbColorSpace,
                                                 kCGImageAlphaNoneSkipFirst);
    NSParameterAssert(context);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    return pxbuffer;
}
One thing that stands out to me is your use of CMTimeMake(i, timeScale) with a CGFloat timescale.
You need to calculate the time of each frame properly. Note that CMTime takes two integers, and passing them as floating-point values truncates them.
The second issue is that you weren't using your serial dispatch queue :)
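A sketch of both fixes together (my suggestion; it assumes self.originalDuration is the GIF's total duration in seconds):

// Use a fixed integer timescale; 600 is a common multiple of standard frame rates.
const int32_t kTimescale = 600;
CMTime frameDuration = CMTimeMakeWithSeconds(self.originalDuration / (double)self.imageRefs.count, kTimescale);

[self.writerInput requestMediaDataWhenReadyOnQueue:videoQueue usingBlock:^{
    // ... same loop as above ...
    CMTime presentationTime = CMTimeMultiply(frameDuration, i); // integer frame index * frame duration
    BOOL accepted = [self.adaptor appendPixelBuffer:buffer withPresentationTime:presentationTime];
    // ...
}];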

AVAssetWriter - pixel buffer for superimposed images

I can successfully create a movie from a single still image. However I am also given an array of smaller images that I need to superimpose on top of the background image. I've tried just repeating the process of appending frames with the assetWriter, but I get errors because you can't write to the same frame you've already written to.
So, I assume you have to compose the entire pixel buffer for each frame completely before you write the frame. But how would you do that?
Here's my code that works for rendering one background image:
CGSize renderSize = CGSizeMake(320, 568);
NSUInteger fps = 30;
self.assetWriter = [[AVAssetWriter alloc] initWithURL:
                    [NSURL fileURLWithPath:videoOutputPath] fileType:AVFileTypeQuickTimeMovie
                    error:&error];
NSParameterAssert(self.assetWriter);
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                               AVVideoCodecH264, AVVideoCodecKey,
                               [NSNumber numberWithInt:renderSize.width], AVVideoWidthKey,
                               [NSNumber numberWithInt:renderSize.height], AVVideoHeightKey,
                               nil];
AVAssetWriterInput* videoWriterInput = [AVAssetWriterInput
                                        assetWriterInputWithMediaType:AVMediaTypeVideo
                                        outputSettings:videoSettings];
AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
                                                 assetWriterInputPixelBufferAdaptorWithAssetWriterInput:videoWriterInput
                                                 sourcePixelBufferAttributes:nil];
NSParameterAssert(videoWriterInput);
NSParameterAssert([self.assetWriter canAddInput:videoWriterInput]);
videoWriterInput.expectsMediaDataInRealTime = YES;
[self.assetWriter addInput:videoWriterInput];
// Start a session:
[self.assetWriter startWriting];
[self.assetWriter startSessionAtSourceTime:kCMTimeZero];
CVPixelBufferRef buffer = NULL;
NSInteger totalFrames = 90; // 3 seconds
// Process the bg image
int frameCount = 0;
UIImage* resizedImage = [UIImage resizeImage:self.bgImage size:renderSize];
buffer = [self pixelBufferFromCGImage:[resizedImage CGImage]];
BOOL append_ok = YES;
int j = 0;
while (append_ok && j < totalFrames) {
    if (adaptor.assetWriterInput.readyForMoreMediaData) {
        CMTime frameTime = CMTimeMake(frameCount, (int32_t)fps);
        append_ok = [adaptor appendPixelBuffer:buffer withPresentationTime:frameTime];
        if (!append_ok) {
            NSError *error = self.assetWriter.error;
            if (error != nil) {
                NSLog(@"Unresolved error %@, %@.", error, [error userInfo]);
            }
        }
    }
    else {
        printf("adaptor not ready %d, %d\n", frameCount, j);
        [NSThread sleepForTimeInterval:0.1];
    }
    j++;
    frameCount++;
}
if (!append_ok) {
    printf("error appending image %d times %d, with error.\n", frameCount, j);
}
// Finish the session:
[videoWriterInput markAsFinished];
[self.assetWriter finishWritingWithCompletionHandler:^() {
    self.assetWriter = nil;
}];
- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image {
    CGSize size = CGSizeMake(320, 568);
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];
    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
                                          size.width,
                                          size.height,
                                          kCVPixelFormatType_32ARGB,
                                          (__bridge CFDictionaryRef) options,
                                          &pxbuffer);
    if (status != kCVReturnSuccess) {
        NSLog(@"Failed to create pixel buffer");
    }
    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, size.width,
                                                 size.height, 8, 4*size.width, rgbColorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaPremultipliedFirst);
    CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
                                           CGImageGetHeight(image)), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    return pxbuffer;
}
Again, the question is how to create a pixel buffer for a background image and an array of N small images that will be layered on top of the bg image. The next step after this will be to also superimpose a small video.
You can write the pixel data from the image list directly over the pixel buffer.
This example code shows how to add BGRA data over an ARGB pixel buffer.
// Try to create a pixel buffer with the image mat
uint8_t* videobuffer = m_imageBGRA.data;
// From image buffer (BGRA) to pixel buffer
CVPixelBufferRef pixelBuffer = NULL;
CVReturn status = CVPixelBufferCreate(NULL, m_width, m_height, kCVPixelFormatType_32ARGB, NULL, &pixelBuffer);
if ((pixelBuffer == NULL) || (status != kCVReturnSuccess))
{
    NSLog(@"Error CVPixelBufferPoolCreatePixelBuffer[pixelBuffer=%@][status=%d]", pixelBuffer, status);
    return;
}
else
{
    uint8_t *videobuffertmp = videobuffer;
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    GLubyte *pixelBufferData = (GLubyte *)CVPixelBufferGetBaseAddress(pixelBuffer);
    // Add data for all the pixels in the image
    // (rows run over the height and columns over the width;
    // this also assumes the buffer has no row padding)
    for( int row=0 ; row<m_height ; ++row )
    {
        for( int col=0 ; col<m_width ; ++col )
        {
            memcpy(&pixelBufferData[0], &videobuffertmp[3], sizeof(uint8_t)); // alpha
            memcpy(&pixelBufferData[1], &videobuffertmp[2], sizeof(uint8_t)); // red
            memcpy(&pixelBufferData[2], &videobuffertmp[1], sizeof(uint8_t)); // green
            memcpy(&pixelBufferData[3], &videobuffertmp[0], sizeof(uint8_t)); // blue
            // Move the buffer pointer to the next pixel
            pixelBufferData += 4*sizeof(uint8_t);
            videobuffertmp += 4*sizeof(uint8_t);
        }
    }
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}
So, in this example, the data of an image (videobuffer) is copied into the pixel buffer. Usually the pixel data is stored row after row in one contiguous block, so each pixel takes 4 bytes (written as 'uint8_t' here): first blue, then green, next red, and last the alpha value (remember that the original image is in BGRA format).
The pixel buffer works the same way, with the data stored in one contiguous block (ARGB in this case, as defined by the 'kCVPixelFormatType_32ARGB' parameter).
This piece of code reorders the pixel data to match the pixel buffer's configuration:
memcpy(&pixelBufferData[0], &videobuffertmp[3], sizeof(uint8_t)); // alpha
memcpy(&pixelBufferData[1], &videobuffertmp[2], sizeof(uint8_t)); // red
memcpy(&pixelBufferData[2], &videobuffertmp[1], sizeof(uint8_t)); // green
memcpy(&pixelBufferData[3], &videobuffertmp[0], sizeof(uint8_t)); // blue
And once the pixel has been added, we can move forward to the next pixel by:
// Move the buffer pointer to the next pixel
pixelBufferData += 4*sizeof(uint8_t);
videobuffertmp += 4*sizeof(uint8_t);
Moving the pointers 4 bytes forward.
If your overlay images are smaller, you can copy them into a smaller region, or add an 'if' that uses the alpha value to decide which pixels to overwrite. For example:
// Add data for all the pixels in the image
for( int row=0 ; row<m_height ; ++row )
{
    for( int col=0 ; col<m_width ; ++col )
    {
        if( videobuffertmp[3] > 10 ) // check alpha channel; skip nearly transparent pixels
        {
            memcpy(&pixelBufferData[0], &videobuffertmp[3], sizeof(uint8_t)); // alpha
            memcpy(&pixelBufferData[1], &videobuffertmp[2], sizeof(uint8_t)); // red
            memcpy(&pixelBufferData[2], &videobuffertmp[1], sizeof(uint8_t)); // green
            memcpy(&pixelBufferData[3], &videobuffertmp[0], sizeof(uint8_t)); // blue
        }
        // Move the buffer pointer to the next pixel
        pixelBufferData += 4*sizeof(uint8_t);
        videobuffertmp += 4*sizeof(uint8_t);
    }
}
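For the original question about superimposing several images, an alternative to the per-pixel copy (my sketch, not from the original answer) is to draw the background and every overlay into one CGBitmapContext backed by the pixel buffer, reusing the pixelBufferFromCGImage pattern from the question. The overlays and rects parameters here are hypothetical inputs:

- (CVPixelBufferRef)composedPixelBufferWithBackground:(CGImageRef)bg
                                             overlays:(NSArray *)overlays // CGImageRefs (hypothetical)
                                                rects:(NSArray *)rects    // NSValue-wrapped CGRects (hypothetical)
{
    CGSize size = CGSizeMake(320, 568);
    NSDictionary *options = @{ (id)kCVPixelBufferCGImageCompatibilityKey: @YES,
                               (id)kCVPixelBufferCGBitmapContextCompatibilityKey: @YES };
    CVPixelBufferRef pxbuffer = NULL;
    CVPixelBufferCreate(kCFAllocatorDefault, size.width, size.height,
                        kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef)options, &pxbuffer);
    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pxbuffer),
                                             size.width, size.height, 8,
                                             CVPixelBufferGetBytesPerRow(pxbuffer), cs,
                                             (CGBitmapInfo)kCGImageAlphaPremultipliedFirst);
    // Background first, then each overlay in its own rect; Core Graphics handles alpha compositing.
    CGContextDrawImage(ctx, CGRectMake(0, 0, size.width, size.height), bg);
    [overlays enumerateObjectsUsingBlock:^(id img, NSUInteger idx, BOOL *stop) {
        CGRect r = [rects[idx] CGRectValue];
        CGContextDrawImage(ctx, r, (__bridge CGImageRef)img);
    }];
    CGColorSpaceRelease(cs);
    CGContextRelease(ctx);
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    return pxbuffer;
}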